| problem_id | source | task_type | in_source_id | prompt | golden_diff | verification_info | num_tokens | num_tokens_diff |
|---|---|---|---|---|---|---|---|---|
| stringlengths 18–22 | stringclasses 1 (value) | stringclasses 1 (value) | stringlengths 13–58 | stringlengths 1.1k–25.4k | stringlengths 145–5.13k | stringlengths 582–39.1k | int64 271–4.1k | int64 47–1.02k |
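A quick way to load and inspect these rows with the `datasets` library is sketched below; the Hub dataset id is inferred from the `source` column and the split name is an assumption, so both may need adjusting.

```python
# Usage sketch only: the dataset id is inferred from the `source` column
# ("rasdani/github-patches") and the "train" split name is assumed.
from datasets import load_dataset

ds = load_dataset("rasdani/github-patches", split="train")
row = ds[0]
print(row["problem_id"], row["in_source_id"])  # e.g. gh_patches_debug_62334
print(row["prompt"][:400])                     # issue text plus file context
print(row["golden_diff"])                      # reference patch in git diff format
```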
gh_patches_debug_62334 | rasdani/github-patches | git_diff | googleapis__google-api-python-client-129 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
discovery_cache module not packaged during installation.
I've installed `google-api-python-client` from source, but at some point my application was failing with this message:
```
...
...
File "build/bdist.linux-x86_64/egg/oauth2client/util.py", line 142, in positional_wrapper
return wrapped(*args, **kwargs)
File "build/bdist.linux-x86_64/egg/googleapiclient/discovery.py", line 193, in build
content = _retrieve_discovery_doc(requested_url, http, cache_discovery, cache)
File "build/bdist.linux-x86_64/egg/googleapiclient/discovery.py", line 215, in _retrieve_discovery_doc
from . import discovery_cache
ImportError: cannot import name discovery_cache
```
I've checked if the `discovery_cache` module was actually part of the `egg`, and unfortunately it was not:
```
[root@e42fb97ce657 unit]# python
Python 2.7.5 (default, Jun 24 2015, 00:41:19)
[GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import googleapiclient.discovery_cache
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named discovery_cache
>>>
```
Here are all the files in `egg`
```
[root@e42fb97ce657 ~]# unzip -l /usr/lib/python2.7/site-packages/google_api_python_client-1.4.1-py2.7.egg
Archive: /usr/lib/python2.7/site-packages/google_api_python_client-1.4.1-py2.7.egg
Length Date Time Name
--------- ---------- ----- ----
1169 09-03-2015 16:09 apiclient/__init__.py
1301 09-03-2015 16:09 apiclient/__init__.pyc
1 09-03-2015 16:09 EGG-INFO/dependency_links.txt
62 09-03-2015 16:09 EGG-INFO/requires.txt
26 09-03-2015 16:09 EGG-INFO/top_level.txt
969 09-03-2015 16:09 EGG-INFO/PKG-INFO
1 09-03-2015 16:09 EGG-INFO/zip-safe
545 09-03-2015 16:09 EGG-INFO/SOURCES.txt
53575 09-03-2015 16:09 googleapiclient/http.py
9910 09-03-2015 16:09 googleapiclient/channel.py
40890 09-03-2015 16:09 googleapiclient/discovery.py
9907 09-03-2015 16:09 googleapiclient/schema.pyc
620 09-03-2015 16:09 googleapiclient/__init__.py
9317 09-03-2015 16:09 googleapiclient/schema.py
11830 09-03-2015 16:09 googleapiclient/model.py
4047 09-03-2015 16:09 googleapiclient/sample_tools.py
6552 09-03-2015 16:09 googleapiclient/mimeparse.py
53976 09-03-2015 16:09 googleapiclient/http.pyc
7043 09-03-2015 16:09 googleapiclient/mimeparse.pyc
6333 09-03-2015 16:09 googleapiclient/errors.pyc
3131 09-03-2015 16:09 googleapiclient/sample_tools.pyc
3622 09-03-2015 16:09 googleapiclient/errors.py
35534 09-03-2015 16:09 googleapiclient/discovery.pyc
14028 09-03-2015 16:09 googleapiclient/model.pyc
175 09-03-2015 16:09 googleapiclient/__init__.pyc
10690 09-03-2015 16:09 googleapiclient/channel.pyc
--------- -------
285254 26 files
[root@e42fb97ce657 ~]#
```
As a workaround I had to add `googleapiclient/discovery_cache` to the `packages` in `setup.py` so it looked like that:
```
[root@e42fb97ce657 google-api-python-client]# more setup.py | grep packages -A 4 -m1
packages = [
'apiclient',
'googleapiclient',
'googleapiclient/discovery_cache'
]
```
Then installed and everything magically started working.
```
[root@e42fb97ce657 google-api-python-client]# python
Python 2.7.5 (default, Jun 24 2015, 00:41:19)
[GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import googleapiclient.discovery_cache
>>>
```
Here is a quick sample that looks similar to my environment using `Docker`:
```
FROM centos:centos7
RUN yum install -y git python-devel python-setuptools unzip
RUN easy_install pip
RUN cd /tmp ;\
git clone https://github.com/google/google-api-python-client && \
cd google-api-python-client && \
python setup.py install
```
I've also tried to follow the preferred suggestion from the `README.md` and install it from `pip`, but it ended up in the same situation.
Please advise on how to proceed without making "manual" modifications to the official package?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2014 Google Inc. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Setup script for Google API Python client.
16
17 Also installs included versions of third party libraries, if those libraries
18 are not already installed.
19 """
20 from __future__ import print_function
21
22 import sys
23
24 if sys.version_info < (2, 6):
25 print('google-api-python-client requires python version >= 2.6.',
26 file=sys.stderr)
27 sys.exit(1)
28 if (3, 1) <= sys.version_info < (3, 3):
29 print('google-api-python-client requires python3 version >= 3.3.',
30 file=sys.stderr)
31 sys.exit(1)
32
33 from setuptools import setup
34 import pkg_resources
35
36 def _DetectBadness():
37 import os
38 if 'SKIP_GOOGLEAPICLIENT_COMPAT_CHECK' in os.environ:
39 return
40 o2c_pkg = None
41 try:
42 o2c_pkg = pkg_resources.get_distribution('oauth2client')
43 except pkg_resources.DistributionNotFound:
44 pass
45 oauth2client = None
46 try:
47 import oauth2client
48 except ImportError:
49 pass
50 if o2c_pkg is None and oauth2client is not None:
51 raise RuntimeError(
52 'Previous version of google-api-python-client detected; due to a '
53 'packaging issue, we cannot perform an in-place upgrade. Please remove '
54 'the old version and re-install this package.'
55 )
56
57 _DetectBadness()
58
59 packages = [
60 'apiclient',
61 'googleapiclient',
62 ]
63
64 install_requires = [
65 'httplib2>=0.8',
66 'oauth2client>=1.4.6',
67 'six>=1.6.1',
68 'uritemplate>=0.6',
69 ]
70
71 if sys.version_info < (2, 7):
72 install_requires.append('argparse')
73
74 long_desc = """The Google API Client for Python is a client library for
75 accessing the Plus, Moderator, and many other Google APIs."""
76
77 import googleapiclient
78 version = googleapiclient.__version__
79
80 setup(
81 name="google-api-python-client",
82 version=version,
83 description="Google API Client Library for Python",
84 long_description=long_desc,
85 author="Google Inc.",
86 url="http://github.com/google/google-api-python-client/",
87 install_requires=install_requires,
88 packages=packages,
89 package_data={},
90 license="Apache 2.0",
91 keywords="google api client",
92 classifiers=[
93 'Programming Language :: Python :: 2',
94 'Programming Language :: Python :: 2.6',
95 'Programming Language :: Python :: 2.7',
96 'Programming Language :: Python :: 3',
97 'Programming Language :: Python :: 3.3',
98 'Programming Language :: Python :: 3.4',
99 'Development Status :: 5 - Production/Stable',
100 'Intended Audience :: Developers',
101 'License :: OSI Approved :: Apache Software License',
102 'Operating System :: OS Independent',
103 'Topic :: Internet :: WWW/HTTP',
104 ],
105 )
106
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -59,6 +59,7 @@
packages = [
'apiclient',
'googleapiclient',
+ 'googleapiclient/discovery_cache',
]
install_requires = [
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -59,6 +59,7 @@\n packages = [\n 'apiclient',\n 'googleapiclient',\n+ 'googleapiclient/discovery_cache',\n ]\n \n install_requires = [\n", "issue": "discovery_cache module not packaged during installation.\nI've installed `google-api-python-client` from source, but when at some point my application was failing with this message:\n\n```\n ...\n ...\n File \"build/bdist.linux-x86_64/egg/oauth2client/util.py\", line 142, in positional_wrapper\n return wrapped(*args, **kwargs)\n File \"build/bdist.linux-x86_64/egg/googleapiclient/discovery.py\", line 193, in build\n content = _retrieve_discovery_doc(requested_url, http, cache_discovery, cache)\n File \"build/bdist.linux-x86_64/egg/googleapiclient/discovery.py\", line 215, in _retrieve_discovery_doc\n from . import discovery_cache\nImportError: cannot import name discovery_cache\n```\n\nI've checked if `discovery_cache` module was actually part of the `egg`, and unfortunately it was not:\n\n```\n[root@e42fb97ce657 unit]# python\nPython 2.7.5 (default, Jun 24 2015, 00:41:19) \n[GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import googleapiclient.discovery_cache\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nImportError: No module named discovery_cache\n>>> \n```\n\nHere are all the files in `egg`\n\n```\n[root@e42fb97ce657 ~]# unzip -l /usr/lib/python2.7/site-packages/google_api_python_client-1.4.1-py2.7.egg \nArchive: /usr/lib/python2.7/site-packages/google_api_python_client-1.4.1-py2.7.egg\n Length Date Time Name\n--------- ---------- ----- ----\n 1169 09-03-2015 16:09 apiclient/__init__.py\n 1301 09-03-2015 16:09 apiclient/__init__.pyc\n 1 09-03-2015 16:09 EGG-INFO/dependency_links.txt\n 62 09-03-2015 16:09 EGG-INFO/requires.txt\n 26 09-03-2015 16:09 EGG-INFO/top_level.txt\n 969 09-03-2015 16:09 EGG-INFO/PKG-INFO\n 1 09-03-2015 16:09 EGG-INFO/zip-safe\n 545 09-03-2015 16:09 EGG-INFO/SOURCES.txt\n 53575 09-03-2015 16:09 googleapiclient/http.py\n 9910 09-03-2015 16:09 googleapiclient/channel.py\n 40890 09-03-2015 16:09 googleapiclient/discovery.py\n 9907 09-03-2015 16:09 googleapiclient/schema.pyc\n 620 09-03-2015 16:09 googleapiclient/__init__.py\n 9317 09-03-2015 16:09 googleapiclient/schema.py\n 11830 09-03-2015 16:09 googleapiclient/model.py\n 4047 09-03-2015 16:09 googleapiclient/sample_tools.py\n 6552 09-03-2015 16:09 googleapiclient/mimeparse.py\n 53976 09-03-2015 16:09 googleapiclient/http.pyc\n 7043 09-03-2015 16:09 googleapiclient/mimeparse.pyc\n 6333 09-03-2015 16:09 googleapiclient/errors.pyc\n 3131 09-03-2015 16:09 googleapiclient/sample_tools.pyc\n 3622 09-03-2015 16:09 googleapiclient/errors.py\n 35534 09-03-2015 16:09 googleapiclient/discovery.pyc\n 14028 09-03-2015 16:09 googleapiclient/model.pyc\n 175 09-03-2015 16:09 googleapiclient/__init__.pyc\n 10690 09-03-2015 16:09 googleapiclient/channel.pyc\n--------- -------\n 285254 26 files\n[root@e42fb97ce657 ~]# \n```\n\nAs a workaround I had to add `googleapiclient/discovery_cache` to the `packages` in `setup.py` so it looked like that:\n\n```\n[root@e42fb97ce657 google-api-python-client]# more setup.py | grep packages -A 4 -m1\npackages = [\n 'apiclient',\n 'googleapiclient',\n 'googleapiclient/discovery_cache'\n]\n```\n\nThen installed and everything magically started working.\n\n```\n[root@e42fb97ce657 google-api-python-client]# python\nPython 2.7.5 (default, Jun 24 2015, 
00:41:19) \n[GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import googleapiclient.discovery_cache\n>>> \n```\n\nHere is a quick sample that looks similar to my environment using `Docker`:\n\n```\nFROM centos:centos7\n\nRUN yum install -y git python-devel python-setuptools unzip\nRUN easy_install pip\nRUN cd /tmp ;\\\n git clone https://github.com/google/google-api-python-client && \\\n cd google-api-python-client && \\\n python setup.py install \n```\n\nI've also tried to follow preferred suggestion from the `README.md` and install it from `pip` but it ended up in the same situation.\n\nPlease advice on how to proceed without making \"manual\" modifications to the official package?\n\n", "before_files": [{"content": "# Copyright 2014 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Setup script for Google API Python client.\n\nAlso installs included versions of third party libraries, if those libraries\nare not already installed.\n\"\"\"\nfrom __future__ import print_function\n\nimport sys\n\nif sys.version_info < (2, 6):\n print('google-api-python-client requires python version >= 2.6.',\n file=sys.stderr)\n sys.exit(1)\nif (3, 1) <= sys.version_info < (3, 3):\n print('google-api-python-client requires python3 version >= 3.3.',\n file=sys.stderr)\n sys.exit(1)\n\nfrom setuptools import setup\nimport pkg_resources\n\ndef _DetectBadness():\n import os\n if 'SKIP_GOOGLEAPICLIENT_COMPAT_CHECK' in os.environ:\n return\n o2c_pkg = None\n try:\n o2c_pkg = pkg_resources.get_distribution('oauth2client')\n except pkg_resources.DistributionNotFound:\n pass\n oauth2client = None\n try:\n import oauth2client\n except ImportError:\n pass\n if o2c_pkg is None and oauth2client is not None:\n raise RuntimeError(\n 'Previous version of google-api-python-client detected; due to a '\n 'packaging issue, we cannot perform an in-place upgrade. 
Please remove '\n 'the old version and re-install this package.'\n )\n\n_DetectBadness()\n\npackages = [\n 'apiclient',\n 'googleapiclient',\n]\n\ninstall_requires = [\n 'httplib2>=0.8',\n 'oauth2client>=1.4.6',\n 'six>=1.6.1',\n 'uritemplate>=0.6',\n]\n\nif sys.version_info < (2, 7):\n install_requires.append('argparse')\n\nlong_desc = \"\"\"The Google API Client for Python is a client library for\naccessing the Plus, Moderator, and many other Google APIs.\"\"\"\n\nimport googleapiclient\nversion = googleapiclient.__version__\n\nsetup(\n name=\"google-api-python-client\",\n version=version,\n description=\"Google API Client Library for Python\",\n long_description=long_desc,\n author=\"Google Inc.\",\n url=\"http://github.com/google/google-api-python-client/\",\n install_requires=install_requires,\n packages=packages,\n package_data={},\n license=\"Apache 2.0\",\n keywords=\"google api client\",\n classifiers=[\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: OS Independent',\n 'Topic :: Internet :: WWW/HTTP',\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2014 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Setup script for Google API Python client.\n\nAlso installs included versions of third party libraries, if those libraries\nare not already installed.\n\"\"\"\nfrom __future__ import print_function\n\nimport sys\n\nif sys.version_info < (2, 6):\n print('google-api-python-client requires python version >= 2.6.',\n file=sys.stderr)\n sys.exit(1)\nif (3, 1) <= sys.version_info < (3, 3):\n print('google-api-python-client requires python3 version >= 3.3.',\n file=sys.stderr)\n sys.exit(1)\n\nfrom setuptools import setup\nimport pkg_resources\n\ndef _DetectBadness():\n import os\n if 'SKIP_GOOGLEAPICLIENT_COMPAT_CHECK' in os.environ:\n return\n o2c_pkg = None\n try:\n o2c_pkg = pkg_resources.get_distribution('oauth2client')\n except pkg_resources.DistributionNotFound:\n pass\n oauth2client = None\n try:\n import oauth2client\n except ImportError:\n pass\n if o2c_pkg is None and oauth2client is not None:\n raise RuntimeError(\n 'Previous version of google-api-python-client detected; due to a '\n 'packaging issue, we cannot perform an in-place upgrade. 
Please remove '\n 'the old version and re-install this package.'\n )\n\n_DetectBadness()\n\npackages = [\n 'apiclient',\n 'googleapiclient',\n 'googleapiclient/discovery_cache',\n]\n\ninstall_requires = [\n 'httplib2>=0.8',\n 'oauth2client>=1.4.6',\n 'six>=1.6.1',\n 'uritemplate>=0.6',\n]\n\nif sys.version_info < (2, 7):\n install_requires.append('argparse')\n\nlong_desc = \"\"\"The Google API Client for Python is a client library for\naccessing the Plus, Moderator, and many other Google APIs.\"\"\"\n\nimport googleapiclient\nversion = googleapiclient.__version__\n\nsetup(\n name=\"google-api-python-client\",\n version=version,\n description=\"Google API Client Library for Python\",\n long_description=long_desc,\n author=\"Google Inc.\",\n url=\"http://github.com/google/google-api-python-client/\",\n install_requires=install_requires,\n packages=packages,\n package_data={},\n license=\"Apache 2.0\",\n keywords=\"google api client\",\n classifiers=[\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: OS Independent',\n 'Topic :: Internet :: WWW/HTTP',\n ],\n)\n", "path": "setup.py"}]}
| 2,939 | 65 |
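As an aside that is not part of the dataset row above: the golden diff pins the missing subpackage by hand, while a common way to avoid this class of packaging bug is to let setuptools discover subpackages automatically. A minimal sketch under that assumption:

```python
# Sketch only -- not the patch recorded in golden_diff. find_packages()
# walks the source tree and picks up googleapiclient.discovery_cache
# (and any future subpackages) automatically.
from setuptools import setup, find_packages

setup(
    name="google-api-python-client",
    packages=find_packages(
        include=["apiclient", "googleapiclient", "googleapiclient.*"]),
    # remaining metadata (version, install_requires, ...) as in setup.py above
)
```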
gh_patches_debug_11388 | rasdani/github-patches | git_diff | google__mobly-222 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`snippet_client._start_event_client` crashes
because of the extra `host_port` arg.
This crashes all tests that use `@AsyncRpc`.
We should add some unit tests for this...
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mobly/controllers/android_device_lib/snippet_client.py`
Content:
```
1 # Copyright 2016 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """JSON RPC interface to Mobly Snippet Lib."""
15 import logging
16 import re
17 import time
18
19 from mobly import utils
20 from mobly.controllers.android_device_lib import adb
21 from mobly.controllers.android_device_lib import jsonrpc_client_base
22
23 _INSTRUMENTATION_RUNNER_PACKAGE = (
24 'com.google.android.mobly.snippet.SnippetRunner')
25
26 # TODO(adorokhine): delete this in Mobly 1.6 when snippet v0 support is removed.
27 _LAUNCH_CMD_V0 = ('am instrument -w -e action start -e port %s %s/' +
28 _INSTRUMENTATION_RUNNER_PACKAGE)
29
30 _LAUNCH_CMD_V1 = (
31 'am instrument -w -e action start %s/' + _INSTRUMENTATION_RUNNER_PACKAGE)
32
33 _STOP_CMD = (
34 'am instrument -w -e action stop %s/' + _INSTRUMENTATION_RUNNER_PACKAGE)
35
36 # Maximum time to wait for a v0 snippet to start on the device (10 minutes).
37 # TODO(adorokhine): delete this in Mobly 1.6 when snippet v0 support is removed.
38 _APP_START_WAIT_TIME_V0 = 10 * 60
39
40
41 class Error(Exception):
42 pass
43
44
45 class ProtocolVersionError(Error):
46 """Raised when the protocol reported by the snippet is unknown."""
47
48
49 class SnippetClient(jsonrpc_client_base.JsonRpcClientBase):
50 """A client for interacting with snippet APKs using Mobly Snippet Lib.
51
52 See superclass documentation for a list of public attributes.
53
54 It currently supports both v0 and v1 snippet launch protocols, although
55 support for v0 will be removed in a future version.
56
57 For a description of the launch protocols, see the documentation in
58 mobly-snippet-lib, SnippetRunner.java.
59 """
60
61 def __init__(self, package, adb_proxy, log=logging.getLogger()):
62 """Initializes a SnippetClient.
63
64 Args:
65 package: (str) The package name of the apk where the snippets are
66 defined.
67 adb_proxy: (adb.AdbProxy) Adb proxy for running adb commands.
68 log: (logging.Logger) logger to which to send log messages.
69 """
70 super(SnippetClient, self).__init__(app_name=package, log=log)
71 self.package = package
72 self._adb = adb_proxy
73 self._proc = None
74
75 def start_app_and_connect(self):
76 """Overrides superclass. Launches a snippet app and connects to it."""
77 self._check_app_installed()
78
79 # Try launching the app with the v1 protocol. If that fails, fall back
80 # to v0 for compatibility. Use info here so people know exactly what's
81 # happening here, which is helpful since they need to create their own
82 # instrumentations and manifest.
83 self.log.info('Launching snippet apk %s with protocol v1',
84 self.package)
85 cmd = _LAUNCH_CMD_V1 % self.package
86 start_time = time.time()
87 self._proc = self._do_start_app(cmd)
88
89 # "Instrumentation crashed" could be due to several reasons, eg
90 # exception thrown during startup or just a launch protocol 0 snippet
91 # dying because it needs the port flag. Sadly we have no way to tell so
92 # just warn and retry as v0.
93 # TODO(adorokhine): delete this in Mobly 1.6 when snippet v0 support is
94 # removed.
95 line = self._read_line()
96 if line == 'INSTRUMENTATION_RESULT: shortMsg=Process crashed.':
97 self.log.warning('Snippet %s crashed on startup. This might be an '
98 'actual error or a snippet using deprecated v0 '
99 'start protocol. Retrying as a v0 snippet.',
100 self.package)
101 self.host_port = utils.get_available_host_port()
102 # Reuse the host port as the device port in v0 snippet. This isn't
103 # safe in general, but the protocol is deprecated.
104 cmd = _LAUNCH_CMD_V0 % (self.host_port, self.package)
105 self._proc = self._do_start_app(cmd)
106 self._connect_to_v0()
107 else:
108 # Check protocol version and get the device port
109 match = re.match('^SNIPPET START, PROTOCOL ([0-9]+) ([0-9]+)$',
110 line)
111 if not match or match.group(1) != '1':
112 raise ProtocolVersionError(line)
113 self._connect_to_v1()
114 self.log.debug('Snippet %s started after %.1fs on host port %s',
115 self.package, time.time() - start_time, self.host_port)
116
117 def stop_app(self):
118 # Kill the pending 'adb shell am instrument -w' process if there is one.
119 # Although killing the snippet apk would abort this process anyway, we
120 # want to call stop_standing_subprocess() to perform a health check,
121 # print the failure stack trace if there was any, and reap it from the
122 # process table.
123 self.log.debug('Stopping snippet apk %s', self.package)
124 try:
125 # Close the socket connection.
126 self.disconnect()
127 if self._proc:
128 utils.stop_standing_subprocess(self._proc)
129 out = self._adb.shell(_STOP_CMD % self.package).decode('utf-8')
130 if 'OK (0 tests)' not in out:
131 raise Error('Failed to stop existing apk. Unexpected '
132 'output: %s' % out)
133 finally:
134 # Always clean up the adb port
135 if self.host_port:
136 self._adb.forward(['--remove', 'tcp:%d' % self.host_port])
137
138 def _start_event_client(self):
139 """Overrides superclass."""
140 event_client = SnippetClient(
141 package=self.package,
142 host_port=self.host_port,
143 adb_proxy=self._adb,
144 log=self.log)
145 event_client.connect(self.uid,
146 jsonrpc_client_base.JsonRpcCommand.CONTINUE)
147 return event_client
148
149 def _check_app_installed(self):
150 # Check that the Mobly Snippet app is installed.
151 out = self._adb.shell('pm list package')
152 if not utils.grep('^package:%s$' % self.package, out):
153 raise jsonrpc_client_base.AppStartError(
154 '%s is not installed on %s' % (self.package, self._adb.serial))
155 # Check that the app is instrumented.
156 out = self._adb.shell('pm list instrumentation')
157 matched_out = utils.grep('^instrumentation:%s/%s' %
158 (self.package,
159 _INSTRUMENTATION_RUNNER_PACKAGE), out)
160 if not matched_out:
161 raise jsonrpc_client_base.AppStartError(
162 '%s is installed on %s, but it is not instrumented.' %
163 (self.package, self._adb.serial))
164 match = re.search('^instrumentation:(.*)\/(.*) \(target=(.*)\)$',
165 matched_out[0])
166 target_name = match.group(3)
167 # Check that the instrumentation target is installed if it's not the
168 # same as the snippet package.
169 if target_name != self.package:
170 out = self._adb.shell('pm list package')
171 if not utils.grep('^package:%s$' % target_name, out):
172 raise jsonrpc_client_base.AppStartError(
173 'Instrumentation target %s is not installed on %s' %
174 (target_name, self._adb.serial))
175
176 def _do_start_app(self, launch_cmd):
177 adb_cmd = [adb.ADB]
178 if self._adb.serial:
179 adb_cmd += ['-s', self._adb.serial]
180 adb_cmd += ['shell', launch_cmd]
181 return utils.start_standing_subprocess(adb_cmd, shell=False)
182
183 # TODO(adorokhine): delete this in Mobly 1.6 when snippet v0 support is
184 # removed.
185 def _connect_to_v0(self):
186 self.device_port = self.host_port
187 self._adb.forward(
188 ['tcp:%d' % self.host_port, 'tcp:%d' % self.device_port])
189 start_time = time.time()
190 expiration_time = start_time + _APP_START_WAIT_TIME_V0
191 while time.time() < expiration_time:
192 self.log.debug('Attempting to start %s.', self.package)
193 try:
194 self.connect()
195 return
196 except:
197 self.log.debug(
198 'v0 snippet %s is not yet running, retrying',
199 self.package,
200 exc_info=True)
201 time.sleep(1)
202 raise jsonrpc_client_base.AppStartError(
203 '%s failed to start on %s.' % (self.package, self._adb.serial))
204
205 def _connect_to_v1(self):
206 line = self._read_line()
207 match = re.match('^SNIPPET SERVING, PORT ([0-9]+)$', line)
208 if not match:
209 raise ProtocolVersionError(line)
210 self.device_port = int(match.group(1))
211
212 # Forward the device port to a new host port, and connect to that port
213 self.host_port = utils.get_available_host_port()
214 self._adb.forward(
215 ['tcp:%d' % self.host_port, 'tcp:%d' % self.device_port])
216 self.connect()
217
218 def _read_line(self):
219 line = self._proc.stdout.readline().rstrip()
220 self.log.debug('Read line from instrumentation output: "%s"', line)
221 return line
222
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mobly/controllers/android_device_lib/snippet_client.py b/mobly/controllers/android_device_lib/snippet_client.py
--- a/mobly/controllers/android_device_lib/snippet_client.py
+++ b/mobly/controllers/android_device_lib/snippet_client.py
@@ -138,10 +138,8 @@
def _start_event_client(self):
"""Overrides superclass."""
event_client = SnippetClient(
- package=self.package,
- host_port=self.host_port,
- adb_proxy=self._adb,
- log=self.log)
+ package=self.package, adb_proxy=self._adb, log=self.log)
+ event_client.host_port = self.host_port
event_client.connect(self.uid,
jsonrpc_client_base.JsonRpcCommand.CONTINUE)
return event_client
|
{"golden_diff": "diff --git a/mobly/controllers/android_device_lib/snippet_client.py b/mobly/controllers/android_device_lib/snippet_client.py\n--- a/mobly/controllers/android_device_lib/snippet_client.py\n+++ b/mobly/controllers/android_device_lib/snippet_client.py\n@@ -138,10 +138,8 @@\n def _start_event_client(self):\n \"\"\"Overrides superclass.\"\"\"\n event_client = SnippetClient(\n- package=self.package,\n- host_port=self.host_port,\n- adb_proxy=self._adb,\n- log=self.log)\n+ package=self.package, adb_proxy=self._adb, log=self.log)\n+ event_client.host_port = self.host_port\n event_client.connect(self.uid,\n jsonrpc_client_base.JsonRpcCommand.CONTINUE)\n return event_client\n", "issue": "`snippet_client._start_event_client` crashes\nbecause of the extra `host_port` arg.\r\n\r\nThis crashes all tests that use `@AsyncRpc`.\r\n\r\nWe should add some unit tests for this...\n", "before_files": [{"content": "# Copyright 2016 Google Inc.\n# \n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# \n# http://www.apache.org/licenses/LICENSE-2.0\n# \n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"JSON RPC interface to Mobly Snippet Lib.\"\"\"\nimport logging\nimport re\nimport time\n\nfrom mobly import utils\nfrom mobly.controllers.android_device_lib import adb\nfrom mobly.controllers.android_device_lib import jsonrpc_client_base\n\n_INSTRUMENTATION_RUNNER_PACKAGE = (\n 'com.google.android.mobly.snippet.SnippetRunner')\n\n# TODO(adorokhine): delete this in Mobly 1.6 when snippet v0 support is removed.\n_LAUNCH_CMD_V0 = ('am instrument -w -e action start -e port %s %s/' +\n _INSTRUMENTATION_RUNNER_PACKAGE)\n\n_LAUNCH_CMD_V1 = (\n 'am instrument -w -e action start %s/' + _INSTRUMENTATION_RUNNER_PACKAGE)\n\n_STOP_CMD = (\n 'am instrument -w -e action stop %s/' + _INSTRUMENTATION_RUNNER_PACKAGE)\n\n# Maximum time to wait for a v0 snippet to start on the device (10 minutes).\n# TODO(adorokhine): delete this in Mobly 1.6 when snippet v0 support is removed.\n_APP_START_WAIT_TIME_V0 = 10 * 60\n\n\nclass Error(Exception):\n pass\n\n\nclass ProtocolVersionError(Error):\n \"\"\"Raised when the protocol reported by the snippet is unknown.\"\"\"\n\n\nclass SnippetClient(jsonrpc_client_base.JsonRpcClientBase):\n \"\"\"A client for interacting with snippet APKs using Mobly Snippet Lib.\n\n See superclass documentation for a list of public attributes.\n\n It currently supports both v0 and v1 snippet launch protocols, although\n support for v0 will be removed in a future version.\n\n For a description of the launch protocols, see the documentation in\n mobly-snippet-lib, SnippetRunner.java.\n \"\"\"\n\n def __init__(self, package, adb_proxy, log=logging.getLogger()):\n \"\"\"Initializes a SnippetClient.\n \n Args:\n package: (str) The package name of the apk where the snippets are\n defined.\n adb_proxy: (adb.AdbProxy) Adb proxy for running adb commands.\n log: (logging.Logger) logger to which to send log messages.\n \"\"\"\n super(SnippetClient, self).__init__(app_name=package, log=log)\n self.package = package\n self._adb = adb_proxy\n self._proc = None\n\n def start_app_and_connect(self):\n \"\"\"Overrides 
superclass. Launches a snippet app and connects to it.\"\"\"\n self._check_app_installed()\n\n # Try launching the app with the v1 protocol. If that fails, fall back\n # to v0 for compatibility. Use info here so people know exactly what's\n # happening here, which is helpful since they need to create their own\n # instrumentations and manifest.\n self.log.info('Launching snippet apk %s with protocol v1',\n self.package)\n cmd = _LAUNCH_CMD_V1 % self.package\n start_time = time.time()\n self._proc = self._do_start_app(cmd)\n\n # \"Instrumentation crashed\" could be due to several reasons, eg\n # exception thrown during startup or just a launch protocol 0 snippet\n # dying because it needs the port flag. Sadly we have no way to tell so\n # just warn and retry as v0.\n # TODO(adorokhine): delete this in Mobly 1.6 when snippet v0 support is\n # removed.\n line = self._read_line()\n if line == 'INSTRUMENTATION_RESULT: shortMsg=Process crashed.':\n self.log.warning('Snippet %s crashed on startup. This might be an '\n 'actual error or a snippet using deprecated v0 '\n 'start protocol. Retrying as a v0 snippet.',\n self.package)\n self.host_port = utils.get_available_host_port()\n # Reuse the host port as the device port in v0 snippet. This isn't\n # safe in general, but the protocol is deprecated.\n cmd = _LAUNCH_CMD_V0 % (self.host_port, self.package)\n self._proc = self._do_start_app(cmd)\n self._connect_to_v0()\n else:\n # Check protocol version and get the device port\n match = re.match('^SNIPPET START, PROTOCOL ([0-9]+) ([0-9]+)$',\n line)\n if not match or match.group(1) != '1':\n raise ProtocolVersionError(line)\n self._connect_to_v1()\n self.log.debug('Snippet %s started after %.1fs on host port %s',\n self.package, time.time() - start_time, self.host_port)\n\n def stop_app(self):\n # Kill the pending 'adb shell am instrument -w' process if there is one.\n # Although killing the snippet apk would abort this process anyway, we\n # want to call stop_standing_subprocess() to perform a health check,\n # print the failure stack trace if there was any, and reap it from the\n # process table.\n self.log.debug('Stopping snippet apk %s', self.package)\n try:\n # Close the socket connection.\n self.disconnect()\n if self._proc:\n utils.stop_standing_subprocess(self._proc)\n out = self._adb.shell(_STOP_CMD % self.package).decode('utf-8')\n if 'OK (0 tests)' not in out:\n raise Error('Failed to stop existing apk. Unexpected '\n 'output: %s' % out)\n finally:\n # Always clean up the adb port\n if self.host_port:\n self._adb.forward(['--remove', 'tcp:%d' % self.host_port])\n\n def _start_event_client(self):\n \"\"\"Overrides superclass.\"\"\"\n event_client = SnippetClient(\n package=self.package,\n host_port=self.host_port,\n adb_proxy=self._adb,\n log=self.log)\n event_client.connect(self.uid,\n jsonrpc_client_base.JsonRpcCommand.CONTINUE)\n return event_client\n\n def _check_app_installed(self):\n # Check that the Mobly Snippet app is installed.\n out = self._adb.shell('pm list package')\n if not utils.grep('^package:%s$' % self.package, out):\n raise jsonrpc_client_base.AppStartError(\n '%s is not installed on %s' % (self.package, self._adb.serial))\n # Check that the app is instrumented.\n out = self._adb.shell('pm list instrumentation')\n matched_out = utils.grep('^instrumentation:%s/%s' %\n (self.package,\n _INSTRUMENTATION_RUNNER_PACKAGE), out)\n if not matched_out:\n raise jsonrpc_client_base.AppStartError(\n '%s is installed on %s, but it is not instrumented.' 
%\n (self.package, self._adb.serial))\n match = re.search('^instrumentation:(.*)\\/(.*) \\(target=(.*)\\)$',\n matched_out[0])\n target_name = match.group(3)\n # Check that the instrumentation target is installed if it's not the\n # same as the snippet package.\n if target_name != self.package:\n out = self._adb.shell('pm list package')\n if not utils.grep('^package:%s$' % target_name, out):\n raise jsonrpc_client_base.AppStartError(\n 'Instrumentation target %s is not installed on %s' %\n (target_name, self._adb.serial))\n\n def _do_start_app(self, launch_cmd):\n adb_cmd = [adb.ADB]\n if self._adb.serial:\n adb_cmd += ['-s', self._adb.serial]\n adb_cmd += ['shell', launch_cmd]\n return utils.start_standing_subprocess(adb_cmd, shell=False)\n\n # TODO(adorokhine): delete this in Mobly 1.6 when snippet v0 support is\n # removed.\n def _connect_to_v0(self):\n self.device_port = self.host_port\n self._adb.forward(\n ['tcp:%d' % self.host_port, 'tcp:%d' % self.device_port])\n start_time = time.time()\n expiration_time = start_time + _APP_START_WAIT_TIME_V0\n while time.time() < expiration_time:\n self.log.debug('Attempting to start %s.', self.package)\n try:\n self.connect()\n return\n except:\n self.log.debug(\n 'v0 snippet %s is not yet running, retrying',\n self.package,\n exc_info=True)\n time.sleep(1)\n raise jsonrpc_client_base.AppStartError(\n '%s failed to start on %s.' % (self.package, self._adb.serial))\n\n def _connect_to_v1(self):\n line = self._read_line()\n match = re.match('^SNIPPET SERVING, PORT ([0-9]+)$', line)\n if not match:\n raise ProtocolVersionError(line)\n self.device_port = int(match.group(1))\n\n # Forward the device port to a new host port, and connect to that port\n self.host_port = utils.get_available_host_port()\n self._adb.forward(\n ['tcp:%d' % self.host_port, 'tcp:%d' % self.device_port])\n self.connect()\n\n def _read_line(self):\n line = self._proc.stdout.readline().rstrip()\n self.log.debug('Read line from instrumentation output: \"%s\"', line)\n return line\n", "path": "mobly/controllers/android_device_lib/snippet_client.py"}], "after_files": [{"content": "# Copyright 2016 Google Inc.\n# \n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# \n# http://www.apache.org/licenses/LICENSE-2.0\n# \n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"JSON RPC interface to Mobly Snippet Lib.\"\"\"\nimport logging\nimport re\nimport time\n\nfrom mobly import utils\nfrom mobly.controllers.android_device_lib import adb\nfrom mobly.controllers.android_device_lib import jsonrpc_client_base\n\n_INSTRUMENTATION_RUNNER_PACKAGE = (\n 'com.google.android.mobly.snippet.SnippetRunner')\n\n# TODO(adorokhine): delete this in Mobly 1.6 when snippet v0 support is removed.\n_LAUNCH_CMD_V0 = ('am instrument -w -e action start -e port %s %s/' +\n _INSTRUMENTATION_RUNNER_PACKAGE)\n\n_LAUNCH_CMD_V1 = (\n 'am instrument -w -e action start %s/' + _INSTRUMENTATION_RUNNER_PACKAGE)\n\n_STOP_CMD = (\n 'am instrument -w -e action stop %s/' + _INSTRUMENTATION_RUNNER_PACKAGE)\n\n# Maximum time to wait for a v0 snippet to start on the device (10 minutes).\n# TODO(adorokhine): delete this in Mobly 
1.6 when snippet v0 support is removed.\n_APP_START_WAIT_TIME_V0 = 10 * 60\n\n\nclass Error(Exception):\n pass\n\n\nclass ProtocolVersionError(Error):\n \"\"\"Raised when the protocol reported by the snippet is unknown.\"\"\"\n\n\nclass SnippetClient(jsonrpc_client_base.JsonRpcClientBase):\n \"\"\"A client for interacting with snippet APKs using Mobly Snippet Lib.\n\n See superclass documentation for a list of public attributes.\n\n It currently supports both v0 and v1 snippet launch protocols, although\n support for v0 will be removed in a future version.\n\n For a description of the launch protocols, see the documentation in\n mobly-snippet-lib, SnippetRunner.java.\n \"\"\"\n\n def __init__(self, package, adb_proxy, log=logging.getLogger()):\n \"\"\"Initializes a SnippetClient.\n \n Args:\n package: (str) The package name of the apk where the snippets are\n defined.\n adb_proxy: (adb.AdbProxy) Adb proxy for running adb commands.\n log: (logging.Logger) logger to which to send log messages.\n \"\"\"\n super(SnippetClient, self).__init__(app_name=package, log=log)\n self.package = package\n self._adb = adb_proxy\n self._proc = None\n\n def start_app_and_connect(self):\n \"\"\"Overrides superclass. Launches a snippet app and connects to it.\"\"\"\n self._check_app_installed()\n\n # Try launching the app with the v1 protocol. If that fails, fall back\n # to v0 for compatibility. Use info here so people know exactly what's\n # happening here, which is helpful since they need to create their own\n # instrumentations and manifest.\n self.log.info('Launching snippet apk %s with protocol v1',\n self.package)\n cmd = _LAUNCH_CMD_V1 % self.package\n start_time = time.time()\n self._proc = self._do_start_app(cmd)\n\n # \"Instrumentation crashed\" could be due to several reasons, eg\n # exception thrown during startup or just a launch protocol 0 snippet\n # dying because it needs the port flag. Sadly we have no way to tell so\n # just warn and retry as v0.\n # TODO(adorokhine): delete this in Mobly 1.6 when snippet v0 support is\n # removed.\n line = self._read_line()\n if line == 'INSTRUMENTATION_RESULT: shortMsg=Process crashed.':\n self.log.warning('Snippet %s crashed on startup. This might be an '\n 'actual error or a snippet using deprecated v0 '\n 'start protocol. Retrying as a v0 snippet.',\n self.package)\n self.host_port = utils.get_available_host_port()\n # Reuse the host port as the device port in v0 snippet. 
This isn't\n # safe in general, but the protocol is deprecated.\n cmd = _LAUNCH_CMD_V0 % (self.host_port, self.package)\n self._proc = self._do_start_app(cmd)\n self._connect_to_v0()\n else:\n # Check protocol version and get the device port\n match = re.match('^SNIPPET START, PROTOCOL ([0-9]+) ([0-9]+)$',\n line)\n if not match or match.group(1) != '1':\n raise ProtocolVersionError(line)\n self._connect_to_v1()\n self.log.debug('Snippet %s started after %.1fs on host port %s',\n self.package, time.time() - start_time, self.host_port)\n\n def stop_app(self):\n # Kill the pending 'adb shell am instrument -w' process if there is one.\n # Although killing the snippet apk would abort this process anyway, we\n # want to call stop_standing_subprocess() to perform a health check,\n # print the failure stack trace if there was any, and reap it from the\n # process table.\n self.log.debug('Stopping snippet apk %s', self.package)\n try:\n # Close the socket connection.\n self.disconnect()\n if self._proc:\n utils.stop_standing_subprocess(self._proc)\n out = self._adb.shell(_STOP_CMD % self.package).decode('utf-8')\n if 'OK (0 tests)' not in out:\n raise Error('Failed to stop existing apk. Unexpected '\n 'output: %s' % out)\n finally:\n # Always clean up the adb port\n if self.host_port:\n self._adb.forward(['--remove', 'tcp:%d' % self.host_port])\n\n def _start_event_client(self):\n \"\"\"Overrides superclass.\"\"\"\n event_client = SnippetClient(\n package=self.package, adb_proxy=self._adb, log=self.log)\n event_client.host_port = self.host_port\n event_client.connect(self.uid,\n jsonrpc_client_base.JsonRpcCommand.CONTINUE)\n return event_client\n\n def _check_app_installed(self):\n # Check that the Mobly Snippet app is installed.\n out = self._adb.shell('pm list package')\n if not utils.grep('^package:%s$' % self.package, out):\n raise jsonrpc_client_base.AppStartError(\n '%s is not installed on %s' % (self.package, self._adb.serial))\n # Check that the app is instrumented.\n out = self._adb.shell('pm list instrumentation')\n matched_out = utils.grep('^instrumentation:%s/%s' %\n (self.package,\n _INSTRUMENTATION_RUNNER_PACKAGE), out)\n if not matched_out:\n raise jsonrpc_client_base.AppStartError(\n '%s is installed on %s, but it is not instrumented.' 
%\n (self.package, self._adb.serial))\n match = re.search('^instrumentation:(.*)\\/(.*) \\(target=(.*)\\)$',\n matched_out[0])\n target_name = match.group(3)\n # Check that the instrumentation target is installed if it's not the\n # same as the snippet package.\n if target_name != self.package:\n out = self._adb.shell('pm list package')\n if not utils.grep('^package:%s$' % target_name, out):\n raise jsonrpc_client_base.AppStartError(\n 'Instrumentation target %s is not installed on %s' %\n (target_name, self._adb.serial))\n\n def _do_start_app(self, launch_cmd):\n adb_cmd = [adb.ADB]\n if self._adb.serial:\n adb_cmd += ['-s', self._adb.serial]\n adb_cmd += ['shell', launch_cmd]\n return utils.start_standing_subprocess(adb_cmd, shell=False)\n\n # TODO(adorokhine): delete this in Mobly 1.6 when snippet v0 support is\n # removed.\n def _connect_to_v0(self):\n self.device_port = self.host_port\n self._adb.forward(\n ['tcp:%d' % self.host_port, 'tcp:%d' % self.device_port])\n start_time = time.time()\n expiration_time = start_time + _APP_START_WAIT_TIME_V0\n while time.time() < expiration_time:\n self.log.debug('Attempting to start %s.', self.package)\n try:\n self.connect()\n return\n except:\n self.log.debug(\n 'v0 snippet %s is not yet running, retrying',\n self.package,\n exc_info=True)\n time.sleep(1)\n raise jsonrpc_client_base.AppStartError(\n '%s failed to start on %s.' % (self.package, self._adb.serial))\n\n def _connect_to_v1(self):\n line = self._read_line()\n match = re.match('^SNIPPET SERVING, PORT ([0-9]+)$', line)\n if not match:\n raise ProtocolVersionError(line)\n self.device_port = int(match.group(1))\n\n # Forward the device port to a new host port, and connect to that port\n self.host_port = utils.get_available_host_port()\n self._adb.forward(\n ['tcp:%d' % self.host_port, 'tcp:%d' % self.device_port])\n self.connect()\n\n def _read_line(self):\n line = self._proc.stdout.readline().rstrip()\n self.log.debug('Read line from instrumentation output: \"%s\"', line)\n return line\n", "path": "mobly/controllers/android_device_lib/snippet_client.py"}]}
| 3,028 | 170 |
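The issue in this row explicitly asks for unit tests. Below is a hedged test sketch; it assumes `JsonRpcClientBase.__init__` needs no device connection and that `unittest.mock` is acceptable, and it only checks the constructor arguments and `connect()` call that `_start_event_client` makes after the fix.

```python
# Hypothetical test sketch; the module paths appear in the files above, but
# the assumptions noted in the lead-in have not been verified against Mobly.
from unittest import mock

from mobly.controllers.android_device_lib import jsonrpc_client_base
from mobly.controllers.android_device_lib import snippet_client


def test_start_event_client_passes_only_supported_args():
    client = snippet_client.SnippetClient(
        package='com.example.snippet', adb_proxy=mock.MagicMock())
    client.host_port = 12345  # as if start_app_and_connect() already ran
    client.uid = 1
    with mock.patch.object(snippet_client.SnippetClient, 'connect') as mock_connect:
        event_client = client._start_event_client()
    assert event_client.host_port == 12345
    mock_connect.assert_called_once_with(
        1, jsonrpc_client_base.JsonRpcCommand.CONTINUE)
```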
gh_patches_debug_1253 | rasdani/github-patches | git_diff | translate__pootle-3671 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Confusing sentence in permissions view
There is a permission called "Can review translations" that confused me, as I thought there were also reviewers besides suggesters and translators! Hopefully you fix it so that it lands in 2.7.0.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pootle/core/initdb.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # Copyright (C) Pootle contributors.
5 #
6 # This file is a part of the Pootle project. It is distributed under the GPL3
7 # or later license. See the LICENSE file for a copy of the license and the
8 # AUTHORS file for copyright and authorship information.
9
10
11 from django.contrib.auth import get_user_model
12 from django.contrib.auth.models import Permission
13 from django.contrib.contenttypes.models import ContentType
14 from django.utils.translation import ugettext_noop as _
15
16 from pootle.core.models import Revision
17 from pootle_app.models import Directory
18 from pootle_app.models.permissions import PermissionSet, get_pootle_permission
19 from pootle_language.models import Language
20 from pootle_project.models import Project
21 from staticpages.models import StaticPage as Announcement
22
23
24 def initdb():
25 """Populate the database with default initial data.
26
27 This creates the default database to get a working Pootle installation.
28 """
29 create_revision()
30 create_essential_users()
31 create_root_directories()
32 create_template_languages()
33 create_terminology_project()
34 create_pootle_permissions()
35 create_pootle_permission_sets()
36
37 create_default_projects()
38 create_default_languages()
39 create_default_admin()
40
41
42 def create_revision():
43 Revision.initialize()
44
45
46 def create_essential_users():
47 """Create the 'default' and 'nobody' User instances.
48
49 These users are required for Pootle's permission system.
50 """
51 User = get_user_model()
52
53 # The nobody user is used to represent an anonymous user in cases where
54 # we need to associate model information with such a user. An example is
55 # in the permission system: we need a way to store rights for anonymous
56 # users; thus we use the nobody user.
57 criteria = {
58 'username': u"nobody",
59 'full_name': u"any anonymous user",
60 'is_active': True,
61 }
62 nobody, created = User.objects.get_or_create(**criteria)
63 if created:
64 nobody.set_unusable_password()
65 nobody.save()
66
67 # The 'default' user represents any valid, non-anonymous user and is used
68 # to associate information any such user. An example is in the permission
69 # system: we need a way to store default rights for users. We use the
70 # 'default' user for this.
71 #
72 # In a future version of Pootle we should think about using Django's
73 # groups to do better permissions handling.
74 criteria = {
75 'username': u"default",
76 'full_name': u"any authenticated user",
77 'is_active': True,
78 }
79 default, created = User.objects.get_or_create(**criteria)
80 if created:
81 default.set_unusable_password()
82 default.save()
83
84 # The system user represents a system, and is used to
85 # associate updates done by bulk commands as update_stores.
86 criteria = {
87 'username': u"system",
88 'full_name': u"system user",
89 'is_active': True,
90 }
91 system, created = User.objects.get_or_create(**criteria)
92 if created:
93 system.set_unusable_password()
94 system.save()
95
96
97 def create_pootle_permissions():
98 """Create Pootle's directory level permissions."""
99
100 args = {
101 'app_label': "pootle_app",
102 'model': "directory",
103 }
104 pootle_content_type, created = ContentType.objects.get_or_create(**args)
105 pootle_content_type.name = 'pootle'
106 pootle_content_type.save()
107
108 # Create the permissions.
109 permissions = [
110 {
111 'name': _("Can access a project"),
112 'codename': "view",
113 },
114 {
115 'name': _("Cannot access a project"),
116 'codename': "hide",
117 },
118 {
119 'name': _("Can make a suggestion for a translation"),
120 'codename': "suggest",
121 },
122 {
123 'name': _("Can submit a translation"),
124 'codename': "translate",
125 },
126 {
127 'name': _("Can review translations"),
128 'codename': "review",
129 },
130 {
131 'name': _("Can administrate a translation project"),
132 'codename': "administrate",
133 },
134 ]
135
136 criteria = {
137 'content_type': pootle_content_type,
138 }
139
140 for permission in permissions:
141 criteria.update(permission)
142 obj, created = Permission.objects.get_or_create(**criteria)
143
144
145 def create_pootle_permission_sets():
146 """Create the default permission set for the 'nobody' and 'default' users.
147
148 'nobody' is the anonymous (non-logged in) user, and 'default' is the logged
149 in user.
150 """
151 User = get_user_model()
152
153 nobody = User.objects.get(username='nobody')
154 default = User.objects.get(username='default')
155
156 view = get_pootle_permission('view')
157 suggest = get_pootle_permission('suggest')
158 translate = get_pootle_permission('translate')
159
160 # Default permissions for tree root.
161 criteria = {
162 'user': nobody,
163 'directory': Directory.objects.root,
164 }
165 permission_set, created = PermissionSet.objects.get_or_create(**criteria)
166 if created:
167 permission_set.positive_permissions = [view, suggest]
168 permission_set.save()
169
170 criteria['user'] = default
171 permission_set, created = PermissionSet.objects.get_or_create(**criteria)
172 if created:
173 permission_set.positive_permissions = [view, suggest, translate]
174 permission_set.save()
175
176 # Default permissions for templates language.
177 # Override with no permissions for templates language.
178 criteria = {
179 'user': nobody,
180 'directory': Directory.objects.get(pootle_path="/templates/"),
181 }
182 permission_set, created = PermissionSet.objects.get_or_create(**criteria)
183 if created:
184 permission_set.positive_permissions = []
185 permission_set.save()
186
187 criteria['user'] = default
188 permission_set, created = PermissionSet.objects.get_or_create(**criteria)
189 if created:
190 permission_set.positive_permissions = []
191 permission_set.save()
192
193
194 def require_english():
195 """Create the English Language item."""
196 criteria = {
197 'code': "en",
198 'fullname': u"English",
199 'nplurals': 2,
200 'pluralequation': "(n != 1)",
201 }
202 en, created = Language.objects.get_or_create(**criteria)
203 return en
204
205
206 def create_root_directories():
207 """Create the root Directory items."""
208 root, created = Directory.objects.get_or_create(name='')
209 projects, created = Directory.objects.get_or_create(name='projects',
210 parent=root)
211
212
213 def create_template_languages():
214 """Create the 'templates' and English languages.
215
216 The 'templates' language is used to give users access to the untranslated
217 template files.
218 """
219 templates, created = Language.objects.get_or_create(code="templates",
220 fullname=u'Templates')
221 require_english()
222
223
224 def create_terminology_project():
225 """Create the terminology project.
226
227 The terminology project is used to display terminology suggestions while
228 translating.
229 """
230 criteria = {
231 'code': "terminology",
232 'fullname': u"Terminology",
233 'source_language': require_english(),
234 'checkstyle': "terminology",
235 }
236 terminology, created = Project.objects.get_or_create(**criteria)
237
238
239 def create_default_projects():
240 """Create the default projects that we host.
241
242 You might want to add your projects here, although you can also add things
243 through the web interface later.
244 """
245 from pootle_project.models import Project
246
247 en = require_english()
248
249 criteria = {
250 'code': u"tutorial",
251 'source_language': en,
252 'fullname': u"Tutorial",
253 'checkstyle': "standard",
254 'localfiletype': "po",
255 'treestyle': "auto",
256 }
257 tutorial = Project(**criteria)
258 tutorial.save()
259
260 criteria = {
261 'active': True,
262 'title': "Project instructions",
263 'body': ('<div dir="ltr" lang="en">Tutorial project where users can '
264 'play with Pootle and learn more about translation and '
265 'localisation.<br />For more help on localisation, visit the '
266 '<a href="http://docs.translatehouse.org/projects/'
267 'localization-guide/en/latest/guide/start.html">localisation '
268 'guide</a>.</div>'),
269 'virtual_path': "announcements/projects/"+tutorial.code,
270 }
271 ann = Announcement(**criteria)
272 ann.save()
273
274
275 def create_default_languages():
276 """Create the default languages."""
277 from translate.lang import data, factory
278
279 from pootle_language.models import Language
280
281 # import languages from toolkit
282 for code in data.languages.keys():
283 try:
284 tk_lang = factory.getlanguage(code)
285 criteria = {
286 'code': code,
287 'fullname': tk_lang.fullname,
288 'nplurals': tk_lang.nplurals,
289 'pluralequation': tk_lang.pluralequation,
290 }
291 try:
292 criteria['specialchars'] = tk_lang.specialchars
293 except AttributeError:
294 pass
295 lang, created = Language.objects.get_or_create(**criteria)
296 except:
297 pass
298
299
300 def create_default_admin():
301 """Create the default admin user for Pootle.
302
303 You definitely want to change the admin account so that your default
304 install is not accessible with the default credentials. The users 'noboby'
305 and 'default' should be left as is.
306 """
307 User = get_user_model()
308
309 criteria = {
310 'username': u"admin",
311 'full_name': u"Administrator",
312 'is_active': True,
313 'is_superuser': True,
314 }
315 admin = User(**criteria)
316 admin.set_password("admin")
317 admin.save()
318
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pootle/core/initdb.py b/pootle/core/initdb.py
--- a/pootle/core/initdb.py
+++ b/pootle/core/initdb.py
@@ -124,7 +124,7 @@
'codename': "translate",
},
{
- 'name': _("Can review translations"),
+ 'name': _("Can review suggestions"),
'codename': "review",
},
{
|
{"golden_diff": "diff --git a/pootle/core/initdb.py b/pootle/core/initdb.py\n--- a/pootle/core/initdb.py\n+++ b/pootle/core/initdb.py\n@@ -124,7 +124,7 @@\n 'codename': \"translate\",\n },\n {\n- 'name': _(\"Can review translations\"),\n+ 'name': _(\"Can review suggestions\"),\n 'codename': \"review\",\n },\n {\n", "issue": "Confusing sentence in permissions view\nThere is a permission called \"Can review translations\" that confused me as I thought that there are also reviewers beside suggesters and translators! Hopefully you fix it so that it lands in 2.7.0.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\n\nfrom django.contrib.auth import get_user_model\nfrom django.contrib.auth.models import Permission\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.utils.translation import ugettext_noop as _\n\nfrom pootle.core.models import Revision\nfrom pootle_app.models import Directory\nfrom pootle_app.models.permissions import PermissionSet, get_pootle_permission\nfrom pootle_language.models import Language\nfrom pootle_project.models import Project\nfrom staticpages.models import StaticPage as Announcement\n\n\ndef initdb():\n \"\"\"Populate the database with default initial data.\n\n This creates the default database to get a working Pootle installation.\n \"\"\"\n create_revision()\n create_essential_users()\n create_root_directories()\n create_template_languages()\n create_terminology_project()\n create_pootle_permissions()\n create_pootle_permission_sets()\n\n create_default_projects()\n create_default_languages()\n create_default_admin()\n\n\ndef create_revision():\n Revision.initialize()\n\n\ndef create_essential_users():\n \"\"\"Create the 'default' and 'nobody' User instances.\n\n These users are required for Pootle's permission system.\n \"\"\"\n User = get_user_model()\n\n # The nobody user is used to represent an anonymous user in cases where\n # we need to associate model information with such a user. An example is\n # in the permission system: we need a way to store rights for anonymous\n # users; thus we use the nobody user.\n criteria = {\n 'username': u\"nobody\",\n 'full_name': u\"any anonymous user\",\n 'is_active': True,\n }\n nobody, created = User.objects.get_or_create(**criteria)\n if created:\n nobody.set_unusable_password()\n nobody.save()\n\n # The 'default' user represents any valid, non-anonymous user and is used\n # to associate information any such user. An example is in the permission\n # system: we need a way to store default rights for users. 
We use the\n # 'default' user for this.\n #\n # In a future version of Pootle we should think about using Django's\n # groups to do better permissions handling.\n criteria = {\n 'username': u\"default\",\n 'full_name': u\"any authenticated user\",\n 'is_active': True,\n }\n default, created = User.objects.get_or_create(**criteria)\n if created:\n default.set_unusable_password()\n default.save()\n\n # The system user represents a system, and is used to\n # associate updates done by bulk commands as update_stores.\n criteria = {\n 'username': u\"system\",\n 'full_name': u\"system user\",\n 'is_active': True,\n }\n system, created = User.objects.get_or_create(**criteria)\n if created:\n system.set_unusable_password()\n system.save()\n\n\ndef create_pootle_permissions():\n \"\"\"Create Pootle's directory level permissions.\"\"\"\n\n args = {\n 'app_label': \"pootle_app\",\n 'model': \"directory\",\n }\n pootle_content_type, created = ContentType.objects.get_or_create(**args)\n pootle_content_type.name = 'pootle'\n pootle_content_type.save()\n\n # Create the permissions.\n permissions = [\n {\n 'name': _(\"Can access a project\"),\n 'codename': \"view\",\n },\n {\n 'name': _(\"Cannot access a project\"),\n 'codename': \"hide\",\n },\n {\n 'name': _(\"Can make a suggestion for a translation\"),\n 'codename': \"suggest\",\n },\n {\n 'name': _(\"Can submit a translation\"),\n 'codename': \"translate\",\n },\n {\n 'name': _(\"Can review translations\"),\n 'codename': \"review\",\n },\n {\n 'name': _(\"Can administrate a translation project\"),\n 'codename': \"administrate\",\n },\n ]\n\n criteria = {\n 'content_type': pootle_content_type,\n }\n\n for permission in permissions:\n criteria.update(permission)\n obj, created = Permission.objects.get_or_create(**criteria)\n\n\ndef create_pootle_permission_sets():\n \"\"\"Create the default permission set for the 'nobody' and 'default' users.\n\n 'nobody' is the anonymous (non-logged in) user, and 'default' is the logged\n in user.\n \"\"\"\n User = get_user_model()\n\n nobody = User.objects.get(username='nobody')\n default = User.objects.get(username='default')\n\n view = get_pootle_permission('view')\n suggest = get_pootle_permission('suggest')\n translate = get_pootle_permission('translate')\n\n # Default permissions for tree root.\n criteria = {\n 'user': nobody,\n 'directory': Directory.objects.root,\n }\n permission_set, created = PermissionSet.objects.get_or_create(**criteria)\n if created:\n permission_set.positive_permissions = [view, suggest]\n permission_set.save()\n\n criteria['user'] = default\n permission_set, created = PermissionSet.objects.get_or_create(**criteria)\n if created:\n permission_set.positive_permissions = [view, suggest, translate]\n permission_set.save()\n\n # Default permissions for templates language.\n # Override with no permissions for templates language.\n criteria = {\n 'user': nobody,\n 'directory': Directory.objects.get(pootle_path=\"/templates/\"),\n }\n permission_set, created = PermissionSet.objects.get_or_create(**criteria)\n if created:\n permission_set.positive_permissions = []\n permission_set.save()\n\n criteria['user'] = default\n permission_set, created = PermissionSet.objects.get_or_create(**criteria)\n if created:\n permission_set.positive_permissions = []\n permission_set.save()\n\n\ndef require_english():\n \"\"\"Create the English Language item.\"\"\"\n criteria = {\n 'code': \"en\",\n 'fullname': u\"English\",\n 'nplurals': 2,\n 'pluralequation': \"(n != 1)\",\n }\n en, created = 
Language.objects.get_or_create(**criteria)\n return en\n\n\ndef create_root_directories():\n \"\"\"Create the root Directory items.\"\"\"\n root, created = Directory.objects.get_or_create(name='')\n projects, created = Directory.objects.get_or_create(name='projects',\n parent=root)\n\n\ndef create_template_languages():\n \"\"\"Create the 'templates' and English languages.\n\n The 'templates' language is used to give users access to the untranslated\n template files.\n \"\"\"\n templates, created = Language.objects.get_or_create(code=\"templates\",\n fullname=u'Templates')\n require_english()\n\n\ndef create_terminology_project():\n \"\"\"Create the terminology project.\n\n The terminology project is used to display terminology suggestions while\n translating.\n \"\"\"\n criteria = {\n 'code': \"terminology\",\n 'fullname': u\"Terminology\",\n 'source_language': require_english(),\n 'checkstyle': \"terminology\",\n }\n terminology, created = Project.objects.get_or_create(**criteria)\n\n\ndef create_default_projects():\n \"\"\"Create the default projects that we host.\n\n You might want to add your projects here, although you can also add things\n through the web interface later.\n \"\"\"\n from pootle_project.models import Project\n\n en = require_english()\n\n criteria = {\n 'code': u\"tutorial\",\n 'source_language': en,\n 'fullname': u\"Tutorial\",\n 'checkstyle': \"standard\",\n 'localfiletype': \"po\",\n 'treestyle': \"auto\",\n }\n tutorial = Project(**criteria)\n tutorial.save()\n\n criteria = {\n 'active': True,\n 'title': \"Project instructions\",\n 'body': ('<div dir=\"ltr\" lang=\"en\">Tutorial project where users can '\n 'play with Pootle and learn more about translation and '\n 'localisation.<br />For more help on localisation, visit the '\n '<a href=\"http://docs.translatehouse.org/projects/'\n 'localization-guide/en/latest/guide/start.html\">localisation '\n 'guide</a>.</div>'),\n 'virtual_path': \"announcements/projects/\"+tutorial.code,\n }\n ann = Announcement(**criteria)\n ann.save()\n\n\ndef create_default_languages():\n \"\"\"Create the default languages.\"\"\"\n from translate.lang import data, factory\n\n from pootle_language.models import Language\n\n # import languages from toolkit\n for code in data.languages.keys():\n try:\n tk_lang = factory.getlanguage(code)\n criteria = {\n 'code': code,\n 'fullname': tk_lang.fullname,\n 'nplurals': tk_lang.nplurals,\n 'pluralequation': tk_lang.pluralequation,\n }\n try:\n criteria['specialchars'] = tk_lang.specialchars\n except AttributeError:\n pass\n lang, created = Language.objects.get_or_create(**criteria)\n except:\n pass\n\n\ndef create_default_admin():\n \"\"\"Create the default admin user for Pootle.\n\n You definitely want to change the admin account so that your default\n install is not accessible with the default credentials. The users 'noboby'\n and 'default' should be left as is.\n \"\"\"\n User = get_user_model()\n\n criteria = {\n 'username': u\"admin\",\n 'full_name': u\"Administrator\",\n 'is_active': True,\n 'is_superuser': True,\n }\n admin = User(**criteria)\n admin.set_password(\"admin\")\n admin.save()\n", "path": "pootle/core/initdb.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. 
See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\n\nfrom django.contrib.auth import get_user_model\nfrom django.contrib.auth.models import Permission\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.utils.translation import ugettext_noop as _\n\nfrom pootle.core.models import Revision\nfrom pootle_app.models import Directory\nfrom pootle_app.models.permissions import PermissionSet, get_pootle_permission\nfrom pootle_language.models import Language\nfrom pootle_project.models import Project\nfrom staticpages.models import StaticPage as Announcement\n\n\ndef initdb():\n \"\"\"Populate the database with default initial data.\n\n This creates the default database to get a working Pootle installation.\n \"\"\"\n create_revision()\n create_essential_users()\n create_root_directories()\n create_template_languages()\n create_terminology_project()\n create_pootle_permissions()\n create_pootle_permission_sets()\n\n create_default_projects()\n create_default_languages()\n create_default_admin()\n\n\ndef create_revision():\n Revision.initialize()\n\n\ndef create_essential_users():\n \"\"\"Create the 'default' and 'nobody' User instances.\n\n These users are required for Pootle's permission system.\n \"\"\"\n User = get_user_model()\n\n # The nobody user is used to represent an anonymous user in cases where\n # we need to associate model information with such a user. An example is\n # in the permission system: we need a way to store rights for anonymous\n # users; thus we use the nobody user.\n criteria = {\n 'username': u\"nobody\",\n 'full_name': u\"any anonymous user\",\n 'is_active': True,\n }\n nobody, created = User.objects.get_or_create(**criteria)\n if created:\n nobody.set_unusable_password()\n nobody.save()\n\n # The 'default' user represents any valid, non-anonymous user and is used\n # to associate information any such user. An example is in the permission\n # system: we need a way to store default rights for users. 
We use the\n # 'default' user for this.\n #\n # In a future version of Pootle we should think about using Django's\n # groups to do better permissions handling.\n criteria = {\n 'username': u\"default\",\n 'full_name': u\"any authenticated user\",\n 'is_active': True,\n }\n default, created = User.objects.get_or_create(**criteria)\n if created:\n default.set_unusable_password()\n default.save()\n\n # The system user represents a system, and is used to\n # associate updates done by bulk commands as update_stores.\n criteria = {\n 'username': u\"system\",\n 'full_name': u\"system user\",\n 'is_active': True,\n }\n system, created = User.objects.get_or_create(**criteria)\n if created:\n system.set_unusable_password()\n system.save()\n\n\ndef create_pootle_permissions():\n \"\"\"Create Pootle's directory level permissions.\"\"\"\n\n args = {\n 'app_label': \"pootle_app\",\n 'model': \"directory\",\n }\n pootle_content_type, created = ContentType.objects.get_or_create(**args)\n pootle_content_type.name = 'pootle'\n pootle_content_type.save()\n\n # Create the permissions.\n permissions = [\n {\n 'name': _(\"Can access a project\"),\n 'codename': \"view\",\n },\n {\n 'name': _(\"Cannot access a project\"),\n 'codename': \"hide\",\n },\n {\n 'name': _(\"Can make a suggestion for a translation\"),\n 'codename': \"suggest\",\n },\n {\n 'name': _(\"Can submit a translation\"),\n 'codename': \"translate\",\n },\n {\n 'name': _(\"Can review suggestions\"),\n 'codename': \"review\",\n },\n {\n 'name': _(\"Can administrate a translation project\"),\n 'codename': \"administrate\",\n },\n ]\n\n criteria = {\n 'content_type': pootle_content_type,\n }\n\n for permission in permissions:\n criteria.update(permission)\n obj, created = Permission.objects.get_or_create(**criteria)\n\n\ndef create_pootle_permission_sets():\n \"\"\"Create the default permission set for the 'nobody' and 'default' users.\n\n 'nobody' is the anonymous (non-logged in) user, and 'default' is the logged\n in user.\n \"\"\"\n User = get_user_model()\n\n nobody = User.objects.get(username='nobody')\n default = User.objects.get(username='default')\n\n view = get_pootle_permission('view')\n suggest = get_pootle_permission('suggest')\n translate = get_pootle_permission('translate')\n\n # Default permissions for tree root.\n criteria = {\n 'user': nobody,\n 'directory': Directory.objects.root,\n }\n permission_set, created = PermissionSet.objects.get_or_create(**criteria)\n if created:\n permission_set.positive_permissions = [view, suggest]\n permission_set.save()\n\n criteria['user'] = default\n permission_set, created = PermissionSet.objects.get_or_create(**criteria)\n if created:\n permission_set.positive_permissions = [view, suggest, translate]\n permission_set.save()\n\n # Default permissions for templates language.\n # Override with no permissions for templates language.\n criteria = {\n 'user': nobody,\n 'directory': Directory.objects.get(pootle_path=\"/templates/\"),\n }\n permission_set, created = PermissionSet.objects.get_or_create(**criteria)\n if created:\n permission_set.positive_permissions = []\n permission_set.save()\n\n criteria['user'] = default\n permission_set, created = PermissionSet.objects.get_or_create(**criteria)\n if created:\n permission_set.positive_permissions = []\n permission_set.save()\n\n\ndef require_english():\n \"\"\"Create the English Language item.\"\"\"\n criteria = {\n 'code': \"en\",\n 'fullname': u\"English\",\n 'nplurals': 2,\n 'pluralequation': \"(n != 1)\",\n }\n en, created = 
Language.objects.get_or_create(**criteria)\n return en\n\n\ndef create_root_directories():\n \"\"\"Create the root Directory items.\"\"\"\n root, created = Directory.objects.get_or_create(name='')\n projects, created = Directory.objects.get_or_create(name='projects',\n parent=root)\n\n\ndef create_template_languages():\n \"\"\"Create the 'templates' and English languages.\n\n The 'templates' language is used to give users access to the untranslated\n template files.\n \"\"\"\n templates, created = Language.objects.get_or_create(code=\"templates\",\n fullname=u'Templates')\n require_english()\n\n\ndef create_terminology_project():\n \"\"\"Create the terminology project.\n\n The terminology project is used to display terminology suggestions while\n translating.\n \"\"\"\n criteria = {\n 'code': \"terminology\",\n 'fullname': u\"Terminology\",\n 'source_language': require_english(),\n 'checkstyle': \"terminology\",\n }\n terminology, created = Project.objects.get_or_create(**criteria)\n\n\ndef create_default_projects():\n \"\"\"Create the default projects that we host.\n\n You might want to add your projects here, although you can also add things\n through the web interface later.\n \"\"\"\n from pootle_project.models import Project\n\n en = require_english()\n\n criteria = {\n 'code': u\"tutorial\",\n 'source_language': en,\n 'fullname': u\"Tutorial\",\n 'checkstyle': \"standard\",\n 'localfiletype': \"po\",\n 'treestyle': \"auto\",\n }\n tutorial = Project(**criteria)\n tutorial.save()\n\n criteria = {\n 'active': True,\n 'title': \"Project instructions\",\n 'body': ('<div dir=\"ltr\" lang=\"en\">Tutorial project where users can '\n 'play with Pootle and learn more about translation and '\n 'localisation.<br />For more help on localisation, visit the '\n '<a href=\"http://docs.translatehouse.org/projects/'\n 'localization-guide/en/latest/guide/start.html\">localisation '\n 'guide</a>.</div>'),\n 'virtual_path': \"announcements/projects/\"+tutorial.code,\n }\n ann = Announcement(**criteria)\n ann.save()\n\n\ndef create_default_languages():\n \"\"\"Create the default languages.\"\"\"\n from translate.lang import data, factory\n\n from pootle_language.models import Language\n\n # import languages from toolkit\n for code in data.languages.keys():\n try:\n tk_lang = factory.getlanguage(code)\n criteria = {\n 'code': code,\n 'fullname': tk_lang.fullname,\n 'nplurals': tk_lang.nplurals,\n 'pluralequation': tk_lang.pluralequation,\n }\n try:\n criteria['specialchars'] = tk_lang.specialchars\n except AttributeError:\n pass\n lang, created = Language.objects.get_or_create(**criteria)\n except:\n pass\n\n\ndef create_default_admin():\n \"\"\"Create the default admin user for Pootle.\n\n You definitely want to change the admin account so that your default\n install is not accessible with the default credentials. The users 'noboby'\n and 'default' should be left as is.\n \"\"\"\n User = get_user_model()\n\n criteria = {\n 'username': u\"admin\",\n 'full_name': u\"Administrator\",\n 'is_active': True,\n 'is_superuser': True,\n }\n admin = User(**criteria)\n admin.set_password(\"admin\")\n admin.save()\n", "path": "pootle/core/initdb.py"}]}
| 3,297 | 98 |
gh_patches_debug_6228
|
rasdani/github-patches
|
git_diff
|
apache__airflow-22772
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
LocalFileSystemToGCSOperator give false positive while copying file from src to dest, even when src has no file
### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==6.4.0
### Apache Airflow version
2.1.4
### Operating System
Debian GNU/Linux 10 (buster)
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
When you run LocalFilesSystemToGCSOperator with the params for src and dest, the operator reports a false positive when there are no files present under the specified src directory. I expected it to fail stating the specified directory doesn't have any file.
[2022-03-15 14:26:15,475] {taskinstance.py:1107} INFO - Executing <Task(LocalFilesystemToGCSOperator): upload_files_to_GCS> on 2022-03-15T14:25:59.554459+00:00
[2022-03-15 14:26:15,484] {standard_task_runner.py:52} INFO - Started process 709 to run task
[2022-03-15 14:26:15,492] {standard_task_runner.py:76} INFO - Running: ['***', 'tasks', 'run', 'dag', 'upload_files_to_GCS', '2022-03-15T14:25:59.554459+00:00', '--job-id', '1562', '--pool', 'default_pool', '--raw', '--subdir', 'DAGS_FOLDER/dag.py', '--cfg-path', '/tmp/tmp_e9t7pl9', '--error-file', '/tmp/tmpyij6m4er']
[2022-03-15 14:26:15,493] {standard_task_runner.py:77} INFO - Job 1562: Subtask upload_files_to_GCS
[2022-03-15 14:26:15,590] {logging_mixin.py:104} INFO - Running <TaskInstance: dag.upload_files_to_GCS 2022-03-15T14:25:59.554459+00:00 [running]> on host 653e566fd372
[2022-03-15 14:26:15,752] {taskinstance.py:1300} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=jet2
AIRFLOW_CTX_DAG_ID=dag
AIRFLOW_CTX_TASK_ID=upload_files_to_GCS
AIRFLOW_CTX_EXECUTION_DATE=2022-03-15T14:25:59.554459+00:00
AIRFLOW_CTX_DAG_RUN_ID=manual__2022-03-15T14:25:59.554459+00:00
[2022-03-15 14:26:19,357] {taskinstance.py:1204} INFO - Marking task as SUCCESS. gag, task_id=upload_files_to_GCS, execution_date=20220315T142559, start_date=20220315T142615, end_date=20220315T142619
[2022-03-15 14:26:19,422] {taskinstance.py:1265} INFO - 1 downstream tasks scheduled from follow-on schedule check
[2022-03-15 14:26:19,458] {local_task_job.py:149} INFO - Task exited with return code 0
### What you think should happen instead
The operator should at least info that no files were copied than just making it successful.
### How to reproduce
- create a Dag with LocalFilesSystemToGCSOperator
- specify an empty directory as src and a gcp bucket as bucket_name, dest param(can be blank).
- run the dag
### Anything else
No
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `airflow/providers/google/cloud/transfers/local_to_gcs.py`
Content:
```
1 #
2 # Licensed to the Apache Software Foundation (ASF) under one
3 # or more contributor license agreements. See the NOTICE file
4 # distributed with this work for additional information
5 # regarding copyright ownership. The ASF licenses this file
6 # to you under the Apache License, Version 2.0 (the
7 # "License"); you may not use this file except in compliance
8 # with the License. You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing,
13 # software distributed under the License is distributed on an
14 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
15 # KIND, either express or implied. See the License for the
16 # specific language governing permissions and limitations
17 # under the License.
18 """This module contains operator for uploading local file(s) to GCS."""
19 import os
20 import warnings
21 from glob import glob
22 from typing import TYPE_CHECKING, Optional, Sequence, Union
23
24 from airflow.models import BaseOperator
25 from airflow.providers.google.cloud.hooks.gcs import GCSHook
26
27 if TYPE_CHECKING:
28 from airflow.utils.context import Context
29
30
31 class LocalFilesystemToGCSOperator(BaseOperator):
32 """
33 Uploads a file or list of files to Google Cloud Storage.
34 Optionally can compress the file for upload.
35
36 .. seealso::
37 For more information on how to use this operator, take a look at the guide:
38 :ref:`howto/operator:LocalFilesystemToGCSOperator`
39
40 :param src: Path to the local file, or list of local files. Path can be either absolute
41 (e.g. /path/to/file.ext) or relative (e.g. ../../foo/*/*.csv). (templated)
42 :param dst: Destination path within the specified bucket on GCS (e.g. /path/to/file.ext).
43 If multiple files are being uploaded, specify object prefix with trailing backslash
44 (e.g. /path/to/directory/) (templated)
45 :param bucket: The bucket to upload to. (templated)
46 :param gcp_conn_id: (Optional) The connection ID used to connect to Google Cloud.
47 :param google_cloud_storage_conn_id: (Deprecated) The connection ID used to connect to Google Cloud.
48 This parameter has been deprecated. You should pass the gcp_conn_id parameter instead.
49 :param mime_type: The mime-type string
50 :param delegate_to: The account to impersonate, if any
51 :param gzip: Allows for file to be compressed and uploaded as gzip
52 :param impersonation_chain: Optional service account to impersonate using short-term
53 credentials, or chained list of accounts required to get the access_token
54 of the last account in the list, which will be impersonated in the request.
55 If set as a string, the account must grant the originating account
56 the Service Account Token Creator IAM role.
57 If set as a sequence, the identities from the list must grant
58 Service Account Token Creator IAM role to the directly preceding identity, with first
59 account from the list granting this role to the originating account (templated).
60 """
61
62 template_fields: Sequence[str] = (
63 'src',
64 'dst',
65 'bucket',
66 'impersonation_chain',
67 )
68
69 def __init__(
70 self,
71 *,
72 src,
73 dst,
74 bucket,
75 gcp_conn_id='google_cloud_default',
76 google_cloud_storage_conn_id=None,
77 mime_type='application/octet-stream',
78 delegate_to=None,
79 gzip=False,
80 impersonation_chain: Optional[Union[str, Sequence[str]]] = None,
81 **kwargs,
82 ):
83 super().__init__(**kwargs)
84
85 if google_cloud_storage_conn_id:
86 warnings.warn(
87 "The google_cloud_storage_conn_id parameter has been deprecated. You should pass "
88 "the gcp_conn_id parameter.",
89 DeprecationWarning,
90 stacklevel=3,
91 )
92 gcp_conn_id = google_cloud_storage_conn_id
93
94 self.src = src
95 self.dst = dst
96 self.bucket = bucket
97 self.gcp_conn_id = gcp_conn_id
98 self.mime_type = mime_type
99 self.delegate_to = delegate_to
100 self.gzip = gzip
101 self.impersonation_chain = impersonation_chain
102
103 def execute(self, context: 'Context'):
104 """Uploads a file or list of files to Google Cloud Storage"""
105 hook = GCSHook(
106 gcp_conn_id=self.gcp_conn_id,
107 delegate_to=self.delegate_to,
108 impersonation_chain=self.impersonation_chain,
109 )
110
111 filepaths = self.src if isinstance(self.src, list) else glob(self.src)
112 if os.path.basename(self.dst): # path to a file
113 if len(filepaths) > 1: # multiple file upload
114 raise ValueError(
115 "'dst' parameter references filepath. Please specify "
116 "directory (with trailing backslash) to upload multiple "
117 "files. e.g. /path/to/directory/"
118 )
119 object_paths = [self.dst]
120 else: # directory is provided
121 object_paths = [os.path.join(self.dst, os.path.basename(filepath)) for filepath in filepaths]
122
123 for filepath, object_path in zip(filepaths, object_paths):
124 hook.upload(
125 bucket_name=self.bucket,
126 object_name=object_path,
127 mime_type=self.mime_type,
128 filename=filepath,
129 gzip=self.gzip,
130 )
131
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/airflow/providers/google/cloud/transfers/local_to_gcs.py b/airflow/providers/google/cloud/transfers/local_to_gcs.py
--- a/airflow/providers/google/cloud/transfers/local_to_gcs.py
+++ b/airflow/providers/google/cloud/transfers/local_to_gcs.py
@@ -109,6 +109,8 @@
)
filepaths = self.src if isinstance(self.src, list) else glob(self.src)
+ if not filepaths:
+ raise FileNotFoundError(self.src)
if os.path.basename(self.dst): # path to a file
if len(filepaths) > 1: # multiple file upload
raise ValueError(
|
{"golden_diff": "diff --git a/airflow/providers/google/cloud/transfers/local_to_gcs.py b/airflow/providers/google/cloud/transfers/local_to_gcs.py\n--- a/airflow/providers/google/cloud/transfers/local_to_gcs.py\n+++ b/airflow/providers/google/cloud/transfers/local_to_gcs.py\n@@ -109,6 +109,8 @@\n )\n \n filepaths = self.src if isinstance(self.src, list) else glob(self.src)\n+ if not filepaths:\n+ raise FileNotFoundError(self.src)\n if os.path.basename(self.dst): # path to a file\n if len(filepaths) > 1: # multiple file upload\n raise ValueError(\n", "issue": "LocalFileSystemToGCSOperator give false positive while copying file from src to dest, even when src has no file\n### Apache Airflow Provider(s)\n\ngoogle\n\n### Versions of Apache Airflow Providers\n\napache-airflow-providers-google==6.4.0\n\n### Apache Airflow version\n\n2.1.4\n\n### Operating System\n\nDebian GNU/Linux 10 (buster)\n\n### Deployment\n\nDocker-Compose\n\n### Deployment details\n\n_No response_\n\n### What happened\n\nWhen you run LocalFilesSystemToGCSOperator with the params for src and dest, the operator reports a false positive when there are no files present under the specified src directory. I expected it to fail stating the specified directory doesn't have any file.\r\n\r\n[2022-03-15 14:26:15,475] {taskinstance.py:1107} INFO - Executing <Task(LocalFilesystemToGCSOperator): upload_files_to_GCS> on 2022-03-15T14:25:59.554459+00:00\r\n[2022-03-15 14:26:15,484] {standard_task_runner.py:52} INFO - Started process 709 to run task\r\n[2022-03-15 14:26:15,492] {standard_task_runner.py:76} INFO - Running: ['***', 'tasks', 'run', 'dag', 'upload_files_to_GCS', '2022-03-15T14:25:59.554459+00:00', '--job-id', '1562', '--pool', 'default_pool', '--raw', '--subdir', 'DAGS_FOLDER/dag.py', '--cfg-path', '/tmp/tmp_e9t7pl9', '--error-file', '/tmp/tmpyij6m4er']\r\n[2022-03-15 14:26:15,493] {standard_task_runner.py:77} INFO - Job 1562: Subtask upload_files_to_GCS\r\n[2022-03-15 14:26:15,590] {logging_mixin.py:104} INFO - Running <TaskInstance: dag.upload_files_to_GCS 2022-03-15T14:25:59.554459+00:00 [running]> on host 653e566fd372\r\n[2022-03-15 14:26:15,752] {taskinstance.py:1300} INFO - Exporting the following env vars:\r\nAIRFLOW_CTX_DAG_OWNER=jet2\r\nAIRFLOW_CTX_DAG_ID=dag\r\nAIRFLOW_CTX_TASK_ID=upload_files_to_GCS\r\nAIRFLOW_CTX_EXECUTION_DATE=2022-03-15T14:25:59.554459+00:00\r\nAIRFLOW_CTX_DAG_RUN_ID=manual__2022-03-15T14:25:59.554459+00:00\r\n[2022-03-15 14:26:19,357] {taskinstance.py:1204} INFO - Marking task as SUCCESS. gag, task_id=upload_files_to_GCS, execution_date=20220315T142559, start_date=20220315T142615, end_date=20220315T142619\r\n[2022-03-15 14:26:19,422] {taskinstance.py:1265} INFO - 1 downstream tasks scheduled from follow-on schedule check\r\n[2022-03-15 14:26:19,458] {local_task_job.py:149} INFO - Task exited with return code 0\n\n### What you think should happen instead\n\nThe operator should at least info that no files were copied than just making it successful. \n\n### How to reproduce\n\n- create a Dag with LocalFilesSystemToGCSOperator \r\n- specify an empty directory as src and a gcp bucket as bucket_name, dest param(can be blank). 
\r\n- run the dag\n\n### Anything else\n\nNo\n\n### Are you willing to submit PR?\n\n- [ ] Yes I am willing to submit a PR!\n\n### Code of Conduct\n\n- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)\n\n", "before_files": [{"content": "#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\"\"\"This module contains operator for uploading local file(s) to GCS.\"\"\"\nimport os\nimport warnings\nfrom glob import glob\nfrom typing import TYPE_CHECKING, Optional, Sequence, Union\n\nfrom airflow.models import BaseOperator\nfrom airflow.providers.google.cloud.hooks.gcs import GCSHook\n\nif TYPE_CHECKING:\n from airflow.utils.context import Context\n\n\nclass LocalFilesystemToGCSOperator(BaseOperator):\n \"\"\"\n Uploads a file or list of files to Google Cloud Storage.\n Optionally can compress the file for upload.\n\n .. seealso::\n For more information on how to use this operator, take a look at the guide:\n :ref:`howto/operator:LocalFilesystemToGCSOperator`\n\n :param src: Path to the local file, or list of local files. Path can be either absolute\n (e.g. /path/to/file.ext) or relative (e.g. ../../foo/*/*.csv). (templated)\n :param dst: Destination path within the specified bucket on GCS (e.g. /path/to/file.ext).\n If multiple files are being uploaded, specify object prefix with trailing backslash\n (e.g. /path/to/directory/) (templated)\n :param bucket: The bucket to upload to. (templated)\n :param gcp_conn_id: (Optional) The connection ID used to connect to Google Cloud.\n :param google_cloud_storage_conn_id: (Deprecated) The connection ID used to connect to Google Cloud.\n This parameter has been deprecated. 
You should pass the gcp_conn_id parameter instead.\n :param mime_type: The mime-type string\n :param delegate_to: The account to impersonate, if any\n :param gzip: Allows for file to be compressed and uploaded as gzip\n :param impersonation_chain: Optional service account to impersonate using short-term\n credentials, or chained list of accounts required to get the access_token\n of the last account in the list, which will be impersonated in the request.\n If set as a string, the account must grant the originating account\n the Service Account Token Creator IAM role.\n If set as a sequence, the identities from the list must grant\n Service Account Token Creator IAM role to the directly preceding identity, with first\n account from the list granting this role to the originating account (templated).\n \"\"\"\n\n template_fields: Sequence[str] = (\n 'src',\n 'dst',\n 'bucket',\n 'impersonation_chain',\n )\n\n def __init__(\n self,\n *,\n src,\n dst,\n bucket,\n gcp_conn_id='google_cloud_default',\n google_cloud_storage_conn_id=None,\n mime_type='application/octet-stream',\n delegate_to=None,\n gzip=False,\n impersonation_chain: Optional[Union[str, Sequence[str]]] = None,\n **kwargs,\n ):\n super().__init__(**kwargs)\n\n if google_cloud_storage_conn_id:\n warnings.warn(\n \"The google_cloud_storage_conn_id parameter has been deprecated. You should pass \"\n \"the gcp_conn_id parameter.\",\n DeprecationWarning,\n stacklevel=3,\n )\n gcp_conn_id = google_cloud_storage_conn_id\n\n self.src = src\n self.dst = dst\n self.bucket = bucket\n self.gcp_conn_id = gcp_conn_id\n self.mime_type = mime_type\n self.delegate_to = delegate_to\n self.gzip = gzip\n self.impersonation_chain = impersonation_chain\n\n def execute(self, context: 'Context'):\n \"\"\"Uploads a file or list of files to Google Cloud Storage\"\"\"\n hook = GCSHook(\n gcp_conn_id=self.gcp_conn_id,\n delegate_to=self.delegate_to,\n impersonation_chain=self.impersonation_chain,\n )\n\n filepaths = self.src if isinstance(self.src, list) else glob(self.src)\n if os.path.basename(self.dst): # path to a file\n if len(filepaths) > 1: # multiple file upload\n raise ValueError(\n \"'dst' parameter references filepath. Please specify \"\n \"directory (with trailing backslash) to upload multiple \"\n \"files. e.g. /path/to/directory/\"\n )\n object_paths = [self.dst]\n else: # directory is provided\n object_paths = [os.path.join(self.dst, os.path.basename(filepath)) for filepath in filepaths]\n\n for filepath, object_path in zip(filepaths, object_paths):\n hook.upload(\n bucket_name=self.bucket,\n object_name=object_path,\n mime_type=self.mime_type,\n filename=filepath,\n gzip=self.gzip,\n )\n", "path": "airflow/providers/google/cloud/transfers/local_to_gcs.py"}], "after_files": [{"content": "#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. 
See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\"\"\"This module contains operator for uploading local file(s) to GCS.\"\"\"\nimport os\nimport warnings\nfrom glob import glob\nfrom typing import TYPE_CHECKING, Optional, Sequence, Union\n\nfrom airflow.models import BaseOperator\nfrom airflow.providers.google.cloud.hooks.gcs import GCSHook\n\nif TYPE_CHECKING:\n from airflow.utils.context import Context\n\n\nclass LocalFilesystemToGCSOperator(BaseOperator):\n \"\"\"\n Uploads a file or list of files to Google Cloud Storage.\n Optionally can compress the file for upload.\n\n .. seealso::\n For more information on how to use this operator, take a look at the guide:\n :ref:`howto/operator:LocalFilesystemToGCSOperator`\n\n :param src: Path to the local file, or list of local files. Path can be either absolute\n (e.g. /path/to/file.ext) or relative (e.g. ../../foo/*/*.csv). (templated)\n :param dst: Destination path within the specified bucket on GCS (e.g. /path/to/file.ext).\n If multiple files are being uploaded, specify object prefix with trailing backslash\n (e.g. /path/to/directory/) (templated)\n :param bucket: The bucket to upload to. (templated)\n :param gcp_conn_id: (Optional) The connection ID used to connect to Google Cloud.\n :param google_cloud_storage_conn_id: (Deprecated) The connection ID used to connect to Google Cloud.\n This parameter has been deprecated. You should pass the gcp_conn_id parameter instead.\n :param mime_type: The mime-type string\n :param delegate_to: The account to impersonate, if any\n :param gzip: Allows for file to be compressed and uploaded as gzip\n :param impersonation_chain: Optional service account to impersonate using short-term\n credentials, or chained list of accounts required to get the access_token\n of the last account in the list, which will be impersonated in the request.\n If set as a string, the account must grant the originating account\n the Service Account Token Creator IAM role.\n If set as a sequence, the identities from the list must grant\n Service Account Token Creator IAM role to the directly preceding identity, with first\n account from the list granting this role to the originating account (templated).\n \"\"\"\n\n template_fields: Sequence[str] = (\n 'src',\n 'dst',\n 'bucket',\n 'impersonation_chain',\n )\n\n def __init__(\n self,\n *,\n src,\n dst,\n bucket,\n gcp_conn_id='google_cloud_default',\n google_cloud_storage_conn_id=None,\n mime_type='application/octet-stream',\n delegate_to=None,\n gzip=False,\n impersonation_chain: Optional[Union[str, Sequence[str]]] = None,\n **kwargs,\n ):\n super().__init__(**kwargs)\n\n if google_cloud_storage_conn_id:\n warnings.warn(\n \"The google_cloud_storage_conn_id parameter has been deprecated. 
You should pass \"\n \"the gcp_conn_id parameter.\",\n DeprecationWarning,\n stacklevel=3,\n )\n gcp_conn_id = google_cloud_storage_conn_id\n\n self.src = src\n self.dst = dst\n self.bucket = bucket\n self.gcp_conn_id = gcp_conn_id\n self.mime_type = mime_type\n self.delegate_to = delegate_to\n self.gzip = gzip\n self.impersonation_chain = impersonation_chain\n\n def execute(self, context: 'Context'):\n \"\"\"Uploads a file or list of files to Google Cloud Storage\"\"\"\n hook = GCSHook(\n gcp_conn_id=self.gcp_conn_id,\n delegate_to=self.delegate_to,\n impersonation_chain=self.impersonation_chain,\n )\n\n filepaths = self.src if isinstance(self.src, list) else glob(self.src)\n if not filepaths:\n raise FileNotFoundError(self.src)\n if os.path.basename(self.dst): # path to a file\n if len(filepaths) > 1: # multiple file upload\n raise ValueError(\n \"'dst' parameter references filepath. Please specify \"\n \"directory (with trailing backslash) to upload multiple \"\n \"files. e.g. /path/to/directory/\"\n )\n object_paths = [self.dst]\n else: # directory is provided\n object_paths = [os.path.join(self.dst, os.path.basename(filepath)) for filepath in filepaths]\n\n for filepath, object_path in zip(filepaths, object_paths):\n hook.upload(\n bucket_name=self.bucket,\n object_name=object_path,\n mime_type=self.mime_type,\n filename=filepath,\n gzip=self.gzip,\n )\n", "path": "airflow/providers/google/cloud/transfers/local_to_gcs.py"}]}
| 2,806 | 148 |
gh_patches_debug_9386
|
rasdani/github-patches
|
git_diff
|
rotki__rotki-2259
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
test_upgrade_db_22_to_23_without_frontend_settings fails in OSX tests
## Problem Definition
We noticed this in the nightly runs here: https://github.com/rotki/rotki/runs/1811061566?check_suite_focus=true
It fails only in OSX and has never failed in linux.
```
______________ test_upgrade_db_22_to_23_without_frontend_settings ______________
data_dir = PosixPath('/Users/runner/.cache/.rotkehlchen-test-dir')
user_data_dir = PosixPath('/Users/runner/.cache/.rotkehlchen-test-dir/testuser')
def test_upgrade_db_22_to_23_without_frontend_settings(data_dir, user_data_dir):
"""Test upgrading the DB from version 22 to version 23.
Tests the case where frontend settings were not populated and also the cache
file movement and deletion. Also test deleletion of deprecated historical_data_start
"""
msg_aggregator = MessagesAggregator()
_use_prepared_db(user_data_dir, 'v22_rotkehlchen_wo_frontend_settings.db')
db_v22 = _init_db_with_target_version(
target_version=22,
user_data_dir=user_data_dir,
msg_aggregator=msg_aggregator,
)
cursor = db_v22.conn.cursor()
# Create cache files under the data directory
(data_dir / 'forex_history_file.json').touch()
(data_dir / 'price_history_BTC_EUR.json').touch()
(data_dir / 'price_history_aDAI_USD.json').touch()
(data_dir / 'price_history_YFI_USD.json').touch()
# Also create an innocent json file and a random file
(data_dir / 'random.json').touch()
(data_dir / 'random.txt').touch()
# Check all settings except 'frontend_settings' exist
assert cursor.execute(
'SELECT COUNT(*) FROM settings WHERE name = "frontend_settings";',
).fetchone()[0] == 0
assert cursor.execute(
'SELECT COUNT(*) FROM settings WHERE name IN '
'("thousand_separator", "decimal_separator", "currency_location");',
).fetchone()[0] == 3
# Check we got a historical data start entry to remove
assert cursor.execute(
'SELECT COUNT(*) FROM settings WHERE name = "historical_data_start";',
).fetchone()[0] == 1
# Migrate to v23
db = _init_db_with_target_version(
target_version=23,
user_data_dir=user_data_dir,
msg_aggregator=msg_aggregator,
)
cursor = db.conn.cursor()
# Make sure the settings have been removed
assert cursor.execute(
'SELECT COUNT(*) FROM settings WHERE name IN '
'("thousand_separator", "decimal_separator", "currency_location");',
).fetchone()[0] == 0
assert cursor.execute(
'SELECT COUNT(*) FROM settings WHERE name = "historical_data_start";',
).fetchone()[0] == 0
# Make sure the settings have been migrated into 'frontend_settings'
frontend_settings = cursor.execute(
'SELECT value FROM settings WHERE name = "frontend_settings";',
).fetchone()[0]
frontend_settings_map = json.loads(frontend_settings)
assert frontend_settings_map['thousand_separator'] == ','
assert frontend_settings_map['decimal_separator'] == '.'
assert frontend_settings_map['currency_location'] == 'after'
# Assure the cache files were deleted
assert not (data_dir / 'price_history_BTC_EUR.json').is_file()
assert not (data_dir / 'price_history_aDAI_USD.json').is_file()
assert not (data_dir / 'price_history_YFI_USD.json').is_file()
# and that the forex history cache file moved
> assert (data_dir / 'price_history' / 'forex_history_file.json').is_file()
E AssertionError: assert False
E + where False = <bound method Path.is_file of PosixPath('/Users/runner/.cache/.rotkehlchen-test-dir/price_history/forex_history_file.json')>()
E + where <bound method Path.is_file of PosixPath('/Users/runner/.cache/.rotkehlchen-test-dir/price_history/forex_history_file.json')> = ((PosixPath('/Users/runner/.cache/.rotkehlchen-test-dir') / 'price_history') / 'forex_history_file.json').is_file
rotkehlchen/tests/db/test_db_upgrades.py:1311: AssertionError
```
## Task
Figure out why and fix it
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rotkehlchen/db/upgrades/v22_v23.py`
Content:
```
1 import json
2 from typing import TYPE_CHECKING
3 from rotkehlchen.utils.misc import get_or_make_price_history_dir
4 import os
5 from pathlib import Path
6 import glob
7 import shutil
8
9 if TYPE_CHECKING:
10 from rotkehlchen.db.dbhandler import DBHandler
11
12
13 def upgrade_v22_to_v23(db: 'DBHandler') -> None:
14 """Upgrades the DB from v22 to v23
15
16 - Migrates the settings entries 'thousand_separator', 'decimal_separator'
17 and 'currency_location' into the 'frontend_settings' entry.
18 - Deletes Bitfinex trades and their used query range, so trades can be
19 populated again with the right `fee_asset`.
20 - Delete all cryptocompare price cache files. Move forex price cache to price_history directory
21 """
22 settings = ('"thousand_separator"', '"decimal_separator"', '"currency_location"')
23 cursor = db.conn.cursor()
24 # Get the settings and put them in a dict
25 setting_value_map = dict(
26 cursor.execute(
27 f'SELECT name, value FROM settings WHERE name IN ({",".join(settings)});',
28 ).fetchall(),
29 )
30 # If the settings exist, migrate them into the 'frontend_settings' entry
31 if setting_value_map:
32 frontend_settings = cursor.execute(
33 'SELECT value FROM settings WHERE name = "frontend_settings";',
34 ).fetchone()
35
36 if frontend_settings is not None:
37 setting_value_map.update(json.loads(frontend_settings[0]))
38
39 cursor.execute(
40 'INSERT OR REPLACE INTO settings(name, value) VALUES(?, ?)',
41 ('frontend_settings', json.dumps(setting_value_map)),
42 )
43 # Delete the settings
44 cursor.execute(f'DELETE FROM settings WHERE name IN ({",".join(settings)});')
45 # Delete Bitfinex used_query_ranges
46 cursor.execute('DELETE FROM used_query_ranges WHERE name = "bitfinex_trades";')
47 # Delete Bitfinex trades
48 cursor.execute('DELETE FROM trades WHERE location = "T";')
49 # Delete deprecated historical data start setting
50 cursor.execute('DELETE from settings WHERE name="historical_data_start";')
51 db.conn.commit()
52
53 # -- Now move forex history to the new directory and remove all old cache files
54 data_directory = db.user_data_dir.parent
55 price_history_dir = get_or_make_price_history_dir(data_directory)
56 forex_history_file = data_directory / 'price_history_forex.json'
57 if forex_history_file.is_file():
58 shutil.move(
59 forex_history_file, # type: ignore
60 price_history_dir / 'forex_history_file.json',
61 )
62
63 prefix = os.path.join(str(data_directory), 'price_history_')
64 prefix = prefix.replace('\\', '\\\\')
65 files_list = glob.glob(prefix + '*.json')
66 for file_ in files_list:
67 file_ = file_.replace('\\\\', '\\')
68 try:
69 Path(file_).unlink()
70 except OSError:
71 pass
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/rotkehlchen/db/upgrades/v22_v23.py b/rotkehlchen/db/upgrades/v22_v23.py
--- a/rotkehlchen/db/upgrades/v22_v23.py
+++ b/rotkehlchen/db/upgrades/v22_v23.py
@@ -51,6 +51,8 @@
db.conn.commit()
# -- Now move forex history to the new directory and remove all old cache files
+ # We botched this. Should have been forex_history_file.json -> price_history_forex.json
+ # and not the other way around
data_directory = db.user_data_dir.parent
price_history_dir = get_or_make_price_history_dir(data_directory)
forex_history_file = data_directory / 'price_history_forex.json'
|
{"golden_diff": "diff --git a/rotkehlchen/db/upgrades/v22_v23.py b/rotkehlchen/db/upgrades/v22_v23.py\n--- a/rotkehlchen/db/upgrades/v22_v23.py\n+++ b/rotkehlchen/db/upgrades/v22_v23.py\n@@ -51,6 +51,8 @@\n db.conn.commit()\n \n # -- Now move forex history to the new directory and remove all old cache files\n+ # We botched this. Should have been forex_history_file.json -> price_history_forex.json\n+ # and not the other way around\n data_directory = db.user_data_dir.parent\n price_history_dir = get_or_make_price_history_dir(data_directory)\n forex_history_file = data_directory / 'price_history_forex.json'\n", "issue": "test_upgrade_db_22_to_23_without_frontend_settings fails in OSX tests\n## Problem Definition\r\n\r\nWe noticed this in the nightly runs here: https://github.com/rotki/rotki/runs/1811061566?check_suite_focus=true\r\n\r\nIt fails only in OSX and has never failed in linux.\r\n\r\n```\r\n______________ test_upgrade_db_22_to_23_without_frontend_settings ______________\r\n\r\ndata_dir = PosixPath('/Users/runner/.cache/.rotkehlchen-test-dir')\r\nuser_data_dir = PosixPath('/Users/runner/.cache/.rotkehlchen-test-dir/testuser')\r\n\r\n def test_upgrade_db_22_to_23_without_frontend_settings(data_dir, user_data_dir):\r\n \"\"\"Test upgrading the DB from version 22 to version 23.\r\n \r\n Tests the case where frontend settings were not populated and also the cache\r\n file movement and deletion. Also test deleletion of deprecated historical_data_start\r\n \"\"\"\r\n msg_aggregator = MessagesAggregator()\r\n _use_prepared_db(user_data_dir, 'v22_rotkehlchen_wo_frontend_settings.db')\r\n db_v22 = _init_db_with_target_version(\r\n target_version=22,\r\n user_data_dir=user_data_dir,\r\n msg_aggregator=msg_aggregator,\r\n )\r\n cursor = db_v22.conn.cursor()\r\n \r\n # Create cache files under the data directory\r\n (data_dir / 'forex_history_file.json').touch()\r\n (data_dir / 'price_history_BTC_EUR.json').touch()\r\n (data_dir / 'price_history_aDAI_USD.json').touch()\r\n (data_dir / 'price_history_YFI_USD.json').touch()\r\n # Also create an innocent json file and a random file\r\n (data_dir / 'random.json').touch()\r\n (data_dir / 'random.txt').touch()\r\n # Check all settings except 'frontend_settings' exist\r\n assert cursor.execute(\r\n 'SELECT COUNT(*) FROM settings WHERE name = \"frontend_settings\";',\r\n ).fetchone()[0] == 0\r\n assert cursor.execute(\r\n 'SELECT COUNT(*) FROM settings WHERE name IN '\r\n '(\"thousand_separator\", \"decimal_separator\", \"currency_location\");',\r\n ).fetchone()[0] == 3\r\n # Check we got a historical data start entry to remove\r\n assert cursor.execute(\r\n 'SELECT COUNT(*) FROM settings WHERE name = \"historical_data_start\";',\r\n ).fetchone()[0] == 1\r\n \r\n # Migrate to v23\r\n db = _init_db_with_target_version(\r\n target_version=23,\r\n user_data_dir=user_data_dir,\r\n msg_aggregator=msg_aggregator,\r\n )\r\n cursor = db.conn.cursor()\r\n \r\n # Make sure the settings have been removed\r\n assert cursor.execute(\r\n 'SELECT COUNT(*) FROM settings WHERE name IN '\r\n '(\"thousand_separator\", \"decimal_separator\", \"currency_location\");',\r\n ).fetchone()[0] == 0\r\n assert cursor.execute(\r\n 'SELECT COUNT(*) FROM settings WHERE name = \"historical_data_start\";',\r\n ).fetchone()[0] == 0\r\n \r\n # Make sure the settings have been migrated into 'frontend_settings'\r\n frontend_settings = cursor.execute(\r\n 'SELECT value FROM settings WHERE name = \"frontend_settings\";',\r\n ).fetchone()[0]\r\n frontend_settings_map = 
json.loads(frontend_settings)\r\n assert frontend_settings_map['thousand_separator'] == ','\r\n assert frontend_settings_map['decimal_separator'] == '.'\r\n assert frontend_settings_map['currency_location'] == 'after'\r\n \r\n # Assure the cache files were deleted\r\n assert not (data_dir / 'price_history_BTC_EUR.json').is_file()\r\n assert not (data_dir / 'price_history_aDAI_USD.json').is_file()\r\n assert not (data_dir / 'price_history_YFI_USD.json').is_file()\r\n # and that the forex history cache file moved\r\n> assert (data_dir / 'price_history' / 'forex_history_file.json').is_file()\r\nE AssertionError: assert False\r\nE + where False = <bound method Path.is_file of PosixPath('/Users/runner/.cache/.rotkehlchen-test-dir/price_history/forex_history_file.json')>()\r\nE + where <bound method Path.is_file of PosixPath('/Users/runner/.cache/.rotkehlchen-test-dir/price_history/forex_history_file.json')> = ((PosixPath('/Users/runner/.cache/.rotkehlchen-test-dir') / 'price_history') / 'forex_history_file.json').is_file\r\n\r\nrotkehlchen/tests/db/test_db_upgrades.py:1311: AssertionError\r\n```\r\n\r\n## Task\r\n\r\nFigure out why and fix it\n", "before_files": [{"content": "import json\nfrom typing import TYPE_CHECKING\nfrom rotkehlchen.utils.misc import get_or_make_price_history_dir\nimport os\nfrom pathlib import Path\nimport glob\nimport shutil\n\nif TYPE_CHECKING:\n from rotkehlchen.db.dbhandler import DBHandler\n\n\ndef upgrade_v22_to_v23(db: 'DBHandler') -> None:\n \"\"\"Upgrades the DB from v22 to v23\n\n - Migrates the settings entries 'thousand_separator', 'decimal_separator'\n and 'currency_location' into the 'frontend_settings' entry.\n - Deletes Bitfinex trades and their used query range, so trades can be\n populated again with the right `fee_asset`.\n - Delete all cryptocompare price cache files. 
Move forex price cache to price_history directory\n \"\"\"\n settings = ('\"thousand_separator\"', '\"decimal_separator\"', '\"currency_location\"')\n cursor = db.conn.cursor()\n # Get the settings and put them in a dict\n setting_value_map = dict(\n cursor.execute(\n f'SELECT name, value FROM settings WHERE name IN ({\",\".join(settings)});',\n ).fetchall(),\n )\n # If the settings exist, migrate them into the 'frontend_settings' entry\n if setting_value_map:\n frontend_settings = cursor.execute(\n 'SELECT value FROM settings WHERE name = \"frontend_settings\";',\n ).fetchone()\n\n if frontend_settings is not None:\n setting_value_map.update(json.loads(frontend_settings[0]))\n\n cursor.execute(\n 'INSERT OR REPLACE INTO settings(name, value) VALUES(?, ?)',\n ('frontend_settings', json.dumps(setting_value_map)),\n )\n # Delete the settings\n cursor.execute(f'DELETE FROM settings WHERE name IN ({\",\".join(settings)});')\n # Delete Bitfinex used_query_ranges\n cursor.execute('DELETE FROM used_query_ranges WHERE name = \"bitfinex_trades\";')\n # Delete Bitfinex trades\n cursor.execute('DELETE FROM trades WHERE location = \"T\";')\n # Delete deprecated historical data start setting\n cursor.execute('DELETE from settings WHERE name=\"historical_data_start\";')\n db.conn.commit()\n\n # -- Now move forex history to the new directory and remove all old cache files\n data_directory = db.user_data_dir.parent\n price_history_dir = get_or_make_price_history_dir(data_directory)\n forex_history_file = data_directory / 'price_history_forex.json'\n if forex_history_file.is_file():\n shutil.move(\n forex_history_file, # type: ignore\n price_history_dir / 'forex_history_file.json',\n )\n\n prefix = os.path.join(str(data_directory), 'price_history_')\n prefix = prefix.replace('\\\\', '\\\\\\\\')\n files_list = glob.glob(prefix + '*.json')\n for file_ in files_list:\n file_ = file_.replace('\\\\\\\\', '\\\\')\n try:\n Path(file_).unlink()\n except OSError:\n pass\n", "path": "rotkehlchen/db/upgrades/v22_v23.py"}], "after_files": [{"content": "import json\nfrom typing import TYPE_CHECKING\nfrom rotkehlchen.utils.misc import get_or_make_price_history_dir\nimport os\nfrom pathlib import Path\nimport glob\nimport shutil\n\nif TYPE_CHECKING:\n from rotkehlchen.db.dbhandler import DBHandler\n\n\ndef upgrade_v22_to_v23(db: 'DBHandler') -> None:\n \"\"\"Upgrades the DB from v22 to v23\n\n - Migrates the settings entries 'thousand_separator', 'decimal_separator'\n and 'currency_location' into the 'frontend_settings' entry.\n - Deletes Bitfinex trades and their used query range, so trades can be\n populated again with the right `fee_asset`.\n - Delete all cryptocompare price cache files. 
Move forex price cache to price_history directory\n \"\"\"\n settings = ('\"thousand_separator\"', '\"decimal_separator\"', '\"currency_location\"')\n cursor = db.conn.cursor()\n # Get the settings and put them in a dict\n setting_value_map = dict(\n cursor.execute(\n f'SELECT name, value FROM settings WHERE name IN ({\",\".join(settings)});',\n ).fetchall(),\n )\n # If the settings exist, migrate them into the 'frontend_settings' entry\n if setting_value_map:\n frontend_settings = cursor.execute(\n 'SELECT value FROM settings WHERE name = \"frontend_settings\";',\n ).fetchone()\n\n if frontend_settings is not None:\n setting_value_map.update(json.loads(frontend_settings[0]))\n\n cursor.execute(\n 'INSERT OR REPLACE INTO settings(name, value) VALUES(?, ?)',\n ('frontend_settings', json.dumps(setting_value_map)),\n )\n # Delete the settings\n cursor.execute(f'DELETE FROM settings WHERE name IN ({\",\".join(settings)});')\n # Delete Bitfinex used_query_ranges\n cursor.execute('DELETE FROM used_query_ranges WHERE name = \"bitfinex_trades\";')\n # Delete Bitfinex trades\n cursor.execute('DELETE FROM trades WHERE location = \"T\";')\n # Delete deprecated historical data start setting\n cursor.execute('DELETE from settings WHERE name=\"historical_data_start\";')\n db.conn.commit()\n\n # -- Now move forex history to the new directory and remove all old cache files\n # We botched this. Should have been forex_history_file.json -> price_history_forex.json\n # and not the other way around\n data_directory = db.user_data_dir.parent\n price_history_dir = get_or_make_price_history_dir(data_directory)\n forex_history_file = data_directory / 'price_history_forex.json'\n if forex_history_file.is_file():\n shutil.move(\n forex_history_file, # type: ignore\n price_history_dir / 'forex_history_file.json',\n )\n\n prefix = os.path.join(str(data_directory), 'price_history_')\n prefix = prefix.replace('\\\\', '\\\\\\\\')\n files_list = glob.glob(prefix + '*.json')\n for file_ in files_list:\n file_ = file_.replace('\\\\\\\\', '\\\\')\n try:\n Path(file_).unlink()\n except OSError:\n pass\n", "path": "rotkehlchen/db/upgrades/v22_v23.py"}]}
| 2,050 | 179 |
gh_patches_debug_26190
|
rasdani/github-patches
|
git_diff
|
numpy__numpy-13320
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BUILD: what is generate_mtrand_c.py ?
xref #13163
Does anyone know what is ['numpy/random/mtrand/generate_mtrand_c.py'](https://github.com/numpy/numpy/blob/v1.16.2/numpy/random/mtrand/generate_mtrand_c.py) and why it is in ['MANIFEST.in'](https://github.com/numpy/numpy/blob/v1.16.2/MANIFEST.in#L11) ? Is it used by some downstream package? It does not seem to be used in [creating the wheels](https://github.com/MacPython/numpy-wheels)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `numpy/random/mtrand/generate_mtrand_c.py`
Content:
```
1 #!/usr/bin/env python
2 from __future__ import division, absolute_import, print_function
3
4 import sys
5 import re
6 import os
7
8 unused_internal_funcs = ['__Pyx_PrintItem',
9 '__Pyx_PrintNewline',
10 '__Pyx_ReRaise',
11 #'__Pyx_GetExcValue',
12 '__Pyx_ArgTypeTest',
13 '__Pyx_SetVtable',
14 '__Pyx_GetVtable',
15 '__Pyx_CreateClass']
16
17 if __name__ == '__main__':
18 # Use cython here so that long docstrings are broken up.
19 # This is needed for some VC++ compilers.
20 os.system('cython mtrand.pyx')
21 mtrand_c = open('mtrand.c', 'r')
22 processed = open('mtrand_pp.c', 'w')
23 unused_funcs_str = '(' + '|'.join(unused_internal_funcs) + ')'
24 uifpat = re.compile(r'static \w+ \*?'+unused_funcs_str+r'.*/\*proto\*/')
25 linepat = re.compile(r'/\* ".*/mtrand.pyx":')
26 for linenum, line in enumerate(mtrand_c):
27 m = re.match(r'^(\s+arrayObject\w*\s*=\s*[(])[(]PyObject\s*[*][)]',
28 line)
29 if m:
30 line = '%s(PyArrayObject *)%s' % (m.group(1), line[m.end():])
31 m = uifpat.match(line)
32 if m:
33 line = ''
34 m = re.search(unused_funcs_str, line)
35 if m:
36 print("%s was declared unused, but is used at line %d" % (m.group(),
37 linenum+1), file=sys.stderr)
38 line = linepat.sub(r'/* "mtrand.pyx":', line)
39 processed.write(line)
40 mtrand_c.close()
41 processed.close()
42 os.rename('mtrand_pp.c', 'mtrand.c')
43
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/numpy/random/mtrand/generate_mtrand_c.py b/numpy/random/mtrand/generate_mtrand_c.py
deleted file mode 100644
--- a/numpy/random/mtrand/generate_mtrand_c.py
+++ /dev/null
@@ -1,42 +0,0 @@
-#!/usr/bin/env python
-from __future__ import division, absolute_import, print_function
-
-import sys
-import re
-import os
-
-unused_internal_funcs = ['__Pyx_PrintItem',
- '__Pyx_PrintNewline',
- '__Pyx_ReRaise',
- #'__Pyx_GetExcValue',
- '__Pyx_ArgTypeTest',
- '__Pyx_SetVtable',
- '__Pyx_GetVtable',
- '__Pyx_CreateClass']
-
-if __name__ == '__main__':
- # Use cython here so that long docstrings are broken up.
- # This is needed for some VC++ compilers.
- os.system('cython mtrand.pyx')
- mtrand_c = open('mtrand.c', 'r')
- processed = open('mtrand_pp.c', 'w')
- unused_funcs_str = '(' + '|'.join(unused_internal_funcs) + ')'
- uifpat = re.compile(r'static \w+ \*?'+unused_funcs_str+r'.*/\*proto\*/')
- linepat = re.compile(r'/\* ".*/mtrand.pyx":')
- for linenum, line in enumerate(mtrand_c):
- m = re.match(r'^(\s+arrayObject\w*\s*=\s*[(])[(]PyObject\s*[*][)]',
- line)
- if m:
- line = '%s(PyArrayObject *)%s' % (m.group(1), line[m.end():])
- m = uifpat.match(line)
- if m:
- line = ''
- m = re.search(unused_funcs_str, line)
- if m:
- print("%s was declared unused, but is used at line %d" % (m.group(),
- linenum+1), file=sys.stderr)
- line = linepat.sub(r'/* "mtrand.pyx":', line)
- processed.write(line)
- mtrand_c.close()
- processed.close()
- os.rename('mtrand_pp.c', 'mtrand.c')
|
{"golden_diff": "diff --git a/numpy/random/mtrand/generate_mtrand_c.py b/numpy/random/mtrand/generate_mtrand_c.py\ndeleted file mode 100644\n--- a/numpy/random/mtrand/generate_mtrand_c.py\n+++ /dev/null\n@@ -1,42 +0,0 @@\n-#!/usr/bin/env python\n-from __future__ import division, absolute_import, print_function\n-\n-import sys\n-import re\n-import os\n-\n-unused_internal_funcs = ['__Pyx_PrintItem',\n- '__Pyx_PrintNewline',\n- '__Pyx_ReRaise',\n- #'__Pyx_GetExcValue',\n- '__Pyx_ArgTypeTest',\n- '__Pyx_SetVtable',\n- '__Pyx_GetVtable',\n- '__Pyx_CreateClass']\n-\n-if __name__ == '__main__':\n- # Use cython here so that long docstrings are broken up.\n- # This is needed for some VC++ compilers.\n- os.system('cython mtrand.pyx')\n- mtrand_c = open('mtrand.c', 'r')\n- processed = open('mtrand_pp.c', 'w')\n- unused_funcs_str = '(' + '|'.join(unused_internal_funcs) + ')'\n- uifpat = re.compile(r'static \\w+ \\*?'+unused_funcs_str+r'.*/\\*proto\\*/')\n- linepat = re.compile(r'/\\* \".*/mtrand.pyx\":')\n- for linenum, line in enumerate(mtrand_c):\n- m = re.match(r'^(\\s+arrayObject\\w*\\s*=\\s*[(])[(]PyObject\\s*[*][)]',\n- line)\n- if m:\n- line = '%s(PyArrayObject *)%s' % (m.group(1), line[m.end():])\n- m = uifpat.match(line)\n- if m:\n- line = ''\n- m = re.search(unused_funcs_str, line)\n- if m:\n- print(\"%s was declared unused, but is used at line %d\" % (m.group(),\n- linenum+1), file=sys.stderr)\n- line = linepat.sub(r'/* \"mtrand.pyx\":', line)\n- processed.write(line)\n- mtrand_c.close()\n- processed.close()\n- os.rename('mtrand_pp.c', 'mtrand.c')\n", "issue": "BUILD: what is generate_mtrand_c.py ?\nxref #13163 \r\n\r\nDoes anyone know what is ['numpy/random/mtrand/generate_mtrand_c.py'](https://github.com/numpy/numpy/blob/v1.16.2/numpy/random/mtrand/generate_mtrand_c.py) and why it is in ['MANIFEST.in'](https://github.com/numpy/numpy/blob/v1.16.2/MANIFEST.in#L11) ? Is it used by some downstream package? It does not seem to be used in [creating the wheels](https://github.com/MacPython/numpy-wheels)\n", "before_files": [{"content": "#!/usr/bin/env python\nfrom __future__ import division, absolute_import, print_function\n\nimport sys\nimport re\nimport os\n\nunused_internal_funcs = ['__Pyx_PrintItem',\n '__Pyx_PrintNewline',\n '__Pyx_ReRaise',\n #'__Pyx_GetExcValue',\n '__Pyx_ArgTypeTest',\n '__Pyx_SetVtable',\n '__Pyx_GetVtable',\n '__Pyx_CreateClass']\n\nif __name__ == '__main__':\n # Use cython here so that long docstrings are broken up.\n # This is needed for some VC++ compilers.\n os.system('cython mtrand.pyx')\n mtrand_c = open('mtrand.c', 'r')\n processed = open('mtrand_pp.c', 'w')\n unused_funcs_str = '(' + '|'.join(unused_internal_funcs) + ')'\n uifpat = re.compile(r'static \\w+ \\*?'+unused_funcs_str+r'.*/\\*proto\\*/')\n linepat = re.compile(r'/\\* \".*/mtrand.pyx\":')\n for linenum, line in enumerate(mtrand_c):\n m = re.match(r'^(\\s+arrayObject\\w*\\s*=\\s*[(])[(]PyObject\\s*[*][)]',\n line)\n if m:\n line = '%s(PyArrayObject *)%s' % (m.group(1), line[m.end():])\n m = uifpat.match(line)\n if m:\n line = ''\n m = re.search(unused_funcs_str, line)\n if m:\n print(\"%s was declared unused, but is used at line %d\" % (m.group(),\n linenum+1), file=sys.stderr)\n line = linepat.sub(r'/* \"mtrand.pyx\":', line)\n processed.write(line)\n mtrand_c.close()\n processed.close()\n os.rename('mtrand_pp.c', 'mtrand.c')\n", "path": "numpy/random/mtrand/generate_mtrand_c.py"}], "after_files": [{"content": null, "path": "numpy/random/mtrand/generate_mtrand_c.py"}]}
| 895 | 522 |
gh_patches_debug_41602
|
rasdani/github-patches
|
git_diff
|
oppia__oppia-3461
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update docstrings in the Python backend code to follow the style guide.
We've been a bit lax about adding docstrings to code in the backend. Instead, we should be following the style guide: https://google.github.io/styleguide/pyguide.html?showone=Comments#Comments -- docstrings should define args, return values, exceptions, etc. The aim of this issue is to ensure that the backend code is properly documented; we will tackle this on a file-by-file basis.
Note: if you're a new contributor, this issue is a particularly good introductory one to tackle, since it will require reading, understanding and documenting the purpose of a file in the codebase, and help you get acquainted with how the Oppia backend is structured. It's important that you're able to communicate and write clearly, though!
Some useful tips:
- Please follow our [style guide](https://github.com/oppia/oppia/wiki/Coding-style-guide#general) (and also the [Google Python style guide](https://google.github.io/styleguide/pyguide.html?showone=Comments#Comments)) when writing docstrings. Make sure to follow the patterns established there as closely as possible, including spacing, capitalization and punctuation. Also, it's a good idea to look at some of the existing files (see below) to get a sense of the correct format -- we recommend following this closely in order to maintain uniformity across the codebase.
- If applicable, make sure that the docstrings for all functions correctly specify the args, return values, etc. If a function has no args, you can omit the Args section; if it has no return value, you can omit the Returns section.
- If a docstring is not present in the code, or the existing one is too vague for you to understand what's going on just by looking at it, then it should be rewritten to be more clear. You'll often need to read the code as well, and do a grep on the callsites to see how it's being used, in order to get a clear understanding of what it's doing. This is good practice for learning how to navigate the codebase, and is an intended part of this project.
- If you feel that parts of the codebase are unclear or incorrectly factored, please feel free to bring this up! The whole point of docstrings is to improve code health and accessibility for new developers, so if you spot other things that can be done in service of that aim, we should also look into doing that.
**Completed:**
- [x] core.controllers.base @shaz13
- [x] core.controllers.cron @saeedjassani
- [x] core.controllers.reader @96mohitm
- [x] core.domain.activity_domain @andromfins
- [x] core.domain.activity_services @souravsingh
- [x] core.domain.classifier_services @andromfins
- [x] core.domain.collection_domain @domevit
- [x] core.domain.collection_services @kahkhang
- [x] core.domain.config_services @vibhor98
- [x] core.domain.email_manager @Oishikatta
- [x] core.domain.exp_domain @domevit
- [x] core.domain.exp_services @DubeySandeep
- [x] core.domain.feedback_domain @terrameijar
- [x] core.domain.feedback_jobs_continuous
- [x] core.domain.feedback_services @andimeo
- [x] core.domain.fs_domain @kartik-25
- [x] core.domain.html_cleaner @seanlip
- [x] core.domain.param_domain @rohan-gulati
- [x] core.domain.rating_services @BojOnTheBeat
- [x] core.domain.rights_manager @ayejay
- [x] core.domain.stats_domain @Arunabh98
- [x] core.domain.stats_jobs_continuous @Arunabh98
- [x] core.domain.stats_jobs_one_off @zpuller
- [x] core.domain.stats_services @manoj-makkuboy
- [x] core.domain.subscription_services @vibhor98
- [x] core.domain.summary_services @domevit
- [x] core.domain.user_jobs_continuous @shubha1593
- [x] core.domain.user_services @Nalinc
- [x] core.domain.value_generators_domain @meetsha
- [x] core.jobs @brianrodri
- [x] core.platform.app_identity.gae_app_identity_services @ubhisat
- [x] core.platform.models @pranavsid98
- [x] core.platform.transactions.gae_transaction_services @ubhisat
- [x] core.storage.base_model.gae_models @vojtechjelinek
- [x] core.storage.collection.gae_models @kaggis
- [x] core.storage.email.gae_models @rajan-garg
- [x] core.storage.exploration.gae_models @kirankonduru
- [x] core.storage.feedback.gae_models @naveenshukla
- [x] core.storage.file.gae_models @Ian91
- [x] core.storage.job.gae_models @sasacocic
- [x] core.storage.statistics.gae_models @abhishekarya286
- [x] core.storage.user.gae_models @Anthony-Alridge
- [x] core.tests.test_util_jobs @seanlip
- [x] core.tests.test_utils @vibhor98
- [x] utils @yashLadha
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/domain/stats_jobs_one_off.py`
Content:
```
1 # Copyright 2014 The Oppia Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS-IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Jobs for statistics views."""
16
17 import ast
18 import collections
19
20 from core import jobs
21 from core.domain import stats_jobs_continuous
22 from core.platform import models
23
24 (stats_models,) = models.Registry.import_models([
25 models.NAMES.statistics
26 ])
27
28
29 class StatisticsAudit(jobs.BaseMapReduceJobManager):
30 """Performs a brief audit of exploration completions and state hit counts to
31 make sure they match counts stored in StateCounterModel. It also checks for
32 some possible error cases like negative counts.
33 """
34 _STATE_COUNTER_ERROR_KEY = 'State Counter ERROR'
35
36 @classmethod
37 def entity_classes_to_map_over(cls):
38 return [
39 stats_models.ExplorationAnnotationsModel,
40 stats_models.StateCounterModel]
41
42 @staticmethod
43 def map(item):
44 if isinstance(item, stats_models.StateCounterModel):
45 if item.first_entry_count < 0:
46 yield (
47 StatisticsAudit._STATE_COUNTER_ERROR_KEY,
48 'Less than 0: %s %d' % (item.key, item.first_entry_count))
49 return
50 # Older versions of ExplorationAnnotations didn't store exp_id
51 # This is short hand for making sure we get ones updated most recently
52 else:
53 if item.exploration_id is not None:
54 yield (item.exploration_id, {
55 'version': item.version,
56 'starts': item.num_starts,
57 'completions': item.num_completions,
58 'state_hit': item.state_hit_counts
59 })
60
61 @staticmethod
62 def reduce(key, stringified_values):
63 if key == StatisticsAudit._STATE_COUNTER_ERROR_KEY:
64 for value_str in stringified_values:
65 yield (value_str,)
66 return
67
68 # If the code reaches this point, we are looking at values that
69 # correspond to each version of a particular exploration.
70
71 # These variables correspond to the VERSION_ALL version.
72 all_starts = 0
73 all_completions = 0
74 all_state_hit = collections.defaultdict(int)
75
76 # These variables correspond to the sum of counts for all other
77 # versions besides VERSION_ALL.
78 sum_starts = 0
79 sum_completions = 0
80 sum_state_hit = collections.defaultdict(int)
81
82 for value_str in stringified_values:
83 value = ast.literal_eval(value_str)
84 if value['starts'] < 0:
85 yield (
86 'Negative start count: exp_id:%s version:%s starts:%s' %
87 (key, value['version'], value['starts']),)
88
89 if value['completions'] < 0:
90 yield (
91 'Negative completion count: exp_id:%s version:%s '
92 'completions:%s' %
93 (key, value['version'], value['completions']),)
94
95 if value['completions'] > value['starts']:
96 yield ('Completions > starts: exp_id:%s version:%s %s>%s' % (
97 key, value['version'], value['completions'],
98 value['starts']),)
99
100 if value['version'] == stats_jobs_continuous.VERSION_ALL:
101 all_starts = value['starts']
102 all_completions = value['completions']
103 for (state_name, counts) in value['state_hit'].iteritems():
104 all_state_hit[state_name] = counts['first_entry_count']
105 else:
106 sum_starts += value['starts']
107 sum_completions += value['completions']
108 for (state_name, counts) in value['state_hit'].iteritems():
109 sum_state_hit[state_name] += counts['first_entry_count']
110
111 if sum_starts != all_starts:
112 yield (
113 'Non-all != all for starts: exp_id:%s sum: %s all: %s'
114 % (key, sum_starts, all_starts),)
115 if sum_completions != all_completions:
116 yield (
117 'Non-all != all for completions: exp_id:%s sum: %s all: %s'
118 % (key, sum_completions, all_completions),)
119
120 for state_name in all_state_hit:
121 if (state_name not in sum_state_hit and
122 all_state_hit[state_name] != 0):
123 yield (
124 'state hit count not same exp_id:%s state:%s, '
125 'all:%s sum: null' % (
126 key, state_name, all_state_hit[state_name]),)
127 elif all_state_hit[state_name] != sum_state_hit[state_name]:
128 yield (
129 'state hit count not same exp_id: %s state: %s '
130 'all: %s sum:%s' % (
131 key, state_name, all_state_hit[state_name],
132 sum_state_hit[state_name]),)
133
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/core/domain/stats_jobs_one_off.py b/core/domain/stats_jobs_one_off.py
--- a/core/domain/stats_jobs_one_off.py
+++ b/core/domain/stats_jobs_one_off.py
@@ -27,7 +27,9 @@
class StatisticsAudit(jobs.BaseMapReduceJobManager):
- """Performs a brief audit of exploration completions and state hit counts to
+ """A one-off statistics audit.
+
+ Performs a brief audit of exploration completions and state hit counts to
make sure they match counts stored in StateCounterModel. It also checks for
some possible error cases like negative counts.
"""
@@ -41,6 +43,39 @@
@staticmethod
def map(item):
+ """Implements the map function. Must be declared @staticmethod.
+
+ Args:
+ item: ExplorationAnnotationsModel or
+ StateCounterModel.
+
+ Yields:
+ tuple. For StateCounterModel, a 2-tuple in the form
+ (_STATE_COUNTER_ERROR_KEY, error message).
+ tuple. For ExplorationAnnotationModel, a 2-tuple in the form
+ ('exploration_id', value).
+ 'exploration_id': str. the id of the exploration.
+ 'value': a dict, whose structure is as follows:
+ {
+ 'version': str. version of the exploration.
+ 'starts': int. # of times exploration was started.
+ 'completions': int. # of times exploration was
+ completed.
+ 'state_hit': a dict containing the hit counts for the
+ states in the exploration. It is formatted as
+ follows:
+ {
+ state_name: {
+ 'first_entry_count': int. # of sessions
+ which hit this state.
+ 'total_entry_count': int. # of total hits
+ for this state.
+ 'no_answer_count': int. # of hits with no
+ answer for this state.
+ }
+ }
+ }
+ """
if isinstance(item, stats_models.StateCounterModel):
if item.first_entry_count < 0:
yield (
@@ -60,6 +95,36 @@
@staticmethod
def reduce(key, stringified_values):
+ """Updates statistics for the given exploration.
+
+ Args:
+ key: str. The id of the exploration.
+ stringified_values: list(str). A list of stringified values
+ associated with the given key. An element of stringified_values
+ would be of the form:
+ {
+ 'version': str. version of the exploration.
+ 'starts': int. # of times exploration was started.
+ 'completions': int. # of times exploration was
+ completed.
+ 'state_hit': dict. a dict containing the hit counts
+ for the states in the exploration. It is formatted
+ as follows:
+ {
+ state_name: {
+ 'first_entry_count': int. # of sessions
+ which hit this state.
+ 'total_entry_count': int. # of total
+ hits for this state.
+ 'no_answer_count': int. # of hits with
+ no answer for this state.
+ }
+ }
+ }
+
+ Yields:
+ tuple(str). A 1-tuple whose only element is an error message.
+ """
if key == StatisticsAudit._STATE_COUNTER_ERROR_KEY:
for value_str in stringified_values:
yield (value_str,)
|
{"golden_diff": "diff --git a/core/domain/stats_jobs_one_off.py b/core/domain/stats_jobs_one_off.py\n--- a/core/domain/stats_jobs_one_off.py\n+++ b/core/domain/stats_jobs_one_off.py\n@@ -27,7 +27,9 @@\n \n \n class StatisticsAudit(jobs.BaseMapReduceJobManager):\n- \"\"\"Performs a brief audit of exploration completions and state hit counts to\n+ \"\"\"A one-off statistics audit.\n+\n+ Performs a brief audit of exploration completions and state hit counts to\n make sure they match counts stored in StateCounterModel. It also checks for\n some possible error cases like negative counts.\n \"\"\"\n@@ -41,6 +43,39 @@\n \n @staticmethod\n def map(item):\n+ \"\"\"Implements the map function. Must be declared @staticmethod.\n+\n+ Args:\n+ item: ExplorationAnnotationsModel or\n+ StateCounterModel.\n+\n+ Yields:\n+ tuple. For StateCounterModel, a 2-tuple in the form\n+ (_STATE_COUNTER_ERROR_KEY, error message).\n+ tuple. For ExplorationAnnotationModel, a 2-tuple in the form\n+ ('exploration_id', value).\n+ 'exploration_id': str. the id of the exploration.\n+ 'value': a dict, whose structure is as follows:\n+ {\n+ 'version': str. version of the exploration.\n+ 'starts': int. # of times exploration was started.\n+ 'completions': int. # of times exploration was\n+ completed.\n+ 'state_hit': a dict containing the hit counts for the\n+ states in the exploration. It is formatted as\n+ follows:\n+ {\n+ state_name: {\n+ 'first_entry_count': int. # of sessions\n+ which hit this state.\n+ 'total_entry_count': int. # of total hits\n+ for this state.\n+ 'no_answer_count': int. # of hits with no\n+ answer for this state.\n+ }\n+ }\n+ }\n+ \"\"\"\n if isinstance(item, stats_models.StateCounterModel):\n if item.first_entry_count < 0:\n yield (\n@@ -60,6 +95,36 @@\n \n @staticmethod\n def reduce(key, stringified_values):\n+ \"\"\"Updates statistics for the given exploration.\n+\n+ Args:\n+ key: str. The id of the exploration.\n+ stringified_values: list(str). A list of stringified values\n+ associated with the given key. An element of stringified_values\n+ would be of the form:\n+ {\n+ 'version': str. version of the exploration.\n+ 'starts': int. # of times exploration was started.\n+ 'completions': int. # of times exploration was\n+ completed.\n+ 'state_hit': dict. a dict containing the hit counts\n+ for the states in the exploration. It is formatted\n+ as follows:\n+ {\n+ state_name: {\n+ 'first_entry_count': int. # of sessions\n+ which hit this state.\n+ 'total_entry_count': int. # of total\n+ hits for this state.\n+ 'no_answer_count': int. # of hits with\n+ no answer for this state.\n+ }\n+ }\n+ }\n+\n+ Yields:\n+ tuple(str). A 1-tuple whose only element is an error message.\n+ \"\"\"\n if key == StatisticsAudit._STATE_COUNTER_ERROR_KEY:\n for value_str in stringified_values:\n yield (value_str,)\n", "issue": "Update docstrings in the Python backend code to follow the style guide.\nWe've been a bit lax about adding docstrings to code in the backend. Instead, we should be following the style guide: https://google.github.io/styleguide/pyguide.html?showone=Comments#Comments -- docstrings should define args, return values, exceptions, etc. 
The aim of this issue is to ensure that the backend code is properly documented; we will tackle this on a file-by-file basis.\r\n\r\nNote: if you're a new contributor, this issue is a particularly good introductory one to tackle, since it will require reading, understanding and documenting the purpose of a file in the codebase, and help you get acquainted with how the Oppia backend is structured. It's important that you're able to communicate and write clearly, though!\r\n\r\nSome useful tips:\r\n- Please follow our [style guide](https://github.com/oppia/oppia/wiki/Coding-style-guide#general) (and also the [Google Python style guide](https://google.github.io/styleguide/pyguide.html?showone=Comments#Comments)) when writing docstrings. Make sure to follow the patterns established there as closely as possible, including spacing, capitalization and punctuation. Also, it's a good idea to look at some of the existing files (see below) to get a sense of the correct format -- we recommend following this closely in order to maintain uniformity across the codebase.\r\n- If applicable, make sure that the docstrings for all functions correctly specify the args, return values, etc. If a function has no args, you can omit the Args section; if it has no return value, you can omit the Returns section.\r\n- If a docstring is not present in the code, or the existing one is too vague for you to understand what's going on just by looking at it, then it should be rewritten to be more clear. You'll often need to read the code as well, and do a grep on the callsites to see how it's being used, in order to get a clear understanding of what it's doing. This is good practice for learning how to navigate the codebase, and is an intended part of this project.\r\n- If you feel that parts of the codebase are unclear or incorrectly factored, please feel free to bring this up! 
The whole point of docstrings is to improve code health and accessibility for new developers, so if you spot other things that can be done in service of that aim, we should also look into doing that.\r\n\r\n**Completed:**\r\n- [x] core.controllers.base @shaz13 \r\n- [x] core.controllers.cron @saeedjassani\r\n- [x] core.controllers.reader @96mohitm\r\n- [x] core.domain.activity_domain @andromfins\r\n- [x] core.domain.activity_services @souravsingh\r\n- [x] core.domain.classifier_services @andromfins \r\n- [x] core.domain.collection_domain @domevit\r\n- [x] core.domain.collection_services @kahkhang \r\n- [x] core.domain.config_services @vibhor98 \r\n- [x] core.domain.email_manager @Oishikatta \r\n- [x] core.domain.exp_domain @domevit\r\n- [x] core.domain.exp_services @DubeySandeep\r\n- [x] core.domain.feedback_domain @terrameijar \r\n- [x] core.domain.feedback_jobs_continuous\r\n- [x] core.domain.feedback_services @andimeo\r\n- [x] core.domain.fs_domain @kartik-25 \r\n- [x] core.domain.html_cleaner @seanlip\r\n- [x] core.domain.param_domain @rohan-gulati\r\n- [x] core.domain.rating_services @BojOnTheBeat \r\n- [x] core.domain.rights_manager @ayejay\r\n- [x] core.domain.stats_domain @Arunabh98\r\n- [x] core.domain.stats_jobs_continuous @Arunabh98 \r\n- [x] core.domain.stats_jobs_one_off @zpuller\r\n- [x] core.domain.stats_services @manoj-makkuboy \r\n- [x] core.domain.subscription_services @vibhor98 \r\n- [x] core.domain.summary_services @domevit \r\n- [x] core.domain.user_jobs_continuous @shubha1593 \r\n- [x] core.domain.user_services @Nalinc \r\n- [x] core.domain.value_generators_domain @meetsha \r\n- [x] core.jobs @brianrodri\r\n- [x] core.platform.app_identity.gae_app_identity_services @ubhisat\r\n- [x] core.platform.models @pranavsid98 \r\n- [x] core.platform.transactions.gae_transaction_services @ubhisat\r\n- [x] core.storage.base_model.gae_models @vojtechjelinek\r\n- [x] core.storage.collection.gae_models @kaggis\r\n- [x] core.storage.email.gae_models @rajan-garg\r\n- [x] core.storage.exploration.gae_models @kirankonduru\r\n- [x] core.storage.feedback.gae_models @naveenshukla\r\n- [x] core.storage.file.gae_models @Ian91 \r\n- [x] core.storage.job.gae_models @sasacocic\r\n- [x] core.storage.statistics.gae_models @abhishekarya286\r\n- [x] core.storage.user.gae_models @Anthony-Alridge \r\n- [x] core.tests.test_util_jobs @seanlip\r\n- [x] core.tests.test_utils @vibhor98\r\n- [x] utils @yashLadha \n", "before_files": [{"content": "# Copyright 2014 The Oppia Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Jobs for statistics views.\"\"\"\n\nimport ast\nimport collections\n\nfrom core import jobs\nfrom core.domain import stats_jobs_continuous\nfrom core.platform import models\n\n(stats_models,) = models.Registry.import_models([\n models.NAMES.statistics\n])\n\n\nclass StatisticsAudit(jobs.BaseMapReduceJobManager):\n \"\"\"Performs a brief audit of exploration completions and state hit counts to\n make sure they match counts stored in StateCounterModel. 
It also checks for\n some possible error cases like negative counts.\n \"\"\"\n _STATE_COUNTER_ERROR_KEY = 'State Counter ERROR'\n\n @classmethod\n def entity_classes_to_map_over(cls):\n return [\n stats_models.ExplorationAnnotationsModel,\n stats_models.StateCounterModel]\n\n @staticmethod\n def map(item):\n if isinstance(item, stats_models.StateCounterModel):\n if item.first_entry_count < 0:\n yield (\n StatisticsAudit._STATE_COUNTER_ERROR_KEY,\n 'Less than 0: %s %d' % (item.key, item.first_entry_count))\n return\n # Older versions of ExplorationAnnotations didn't store exp_id\n # This is short hand for making sure we get ones updated most recently\n else:\n if item.exploration_id is not None:\n yield (item.exploration_id, {\n 'version': item.version,\n 'starts': item.num_starts,\n 'completions': item.num_completions,\n 'state_hit': item.state_hit_counts\n })\n\n @staticmethod\n def reduce(key, stringified_values):\n if key == StatisticsAudit._STATE_COUNTER_ERROR_KEY:\n for value_str in stringified_values:\n yield (value_str,)\n return\n\n # If the code reaches this point, we are looking at values that\n # correspond to each version of a particular exploration.\n\n # These variables correspond to the VERSION_ALL version.\n all_starts = 0\n all_completions = 0\n all_state_hit = collections.defaultdict(int)\n\n # These variables correspond to the sum of counts for all other\n # versions besides VERSION_ALL.\n sum_starts = 0\n sum_completions = 0\n sum_state_hit = collections.defaultdict(int)\n\n for value_str in stringified_values:\n value = ast.literal_eval(value_str)\n if value['starts'] < 0:\n yield (\n 'Negative start count: exp_id:%s version:%s starts:%s' %\n (key, value['version'], value['starts']),)\n\n if value['completions'] < 0:\n yield (\n 'Negative completion count: exp_id:%s version:%s '\n 'completions:%s' %\n (key, value['version'], value['completions']),)\n\n if value['completions'] > value['starts']:\n yield ('Completions > starts: exp_id:%s version:%s %s>%s' % (\n key, value['version'], value['completions'],\n value['starts']),)\n\n if value['version'] == stats_jobs_continuous.VERSION_ALL:\n all_starts = value['starts']\n all_completions = value['completions']\n for (state_name, counts) in value['state_hit'].iteritems():\n all_state_hit[state_name] = counts['first_entry_count']\n else:\n sum_starts += value['starts']\n sum_completions += value['completions']\n for (state_name, counts) in value['state_hit'].iteritems():\n sum_state_hit[state_name] += counts['first_entry_count']\n\n if sum_starts != all_starts:\n yield (\n 'Non-all != all for starts: exp_id:%s sum: %s all: %s'\n % (key, sum_starts, all_starts),)\n if sum_completions != all_completions:\n yield (\n 'Non-all != all for completions: exp_id:%s sum: %s all: %s'\n % (key, sum_completions, all_completions),)\n\n for state_name in all_state_hit:\n if (state_name not in sum_state_hit and\n all_state_hit[state_name] != 0):\n yield (\n 'state hit count not same exp_id:%s state:%s, '\n 'all:%s sum: null' % (\n key, state_name, all_state_hit[state_name]),)\n elif all_state_hit[state_name] != sum_state_hit[state_name]:\n yield (\n 'state hit count not same exp_id: %s state: %s '\n 'all: %s sum:%s' % (\n key, state_name, all_state_hit[state_name],\n sum_state_hit[state_name]),)\n", "path": "core/domain/stats_jobs_one_off.py"}], "after_files": [{"content": "# Copyright 2014 The Oppia Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Jobs for statistics views.\"\"\"\n\nimport ast\nimport collections\n\nfrom core import jobs\nfrom core.domain import stats_jobs_continuous\nfrom core.platform import models\n\n(stats_models,) = models.Registry.import_models([\n models.NAMES.statistics\n])\n\n\nclass StatisticsAudit(jobs.BaseMapReduceJobManager):\n \"\"\"A one-off statistics audit.\n\n Performs a brief audit of exploration completions and state hit counts to\n make sure they match counts stored in StateCounterModel. It also checks for\n some possible error cases like negative counts.\n \"\"\"\n _STATE_COUNTER_ERROR_KEY = 'State Counter ERROR'\n\n @classmethod\n def entity_classes_to_map_over(cls):\n return [\n stats_models.ExplorationAnnotationsModel,\n stats_models.StateCounterModel]\n\n @staticmethod\n def map(item):\n \"\"\"Implements the map function. Must be declared @staticmethod.\n\n Args:\n item: ExplorationAnnotationsModel or\n StateCounterModel.\n\n Yields:\n tuple. For StateCounterModel, a 2-tuple in the form\n (_STATE_COUNTER_ERROR_KEY, error message).\n tuple. For ExplorationAnnotationModel, a 2-tuple in the form\n ('exploration_id', value).\n 'exploration_id': str. the id of the exploration.\n 'value': a dict, whose structure is as follows:\n {\n 'version': str. version of the exploration.\n 'starts': int. # of times exploration was started.\n 'completions': int. # of times exploration was\n completed.\n 'state_hit': a dict containing the hit counts for the\n states in the exploration. It is formatted as\n follows:\n {\n state_name: {\n 'first_entry_count': int. # of sessions\n which hit this state.\n 'total_entry_count': int. # of total hits\n for this state.\n 'no_answer_count': int. # of hits with no\n answer for this state.\n }\n }\n }\n \"\"\"\n if isinstance(item, stats_models.StateCounterModel):\n if item.first_entry_count < 0:\n yield (\n StatisticsAudit._STATE_COUNTER_ERROR_KEY,\n 'Less than 0: %s %d' % (item.key, item.first_entry_count))\n return\n # Older versions of ExplorationAnnotations didn't store exp_id\n # This is short hand for making sure we get ones updated most recently\n else:\n if item.exploration_id is not None:\n yield (item.exploration_id, {\n 'version': item.version,\n 'starts': item.num_starts,\n 'completions': item.num_completions,\n 'state_hit': item.state_hit_counts\n })\n\n @staticmethod\n def reduce(key, stringified_values):\n \"\"\"Updates statistics for the given exploration.\n\n Args:\n key: str. The id of the exploration.\n stringified_values: list(str). A list of stringified values\n associated with the given key. An element of stringified_values\n would be of the form:\n {\n 'version': str. version of the exploration.\n 'starts': int. # of times exploration was started.\n 'completions': int. # of times exploration was\n completed.\n 'state_hit': dict. a dict containing the hit counts\n for the states in the exploration. 
It is formatted\n as follows:\n {\n state_name: {\n 'first_entry_count': int. # of sessions\n which hit this state.\n 'total_entry_count': int. # of total\n hits for this state.\n 'no_answer_count': int. # of hits with\n no answer for this state.\n }\n }\n }\n\n Yields:\n tuple(str). A 1-tuple whose only element is an error message.\n \"\"\"\n if key == StatisticsAudit._STATE_COUNTER_ERROR_KEY:\n for value_str in stringified_values:\n yield (value_str,)\n return\n\n # If the code reaches this point, we are looking at values that\n # correspond to each version of a particular exploration.\n\n # These variables correspond to the VERSION_ALL version.\n all_starts = 0\n all_completions = 0\n all_state_hit = collections.defaultdict(int)\n\n # These variables correspond to the sum of counts for all other\n # versions besides VERSION_ALL.\n sum_starts = 0\n sum_completions = 0\n sum_state_hit = collections.defaultdict(int)\n\n for value_str in stringified_values:\n value = ast.literal_eval(value_str)\n if value['starts'] < 0:\n yield (\n 'Negative start count: exp_id:%s version:%s starts:%s' %\n (key, value['version'], value['starts']),)\n\n if value['completions'] < 0:\n yield (\n 'Negative completion count: exp_id:%s version:%s '\n 'completions:%s' %\n (key, value['version'], value['completions']),)\n\n if value['completions'] > value['starts']:\n yield ('Completions > starts: exp_id:%s version:%s %s>%s' % (\n key, value['version'], value['completions'],\n value['starts']),)\n\n if value['version'] == stats_jobs_continuous.VERSION_ALL:\n all_starts = value['starts']\n all_completions = value['completions']\n for (state_name, counts) in value['state_hit'].iteritems():\n all_state_hit[state_name] = counts['first_entry_count']\n else:\n sum_starts += value['starts']\n sum_completions += value['completions']\n for (state_name, counts) in value['state_hit'].iteritems():\n sum_state_hit[state_name] += counts['first_entry_count']\n\n if sum_starts != all_starts:\n yield (\n 'Non-all != all for starts: exp_id:%s sum: %s all: %s'\n % (key, sum_starts, all_starts),)\n if sum_completions != all_completions:\n yield (\n 'Non-all != all for completions: exp_id:%s sum: %s all: %s'\n % (key, sum_completions, all_completions),)\n\n for state_name in all_state_hit:\n if (state_name not in sum_state_hit and\n all_state_hit[state_name] != 0):\n yield (\n 'state hit count not same exp_id:%s state:%s, '\n 'all:%s sum: null' % (\n key, state_name, all_state_hit[state_name]),)\n elif all_state_hit[state_name] != sum_state_hit[state_name]:\n yield (\n 'state hit count not same exp_id: %s state: %s '\n 'all: %s sum:%s' % (\n key, state_name, all_state_hit[state_name],\n sum_state_hit[state_name]),)\n", "path": "core/domain/stats_jobs_one_off.py"}]}
| 2,886 | 779 |
gh_patches_debug_36488
|
rasdani/github-patches
|
git_diff
|
alltheplaces__alltheplaces-2871
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Spider rubios is broken
During the global build at 2021-05-26-14-42-23, spider **rubios** failed with **0 features** and **0 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/rubios.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/rubios.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/rubios.geojson))
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/rubios.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import scrapy
3 import re
4
5 from locations.items import GeojsonPointItem
6
7
8 class RubiosSpider(scrapy.Spider):
9 name = "rubios"
10 item_attributes = { 'brand': "Rubio's" }
11 allowed_domains = ['rubios.com']
12 start_urls = (
13 'https://www.rubios.com/sitemap.xml',
14 )
15
16 def parse(self, response):
17 response.selector.remove_namespaces()
18 city_urls = response.xpath('//url/loc/text()').extract()
19 regex = re.compile(r'http\S+rubios.com/store-locations/\S+/\S+/\S+')
20 for path in city_urls:
21 if re.search(regex, path):
22 yield scrapy.Request(
23 path.strip(),
24 callback=self.parse_store,
25 )
26
27 def parse_store(self, response):
28
29 properties = {
30 'name': response.xpath('//span[@itemprop="name"]/text()').extract_first(),
31 'ref': response.xpath('//span[@itemprop="name"]/text()').extract_first(),
32 'addr_full': response.xpath('//span[@itemprop="streetAddress"]/text()').extract_first(),
33 'city': response.xpath('//span[@itemprop="addressLocality"]/text()').extract_first(),
34 'state': response.xpath('//span[@itemprop="addressRegion"]/text()').extract_first(),
35 'postcode': response.xpath('//span[@itemprop="postalCode"]/text()').extract_first(),
36 'phone': response.xpath('//span[@itemprop="telephone"]/a/text()').extract_first(),
37 'website': response.url,
38 'opening_hours': "".join(response.xpath('//div/div/div/span/span/span/text()').extract()).strip(),
39 'lon': float(response.xpath('//head/script[9]').extract_first().split('"coordinates":[')[1].split(']')[0].split(',')[0]),
40 'lat': float(response.xpath('//head/script[9]').extract_first().split('"coordinates":[')[1].split(']')[0].split(',')[1]),
41 }
42
43 yield GeojsonPointItem(**properties)
44
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/locations/spiders/rubios.py b/locations/spiders/rubios.py
--- a/locations/spiders/rubios.py
+++ b/locations/spiders/rubios.py
@@ -16,7 +16,7 @@
def parse(self, response):
response.selector.remove_namespaces()
city_urls = response.xpath('//url/loc/text()').extract()
- regex = re.compile(r'http\S+rubios.com/store-locations/\S+/\S+/\S+')
+ regex = re.compile(r'http\S+rubios.com/restaurant-locations/\S+/\S+/\S+')
for path in city_urls:
if re.search(regex, path):
yield scrapy.Request(
@@ -27,17 +27,18 @@
def parse_store(self, response):
properties = {
- 'name': response.xpath('//span[@itemprop="name"]/text()').extract_first(),
- 'ref': response.xpath('//span[@itemprop="name"]/text()').extract_first(),
- 'addr_full': response.xpath('//span[@itemprop="streetAddress"]/text()').extract_first(),
- 'city': response.xpath('//span[@itemprop="addressLocality"]/text()').extract_first(),
- 'state': response.xpath('//span[@itemprop="addressRegion"]/text()').extract_first(),
- 'postcode': response.xpath('//span[@itemprop="postalCode"]/text()').extract_first(),
- 'phone': response.xpath('//span[@itemprop="telephone"]/a/text()').extract_first(),
+ 'name': response.xpath('//meta[@property="og:title"]/@content').extract_first(),
+ 'ref': response.url,
+ 'addr_full': response.xpath('//meta[@property="og:street_address"]/@content').extract_first(),
+ 'city': response.xpath('//meta[@property="og:locality"]/@content').extract_first(),
+ 'state': response.xpath('//meta[@property="og:region"]/@content').extract_first(),
+ 'postcode': response.xpath('//meta[@property="og:postal_code"]/@content').extract_first(),
+ 'country': response.xpath('//meta[@property="og:country_name"]/@content').extract_first(),
+ 'phone': response.xpath('//@href[contains(.,"tel:")]').extract_first().replace('tel:', ''),
'website': response.url,
- 'opening_hours': "".join(response.xpath('//div/div/div/span/span/span/text()').extract()).strip(),
- 'lon': float(response.xpath('//head/script[9]').extract_first().split('"coordinates":[')[1].split(']')[0].split(',')[0]),
- 'lat': float(response.xpath('//head/script[9]').extract_first().split('"coordinates":[')[1].split(']')[0].split(',')[1]),
+ 'opening_hours': ''.join(response.css('.oh-wrapper ::text').extract()),
+ 'lon': response.xpath('//meta[@property="og:longitude"]/@content').extract_first(),
+ 'lat': response.xpath('//meta[@property="og:latitude"]/@content').extract_first(),
}
yield GeojsonPointItem(**properties)
|
{"golden_diff": "diff --git a/locations/spiders/rubios.py b/locations/spiders/rubios.py\n--- a/locations/spiders/rubios.py\n+++ b/locations/spiders/rubios.py\n@@ -16,7 +16,7 @@\n def parse(self, response):\n response.selector.remove_namespaces()\n city_urls = response.xpath('//url/loc/text()').extract()\n- regex = re.compile(r'http\\S+rubios.com/store-locations/\\S+/\\S+/\\S+')\n+ regex = re.compile(r'http\\S+rubios.com/restaurant-locations/\\S+/\\S+/\\S+')\n for path in city_urls:\n if re.search(regex, path):\n yield scrapy.Request(\n@@ -27,17 +27,18 @@\n def parse_store(self, response):\n \n properties = {\n- 'name': response.xpath('//span[@itemprop=\"name\"]/text()').extract_first(),\n- 'ref': response.xpath('//span[@itemprop=\"name\"]/text()').extract_first(),\n- 'addr_full': response.xpath('//span[@itemprop=\"streetAddress\"]/text()').extract_first(),\n- 'city': response.xpath('//span[@itemprop=\"addressLocality\"]/text()').extract_first(),\n- 'state': response.xpath('//span[@itemprop=\"addressRegion\"]/text()').extract_first(),\n- 'postcode': response.xpath('//span[@itemprop=\"postalCode\"]/text()').extract_first(),\n- 'phone': response.xpath('//span[@itemprop=\"telephone\"]/a/text()').extract_first(),\n+ 'name': response.xpath('//meta[@property=\"og:title\"]/@content').extract_first(),\n+ 'ref': response.url,\n+ 'addr_full': response.xpath('//meta[@property=\"og:street_address\"]/@content').extract_first(),\n+ 'city': response.xpath('//meta[@property=\"og:locality\"]/@content').extract_first(),\n+ 'state': response.xpath('//meta[@property=\"og:region\"]/@content').extract_first(),\n+ 'postcode': response.xpath('//meta[@property=\"og:postal_code\"]/@content').extract_first(),\n+ 'country': response.xpath('//meta[@property=\"og:country_name\"]/@content').extract_first(),\n+ 'phone': response.xpath('//@href[contains(.,\"tel:\")]').extract_first().replace('tel:', ''),\n 'website': response.url,\n- 'opening_hours': \"\".join(response.xpath('//div/div/div/span/span/span/text()').extract()).strip(),\n- 'lon': float(response.xpath('//head/script[9]').extract_first().split('\"coordinates\":[')[1].split(']')[0].split(',')[0]),\n- 'lat': float(response.xpath('//head/script[9]').extract_first().split('\"coordinates\":[')[1].split(']')[0].split(',')[1]),\n+ 'opening_hours': ''.join(response.css('.oh-wrapper ::text').extract()),\n+ 'lon': response.xpath('//meta[@property=\"og:longitude\"]/@content').extract_first(),\n+ 'lat': response.xpath('//meta[@property=\"og:latitude\"]/@content').extract_first(),\n }\n \n yield GeojsonPointItem(**properties)\n", "issue": "Spider rubios is broken\nDuring the global build at 2021-05-26-14-42-23, spider **rubios** failed with **0 features** and **0 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/rubios.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/rubios.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/rubios.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport re\n\nfrom locations.items import GeojsonPointItem\n\n\nclass RubiosSpider(scrapy.Spider):\n name = \"rubios\"\n item_attributes = { 'brand': \"Rubio's\" }\n allowed_domains = ['rubios.com']\n start_urls = (\n 'https://www.rubios.com/sitemap.xml',\n )\n\n def parse(self, response):\n response.selector.remove_namespaces()\n city_urls = response.xpath('//url/loc/text()').extract()\n regex = 
re.compile(r'http\\S+rubios.com/store-locations/\\S+/\\S+/\\S+')\n for path in city_urls:\n if re.search(regex, path):\n yield scrapy.Request(\n path.strip(),\n callback=self.parse_store,\n )\n\n def parse_store(self, response):\n\n properties = {\n 'name': response.xpath('//span[@itemprop=\"name\"]/text()').extract_first(),\n 'ref': response.xpath('//span[@itemprop=\"name\"]/text()').extract_first(),\n 'addr_full': response.xpath('//span[@itemprop=\"streetAddress\"]/text()').extract_first(),\n 'city': response.xpath('//span[@itemprop=\"addressLocality\"]/text()').extract_first(),\n 'state': response.xpath('//span[@itemprop=\"addressRegion\"]/text()').extract_first(),\n 'postcode': response.xpath('//span[@itemprop=\"postalCode\"]/text()').extract_first(),\n 'phone': response.xpath('//span[@itemprop=\"telephone\"]/a/text()').extract_first(),\n 'website': response.url,\n 'opening_hours': \"\".join(response.xpath('//div/div/div/span/span/span/text()').extract()).strip(),\n 'lon': float(response.xpath('//head/script[9]').extract_first().split('\"coordinates\":[')[1].split(']')[0].split(',')[0]),\n 'lat': float(response.xpath('//head/script[9]').extract_first().split('\"coordinates\":[')[1].split(']')[0].split(',')[1]),\n }\n\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/rubios.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport re\n\nfrom locations.items import GeojsonPointItem\n\n\nclass RubiosSpider(scrapy.Spider):\n name = \"rubios\"\n item_attributes = { 'brand': \"Rubio's\" }\n allowed_domains = ['rubios.com']\n start_urls = (\n 'https://www.rubios.com/sitemap.xml',\n )\n\n def parse(self, response):\n response.selector.remove_namespaces()\n city_urls = response.xpath('//url/loc/text()').extract()\n regex = re.compile(r'http\\S+rubios.com/restaurant-locations/\\S+/\\S+/\\S+')\n for path in city_urls:\n if re.search(regex, path):\n yield scrapy.Request(\n path.strip(),\n callback=self.parse_store,\n )\n\n def parse_store(self, response):\n\n properties = {\n 'name': response.xpath('//meta[@property=\"og:title\"]/@content').extract_first(),\n 'ref': response.url,\n 'addr_full': response.xpath('//meta[@property=\"og:street_address\"]/@content').extract_first(),\n 'city': response.xpath('//meta[@property=\"og:locality\"]/@content').extract_first(),\n 'state': response.xpath('//meta[@property=\"og:region\"]/@content').extract_first(),\n 'postcode': response.xpath('//meta[@property=\"og:postal_code\"]/@content').extract_first(),\n 'country': response.xpath('//meta[@property=\"og:country_name\"]/@content').extract_first(),\n 'phone': response.xpath('//@href[contains(.,\"tel:\")]').extract_first().replace('tel:', ''),\n 'website': response.url,\n 'opening_hours': ''.join(response.css('.oh-wrapper ::text').extract()),\n 'lon': response.xpath('//meta[@property=\"og:longitude\"]/@content').extract_first(),\n 'lat': response.xpath('//meta[@property=\"og:latitude\"]/@content').extract_first(),\n }\n\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/rubios.py"}]}
| 966 | 683 |
gh_patches_debug_31398
|
rasdani/github-patches
|
git_diff
|
nv-legate__cunumeric-450
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
random.randint() multiple errors
Calls result in errors when minimal input is provided (no error if dtype is provided).
```
>>> import numpy as np
>>> import cunumeric as cn
>>> np.random.randint(1000)
293
>>> cn.random.randint(1000)
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/rkarimi_l/Work/cunumeric/cunumeric/cunumeric/coverage.py", line 106, in wrapper
return func(*args, **kwargs)
File "/rkarimi_l/Work/cunumeric/cunumeric/cunumeric/random/random.py", line 113, in randint
return nprandom.randint(low=low, high=high, size=size, dtype=dtype)
File "mtrand.pyx", line 764, in numpy.random.mtrand.RandomState.randint
TypeError: Unsupported dtype dtype('float64') for randint
>>> cn.random.randint(1000,dtype=np.int32)
172
```
Calls result in errors when size is provided in array form (or if ND) (this happens regardless whether or not dtype is provided):
```
>>> cn.random.randint(1000,size=10)
array([317, 787, 968, 969, 10, 508, 778, 387, 976, 166])
>>> np.random.randint(1000,size=[10])
array([ 79, 325, 406, 740, 810, 830, 551, 640, 596, 857])
>>> cn.random.randint(1000,size=[10])
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/rkarimi_l/Work/cunumeric/cunumeric/cunumeric/coverage.py", line 106, in wrapper
return func(*args, **kwargs)
File "/rkarimi_l/Work/cunumeric/cunumeric/cunumeric/random/random.py", line 125, in randint
result = ndarray(size, dtype=dtype)
File "/rkarimi_l/Work/cunumeric/cunumeric/cunumeric/coverage.py", line 106, in wrapper
return func(*args, **kwargs)
File "/rkarimi_l/Work/cunumeric/cunumeric/cunumeric/array.py", line 190, in __init__
self._thunk = runtime.create_empty_thunk(shape, dtype, inputs)
File "/rkarimi_l/Work/cunumeric/cunumeric/cunumeric/runtime.py", line 499, in create_empty_thunk
self.is_eager_shape(computed_shape)
File "/rkarimi_l/Work/cunumeric/cunumeric/cunumeric/runtime.py", line 528, in is_eager_shape
return volume <= self.max_eager_volume
TypeError: '<=' not supported between instances of 'list' and 'int'
>>> np.random.randint(1000,size=[10,10])
array([[ 33, 401, 305, 230, 610, 912, 661, 89, 245, 35],
[639, 985, 902, 324, 270, 145, 121, 947, 817, 175],
[458, 623, 160, 994, 263, 824, 304, 94, 492, 552],
[907, 607, 16, 761, 323, 479, 122, 954, 749, 254],
[146, 627, 161, 855, 573, 794, 731, 192, 988, 999],
[628, 582, 815, 592, 765, 69, 416, 445, 339, 756],
[504, 332, 910, 478, 14, 686, 77, 527, 186, 545],
[353, 432, 38, 32, 229, 129, 248, 992, 292, 49],
[380, 835, 754, 874, 136, 167, 497, 984, 951, 353],
[164, 512, 899, 515, 405, 929, 268, 491, 542, 664]])
>>> cn.random.randint(1000,size=[10,10])
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/rkarimi_l/Work/cunumeric/cunumeric/cunumeric/coverage.py", line 106, in wrapper
return func(*args, **kwargs)
File "/rkarimi_l/Work/cunumeric/cunumeric/cunumeric/random/random.py", line 125, in randint
result = ndarray(size, dtype=dtype)
File "/rkarimi_l/Work/cunumeric/cunumeric/cunumeric/coverage.py", line 106, in wrapper
return func(*args, **kwargs)
File "/rkarimi_l/Work/cunumeric/cunumeric/cunumeric/array.py", line 190, in __init__
self._thunk = runtime.create_empty_thunk(shape, dtype, inputs)
File "/rkarimi_l/Work/cunumeric/cunumeric/cunumeric/runtime.py", line 499, in create_empty_thunk
self.is_eager_shape(computed_shape)
File "/rkarimi_l/Work/cunumeric/cunumeric/cunumeric/runtime.py", line 528, in is_eager_shape
return volume <= self.max_eager_volume
TypeError: '<=' not supported between instances of 'list' and 'int'
```
--- END ISSUE ---
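For reference, the calls the issue expects to work all succeed in NumPy itself; a minimal sketch (requiring only NumPy, not cunumeric) of that reference behaviour:
```python
# Reference behaviour from NumPy (no cunumeric needed): these are the call
# patterns the issue expects cunumeric.random.randint to accept as well.
import numpy as np

print(np.random.randint(1000))                       # bare int bound, default dtype
print(np.random.randint(1000, size=[10]))            # list-valued size
print(np.random.randint(1000, size=[10, 10]).shape)  # N-dimensional size -> (10, 10)
```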
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cunumeric/random/random.py`
Content:
```
1 # Copyright 2021-2022 NVIDIA Corporation
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15 from __future__ import annotations
16
17 from typing import TYPE_CHECKING, Any, Union
18
19 import numpy as np
20 import numpy.random as nprandom
21 from cunumeric.array import ndarray
22 from cunumeric.runtime import runtime
23
24 if TYPE_CHECKING:
25 import numpy.typing as npt
26
27
28 def seed(init: Union[int, None] = None) -> None:
29 if init is None:
30 init = 0
31 runtime.set_next_random_epoch(int(init))
32
33
34 def rand(*shapeargs: int) -> Union[float, ndarray]:
35 """
36 rand(d0, d1, ..., dn)
37
38 Random values in a given shape.
39
40 Create an array of the given shape and populate it with random samples from
41 a uniform distribution over ``[0, 1)``.
42
43 Parameters
44 ----------
45 d0, d1, ..., dn : int, optional
46 The dimensions of the returned array, must be non-negative.
47 If no argument is given a single Python float is returned.
48
49 Returns
50 -------
51 out : ndarray, shape ``(d0, d1, ..., dn)``
52 Random values.
53
54 See Also
55 --------
56 numpy.random.rand
57
58 Availability
59 --------
60 Multiple GPUs, Multiple CPUs
61 """
62
63 if shapeargs == ():
64 return nprandom.rand()
65 result = ndarray(shapeargs, dtype=np.dtype(np.float64))
66 result._thunk.random_uniform()
67 return result
68
69
70 def randint(
71 low: Union[int, ndarray],
72 high: Union[int, ndarray, None] = None,
73 size: Union[int, tuple[int], None] = None,
74 dtype: Union[np.dtype[Any], None] = None,
75 ) -> Union[int, ndarray, npt.NDArray[Any]]:
76 """
77 Return random integers from `low` (inclusive) to `high` (exclusive).
78
79 Parameters
80 ----------
81 low : int or array_like[int]
82 Lowest (signed) integers to be drawn from the distribution (unless
83 ``high=None``, in which case this parameter is one above the
84 *highest* such integer).
85 high : int or array_like[int], optional
86 If provided, one above the largest (signed) integer to be drawn
87 from the distribution (see above for behavior if ``high=None``).
88 If array-like, must contain integer values
89 size : int or tuple[int], optional
90 Output shape. If the given shape is, e.g., ``(m, n, k)``, then
91 ``m * n * k`` samples are drawn. Default is None, in which case a
92 single value is returned.
93 dtype : data-type, optional
94 Desired dtype of the result. Byteorder must be native.
95 The default value is int.
96
97 Returns
98 -------
99 out : int or ndarray[int]
100 `size`-shaped array of random integers from the appropriate
101 distribution, or a single such random int if `size` not provided.
102
103 See Also
104 --------
105 numpy.random.randint
106
107 Availability
108 --------
109 Multiple GPUs, Multiple CPUs
110 """
111
112 if size is None:
113 return nprandom.randint(low=low, high=high, size=size, dtype=dtype)
114 if dtype is not None:
115 dtype = np.dtype(dtype)
116 else:
117 dtype = np.dtype(np.int64)
118 # TODO: randint must support unsigned integer dtypes as well
119 if dtype.kind != "i":
120 raise TypeError(
121 "cunumeric.random.randint must be given an integer dtype"
122 )
123 if not isinstance(size, tuple):
124 size = (size,)
125 result = ndarray(size, dtype=dtype)
126 if high is None:
127 if low <= 0:
128 raise ValueError(
129 "bound must be strictly greater than 0 for randint"
130 )
131 result._thunk.random_integer(low=0, high=low)
132 else:
133 if low >= high:
134 raise ValueError(
135 "'high' bound must be strictly greater than 'low' "
136 "bound for randint"
137 )
138 result._thunk.random_integer(low=low, high=high)
139 return result
140
141
142 def randn(*shapeargs: int) -> Union[float, ndarray]:
143 """
144 randn(d0, d1, ..., dn)
145
146 Return a sample (or samples) from the "standard normal" distribution.
147
148 Parameters
149 ----------
150 d0, d1, ..., dn : int, optional
151 The dimensions of the returned array, must be non-negative.
152 If no argument is given a single Python float is returned.
153
154 Returns
155 -------
156 Z : ndarray or float
157 A ``(d0, d1, ..., dn)``-shaped array of floating-point samples from
158 the standard normal distribution, or a single such float if
159 no parameters were supplied.
160
161 See Also
162 --------
163 numpy.random.randn
164
165 Availability
166 --------
167 Multiple GPUs, Multiple CPUs
168 """
169
170 if shapeargs == ():
171 return nprandom.randn()
172 result = ndarray(shapeargs, dtype=np.dtype(np.float64))
173 result._thunk.random_normal()
174 return result
175
176
177 def random(shape: Union[tuple[int], None] = None) -> Union[float, ndarray]:
178 """
179 random(size=None)
180
181 Return random floats in the half-open interval [0.0, 1.0).
182
183 See Also
184 --------
185 numpy.random.random
186
187 Availability
188 --------
189 Multiple GPUs, Multiple CPUs
190 """
191 if shape is None:
192 return nprandom.random()
193 result = ndarray(shape, dtype=np.dtype(np.float64))
194 result._thunk.random_uniform()
195 return result
196
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/cunumeric/random/random.py b/cunumeric/random/random.py
--- a/cunumeric/random/random.py
+++ b/cunumeric/random/random.py
@@ -14,7 +14,7 @@
#
from __future__ import annotations
-from typing import TYPE_CHECKING, Any, Union
+from typing import TYPE_CHECKING, Any, Sequence, Union
import numpy as np
import numpy.random as nprandom
@@ -68,10 +68,10 @@
def randint(
- low: Union[int, ndarray],
- high: Union[int, ndarray, None] = None,
- size: Union[int, tuple[int], None] = None,
- dtype: Union[np.dtype[Any], None] = None,
+ low: Union[int, Sequence[int]],
+ high: Union[int, Sequence[int], None] = None,
+ size: Union[int, Sequence[int], None] = None,
+ dtype: Union[np.dtype[Any], type, None] = int,
) -> Union[int, ndarray, npt.NDArray[Any]]:
"""
Return random integers from `low` (inclusive) to `high` (exclusive).
@@ -109,8 +109,14 @@
Multiple GPUs, Multiple CPUs
"""
+ if not isinstance(low, int):
+ raise NotImplementedError("'low' must be an integer")
+ if high is not None and not isinstance(high, int):
+ raise NotImplementedError("'high' must be an integer or None")
+
if size is None:
return nprandom.randint(low=low, high=high, size=size, dtype=dtype)
+
if dtype is not None:
dtype = np.dtype(dtype)
else:
@@ -120,7 +126,7 @@
raise TypeError(
"cunumeric.random.randint must be given an integer dtype"
)
- if not isinstance(size, tuple):
+ if isinstance(size, int):
size = (size,)
result = ndarray(size, dtype=dtype)
if high is None:
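To make the size-handling change concrete, here is a standalone sketch (plain Python, not cunumeric code) contrasting the old `not isinstance(size, tuple)` check with the patched `isinstance(size, int)` check:
```python
# Standalone illustration of the shape normalization fixed by the patch: the
# old check wrapped *any* non-tuple (including lists) in another tuple, which
# produced a nested, invalid shape; the new check only wraps bare ints.
def old_normalize(size):
    if not isinstance(size, tuple):
        size = (size,)
    return size

def new_normalize(size):
    if isinstance(size, int):
        size = (size,)
    return size

print(old_normalize([10, 10]))  # ([10, 10],) -> nested shape, breaks ndarray(...)
print(new_normalize([10, 10]))  # [10, 10]    -> passed through as a valid shape
print(new_normalize(10))        # (10,)
```
Together with the explicit `low`/`high` type checks, this lets the `size=[10]` and `size=[10, 10]` calls from the issue behave as they do in NumPy.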
|
{"golden_diff": "diff --git a/cunumeric/random/random.py b/cunumeric/random/random.py\n--- a/cunumeric/random/random.py\n+++ b/cunumeric/random/random.py\n@@ -14,7 +14,7 @@\n #\n from __future__ import annotations\n \n-from typing import TYPE_CHECKING, Any, Union\n+from typing import TYPE_CHECKING, Any, Sequence, Union\n \n import numpy as np\n import numpy.random as nprandom\n@@ -68,10 +68,10 @@\n \n \n def randint(\n- low: Union[int, ndarray],\n- high: Union[int, ndarray, None] = None,\n- size: Union[int, tuple[int], None] = None,\n- dtype: Union[np.dtype[Any], None] = None,\n+ low: Union[int, Sequence[int]],\n+ high: Union[int, Sequence[int], None] = None,\n+ size: Union[int, Sequence[int], None] = None,\n+ dtype: Union[np.dtype[Any], type, None] = int,\n ) -> Union[int, ndarray, npt.NDArray[Any]]:\n \"\"\"\n Return random integers from `low` (inclusive) to `high` (exclusive).\n@@ -109,8 +109,14 @@\n Multiple GPUs, Multiple CPUs\n \"\"\"\n \n+ if not isinstance(low, int):\n+ raise NotImplementedError(\"'low' must be an integer\")\n+ if high is not None and not isinstance(high, int):\n+ raise NotImplementedError(\"'high' must be an integer or None\")\n+\n if size is None:\n return nprandom.randint(low=low, high=high, size=size, dtype=dtype)\n+\n if dtype is not None:\n dtype = np.dtype(dtype)\n else:\n@@ -120,7 +126,7 @@\n raise TypeError(\n \"cunumeric.random.randint must be given an integer dtype\"\n )\n- if not isinstance(size, tuple):\n+ if isinstance(size, int):\n size = (size,)\n result = ndarray(size, dtype=dtype)\n if high is None:\n", "issue": "random.randint() multiple errors\nCalls result in errors when minimal input is provided (no error if dtype is provided).\r\n```\r\n>>> import numpy as np\r\n>>> import cunumeric as cn\r\n>>> np.random.randint(1000)\r\n293\r\n>>> cn.random.randint(1000)\r\nTraceback (most recent call last):\r\n File \"<console>\", line 1, in <module>\r\n File \"/rkarimi_l/Work/cunumeric/cunumeric/cunumeric/coverage.py\", line 106, in wrapper\r\n return func(*args, **kwargs)\r\n File \"/rkarimi_l/Work/cunumeric/cunumeric/cunumeric/random/random.py\", line 113, in randint\r\n return nprandom.randint(low=low, high=high, size=size, dtype=dtype)\r\n File \"mtrand.pyx\", line 764, in numpy.random.mtrand.RandomState.randint\r\nTypeError: Unsupported dtype dtype('float64') for randint\r\n>>> cn.random.randint(1000,dtype=np.int32)\r\n172\r\n```\r\nCalls result in errors when size is provided in array form (or if ND) (this happens regardless whether or not dtype is provided):\r\n```\r\n>>> cn.random.randint(1000,size=10)\r\narray([317, 787, 968, 969, 10, 508, 778, 387, 976, 166])\r\n>>> np.random.randint(1000,size=[10])\r\narray([ 79, 325, 406, 740, 810, 830, 551, 640, 596, 857])\r\n>>> cn.random.randint(1000,size=[10])\r\nTraceback (most recent call last):\r\n File \"<console>\", line 1, in <module>\r\n File \"/rkarimi_l/Work/cunumeric/cunumeric/cunumeric/coverage.py\", line 106, in wrapper\r\n return func(*args, **kwargs)\r\n File \"/rkarimi_l/Work/cunumeric/cunumeric/cunumeric/random/random.py\", line 125, in randint\r\n result = ndarray(size, dtype=dtype)\r\n File \"/rkarimi_l/Work/cunumeric/cunumeric/cunumeric/coverage.py\", line 106, in wrapper\r\n return func(*args, **kwargs)\r\n File \"/rkarimi_l/Work/cunumeric/cunumeric/cunumeric/array.py\", line 190, in __init__\r\n self._thunk = runtime.create_empty_thunk(shape, dtype, inputs)\r\n File \"/rkarimi_l/Work/cunumeric/cunumeric/cunumeric/runtime.py\", line 499, in create_empty_thunk\r\n 
self.is_eager_shape(computed_shape)\r\n File \"/rkarimi_l/Work/cunumeric/cunumeric/cunumeric/runtime.py\", line 528, in is_eager_shape\r\n return volume <= self.max_eager_volume\r\nTypeError: '<=' not supported between instances of 'list' and 'int'\r\n>>> np.random.randint(1000,size=[10,10])\r\narray([[ 33, 401, 305, 230, 610, 912, 661, 89, 245, 35],\r\n [639, 985, 902, 324, 270, 145, 121, 947, 817, 175],\r\n [458, 623, 160, 994, 263, 824, 304, 94, 492, 552],\r\n [907, 607, 16, 761, 323, 479, 122, 954, 749, 254],\r\n [146, 627, 161, 855, 573, 794, 731, 192, 988, 999],\r\n [628, 582, 815, 592, 765, 69, 416, 445, 339, 756],\r\n [504, 332, 910, 478, 14, 686, 77, 527, 186, 545],\r\n [353, 432, 38, 32, 229, 129, 248, 992, 292, 49],\r\n [380, 835, 754, 874, 136, 167, 497, 984, 951, 353],\r\n [164, 512, 899, 515, 405, 929, 268, 491, 542, 664]])\r\n>>> cn.random.randint(1000,size=[10,10])\r\nTraceback (most recent call last):\r\n File \"<console>\", line 1, in <module>\r\n File \"/rkarimi_l/Work/cunumeric/cunumeric/cunumeric/coverage.py\", line 106, in wrapper\r\n return func(*args, **kwargs)\r\n File \"/rkarimi_l/Work/cunumeric/cunumeric/cunumeric/random/random.py\", line 125, in randint\r\n result = ndarray(size, dtype=dtype)\r\n File \"/rkarimi_l/Work/cunumeric/cunumeric/cunumeric/coverage.py\", line 106, in wrapper\r\n return func(*args, **kwargs)\r\n File \"/rkarimi_l/Work/cunumeric/cunumeric/cunumeric/array.py\", line 190, in __init__\r\n self._thunk = runtime.create_empty_thunk(shape, dtype, inputs)\r\n File \"/rkarimi_l/Work/cunumeric/cunumeric/cunumeric/runtime.py\", line 499, in create_empty_thunk\r\n self.is_eager_shape(computed_shape)\r\n File \"/rkarimi_l/Work/cunumeric/cunumeric/cunumeric/runtime.py\", line 528, in is_eager_shape\r\n return volume <= self.max_eager_volume\r\nTypeError: '<=' not supported between instances of 'list' and 'int'\r\n```\r\n\n", "before_files": [{"content": "# Copyright 2021-2022 NVIDIA Corporation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nfrom __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Any, Union\n\nimport numpy as np\nimport numpy.random as nprandom\nfrom cunumeric.array import ndarray\nfrom cunumeric.runtime import runtime\n\nif TYPE_CHECKING:\n import numpy.typing as npt\n\n\ndef seed(init: Union[int, None] = None) -> None:\n if init is None:\n init = 0\n runtime.set_next_random_epoch(int(init))\n\n\ndef rand(*shapeargs: int) -> Union[float, ndarray]:\n \"\"\"\n rand(d0, d1, ..., dn)\n\n Random values in a given shape.\n\n Create an array of the given shape and populate it with random samples from\n a uniform distribution over ``[0, 1)``.\n\n Parameters\n ----------\n d0, d1, ..., dn : int, optional\n The dimensions of the returned array, must be non-negative.\n If no argument is given a single Python float is returned.\n\n Returns\n -------\n out : ndarray, shape ``(d0, d1, ..., dn)``\n Random values.\n\n See Also\n --------\n numpy.random.rand\n\n Availability\n --------\n Multiple GPUs, Multiple CPUs\n \"\"\"\n\n if 
shapeargs == ():\n return nprandom.rand()\n result = ndarray(shapeargs, dtype=np.dtype(np.float64))\n result._thunk.random_uniform()\n return result\n\n\ndef randint(\n low: Union[int, ndarray],\n high: Union[int, ndarray, None] = None,\n size: Union[int, tuple[int], None] = None,\n dtype: Union[np.dtype[Any], None] = None,\n) -> Union[int, ndarray, npt.NDArray[Any]]:\n \"\"\"\n Return random integers from `low` (inclusive) to `high` (exclusive).\n\n Parameters\n ----------\n low : int or array_like[int]\n Lowest (signed) integers to be drawn from the distribution (unless\n ``high=None``, in which case this parameter is one above the\n *highest* such integer).\n high : int or array_like[int], optional\n If provided, one above the largest (signed) integer to be drawn\n from the distribution (see above for behavior if ``high=None``).\n If array-like, must contain integer values\n size : int or tuple[int], optional\n Output shape. If the given shape is, e.g., ``(m, n, k)``, then\n ``m * n * k`` samples are drawn. Default is None, in which case a\n single value is returned.\n dtype : data-type, optional\n Desired dtype of the result. Byteorder must be native.\n The default value is int.\n\n Returns\n -------\n out : int or ndarray[int]\n `size`-shaped array of random integers from the appropriate\n distribution, or a single such random int if `size` not provided.\n\n See Also\n --------\n numpy.random.randint\n\n Availability\n --------\n Multiple GPUs, Multiple CPUs\n \"\"\"\n\n if size is None:\n return nprandom.randint(low=low, high=high, size=size, dtype=dtype)\n if dtype is not None:\n dtype = np.dtype(dtype)\n else:\n dtype = np.dtype(np.int64)\n # TODO: randint must support unsigned integer dtypes as well\n if dtype.kind != \"i\":\n raise TypeError(\n \"cunumeric.random.randint must be given an integer dtype\"\n )\n if not isinstance(size, tuple):\n size = (size,)\n result = ndarray(size, dtype=dtype)\n if high is None:\n if low <= 0:\n raise ValueError(\n \"bound must be strictly greater than 0 for randint\"\n )\n result._thunk.random_integer(low=0, high=low)\n else:\n if low >= high:\n raise ValueError(\n \"'high' bound must be strictly greater than 'low' \"\n \"bound for randint\"\n )\n result._thunk.random_integer(low=low, high=high)\n return result\n\n\ndef randn(*shapeargs: int) -> Union[float, ndarray]:\n \"\"\"\n randn(d0, d1, ..., dn)\n\n Return a sample (or samples) from the \"standard normal\" distribution.\n\n Parameters\n ----------\n d0, d1, ..., dn : int, optional\n The dimensions of the returned array, must be non-negative.\n If no argument is given a single Python float is returned.\n\n Returns\n -------\n Z : ndarray or float\n A ``(d0, d1, ..., dn)``-shaped array of floating-point samples from\n the standard normal distribution, or a single such float if\n no parameters were supplied.\n\n See Also\n --------\n numpy.random.randn\n\n Availability\n --------\n Multiple GPUs, Multiple CPUs\n \"\"\"\n\n if shapeargs == ():\n return nprandom.randn()\n result = ndarray(shapeargs, dtype=np.dtype(np.float64))\n result._thunk.random_normal()\n return result\n\n\ndef random(shape: Union[tuple[int], None] = None) -> Union[float, ndarray]:\n \"\"\"\n random(size=None)\n\n Return random floats in the half-open interval [0.0, 1.0).\n\n See Also\n --------\n numpy.random.random\n\n Availability\n --------\n Multiple GPUs, Multiple CPUs\n \"\"\"\n if shape is None:\n return nprandom.random()\n result = ndarray(shape, dtype=np.dtype(np.float64))\n result._thunk.random_uniform()\n 
return result\n", "path": "cunumeric/random/random.py"}], "after_files": [{"content": "# Copyright 2021-2022 NVIDIA Corporation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nfrom __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Any, Sequence, Union\n\nimport numpy as np\nimport numpy.random as nprandom\nfrom cunumeric.array import ndarray\nfrom cunumeric.runtime import runtime\n\nif TYPE_CHECKING:\n import numpy.typing as npt\n\n\ndef seed(init: Union[int, None] = None) -> None:\n if init is None:\n init = 0\n runtime.set_next_random_epoch(int(init))\n\n\ndef rand(*shapeargs: int) -> Union[float, ndarray]:\n \"\"\"\n rand(d0, d1, ..., dn)\n\n Random values in a given shape.\n\n Create an array of the given shape and populate it with random samples from\n a uniform distribution over ``[0, 1)``.\n\n Parameters\n ----------\n d0, d1, ..., dn : int, optional\n The dimensions of the returned array, must be non-negative.\n If no argument is given a single Python float is returned.\n\n Returns\n -------\n out : ndarray, shape ``(d0, d1, ..., dn)``\n Random values.\n\n See Also\n --------\n numpy.random.rand\n\n Availability\n --------\n Multiple GPUs, Multiple CPUs\n \"\"\"\n\n if shapeargs == ():\n return nprandom.rand()\n result = ndarray(shapeargs, dtype=np.dtype(np.float64))\n result._thunk.random_uniform()\n return result\n\n\ndef randint(\n low: Union[int, Sequence[int]],\n high: Union[int, Sequence[int], None] = None,\n size: Union[int, Sequence[int], None] = None,\n dtype: Union[np.dtype[Any], type, None] = int,\n) -> Union[int, ndarray, npt.NDArray[Any]]:\n \"\"\"\n Return random integers from `low` (inclusive) to `high` (exclusive).\n\n Parameters\n ----------\n low : int or array_like[int]\n Lowest (signed) integers to be drawn from the distribution (unless\n ``high=None``, in which case this parameter is one above the\n *highest* such integer).\n high : int or array_like[int], optional\n If provided, one above the largest (signed) integer to be drawn\n from the distribution (see above for behavior if ``high=None``).\n If array-like, must contain integer values\n size : int or tuple[int], optional\n Output shape. If the given shape is, e.g., ``(m, n, k)``, then\n ``m * n * k`` samples are drawn. Default is None, in which case a\n single value is returned.\n dtype : data-type, optional\n Desired dtype of the result. 
Byteorder must be native.\n The default value is int.\n\n Returns\n -------\n out : int or ndarray[int]\n `size`-shaped array of random integers from the appropriate\n distribution, or a single such random int if `size` not provided.\n\n See Also\n --------\n numpy.random.randint\n\n Availability\n --------\n Multiple GPUs, Multiple CPUs\n \"\"\"\n\n if not isinstance(low, int):\n raise NotImplementedError(\"'low' must be an integer\")\n if high is not None and not isinstance(high, int):\n raise NotImplementedError(\"'high' must be an integer or None\")\n\n if size is None:\n return nprandom.randint(low=low, high=high, size=size, dtype=dtype)\n\n if dtype is not None:\n dtype = np.dtype(dtype)\n else:\n dtype = np.dtype(np.int64)\n # TODO: randint must support unsigned integer dtypes as well\n if dtype.kind != \"i\":\n raise TypeError(\n \"cunumeric.random.randint must be given an integer dtype\"\n )\n if isinstance(size, int):\n size = (size,)\n result = ndarray(size, dtype=dtype)\n if high is None:\n if low <= 0:\n raise ValueError(\n \"bound must be strictly greater than 0 for randint\"\n )\n result._thunk.random_integer(low=0, high=low)\n else:\n if low >= high:\n raise ValueError(\n \"'high' bound must be strictly greater than 'low' \"\n \"bound for randint\"\n )\n result._thunk.random_integer(low=low, high=high)\n return result\n\n\ndef randn(*shapeargs: int) -> Union[float, ndarray]:\n \"\"\"\n randn(d0, d1, ..., dn)\n\n Return a sample (or samples) from the \"standard normal\" distribution.\n\n Parameters\n ----------\n d0, d1, ..., dn : int, optional\n The dimensions of the returned array, must be non-negative.\n If no argument is given a single Python float is returned.\n\n Returns\n -------\n Z : ndarray or float\n A ``(d0, d1, ..., dn)``-shaped array of floating-point samples from\n the standard normal distribution, or a single such float if\n no parameters were supplied.\n\n See Also\n --------\n numpy.random.randn\n\n Availability\n --------\n Multiple GPUs, Multiple CPUs\n \"\"\"\n\n if shapeargs == ():\n return nprandom.randn()\n result = ndarray(shapeargs, dtype=np.dtype(np.float64))\n result._thunk.random_normal()\n return result\n\n\ndef random(shape: Union[tuple[int], None] = None) -> Union[float, ndarray]:\n \"\"\"\n random(size=None)\n\n Return random floats in the half-open interval [0.0, 1.0).\n\n See Also\n --------\n numpy.random.random\n\n Availability\n --------\n Multiple GPUs, Multiple CPUs\n \"\"\"\n if shape is None:\n return nprandom.random()\n result = ndarray(shape, dtype=np.dtype(np.float64))\n result._thunk.random_uniform()\n return result\n", "path": "cunumeric/random/random.py"}]}
| 3,636 | 447 |
gh_patches_debug_23811
|
rasdani/github-patches
|
git_diff
|
strawberry-graphql__strawberry-2137
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Subclassing `Info`
Subclassing `strawberry.types.Info` raises a warning when using it in resolvers.
`"Argument name-based matching of 'info' is deprecated and will be removed in v1.0."`
But it gets injected into resolvers without any problems; in my use case I only change `Info.context` to provide my custom `Context` class, and the original `Info` gets injected: `strawberry.types.info.Info`
## System Information
- Strawberry version: 0.126.2
## Additional Context
Repository with bug reproduction: https://gitlab.com/ThirVondukr/strawberry-info-warning
Strawberry code where warning originates from: https://github.com/strawberry-graphql/strawberry/blob/main/strawberry/types/fields/resolver.py#L126
--- END ISSUE ---
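A sketch of the reproduction described above (assuming strawberry is installed; `MyInfo` is an illustrative name): annotating a resolver argument with a subclass of `Info` is enough to trigger the deprecation warning, even though the info object is still injected.
```python
# Hypothetical reproduction sketch for the issue: a resolver annotated with an
# Info subclass. On the affected version, the name-based-matching
# DeprecationWarning for 'info' is emitted while the arguments are processed,
# although resolution itself still works.
import strawberry
from strawberry.types import Info


class MyInfo(Info):  # in the real use case only `context` is overridden
    pass


@strawberry.type
class Query:
    @strawberry.field
    def info_type(self, info: MyInfo) -> str:
        return type(info).__name__


schema = strawberry.Schema(query=Query)
```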
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `strawberry/types/fields/resolver.py`
Content:
```
1 from __future__ import annotations as _
2
3 import builtins
4 import inspect
5 import sys
6 import warnings
7 from inspect import isasyncgenfunction, iscoroutinefunction
8 from typing import ( # type: ignore[attr-defined]
9 Any,
10 Callable,
11 Dict,
12 ForwardRef,
13 Generic,
14 List,
15 Mapping,
16 NamedTuple,
17 Optional,
18 Tuple,
19 Type,
20 TypeVar,
21 Union,
22 _eval_type,
23 )
24
25 from typing_extensions import Annotated, Protocol, get_args, get_origin
26
27 from strawberry.annotation import StrawberryAnnotation
28 from strawberry.arguments import StrawberryArgument
29 from strawberry.exceptions import MissingArgumentsAnnotationsError
30 from strawberry.type import StrawberryType
31 from strawberry.types.info import Info
32 from strawberry.utils.cached_property import cached_property
33
34
35 class Parameter(inspect.Parameter):
36 def __hash__(self):
37 """Override to exclude default value from hash.
38
39 This adds compatibility for using unhashable default values in resolvers such as
40 list and dict. The present use-case is limited to analyzing parameters from one
41 resolver. Therefore, the name, kind, and annotation combination are guaranteed
42 to be unique since two arguments cannot have the same name in a callable.
43
44 Furthermore, even though it is not currently a use-case to collect parameters
45 from different resolvers, the likelihood of collision from having the same hash
46 value but different defaults is mitigated by Python invoking the
47 :py:meth:`__eq__` method if two items have the same hash. See the verification
48 of this behavior in the `test_parameter_hash_collision` test.
49 """
50 return hash((self.name, self.kind, self.annotation))
51
52
53 class Signature(inspect.Signature):
54
55 _parameter_cls = Parameter
56
57
58 class ReservedParameterSpecification(Protocol):
59 def find(
60 self, parameters: Tuple[inspect.Parameter, ...], resolver: StrawberryResolver
61 ) -> Optional[inspect.Parameter]:
62 """Finds the reserved parameter from ``parameters``."""
63
64
65 class ReservedName(NamedTuple):
66 name: str
67
68 def find(
69 self, parameters: Tuple[inspect.Parameter, ...], _: StrawberryResolver
70 ) -> Optional[inspect.Parameter]:
71 return next((p for p in parameters if p.name == self.name), None)
72
73
74 class ReservedNameBoundParameter(NamedTuple):
75 name: str
76
77 def find(
78 self, parameters: Tuple[inspect.Parameter, ...], _: StrawberryResolver
79 ) -> Optional[inspect.Parameter]:
80 if parameters: # Add compatibility for resolvers with no arguments
81 first_parameter = parameters[0]
82 return first_parameter if first_parameter.name == self.name else None
83 else:
84 return None
85
86
87 class ReservedType(NamedTuple):
88 """Define a reserved type by name or by type.
89
90 To preserve backwards-comaptibility, if an annotation was defined but does not match
91 :attr:`type`, then the name is used as a fallback.
92 """
93
94 name: str
95 type: Type
96
97 def find(
98 self, parameters: Tuple[inspect.Parameter, ...], resolver: StrawberryResolver
99 ) -> Optional[inspect.Parameter]:
100 for parameter in parameters:
101 annotation = parameter.annotation
102 try:
103 resolved_annotation = _eval_type(
104 ForwardRef(annotation)
105 if isinstance(annotation, str)
106 else annotation,
107 resolver._namespace,
108 None,
109 )
110 resolver._resolved_annotations[parameter] = resolved_annotation
111 except NameError:
112 # Type-annotation could not be resolved
113 resolved_annotation = annotation
114 if self.is_reserved_type(resolved_annotation):
115 return parameter
116
117 # Fallback to matching by name
118 reserved_name = ReservedName(name=self.name).find(parameters, resolver)
119 if reserved_name:
120 warning = DeprecationWarning(
121 f"Argument name-based matching of '{self.name}' is deprecated and will "
122 "be removed in v1.0. Ensure that reserved arguments are annotated "
123 "their respective types (i.e. use value: 'DirectiveValue[str]' instead "
124 "of 'value: str' and 'info: Info' instead of a plain 'info')."
125 )
126 warnings.warn(warning)
127 return reserved_name
128 else:
129 return None
130
131 def is_reserved_type(self, other: Type) -> bool:
132 if get_origin(other) is Annotated:
133 # Handle annotated arguments such as Private[str] and DirectiveValue[str]
134 return any(isinstance(argument, self.type) for argument in get_args(other))
135 else:
136 # Handle both concrete and generic types (i.e Info, and Info[Any, Any])
137 return other is self.type or get_origin(other) is self.type
138
139
140 SELF_PARAMSPEC = ReservedNameBoundParameter("self")
141 CLS_PARAMSPEC = ReservedNameBoundParameter("cls")
142 ROOT_PARAMSPEC = ReservedName("root")
143 INFO_PARAMSPEC = ReservedType("info", Info)
144
145 T = TypeVar("T")
146
147
148 class StrawberryResolver(Generic[T]):
149
150 RESERVED_PARAMSPEC: Tuple[ReservedParameterSpecification, ...] = (
151 SELF_PARAMSPEC,
152 CLS_PARAMSPEC,
153 ROOT_PARAMSPEC,
154 INFO_PARAMSPEC,
155 )
156
157 def __init__(
158 self,
159 func: Union[Callable[..., T], staticmethod, classmethod],
160 *,
161 description: Optional[str] = None,
162 type_override: Optional[Union[StrawberryType, type]] = None,
163 ):
164 self.wrapped_func = func
165 self._description = description
166 self._type_override = type_override
167 """Specify the type manually instead of calculating from wrapped func
168
169 This is used when creating copies of types w/ generics
170 """
171 self._resolved_annotations: Dict[inspect.Parameter, Any] = {}
172 """Populated during reserved parameter determination.
173
174 Caching resolved annotations this way prevents evaling them repeatedly.
175 """
176
177 # TODO: Use this when doing the actual resolving? How to deal with async resolvers?
178 def __call__(self, *args, **kwargs) -> T:
179 if not callable(self.wrapped_func):
180 raise UncallableResolverError(self)
181 return self.wrapped_func(*args, **kwargs)
182
183 @cached_property
184 def signature(self) -> inspect.Signature:
185 return Signature.from_callable(self._unbound_wrapped_func, follow_wrapped=True)
186
187 @cached_property
188 def reserved_parameters(
189 self,
190 ) -> Dict[ReservedParameterSpecification, Optional[inspect.Parameter]]:
191 """Mapping of reserved parameter specification to parameter."""
192 parameters = tuple(self.signature.parameters.values())
193 return {spec: spec.find(parameters, self) for spec in self.RESERVED_PARAMSPEC}
194
195 @cached_property
196 def arguments(self) -> List[StrawberryArgument]:
197 """Resolver arguments exposed in the GraphQL Schema."""
198 parameters = self.signature.parameters.values()
199 reserved_parameters = set(self.reserved_parameters.values())
200
201 missing_annotations = set()
202 arguments = []
203 user_parameters = (p for p in parameters if p not in reserved_parameters)
204 for param in user_parameters:
205 annotation = self._resolved_annotations.get(param, param.annotation)
206 if annotation is inspect.Signature.empty:
207 missing_annotations.add(param.name)
208 else:
209 argument = StrawberryArgument(
210 python_name=param.name,
211 graphql_name=None,
212 type_annotation=StrawberryAnnotation(
213 annotation=annotation, namespace=self._namespace
214 ),
215 default=param.default,
216 )
217 arguments.append(argument)
218 if missing_annotations:
219 raise MissingArgumentsAnnotationsError(self.name, missing_annotations)
220 return arguments
221
222 @cached_property
223 def info_parameter(self) -> Optional[inspect.Parameter]:
224 return self.reserved_parameters.get(INFO_PARAMSPEC)
225
226 @cached_property
227 def root_parameter(self) -> Optional[inspect.Parameter]:
228 return self.reserved_parameters.get(ROOT_PARAMSPEC)
229
230 @cached_property
231 def self_parameter(self) -> Optional[inspect.Parameter]:
232 return self.reserved_parameters.get(SELF_PARAMSPEC)
233
234 @cached_property
235 def name(self) -> str:
236 # TODO: What to do if resolver is a lambda?
237 return self._unbound_wrapped_func.__name__
238
239 @cached_property
240 def annotations(self) -> Dict[str, object]:
241 """Annotations for the resolver.
242
243 Does not include special args defined in `RESERVED_PARAMSPEC` (e.g. self, root,
244 info)
245 """
246 reserved_parameters = self.reserved_parameters
247 reserved_names = {p.name for p in reserved_parameters.values() if p is not None}
248
249 annotations = self._unbound_wrapped_func.__annotations__
250 annotations = {
251 name: annotation
252 for name, annotation in annotations.items()
253 if name not in reserved_names
254 }
255
256 return annotations
257
258 @cached_property
259 def type_annotation(self) -> Optional[StrawberryAnnotation]:
260 return_annotation = self.signature.return_annotation
261 if return_annotation is inspect.Signature.empty:
262 return None
263 else:
264 type_annotation = StrawberryAnnotation(
265 annotation=return_annotation, namespace=self._namespace
266 )
267 return type_annotation
268
269 @property
270 def type(self) -> Optional[Union[StrawberryType, type]]:
271 if self._type_override:
272 return self._type_override
273 if self.type_annotation is None:
274 return None
275 return self.type_annotation.resolve()
276
277 @cached_property
278 def is_async(self) -> bool:
279 return iscoroutinefunction(self._unbound_wrapped_func) or isasyncgenfunction(
280 self._unbound_wrapped_func
281 )
282
283 def copy_with(
284 self, type_var_map: Mapping[TypeVar, Union[StrawberryType, builtins.type]]
285 ) -> StrawberryResolver:
286 type_override = None
287
288 if self.type:
289 if isinstance(self.type, StrawberryType):
290 type_override = self.type.copy_with(type_var_map)
291 else:
292 type_override = self.type._type_definition.copy_with( # type: ignore
293 type_var_map,
294 )
295
296 return type(self)(
297 func=self.wrapped_func,
298 description=self._description,
299 type_override=type_override,
300 )
301
302 @cached_property
303 def _namespace(self) -> Dict[str, Any]:
304 return sys.modules[self._unbound_wrapped_func.__module__].__dict__
305
306 @cached_property
307 def _unbound_wrapped_func(self) -> Callable[..., T]:
308 if isinstance(self.wrapped_func, (staticmethod, classmethod)):
309 return self.wrapped_func.__func__
310
311 return self.wrapped_func
312
313
314 class UncallableResolverError(Exception):
315 def __init__(self, resolver: "StrawberryResolver"):
316 message = (
317 f"Attempted to call resolver {resolver} with uncallable function "
318 f"{resolver.wrapped_func}"
319 )
320 super().__init__(message)
321
322
323 __all__ = ["StrawberryResolver"]
324
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/strawberry/types/fields/resolver.py b/strawberry/types/fields/resolver.py
--- a/strawberry/types/fields/resolver.py
+++ b/strawberry/types/fields/resolver.py
@@ -20,6 +20,7 @@
TypeVar,
Union,
_eval_type,
+ cast,
)
from typing_extensions import Annotated, Protocol, get_args, get_origin
@@ -129,12 +130,17 @@
return None
def is_reserved_type(self, other: Type) -> bool:
- if get_origin(other) is Annotated:
+ origin = cast(type, get_origin(other)) or other
+ if origin is Annotated:
# Handle annotated arguments such as Private[str] and DirectiveValue[str]
return any(isinstance(argument, self.type) for argument in get_args(other))
else:
# Handle both concrete and generic types (i.e Info, and Info[Any, Any])
- return other is self.type or get_origin(other) is self.type
+ return (
+ issubclass(origin, self.type)
+ if isinstance(origin, type)
+ else origin is self.type
+ )
SELF_PARAMSPEC = ReservedNameBoundParameter("self")
|
{"golden_diff": "diff --git a/strawberry/types/fields/resolver.py b/strawberry/types/fields/resolver.py\n--- a/strawberry/types/fields/resolver.py\n+++ b/strawberry/types/fields/resolver.py\n@@ -20,6 +20,7 @@\n TypeVar,\n Union,\n _eval_type,\n+ cast,\n )\n \n from typing_extensions import Annotated, Protocol, get_args, get_origin\n@@ -129,12 +130,17 @@\n return None\n \n def is_reserved_type(self, other: Type) -> bool:\n- if get_origin(other) is Annotated:\n+ origin = cast(type, get_origin(other)) or other\n+ if origin is Annotated:\n # Handle annotated arguments such as Private[str] and DirectiveValue[str]\n return any(isinstance(argument, self.type) for argument in get_args(other))\n else:\n # Handle both concrete and generic types (i.e Info, and Info[Any, Any])\n- return other is self.type or get_origin(other) is self.type\n+ return (\n+ issubclass(origin, self.type)\n+ if isinstance(origin, type)\n+ else origin is self.type\n+ )\n \n \n SELF_PARAMSPEC = ReservedNameBoundParameter(\"self\")\n", "issue": "Subclassing `Info`\nSubclassing `strawberry.types.Info` raises a warning when using it in resolvers.\r\n`\"Argument name-based matching of 'info' is deprecated and will be removed in v1.0.\"`\r\nBut it gets injected into resolvers without any problems, in my use case i only change `Info.context` to provide my custom `Context` class and original `Info` gets injected: `strawberry.types.info.Info`\r\n\r\n## System Information\r\n\r\n - Strawberry version: 0.126.2\r\n\r\n## Additional Context\r\n\r\nRepository with bug reproduction: https://gitlab.com/ThirVondukr/strawberry-info-warning\r\nStrawberry code where warning originates from: https://github.com/strawberry-graphql/strawberry/blob/main/strawberry/types/fields/resolver.py#L126\r\n\nSubclassing `Info`\nSubclassing `strawberry.types.Info` raises a warning when using it in resolvers.\r\n`\"Argument name-based matching of 'info' is deprecated and will be removed in v1.0.\"`\r\nBut it gets injected into resolvers without any problems, in my use case i only change `Info.context` to provide my custom `Context` class and original `Info` gets injected: `strawberry.types.info.Info`\r\n\r\n## System Information\r\n\r\n - Strawberry version: 0.126.2\r\n\r\n## Additional Context\r\n\r\nRepository with bug reproduction: https://gitlab.com/ThirVondukr/strawberry-info-warning\r\nStrawberry code where warning originates from: https://github.com/strawberry-graphql/strawberry/blob/main/strawberry/types/fields/resolver.py#L126\r\n\n", "before_files": [{"content": "from __future__ import annotations as _\n\nimport builtins\nimport inspect\nimport sys\nimport warnings\nfrom inspect import isasyncgenfunction, iscoroutinefunction\nfrom typing import ( # type: ignore[attr-defined]\n Any,\n Callable,\n Dict,\n ForwardRef,\n Generic,\n List,\n Mapping,\n NamedTuple,\n Optional,\n Tuple,\n Type,\n TypeVar,\n Union,\n _eval_type,\n)\n\nfrom typing_extensions import Annotated, Protocol, get_args, get_origin\n\nfrom strawberry.annotation import StrawberryAnnotation\nfrom strawberry.arguments import StrawberryArgument\nfrom strawberry.exceptions import MissingArgumentsAnnotationsError\nfrom strawberry.type import StrawberryType\nfrom strawberry.types.info import Info\nfrom strawberry.utils.cached_property import cached_property\n\n\nclass Parameter(inspect.Parameter):\n def __hash__(self):\n \"\"\"Override to exclude default value from hash.\n\n This adds compatibility for using unhashable default values in resolvers such as\n list and dict. 
The present use-case is limited to analyzing parameters from one\n resolver. Therefore, the name, kind, and annotation combination are guaranteed\n to be unique since two arguments cannot have the same name in a callable.\n\n Furthermore, even though it is not currently a use-case to collect parameters\n from different resolvers, the likelihood of collision from having the same hash\n value but different defaults is mitigated by Python invoking the\n :py:meth:`__eq__` method if two items have the same hash. See the verification\n of this behavior in the `test_parameter_hash_collision` test.\n \"\"\"\n return hash((self.name, self.kind, self.annotation))\n\n\nclass Signature(inspect.Signature):\n\n _parameter_cls = Parameter\n\n\nclass ReservedParameterSpecification(Protocol):\n def find(\n self, parameters: Tuple[inspect.Parameter, ...], resolver: StrawberryResolver\n ) -> Optional[inspect.Parameter]:\n \"\"\"Finds the reserved parameter from ``parameters``.\"\"\"\n\n\nclass ReservedName(NamedTuple):\n name: str\n\n def find(\n self, parameters: Tuple[inspect.Parameter, ...], _: StrawberryResolver\n ) -> Optional[inspect.Parameter]:\n return next((p for p in parameters if p.name == self.name), None)\n\n\nclass ReservedNameBoundParameter(NamedTuple):\n name: str\n\n def find(\n self, parameters: Tuple[inspect.Parameter, ...], _: StrawberryResolver\n ) -> Optional[inspect.Parameter]:\n if parameters: # Add compatibility for resolvers with no arguments\n first_parameter = parameters[0]\n return first_parameter if first_parameter.name == self.name else None\n else:\n return None\n\n\nclass ReservedType(NamedTuple):\n \"\"\"Define a reserved type by name or by type.\n\n To preserve backwards-comaptibility, if an annotation was defined but does not match\n :attr:`type`, then the name is used as a fallback.\n \"\"\"\n\n name: str\n type: Type\n\n def find(\n self, parameters: Tuple[inspect.Parameter, ...], resolver: StrawberryResolver\n ) -> Optional[inspect.Parameter]:\n for parameter in parameters:\n annotation = parameter.annotation\n try:\n resolved_annotation = _eval_type(\n ForwardRef(annotation)\n if isinstance(annotation, str)\n else annotation,\n resolver._namespace,\n None,\n )\n resolver._resolved_annotations[parameter] = resolved_annotation\n except NameError:\n # Type-annotation could not be resolved\n resolved_annotation = annotation\n if self.is_reserved_type(resolved_annotation):\n return parameter\n\n # Fallback to matching by name\n reserved_name = ReservedName(name=self.name).find(parameters, resolver)\n if reserved_name:\n warning = DeprecationWarning(\n f\"Argument name-based matching of '{self.name}' is deprecated and will \"\n \"be removed in v1.0. Ensure that reserved arguments are annotated \"\n \"their respective types (i.e. 
use value: 'DirectiveValue[str]' instead \"\n \"of 'value: str' and 'info: Info' instead of a plain 'info').\"\n )\n warnings.warn(warning)\n return reserved_name\n else:\n return None\n\n def is_reserved_type(self, other: Type) -> bool:\n if get_origin(other) is Annotated:\n # Handle annotated arguments such as Private[str] and DirectiveValue[str]\n return any(isinstance(argument, self.type) for argument in get_args(other))\n else:\n # Handle both concrete and generic types (i.e Info, and Info[Any, Any])\n return other is self.type or get_origin(other) is self.type\n\n\nSELF_PARAMSPEC = ReservedNameBoundParameter(\"self\")\nCLS_PARAMSPEC = ReservedNameBoundParameter(\"cls\")\nROOT_PARAMSPEC = ReservedName(\"root\")\nINFO_PARAMSPEC = ReservedType(\"info\", Info)\n\nT = TypeVar(\"T\")\n\n\nclass StrawberryResolver(Generic[T]):\n\n RESERVED_PARAMSPEC: Tuple[ReservedParameterSpecification, ...] = (\n SELF_PARAMSPEC,\n CLS_PARAMSPEC,\n ROOT_PARAMSPEC,\n INFO_PARAMSPEC,\n )\n\n def __init__(\n self,\n func: Union[Callable[..., T], staticmethod, classmethod],\n *,\n description: Optional[str] = None,\n type_override: Optional[Union[StrawberryType, type]] = None,\n ):\n self.wrapped_func = func\n self._description = description\n self._type_override = type_override\n \"\"\"Specify the type manually instead of calculating from wrapped func\n\n This is used when creating copies of types w/ generics\n \"\"\"\n self._resolved_annotations: Dict[inspect.Parameter, Any] = {}\n \"\"\"Populated during reserved parameter determination.\n\n Caching resolved annotations this way prevents evaling them repeatedly.\n \"\"\"\n\n # TODO: Use this when doing the actual resolving? How to deal with async resolvers?\n def __call__(self, *args, **kwargs) -> T:\n if not callable(self.wrapped_func):\n raise UncallableResolverError(self)\n return self.wrapped_func(*args, **kwargs)\n\n @cached_property\n def signature(self) -> inspect.Signature:\n return Signature.from_callable(self._unbound_wrapped_func, follow_wrapped=True)\n\n @cached_property\n def reserved_parameters(\n self,\n ) -> Dict[ReservedParameterSpecification, Optional[inspect.Parameter]]:\n \"\"\"Mapping of reserved parameter specification to parameter.\"\"\"\n parameters = tuple(self.signature.parameters.values())\n return {spec: spec.find(parameters, self) for spec in self.RESERVED_PARAMSPEC}\n\n @cached_property\n def arguments(self) -> List[StrawberryArgument]:\n \"\"\"Resolver arguments exposed in the GraphQL Schema.\"\"\"\n parameters = self.signature.parameters.values()\n reserved_parameters = set(self.reserved_parameters.values())\n\n missing_annotations = set()\n arguments = []\n user_parameters = (p for p in parameters if p not in reserved_parameters)\n for param in user_parameters:\n annotation = self._resolved_annotations.get(param, param.annotation)\n if annotation is inspect.Signature.empty:\n missing_annotations.add(param.name)\n else:\n argument = StrawberryArgument(\n python_name=param.name,\n graphql_name=None,\n type_annotation=StrawberryAnnotation(\n annotation=annotation, namespace=self._namespace\n ),\n default=param.default,\n )\n arguments.append(argument)\n if missing_annotations:\n raise MissingArgumentsAnnotationsError(self.name, missing_annotations)\n return arguments\n\n @cached_property\n def info_parameter(self) -> Optional[inspect.Parameter]:\n return self.reserved_parameters.get(INFO_PARAMSPEC)\n\n @cached_property\n def root_parameter(self) -> Optional[inspect.Parameter]:\n return 
self.reserved_parameters.get(ROOT_PARAMSPEC)\n\n @cached_property\n def self_parameter(self) -> Optional[inspect.Parameter]:\n return self.reserved_parameters.get(SELF_PARAMSPEC)\n\n @cached_property\n def name(self) -> str:\n # TODO: What to do if resolver is a lambda?\n return self._unbound_wrapped_func.__name__\n\n @cached_property\n def annotations(self) -> Dict[str, object]:\n \"\"\"Annotations for the resolver.\n\n Does not include special args defined in `RESERVED_PARAMSPEC` (e.g. self, root,\n info)\n \"\"\"\n reserved_parameters = self.reserved_parameters\n reserved_names = {p.name for p in reserved_parameters.values() if p is not None}\n\n annotations = self._unbound_wrapped_func.__annotations__\n annotations = {\n name: annotation\n for name, annotation in annotations.items()\n if name not in reserved_names\n }\n\n return annotations\n\n @cached_property\n def type_annotation(self) -> Optional[StrawberryAnnotation]:\n return_annotation = self.signature.return_annotation\n if return_annotation is inspect.Signature.empty:\n return None\n else:\n type_annotation = StrawberryAnnotation(\n annotation=return_annotation, namespace=self._namespace\n )\n return type_annotation\n\n @property\n def type(self) -> Optional[Union[StrawberryType, type]]:\n if self._type_override:\n return self._type_override\n if self.type_annotation is None:\n return None\n return self.type_annotation.resolve()\n\n @cached_property\n def is_async(self) -> bool:\n return iscoroutinefunction(self._unbound_wrapped_func) or isasyncgenfunction(\n self._unbound_wrapped_func\n )\n\n def copy_with(\n self, type_var_map: Mapping[TypeVar, Union[StrawberryType, builtins.type]]\n ) -> StrawberryResolver:\n type_override = None\n\n if self.type:\n if isinstance(self.type, StrawberryType):\n type_override = self.type.copy_with(type_var_map)\n else:\n type_override = self.type._type_definition.copy_with( # type: ignore\n type_var_map,\n )\n\n return type(self)(\n func=self.wrapped_func,\n description=self._description,\n type_override=type_override,\n )\n\n @cached_property\n def _namespace(self) -> Dict[str, Any]:\n return sys.modules[self._unbound_wrapped_func.__module__].__dict__\n\n @cached_property\n def _unbound_wrapped_func(self) -> Callable[..., T]:\n if isinstance(self.wrapped_func, (staticmethod, classmethod)):\n return self.wrapped_func.__func__\n\n return self.wrapped_func\n\n\nclass UncallableResolverError(Exception):\n def __init__(self, resolver: \"StrawberryResolver\"):\n message = (\n f\"Attempted to call resolver {resolver} with uncallable function \"\n f\"{resolver.wrapped_func}\"\n )\n super().__init__(message)\n\n\n__all__ = [\"StrawberryResolver\"]\n", "path": "strawberry/types/fields/resolver.py"}], "after_files": [{"content": "from __future__ import annotations as _\n\nimport builtins\nimport inspect\nimport sys\nimport warnings\nfrom inspect import isasyncgenfunction, iscoroutinefunction\nfrom typing import ( # type: ignore[attr-defined]\n Any,\n Callable,\n Dict,\n ForwardRef,\n Generic,\n List,\n Mapping,\n NamedTuple,\n Optional,\n Tuple,\n Type,\n TypeVar,\n Union,\n _eval_type,\n cast,\n)\n\nfrom typing_extensions import Annotated, Protocol, get_args, get_origin\n\nfrom strawberry.annotation import StrawberryAnnotation\nfrom strawberry.arguments import StrawberryArgument\nfrom strawberry.exceptions import MissingArgumentsAnnotationsError\nfrom strawberry.type import StrawberryType\nfrom strawberry.types.info import Info\nfrom strawberry.utils.cached_property import cached_property\n\n\nclass 
Parameter(inspect.Parameter):\n def __hash__(self):\n \"\"\"Override to exclude default value from hash.\n\n This adds compatibility for using unhashable default values in resolvers such as\n list and dict. The present use-case is limited to analyzing parameters from one\n resolver. Therefore, the name, kind, and annotation combination are guaranteed\n to be unique since two arguments cannot have the same name in a callable.\n\n Furthermore, even though it is not currently a use-case to collect parameters\n from different resolvers, the likelihood of collision from having the same hash\n value but different defaults is mitigated by Python invoking the\n :py:meth:`__eq__` method if two items have the same hash. See the verification\n of this behavior in the `test_parameter_hash_collision` test.\n \"\"\"\n return hash((self.name, self.kind, self.annotation))\n\n\nclass Signature(inspect.Signature):\n\n _parameter_cls = Parameter\n\n\nclass ReservedParameterSpecification(Protocol):\n def find(\n self, parameters: Tuple[inspect.Parameter, ...], resolver: StrawberryResolver\n ) -> Optional[inspect.Parameter]:\n \"\"\"Finds the reserved parameter from ``parameters``.\"\"\"\n\n\nclass ReservedName(NamedTuple):\n name: str\n\n def find(\n self, parameters: Tuple[inspect.Parameter, ...], _: StrawberryResolver\n ) -> Optional[inspect.Parameter]:\n return next((p for p in parameters if p.name == self.name), None)\n\n\nclass ReservedNameBoundParameter(NamedTuple):\n name: str\n\n def find(\n self, parameters: Tuple[inspect.Parameter, ...], _: StrawberryResolver\n ) -> Optional[inspect.Parameter]:\n if parameters: # Add compatibility for resolvers with no arguments\n first_parameter = parameters[0]\n return first_parameter if first_parameter.name == self.name else None\n else:\n return None\n\n\nclass ReservedType(NamedTuple):\n \"\"\"Define a reserved type by name or by type.\n\n To preserve backwards-comaptibility, if an annotation was defined but does not match\n :attr:`type`, then the name is used as a fallback.\n \"\"\"\n\n name: str\n type: Type\n\n def find(\n self, parameters: Tuple[inspect.Parameter, ...], resolver: StrawberryResolver\n ) -> Optional[inspect.Parameter]:\n for parameter in parameters:\n annotation = parameter.annotation\n try:\n resolved_annotation = _eval_type(\n ForwardRef(annotation)\n if isinstance(annotation, str)\n else annotation,\n resolver._namespace,\n None,\n )\n resolver._resolved_annotations[parameter] = resolved_annotation\n except NameError:\n # Type-annotation could not be resolved\n resolved_annotation = annotation\n if self.is_reserved_type(resolved_annotation):\n return parameter\n\n # Fallback to matching by name\n reserved_name = ReservedName(name=self.name).find(parameters, resolver)\n if reserved_name:\n warning = DeprecationWarning(\n f\"Argument name-based matching of '{self.name}' is deprecated and will \"\n \"be removed in v1.0. Ensure that reserved arguments are annotated \"\n \"their respective types (i.e. 
use value: 'DirectiveValue[str]' instead \"\n \"of 'value: str' and 'info: Info' instead of a plain 'info').\"\n )\n warnings.warn(warning)\n return reserved_name\n else:\n return None\n\n def is_reserved_type(self, other: Type) -> bool:\n origin = cast(type, get_origin(other)) or other\n if origin is Annotated:\n # Handle annotated arguments such as Private[str] and DirectiveValue[str]\n return any(isinstance(argument, self.type) for argument in get_args(other))\n else:\n # Handle both concrete and generic types (i.e Info, and Info[Any, Any])\n return (\n issubclass(origin, self.type)\n if isinstance(origin, type)\n else origin is self.type\n )\n\n\nSELF_PARAMSPEC = ReservedNameBoundParameter(\"self\")\nCLS_PARAMSPEC = ReservedNameBoundParameter(\"cls\")\nROOT_PARAMSPEC = ReservedName(\"root\")\nINFO_PARAMSPEC = ReservedType(\"info\", Info)\n\nT = TypeVar(\"T\")\n\n\nclass StrawberryResolver(Generic[T]):\n\n RESERVED_PARAMSPEC: Tuple[ReservedParameterSpecification, ...] = (\n SELF_PARAMSPEC,\n CLS_PARAMSPEC,\n ROOT_PARAMSPEC,\n INFO_PARAMSPEC,\n )\n\n def __init__(\n self,\n func: Union[Callable[..., T], staticmethod, classmethod],\n *,\n description: Optional[str] = None,\n type_override: Optional[Union[StrawberryType, type]] = None,\n ):\n self.wrapped_func = func\n self._description = description\n self._type_override = type_override\n \"\"\"Specify the type manually instead of calculating from wrapped func\n\n This is used when creating copies of types w/ generics\n \"\"\"\n self._resolved_annotations: Dict[inspect.Parameter, Any] = {}\n \"\"\"Populated during reserved parameter determination.\n\n Caching resolved annotations this way prevents evaling them repeatedly.\n \"\"\"\n\n # TODO: Use this when doing the actual resolving? How to deal with async resolvers?\n def __call__(self, *args, **kwargs) -> T:\n if not callable(self.wrapped_func):\n raise UncallableResolverError(self)\n return self.wrapped_func(*args, **kwargs)\n\n @cached_property\n def signature(self) -> inspect.Signature:\n return Signature.from_callable(self._unbound_wrapped_func, follow_wrapped=True)\n\n @cached_property\n def reserved_parameters(\n self,\n ) -> Dict[ReservedParameterSpecification, Optional[inspect.Parameter]]:\n \"\"\"Mapping of reserved parameter specification to parameter.\"\"\"\n parameters = tuple(self.signature.parameters.values())\n return {spec: spec.find(parameters, self) for spec in self.RESERVED_PARAMSPEC}\n\n @cached_property\n def arguments(self) -> List[StrawberryArgument]:\n \"\"\"Resolver arguments exposed in the GraphQL Schema.\"\"\"\n parameters = self.signature.parameters.values()\n reserved_parameters = set(self.reserved_parameters.values())\n\n missing_annotations = set()\n arguments = []\n user_parameters = (p for p in parameters if p not in reserved_parameters)\n for param in user_parameters:\n annotation = self._resolved_annotations.get(param, param.annotation)\n if annotation is inspect.Signature.empty:\n missing_annotations.add(param.name)\n else:\n argument = StrawberryArgument(\n python_name=param.name,\n graphql_name=None,\n type_annotation=StrawberryAnnotation(\n annotation=annotation, namespace=self._namespace\n ),\n default=param.default,\n )\n arguments.append(argument)\n if missing_annotations:\n raise MissingArgumentsAnnotationsError(self.name, missing_annotations)\n return arguments\n\n @cached_property\n def info_parameter(self) -> Optional[inspect.Parameter]:\n return self.reserved_parameters.get(INFO_PARAMSPEC)\n\n @cached_property\n def root_parameter(self) -> 
Optional[inspect.Parameter]:\n return self.reserved_parameters.get(ROOT_PARAMSPEC)\n\n @cached_property\n def self_parameter(self) -> Optional[inspect.Parameter]:\n return self.reserved_parameters.get(SELF_PARAMSPEC)\n\n @cached_property\n def name(self) -> str:\n # TODO: What to do if resolver is a lambda?\n return self._unbound_wrapped_func.__name__\n\n @cached_property\n def annotations(self) -> Dict[str, object]:\n \"\"\"Annotations for the resolver.\n\n Does not include special args defined in `RESERVED_PARAMSPEC` (e.g. self, root,\n info)\n \"\"\"\n reserved_parameters = self.reserved_parameters\n reserved_names = {p.name for p in reserved_parameters.values() if p is not None}\n\n annotations = self._unbound_wrapped_func.__annotations__\n annotations = {\n name: annotation\n for name, annotation in annotations.items()\n if name not in reserved_names\n }\n\n return annotations\n\n @cached_property\n def type_annotation(self) -> Optional[StrawberryAnnotation]:\n return_annotation = self.signature.return_annotation\n if return_annotation is inspect.Signature.empty:\n return None\n else:\n type_annotation = StrawberryAnnotation(\n annotation=return_annotation, namespace=self._namespace\n )\n return type_annotation\n\n @property\n def type(self) -> Optional[Union[StrawberryType, type]]:\n if self._type_override:\n return self._type_override\n if self.type_annotation is None:\n return None\n return self.type_annotation.resolve()\n\n @cached_property\n def is_async(self) -> bool:\n return iscoroutinefunction(self._unbound_wrapped_func) or isasyncgenfunction(\n self._unbound_wrapped_func\n )\n\n def copy_with(\n self, type_var_map: Mapping[TypeVar, Union[StrawberryType, builtins.type]]\n ) -> StrawberryResolver:\n type_override = None\n\n if self.type:\n if isinstance(self.type, StrawberryType):\n type_override = self.type.copy_with(type_var_map)\n else:\n type_override = self.type._type_definition.copy_with( # type: ignore\n type_var_map,\n )\n\n return type(self)(\n func=self.wrapped_func,\n description=self._description,\n type_override=type_override,\n )\n\n @cached_property\n def _namespace(self) -> Dict[str, Any]:\n return sys.modules[self._unbound_wrapped_func.__module__].__dict__\n\n @cached_property\n def _unbound_wrapped_func(self) -> Callable[..., T]:\n if isinstance(self.wrapped_func, (staticmethod, classmethod)):\n return self.wrapped_func.__func__\n\n return self.wrapped_func\n\n\nclass UncallableResolverError(Exception):\n def __init__(self, resolver: \"StrawberryResolver\"):\n message = (\n f\"Attempted to call resolver {resolver} with uncallable function \"\n f\"{resolver.wrapped_func}\"\n )\n super().__init__(message)\n\n\n__all__ = [\"StrawberryResolver\"]\n", "path": "strawberry/types/fields/resolver.py"}]}
| 3,796 | 277 |
gh_patches_debug_63976
|
rasdani/github-patches
|
git_diff
|
buildbot__buildbot-1214
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
New web plugin idea: a health indicator
This ticket is a migrated Trac ticket [2966](http://trac.buildbot.net/ticket/2966)
People contributed to the original ticket: `benoit.allard@...`, @tardyp, @rutsky, @unknown_contributor, @sa2ajj, @rutsky
Ticket created on: `Oct 24 2014`
Ticket last modified on: `Mar 19 2016`
---
I'd like to extract as many useful indicators from my builds as possible (time, but also the number of warnings, and such ...)
It would be cool to have a web plugin that could plot the evolution of my indicators over time! (Of course, I would have to configure which indicator I want to see plotted, maybe the kind of plot, and so on ...)
---
_Comment from_: @sa2ajj
_Date_: `Oct 24 2014`
Could you please elaborate or provide a more specific example?
I think it's related to the [[metrics support|http://docs.buildbot.net/latest/developer/metrics.html]], but without an example I can easily be wrong :)
---
_Comment from_: @Ben
_Date_: `Oct 24 2014`
I was more aiming at [[Statistics|http://docs.buildbot.net/latest/developer/cls-buildsteps.html?highlight=statistics#buildbot.process.buildstep.BuildStep.hasStatistic]], but I just realized that
Note that statistics are not preserved after a build is complete.
So metrics is probably where we want to interface with the master.
I used to abuse Properties for that purpose ...
---
_Comment from_: @tardyp
_Date_: `Oct 24 2014`
The Buildbot plugin system is really made for enabling such dashboards.
A web ui plugin is not technically restricted to creating a bunch of js files; it could also create a twisted service.
For me, having the JS only use the existing data api to query the data would be very inefficient. I think we could easily create a service, like a status service, that registers to a bunch of mq events and creates statistics based on that.
I also had in mind that plugins could have some table in the DB they could use to store the data, or maybe use another db configuration with all schema + migration stuff separated.
---
_Comment from_: @tardyp
_Date_: `Oct 26 2014`
on IRC, sa2ajj talked about http://graphite.readthedocs.org/
He told us, he actually uses it at work, and has integration working with eigth.
Looking at the documentation, the first reaction is how to integrate this with multimaster, as graphite has its own db called whisper.
I haven't looked too deeply, but I think this is still feasible as an external tool. Probably this would be much cheaper than making our own metrics system inside buildbot.
An external graphite server could be set up to watch the (TBD) mq server. As there are messages for nearly every kind of activity that happens in buildbot, this is a good means of making solid analyses of what is going on. Of course, this solution would not be fully integrated, as the UI would probably be external, but anyway, I think this is a possible cheap path.
@sa2ajj do you think it is possible? How would you estimate the cost of integration?
---
_Comment from_: @Ben
_Date_: `Oct 26 2014`
There are a lot of [[interfaces|http://graphite.readthedocs.org/en/latest/tools.html#visualization]], as well as data collectors / forwarders (same page, a bit higher), available for graphite. It looks like some of them are js-only! Still a way to go for a www-plugin!
---
_Comment from_: @sa2ajj
_Date_: `Oct 27 2014`
What I said was that we indeed use graphite, but I did not say we use it with Buildbot.
I have an oldish branch where I tried to publish metrics to graphite. However, I stopped working on that when I realised that it's not very straightforward to implement in a way that supports the multi-master case.
If there's an interest, I can revive the branch or, at least, publish what I have (after rebasing onto the latest master) so others could comment on the direction my thinking took.
---
_Comment from_: @unknown_contributor
_Date_: `Mar 19 2016`
+1 for this feature. I suggest merging the statistics API with the [[stats module|https://github.com/buildbot/buildbot/tree/master/master/buildbot/statistics]] developed last year by my GSOC student, and adding a default backend that stores a subset of these stats in the main database, which would enable the development of health/stats visualization modules directly installed by default (using the Highcharts js lib, for example).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pkg/setup.py`
Content:
```
1 #!/usr/bin/env python
2 #
3 # This file is part of Buildbot. Buildbot is free software: you can
4 # redistribute it and/or modify it under the terms of the GNU General Public
5 # License as published by the Free Software Foundation, version 2.
6 #
7 # This program is distributed in the hope that it will be useful, but WITHOUT
8 # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
9 # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
10 # details.
11 #
12 # You should have received a copy of the GNU General Public License along with
13 # this program; if not, write to the Free Software Foundation, Inc., 51
14 # Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
15 #
16 # Copyright Buildbot Team Members
17
18 from setuptools import setup
19 import buildbot_pkg
20
21 setup(
22 name='buildbot_pkg',
23 version=buildbot_pkg.getVersion("."),
24 description='Buildbot packaging tools',
25 author=u'Pierre Tardy',
26 author_email=u'[email protected]',
27 url='http://buildbot.net/',
28 license='GNU GPL',
29 py_modules=['buildbot_pkg'],
30 )
31
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pkg/setup.py b/pkg/setup.py
--- a/pkg/setup.py
+++ b/pkg/setup.py
@@ -19,7 +19,7 @@
import buildbot_pkg
setup(
- name='buildbot_pkg',
+ name='buildbot-pkg',
version=buildbot_pkg.getVersion("."),
description='Buildbot packaging tools',
author=u'Pierre Tardy',
|
{"golden_diff": "diff --git a/pkg/setup.py b/pkg/setup.py\n--- a/pkg/setup.py\n+++ b/pkg/setup.py\n@@ -19,7 +19,7 @@\n import buildbot_pkg\n \n setup(\n- name='buildbot_pkg',\n+ name='buildbot-pkg',\n version=buildbot_pkg.getVersion(\".\"),\n description='Buildbot packaging tools',\n author=u'Pierre Tardy',\n", "issue": "New web plugin idea: a health indicator\nThis ticket is a migrated Trac ticket [2966](http://trac.buildbot.net/ticket/2966)\n\nPeople contributed to the original ticket: `benoit.allard@...`, @tardyp, @rutsky, @unknown_contributor, @sa2ajj, @rutsky\nTicket created on: `Oct 24 2014`\nTicket last modified on: `Mar 19 2016`\n\n---\n\nI like to extract as much useful indicator from my builds as possible (time, but also amount of warnings, and such ...)\n\nIt would be cool to have a web plugin that could print the evolution of my indicators over time ! (Of course, I would have to configure which indicator I want to see plotted, maybe the kind of plot, and so on ...)\n\n\n---\n\n_Comment from_: @sa2ajj\n_Date_: `Oct 24 2014`\n\nCould you please elaborate or provide a more specific example?\n\nI think it's related to the [[metrics support|http://docs.buildbot.net/latest/developer/metrics.html]], but without an example I can easily be wrong :)\n\n---\n_Comment from_: @Ben\n_Date_: `Oct 24 2014`\n\nI was more aiming at [[Statistics|http://docs.buildbot.net/latest/developer/cls-buildsteps.html?highlight=statistics#buildbot.process.buildstep.[[BuildStep]].hasStatistic]], but I just realized that\n\n Note that statistics are not preserved after a build is complete.\n\nSo metrics is probably where we want to interface with the master.\n\nI used to abuse Properties for that purpose ...\n\n---\n_Comment from_: @tardyp\n_Date_: `Oct 24 2014`\n\nBuildbot plugin system is really made for enabling such dashboards.\n\nA web ui plugin is not technically restricted to creating a bunch of js file, it could also create a twisted service.\n\nFor me having the JS only use existing data api to query the data will be very inefficient. I think we could easily create a service, like a status service that registers to a bunch of mq events, and create statistics based on that.\n\nI also had in mind that plugins could have some table in the DB they could use to store the data, or maybe use another db configuration with all schema + migration stuff separated.\n\n\n---\n_Comment from_: @tardyp\n_Date_: `Oct 26 2014`\n\non IRC, sa2ajj talked about http://graphite.readthedocs.org/\n\nHe told us, he actually uses it at work, and has integration working with eigth.\n\nLooking at the documentation, the first reaction is how to integrate this with multimaster, as graphite has its own db called whisper.\nI haven't look too much deeply, but I think this is still feasible as a external tool. Probably this would be much cheaper than making our own metrics system inside buildbot.\n\nAn external graphite server could be setup, and watch for the (TBD) mq server. As there are messages for nearly every kind of activity that happens in buildbot, this is a good mean of making solid analysis of what is going on. Of course, this solution would not be fully integrated, as probably the UI would be external, but anyway, I think this is a possible cheap path.\n\n@sa2ajj do you think it is possible? 
How would you estimate the cost of integration?\n\n\n\n\n---\n_Comment from_: @Ben\n_Date_: `Oct 26 2014`\n\nThere are a lot of [[interfaces|http://graphite.readthedocs.org/en/latest/tools.html#visualization]], as well as data collectors / forwarders (same page, a bit higher) available for graphite. It looks like some of them are js-only ! Still way to go for a www-plugin !\n\n---\n_Comment from_: @sa2ajj\n_Date_: `Oct 27 2014`\n\nWhat I said was that we indeed use graphite, but I did not say we use it with Buildbot.\n\nI have an oldish branch where I tried to publish metrics to graphite. However I stopped working on that when I realised that it's not very straightforward to implement it to support multi-master case.\n\nIf there's an interest, I can revive the branch or, at least, publish what I have (after rebasing the latest master) so other could comment in what direction my thinking worked.\n\n---\n_Comment from_: @unknown_contributor\n_Date_: `Mar 19 2016`\n\n+1 for this feature. I suggest to merge the statistic API with the [[stats module|https://github.com/buildbot/buildbot/tree/master/master/buildbot/statistics]] developed last year by my GSOC student. And add a default backend that store a subset of these stat in the main database that would enable the development of heath/stats visualization modules directly installed by default (using Highchart js lib for example) \n\n\n", "before_files": [{"content": "#!/usr/bin/env python\n#\n# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\nfrom setuptools import setup\nimport buildbot_pkg\n\nsetup(\n name='buildbot_pkg',\n version=buildbot_pkg.getVersion(\".\"),\n description='Buildbot packaging tools',\n author=u'Pierre Tardy',\n author_email=u'[email protected]',\n url='http://buildbot.net/',\n license='GNU GPL',\n py_modules=['buildbot_pkg'],\n)\n", "path": "pkg/setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n#\n# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\nfrom setuptools import setup\nimport buildbot_pkg\n\nsetup(\n name='buildbot-pkg',\n version=buildbot_pkg.getVersion(\".\"),\n description='Buildbot packaging tools',\n author=u'Pierre Tardy',\n author_email=u'[email protected]',\n url='http://buildbot.net/',\n license='GNU GPL',\n py_modules=['buildbot_pkg'],\n)\n", "path": "pkg/setup.py"}]}
| 1,606 | 86 |
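The accepted fix above only swaps the underscore in the distribution name for a hyphen. As context for why such a rename is harmless to installs, the standalone sketch below (not Buildbot code) applies the PEP 503 normalization rule under which both spellings identify the same project on a package index.

```python
# Standalone illustration of PEP 503 name normalization: runs of '-', '_' and '.'
# collapse to a single '-' and the result is lower-cased, so 'buildbot_pkg' and
# 'buildbot-pkg' resolve to the same normalized project name.
import re

def normalize(name):
    return re.sub(r"[-_.]+", "-", name).lower()

print(normalize("buildbot_pkg"))   # buildbot-pkg
print(normalize("buildbot-pkg"))   # buildbot-pkg
```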
gh_patches_debug_20490
|
rasdani/github-patches
|
git_diff
|
openvinotoolkit__datumaro-1284
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Automatic generation of specs.json by running specs.py is broken
After merging https://github.com/openvinotoolkit/datumaro/pull/1268, `metadata` is newly introduced in `specs.json`, but the `specs.json` generated automatically by running specs.py doesn't include `metadata`, and hence it breaks the lazy_load of importers.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/datumaro/plugins/specs.py`
Content:
```
1 # Copyright (C) 2023 Intel Corporation
2 #
3 # SPDX-License-Identifier: MIT
4
5 from pathlib import Path
6
7 from datumaro.components.lazy_plugin import get_extra_deps, get_lazy_plugin
8 from datumaro.util import parse_json_file
9
10 _SOURCE_PATH = Path(__file__).resolve()
11 _SOURCE_DIR = _SOURCE_PATH.parent
12 _SPECS_JSON_PATH = _SOURCE_DIR / "specs.json"
13
14
15 def get_lazy_plugins():
16 return [
17 plugin
18 for plugin in [
19 get_lazy_plugin(
20 spec["import_path"],
21 spec["plugin_name"],
22 spec["plugin_type"],
23 spec.get("extra_deps", []),
24 spec.get("metadata", {}),
25 )
26 for spec in parse_json_file(str(_SPECS_JSON_PATH))
27 ]
28 if plugin is not None
29 ]
30
31
32 if __name__ == "__main__":
33 from datumaro.components.environment import Environment
34 from datumaro.util import dump_json_file
35
36 env = Environment(use_lazy_import=False)
37 plugin_specs = []
38
39 def _enroll_to_plugin_specs(plugins, plugin_type):
40 global plugin_specs
41
42 for _, plugin in plugins.items():
43 mod = plugin.__module__
44 class_name = plugin.__name__
45 plugin_name = plugin.NAME
46 plugin_specs += [
47 {
48 "import_path": f"{mod}.{class_name}",
49 "plugin_name": plugin_name,
50 "plugin_type": plugin_type,
51 "extra_deps": get_extra_deps(plugin),
52 }
53 ]
54
55 _enroll_to_plugin_specs(env.extractors, "DatasetBase")
56 _enroll_to_plugin_specs(env.importers, "Importer")
57 _enroll_to_plugin_specs(env.launchers, "Launcher")
58 _enroll_to_plugin_specs(env.exporters, "Exporter")
59 _enroll_to_plugin_specs(env.generators, "DatasetGenerator")
60 _enroll_to_plugin_specs(env.transforms, "Transform")
61 _enroll_to_plugin_specs(env.validators, "Validator")
62
63 dump_json_file(
64 _SPECS_JSON_PATH,
65 sorted(plugin_specs, key=lambda spec: spec["import_path"]),
66 indent=True,
67 append_newline=True,
68 )
69
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/datumaro/plugins/specs.py b/src/datumaro/plugins/specs.py
--- a/src/datumaro/plugins/specs.py
+++ b/src/datumaro/plugins/specs.py
@@ -43,14 +43,18 @@
mod = plugin.__module__
class_name = plugin.__name__
plugin_name = plugin.NAME
- plugin_specs += [
- {
- "import_path": f"{mod}.{class_name}",
- "plugin_name": plugin_name,
- "plugin_type": plugin_type,
- "extra_deps": get_extra_deps(plugin),
- }
- ]
+ extra_deps = get_extra_deps(plugin)
+ plugin_spec = {
+ "import_path": f"{mod}.{class_name}",
+ "plugin_name": plugin_name,
+ "plugin_type": plugin_type,
+ }
+ if extra_deps:
+ plugin_spec["extra_deps"] = extra_deps
+ # Setting metadata for importers
+ if hasattr(plugin, "get_file_extensions"):
+ plugin_spec["metadata"] = {"file_extensions": sorted(plugin.get_file_extensions())}
+ plugin_specs.append(plugin_spec)
_enroll_to_plugin_specs(env.extractors, "DatasetBase")
_enroll_to_plugin_specs(env.importers, "Importer")
|
{"golden_diff": "diff --git a/src/datumaro/plugins/specs.py b/src/datumaro/plugins/specs.py\n--- a/src/datumaro/plugins/specs.py\n+++ b/src/datumaro/plugins/specs.py\n@@ -43,14 +43,18 @@\n mod = plugin.__module__\n class_name = plugin.__name__\n plugin_name = plugin.NAME\n- plugin_specs += [\n- {\n- \"import_path\": f\"{mod}.{class_name}\",\n- \"plugin_name\": plugin_name,\n- \"plugin_type\": plugin_type,\n- \"extra_deps\": get_extra_deps(plugin),\n- }\n- ]\n+ extra_deps = get_extra_deps(plugin)\n+ plugin_spec = {\n+ \"import_path\": f\"{mod}.{class_name}\",\n+ \"plugin_name\": plugin_name,\n+ \"plugin_type\": plugin_type,\n+ }\n+ if extra_deps:\n+ plugin_spec[\"extra_deps\"] = extra_deps\n+ # Setting metadata for importers\n+ if hasattr(plugin, \"get_file_extensions\"):\n+ plugin_spec[\"metadata\"] = {\"file_extensions\": sorted(plugin.get_file_extensions())}\n+ plugin_specs.append(plugin_spec)\n \n _enroll_to_plugin_specs(env.extractors, \"DatasetBase\")\n _enroll_to_plugin_specs(env.importers, \"Importer\")\n", "issue": "Automatic generation of specs.json by running specs.py is broken\nAfter merging https://github.com/openvinotoolkit/datumaro/pull/1268, `metadata` is newly introduced in `specs.json`, but the automated `specs.json` doesn't create `metadata`, and hence it breaks the lazy_load of importers.\n", "before_files": [{"content": "# Copyright (C) 2023 Intel Corporation\n#\n# SPDX-License-Identifier: MIT\n\nfrom pathlib import Path\n\nfrom datumaro.components.lazy_plugin import get_extra_deps, get_lazy_plugin\nfrom datumaro.util import parse_json_file\n\n_SOURCE_PATH = Path(__file__).resolve()\n_SOURCE_DIR = _SOURCE_PATH.parent\n_SPECS_JSON_PATH = _SOURCE_DIR / \"specs.json\"\n\n\ndef get_lazy_plugins():\n return [\n plugin\n for plugin in [\n get_lazy_plugin(\n spec[\"import_path\"],\n spec[\"plugin_name\"],\n spec[\"plugin_type\"],\n spec.get(\"extra_deps\", []),\n spec.get(\"metadata\", {}),\n )\n for spec in parse_json_file(str(_SPECS_JSON_PATH))\n ]\n if plugin is not None\n ]\n\n\nif __name__ == \"__main__\":\n from datumaro.components.environment import Environment\n from datumaro.util import dump_json_file\n\n env = Environment(use_lazy_import=False)\n plugin_specs = []\n\n def _enroll_to_plugin_specs(plugins, plugin_type):\n global plugin_specs\n\n for _, plugin in plugins.items():\n mod = plugin.__module__\n class_name = plugin.__name__\n plugin_name = plugin.NAME\n plugin_specs += [\n {\n \"import_path\": f\"{mod}.{class_name}\",\n \"plugin_name\": plugin_name,\n \"plugin_type\": plugin_type,\n \"extra_deps\": get_extra_deps(plugin),\n }\n ]\n\n _enroll_to_plugin_specs(env.extractors, \"DatasetBase\")\n _enroll_to_plugin_specs(env.importers, \"Importer\")\n _enroll_to_plugin_specs(env.launchers, \"Launcher\")\n _enroll_to_plugin_specs(env.exporters, \"Exporter\")\n _enroll_to_plugin_specs(env.generators, \"DatasetGenerator\")\n _enroll_to_plugin_specs(env.transforms, \"Transform\")\n _enroll_to_plugin_specs(env.validators, \"Validator\")\n\n dump_json_file(\n _SPECS_JSON_PATH,\n sorted(plugin_specs, key=lambda spec: spec[\"import_path\"]),\n indent=True,\n append_newline=True,\n )\n", "path": "src/datumaro/plugins/specs.py"}], "after_files": [{"content": "# Copyright (C) 2023 Intel Corporation\n#\n# SPDX-License-Identifier: MIT\n\nfrom pathlib import Path\n\nfrom datumaro.components.lazy_plugin import get_extra_deps, get_lazy_plugin\nfrom datumaro.util import parse_json_file\n\n_SOURCE_PATH = Path(__file__).resolve()\n_SOURCE_DIR = _SOURCE_PATH.parent\n_SPECS_JSON_PATH 
= _SOURCE_DIR / \"specs.json\"\n\n\ndef get_lazy_plugins():\n return [\n plugin\n for plugin in [\n get_lazy_plugin(\n spec[\"import_path\"],\n spec[\"plugin_name\"],\n spec[\"plugin_type\"],\n spec.get(\"extra_deps\", []),\n spec.get(\"metadata\", {}),\n )\n for spec in parse_json_file(str(_SPECS_JSON_PATH))\n ]\n if plugin is not None\n ]\n\n\nif __name__ == \"__main__\":\n from datumaro.components.environment import Environment\n from datumaro.util import dump_json_file\n\n env = Environment(use_lazy_import=False)\n plugin_specs = []\n\n def _enroll_to_plugin_specs(plugins, plugin_type):\n global plugin_specs\n\n for _, plugin in plugins.items():\n mod = plugin.__module__\n class_name = plugin.__name__\n plugin_name = plugin.NAME\n extra_deps = get_extra_deps(plugin)\n plugin_spec = {\n \"import_path\": f\"{mod}.{class_name}\",\n \"plugin_name\": plugin_name,\n \"plugin_type\": plugin_type,\n }\n if extra_deps:\n plugin_spec[\"extra_deps\"] = extra_deps\n # Setting metadata for importers\n if hasattr(plugin, \"get_file_extensions\"):\n plugin_spec[\"metadata\"] = {\"file_extensions\": sorted(plugin.get_file_extensions())}\n plugin_specs.append(plugin_spec)\n\n _enroll_to_plugin_specs(env.extractors, \"DatasetBase\")\n _enroll_to_plugin_specs(env.importers, \"Importer\")\n _enroll_to_plugin_specs(env.launchers, \"Launcher\")\n _enroll_to_plugin_specs(env.exporters, \"Exporter\")\n _enroll_to_plugin_specs(env.generators, \"DatasetGenerator\")\n _enroll_to_plugin_specs(env.transforms, \"Transform\")\n _enroll_to_plugin_specs(env.validators, \"Validator\")\n\n dump_json_file(\n _SPECS_JSON_PATH,\n sorted(plugin_specs, key=lambda spec: spec[\"import_path\"]),\n indent=True,\n append_newline=True,\n )\n", "path": "src/datumaro/plugins/specs.py"}]}
| 914 | 282 |
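For context on the entry above, here is a minimal standalone sketch of the pattern the accepted patch follows when regenerating specs.json; it is not Datumaro code, and the plugin names and paths are made up. The point is that optional keys such as `extra_deps` and the importer `metadata` are added only when they carry data, so the regenerated file keeps the `metadata` block that lazy importer loading relies on.

```python
# Illustrative spec-building helper; every name below is hypothetical.
import json

def build_spec(import_path, plugin_name, plugin_type,
               extra_deps=(), file_extensions=None):
    spec = {
        "import_path": import_path,
        "plugin_name": plugin_name,
        "plugin_type": plugin_type,
    }
    if extra_deps:                    # omit the key entirely when there are none
        spec["extra_deps"] = list(extra_deps)
    if file_extensions is not None:   # importers advertise their file extensions
        spec["metadata"] = {"file_extensions": sorted(file_extensions)}
    return spec

print(json.dumps(build_spec("my_pkg.FooImporter", "foo", "Importer",
                            file_extensions=[".json"]), indent=2))
```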
gh_patches_debug_4520
|
rasdani/github-patches
|
git_diff
|
pytorch__vision-357
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error closing: 'NoneType' object has no attribute 'close' for ImageNet reading
When loading ImageNet, I get the following debug message:
`Error closing: 'NoneType' object has no attribute 'close'`
The message is emitted from https://github.com/pytorch/vision/blob/45dab0e4528333332f07ab1979e006857563cf99/torchvision/datasets/folder.py#L46
caused by
https://github.com/python-pillow/Pillow/blob/7d8c0d9e39d5ed79ea6aa590b9cbff5b0d17248c/PIL/Image.py#L573
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchvision/datasets/folder.py`
Content:
```
1 import torch.utils.data as data
2
3 from PIL import Image
4 import os
5 import os.path
6
7 IMG_EXTENSIONS = ['.jpg', '.jpeg', '.png', '.ppm', '.bmp', '.pgm']
8
9
10 def is_image_file(filename):
11 """Checks if a file is an image.
12
13 Args:
14 filename (string): path to a file
15
16 Returns:
17 bool: True if the filename ends with a known image extension
18 """
19 filename_lower = filename.lower()
20 return any(filename_lower.endswith(ext) for ext in IMG_EXTENSIONS)
21
22
23 def find_classes(dir):
24 classes = [d for d in os.listdir(dir) if os.path.isdir(os.path.join(dir, d))]
25 classes.sort()
26 class_to_idx = {classes[i]: i for i in range(len(classes))}
27 return classes, class_to_idx
28
29
30 def make_dataset(dir, class_to_idx):
31 images = []
32 dir = os.path.expanduser(dir)
33 for target in sorted(os.listdir(dir)):
34 d = os.path.join(dir, target)
35 if not os.path.isdir(d):
36 continue
37
38 for root, _, fnames in sorted(os.walk(d)):
39 for fname in sorted(fnames):
40 if is_image_file(fname):
41 path = os.path.join(root, fname)
42 item = (path, class_to_idx[target])
43 images.append(item)
44
45 return images
46
47
48 def pil_loader(path):
49 # open path as file to avoid ResourceWarning (https://github.com/python-pillow/Pillow/issues/835)
50 with open(path, 'rb') as f:
51 with Image.open(f) as img:
52 return img.convert('RGB')
53
54
55 def accimage_loader(path):
56 import accimage
57 try:
58 return accimage.Image(path)
59 except IOError:
60 # Potentially a decoding problem, fall back to PIL.Image
61 return pil_loader(path)
62
63
64 def default_loader(path):
65 from torchvision import get_image_backend
66 if get_image_backend() == 'accimage':
67 return accimage_loader(path)
68 else:
69 return pil_loader(path)
70
71
72 class ImageFolder(data.Dataset):
73 """A generic data loader where the images are arranged in this way: ::
74
75 root/dog/xxx.png
76 root/dog/xxy.png
77 root/dog/xxz.png
78
79 root/cat/123.png
80 root/cat/nsdf3.png
81 root/cat/asd932_.png
82
83 Args:
84 root (string): Root directory path.
85 transform (callable, optional): A function/transform that takes in an PIL image
86 and returns a transformed version. E.g, ``transforms.RandomCrop``
87 target_transform (callable, optional): A function/transform that takes in the
88 target and transforms it.
89 loader (callable, optional): A function to load an image given its path.
90
91 Attributes:
92 classes (list): List of the class names.
93 class_to_idx (dict): Dict with items (class_name, class_index).
94 imgs (list): List of (image path, class_index) tuples
95 """
96
97 def __init__(self, root, transform=None, target_transform=None,
98 loader=default_loader):
99 classes, class_to_idx = find_classes(root)
100 imgs = make_dataset(root, class_to_idx)
101 if len(imgs) == 0:
102 raise(RuntimeError("Found 0 images in subfolders of: " + root + "\n"
103 "Supported image extensions are: " + ",".join(IMG_EXTENSIONS)))
104
105 self.root = root
106 self.imgs = imgs
107 self.classes = classes
108 self.class_to_idx = class_to_idx
109 self.transform = transform
110 self.target_transform = target_transform
111 self.loader = loader
112
113 def __getitem__(self, index):
114 """
115 Args:
116 index (int): Index
117
118 Returns:
119 tuple: (image, target) where target is class_index of the target class.
120 """
121 path, target = self.imgs[index]
122 img = self.loader(path)
123 if self.transform is not None:
124 img = self.transform(img)
125 if self.target_transform is not None:
126 target = self.target_transform(target)
127
128 return img, target
129
130 def __len__(self):
131 return len(self.imgs)
132
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/torchvision/datasets/folder.py b/torchvision/datasets/folder.py
--- a/torchvision/datasets/folder.py
+++ b/torchvision/datasets/folder.py
@@ -48,8 +48,8 @@
def pil_loader(path):
# open path as file to avoid ResourceWarning (https://github.com/python-pillow/Pillow/issues/835)
with open(path, 'rb') as f:
- with Image.open(f) as img:
- return img.convert('RGB')
+ img = Image.open(f)
+ return img.convert('RGB')
def accimage_loader(path):
|
{"golden_diff": "diff --git a/torchvision/datasets/folder.py b/torchvision/datasets/folder.py\n--- a/torchvision/datasets/folder.py\n+++ b/torchvision/datasets/folder.py\n@@ -48,8 +48,8 @@\n def pil_loader(path):\n # open path as file to avoid ResourceWarning (https://github.com/python-pillow/Pillow/issues/835)\n with open(path, 'rb') as f:\n- with Image.open(f) as img:\n- return img.convert('RGB')\n+ img = Image.open(f)\n+ return img.convert('RGB')\n \n \n def accimage_loader(path):\n", "issue": "Error closing: 'NoneType' object has no attribute 'close' for ImageNet reading\nWhen loading ImageNet, I get the following debug messages\r\n`Error closing: 'NoneType' object has no attribute 'close' for ImageNet reading`\r\nmessage called from https://github.com/pytorch/vision/blob/45dab0e4528333332f07ab1979e006857563cf99/torchvision/datasets/folder.py#L46\r\n\r\ncaused by\r\nhttps://github.com/python-pillow/Pillow/blob/7d8c0d9e39d5ed79ea6aa590b9cbff5b0d17248c/PIL/Image.py#L573\n", "before_files": [{"content": "import torch.utils.data as data\n\nfrom PIL import Image\nimport os\nimport os.path\n\nIMG_EXTENSIONS = ['.jpg', '.jpeg', '.png', '.ppm', '.bmp', '.pgm']\n\n\ndef is_image_file(filename):\n \"\"\"Checks if a file is an image.\n\n Args:\n filename (string): path to a file\n\n Returns:\n bool: True if the filename ends with a known image extension\n \"\"\"\n filename_lower = filename.lower()\n return any(filename_lower.endswith(ext) for ext in IMG_EXTENSIONS)\n\n\ndef find_classes(dir):\n classes = [d for d in os.listdir(dir) if os.path.isdir(os.path.join(dir, d))]\n classes.sort()\n class_to_idx = {classes[i]: i for i in range(len(classes))}\n return classes, class_to_idx\n\n\ndef make_dataset(dir, class_to_idx):\n images = []\n dir = os.path.expanduser(dir)\n for target in sorted(os.listdir(dir)):\n d = os.path.join(dir, target)\n if not os.path.isdir(d):\n continue\n\n for root, _, fnames in sorted(os.walk(d)):\n for fname in sorted(fnames):\n if is_image_file(fname):\n path = os.path.join(root, fname)\n item = (path, class_to_idx[target])\n images.append(item)\n\n return images\n\n\ndef pil_loader(path):\n # open path as file to avoid ResourceWarning (https://github.com/python-pillow/Pillow/issues/835)\n with open(path, 'rb') as f:\n with Image.open(f) as img:\n return img.convert('RGB')\n\n\ndef accimage_loader(path):\n import accimage\n try:\n return accimage.Image(path)\n except IOError:\n # Potentially a decoding problem, fall back to PIL.Image\n return pil_loader(path)\n\n\ndef default_loader(path):\n from torchvision import get_image_backend\n if get_image_backend() == 'accimage':\n return accimage_loader(path)\n else:\n return pil_loader(path)\n\n\nclass ImageFolder(data.Dataset):\n \"\"\"A generic data loader where the images are arranged in this way: ::\n\n root/dog/xxx.png\n root/dog/xxy.png\n root/dog/xxz.png\n\n root/cat/123.png\n root/cat/nsdf3.png\n root/cat/asd932_.png\n\n Args:\n root (string): Root directory path.\n transform (callable, optional): A function/transform that takes in an PIL image\n and returns a transformed version. 
E.g, ``transforms.RandomCrop``\n target_transform (callable, optional): A function/transform that takes in the\n target and transforms it.\n loader (callable, optional): A function to load an image given its path.\n\n Attributes:\n classes (list): List of the class names.\n class_to_idx (dict): Dict with items (class_name, class_index).\n imgs (list): List of (image path, class_index) tuples\n \"\"\"\n\n def __init__(self, root, transform=None, target_transform=None,\n loader=default_loader):\n classes, class_to_idx = find_classes(root)\n imgs = make_dataset(root, class_to_idx)\n if len(imgs) == 0:\n raise(RuntimeError(\"Found 0 images in subfolders of: \" + root + \"\\n\"\n \"Supported image extensions are: \" + \",\".join(IMG_EXTENSIONS)))\n\n self.root = root\n self.imgs = imgs\n self.classes = classes\n self.class_to_idx = class_to_idx\n self.transform = transform\n self.target_transform = target_transform\n self.loader = loader\n\n def __getitem__(self, index):\n \"\"\"\n Args:\n index (int): Index\n\n Returns:\n tuple: (image, target) where target is class_index of the target class.\n \"\"\"\n path, target = self.imgs[index]\n img = self.loader(path)\n if self.transform is not None:\n img = self.transform(img)\n if self.target_transform is not None:\n target = self.target_transform(target)\n\n return img, target\n\n def __len__(self):\n return len(self.imgs)\n", "path": "torchvision/datasets/folder.py"}], "after_files": [{"content": "import torch.utils.data as data\n\nfrom PIL import Image\nimport os\nimport os.path\n\nIMG_EXTENSIONS = ['.jpg', '.jpeg', '.png', '.ppm', '.bmp', '.pgm']\n\n\ndef is_image_file(filename):\n \"\"\"Checks if a file is an image.\n\n Args:\n filename (string): path to a file\n\n Returns:\n bool: True if the filename ends with a known image extension\n \"\"\"\n filename_lower = filename.lower()\n return any(filename_lower.endswith(ext) for ext in IMG_EXTENSIONS)\n\n\ndef find_classes(dir):\n classes = [d for d in os.listdir(dir) if os.path.isdir(os.path.join(dir, d))]\n classes.sort()\n class_to_idx = {classes[i]: i for i in range(len(classes))}\n return classes, class_to_idx\n\n\ndef make_dataset(dir, class_to_idx):\n images = []\n dir = os.path.expanduser(dir)\n for target in sorted(os.listdir(dir)):\n d = os.path.join(dir, target)\n if not os.path.isdir(d):\n continue\n\n for root, _, fnames in sorted(os.walk(d)):\n for fname in sorted(fnames):\n if is_image_file(fname):\n path = os.path.join(root, fname)\n item = (path, class_to_idx[target])\n images.append(item)\n\n return images\n\n\ndef pil_loader(path):\n # open path as file to avoid ResourceWarning (https://github.com/python-pillow/Pillow/issues/835)\n with open(path, 'rb') as f:\n img = Image.open(f)\n return img.convert('RGB')\n\n\ndef accimage_loader(path):\n import accimage\n try:\n return accimage.Image(path)\n except IOError:\n # Potentially a decoding problem, fall back to PIL.Image\n return pil_loader(path)\n\n\ndef default_loader(path):\n from torchvision import get_image_backend\n if get_image_backend() == 'accimage':\n return accimage_loader(path)\n else:\n return pil_loader(path)\n\n\nclass ImageFolder(data.Dataset):\n \"\"\"A generic data loader where the images are arranged in this way: ::\n\n root/dog/xxx.png\n root/dog/xxy.png\n root/dog/xxz.png\n\n root/cat/123.png\n root/cat/nsdf3.png\n root/cat/asd932_.png\n\n Args:\n root (string): Root directory path.\n transform (callable, optional): A function/transform that takes in an PIL image\n and returns a transformed version. 
E.g, ``transforms.RandomCrop``\n target_transform (callable, optional): A function/transform that takes in the\n target and transforms it.\n loader (callable, optional): A function to load an image given its path.\n\n Attributes:\n classes (list): List of the class names.\n class_to_idx (dict): Dict with items (class_name, class_index).\n imgs (list): List of (image path, class_index) tuples\n \"\"\"\n\n def __init__(self, root, transform=None, target_transform=None,\n loader=default_loader):\n classes, class_to_idx = find_classes(root)\n imgs = make_dataset(root, class_to_idx)\n if len(imgs) == 0:\n raise(RuntimeError(\"Found 0 images in subfolders of: \" + root + \"\\n\"\n \"Supported image extensions are: \" + \",\".join(IMG_EXTENSIONS)))\n\n self.root = root\n self.imgs = imgs\n self.classes = classes\n self.class_to_idx = class_to_idx\n self.transform = transform\n self.target_transform = target_transform\n self.loader = loader\n\n def __getitem__(self, index):\n \"\"\"\n Args:\n index (int): Index\n\n Returns:\n tuple: (image, target) where target is class_index of the target class.\n \"\"\"\n path, target = self.imgs[index]\n img = self.loader(path)\n if self.transform is not None:\n img = self.transform(img)\n if self.target_transform is not None:\n target = self.target_transform(target)\n\n return img, target\n\n def __len__(self):\n return len(self.imgs)\n", "path": "torchvision/datasets/folder.py"}]}
| 1,636 | 139 |
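For context on the entry above, a minimal sketch of the loading pattern the accepted patch ends up with: `Image.open()` is no longer used as a context manager, so Pillow's `__exit__` never tries to close a file pointer that the eager `convert()` call has already released, which is what produced the "Error closing" debug message. The helper name is illustrative.

```python
# Sketch of the fixed loader pattern (hypothetical helper name): open the file
# ourselves, hand it to Image.open(), and convert while the file is still open.
from PIL import Image

def load_rgb(path):
    with open(path, 'rb') as f:        # still avoids the ResourceWarning from #835
        img = Image.open(f)            # no 'with' around Image.open()
        return img.convert('RGB')      # reads the data before f is closed
```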
gh_patches_debug_14940
|
rasdani/github-patches
|
git_diff
|
scikit-image__scikit-image-1348
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
unwrap_phase SegFaults when passed a masked array with all elements masked
The **unwrap_phase** function in the skimage.restoration namespace segfaults when passed a masked array with all elements masked. For example:
``` Python
import numpy as np
from skimage.restoration import unwrap_phase
image = np.ma.arange(100, dtype='float32').reshape(10, 10)
image[:] = np.ma.masked
unwrap_phase(image)
```
The Segmentation fault occurs in the find_pivot function in unwrap_2d_ljmu.c which is called from quicker_sort. The issue is that the routine is attempting to sort a length 0 array. A check should be added in the unwrap2D (and related functions) to see if params.no_of_edges is not 0 (or 1?)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `skimage/restoration/unwrap.py`
Content:
```
1 import numpy as np
2 import warnings
3 from six import string_types
4
5 from ._unwrap_1d import unwrap_1d
6 from ._unwrap_2d import unwrap_2d
7 from ._unwrap_3d import unwrap_3d
8
9
10 def unwrap_phase(image, wrap_around=False, seed=None):
11 '''Recover the original from a wrapped phase image.
12
13 From an image wrapped to lie in the interval [-pi, pi), recover the
14 original, unwrapped image.
15
16 Parameters
17 ----------
18 image : 1D, 2D or 3D ndarray of floats, optionally a masked array
19 The values should be in the range [-pi, pi). If a masked array is
20 provided, the masked entries will not be changed, and their values
21 will not be used to guide the unwrapping of neighboring, unmasked
22 values. Masked 1D arrays are not allowed, and will raise a
23 `ValueError`.
24 wrap_around : bool or sequence of bool, optional
25 When an element of the sequence is `True`, the unwrapping process
26 will regard the edges along the corresponding axis of the image to be
27 connected and use this connectivity to guide the phase unwrapping
28 process. If only a single boolean is given, it will apply to all axes.
29 Wrap around is not supported for 1D arrays.
30 seed : int, optional
31 Unwrapping 2D or 3D images uses random initialization. This sets the
32 seed of the PRNG to achieve deterministic behavior.
33
34 Returns
35 -------
36 image_unwrapped : array_like, double
37 Unwrapped image of the same shape as the input. If the input `image`
38 was a masked array, the mask will be preserved.
39
40 Raises
41 ------
42 ValueError
43 If called with a masked 1D array or called with a 1D array and
44 ``wrap_around=True``.
45
46 Examples
47 --------
48 >>> c0, c1 = np.ogrid[-1:1:128j, -1:1:128j]
49 >>> image = 12 * np.pi * np.exp(-(c0**2 + c1**2))
50 >>> image_wrapped = np.angle(np.exp(1j * image))
51 >>> image_unwrapped = unwrap_phase(image_wrapped)
52 >>> np.std(image_unwrapped - image) < 1e-6 # A constant offset is normal
53 True
54
55 References
56 ----------
57 .. [1] Miguel Arevallilo Herraez, David R. Burton, Michael J. Lalor,
58 and Munther A. Gdeisat, "Fast two-dimensional phase-unwrapping
59 algorithm based on sorting by reliability following a noncontinuous
60 path", Journal Applied Optics, Vol. 41, No. 35 (2002) 7437,
61 .. [2] Abdul-Rahman, H., Gdeisat, M., Burton, D., & Lalor, M., "Fast
62 three-dimensional phase-unwrapping algorithm based on sorting by
63 reliability following a non-continuous path. In W. Osten,
64 C. Gorecki, & E. L. Novak (Eds.), Optical Metrology (2005) 32--40,
65 International Society for Optics and Photonics.
66 '''
67 if image.ndim not in (1, 2, 3):
68 raise ValueError('Image must be 1, 2, or 3 dimensional')
69 if isinstance(wrap_around, bool):
70 wrap_around = [wrap_around] * image.ndim
71 elif (hasattr(wrap_around, '__getitem__')
72 and not isinstance(wrap_around, string_types)):
73 if len(wrap_around) != image.ndim:
74 raise ValueError('Length of `wrap_around` must equal the '
75 'dimensionality of image')
76 wrap_around = [bool(wa) for wa in wrap_around]
77 else:
78 raise ValueError('`wrap_around` must be a bool or a sequence with '
79 'length equal to the dimensionality of image')
80 if image.ndim == 1:
81 if np.ma.isMaskedArray(image):
82 raise ValueError('1D masked images cannot be unwrapped')
83 if wrap_around[0]:
84 raise ValueError('`wrap_around` is not supported for 1D images')
85 if image.ndim in (2, 3) and 1 in image.shape:
86 warnings.warn('Image has a length 1 dimension. Consider using an '
87 'array of lower dimensionality to use a more efficient '
88 'algorithm')
89
90 if np.ma.isMaskedArray(image):
91 mask = np.require(image.mask, np.uint8, ['C'])
92 image = image.data
93 else:
94 mask = np.zeros_like(image, dtype=np.uint8, order='C')
95
96 image_not_masked = np.asarray(image, dtype=np.double, order='C')
97 image_unwrapped = np.empty_like(image, dtype=np.double, order='C')
98
99 if image.ndim == 1:
100 unwrap_1d(image_not_masked, image_unwrapped)
101 elif image.ndim == 2:
102 unwrap_2d(image_not_masked, mask, image_unwrapped,
103 wrap_around, seed)
104 elif image.ndim == 3:
105 unwrap_3d(image_not_masked, mask, image_unwrapped,
106 wrap_around, seed)
107
108 if np.ma.isMaskedArray(image):
109 return np.ma.array(image_unwrapped, mask=mask)
110 else:
111 return image_unwrapped
112
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/skimage/restoration/unwrap.py b/skimage/restoration/unwrap.py
--- a/skimage/restoration/unwrap.py
+++ b/skimage/restoration/unwrap.py
@@ -88,13 +88,14 @@
'algorithm')
if np.ma.isMaskedArray(image):
- mask = np.require(image.mask, np.uint8, ['C'])
- image = image.data
+ mask = np.require(np.ma.getmaskarray(image), np.uint8, ['C'])
else:
mask = np.zeros_like(image, dtype=np.uint8, order='C')
- image_not_masked = np.asarray(image, dtype=np.double, order='C')
- image_unwrapped = np.empty_like(image, dtype=np.double, order='C')
+ image_not_masked = np.asarray(
+ np.ma.getdata(image), dtype=np.double, order='C')
+ image_unwrapped = np.empty_like(image, dtype=np.double, order='C',
+ subok=False)
if image.ndim == 1:
unwrap_1d(image_not_masked, image_unwrapped)
|
{"golden_diff": "diff --git a/skimage/restoration/unwrap.py b/skimage/restoration/unwrap.py\n--- a/skimage/restoration/unwrap.py\n+++ b/skimage/restoration/unwrap.py\n@@ -88,13 +88,14 @@\n 'algorithm')\n \n if np.ma.isMaskedArray(image):\n- mask = np.require(image.mask, np.uint8, ['C'])\n- image = image.data\n+ mask = np.require(np.ma.getmaskarray(image), np.uint8, ['C'])\n else:\n mask = np.zeros_like(image, dtype=np.uint8, order='C')\n \n- image_not_masked = np.asarray(image, dtype=np.double, order='C')\n- image_unwrapped = np.empty_like(image, dtype=np.double, order='C')\n+ image_not_masked = np.asarray(\n+ np.ma.getdata(image), dtype=np.double, order='C')\n+ image_unwrapped = np.empty_like(image, dtype=np.double, order='C',\n+ subok=False)\n \n if image.ndim == 1:\n unwrap_1d(image_not_masked, image_unwrapped)\n", "issue": "unwrap_phase SegFaults when passed a masked array with all elements masked\nThe **unwrap_phase** function in the skimage.restoration namespace Segmentation faults when passed a masked array with all elements masked. For example:\n\n``` Python\nimport numpy as np\nfrom skimage.restoration import unwrap_phase\nimage = np.ma.arange(100, dtype='float32').reshape(10, 10)\nimage[:] = np.ma.masked\nunwrap_phase(image)\n```\n\nThe Segmentation fault occurs in the find_pivot function in unwrap_2d_ljmu.c which is called from quicker_sort. The issue is that the routine is attempting to sort a length 0 array. A check should be added in the unwrap2D (and related functions) to see if params.no_of_edges is not 0 (or 1?)\n\n", "before_files": [{"content": "import numpy as np\nimport warnings\nfrom six import string_types\n\nfrom ._unwrap_1d import unwrap_1d\nfrom ._unwrap_2d import unwrap_2d\nfrom ._unwrap_3d import unwrap_3d\n\n\ndef unwrap_phase(image, wrap_around=False, seed=None):\n '''Recover the original from a wrapped phase image.\n\n From an image wrapped to lie in the interval [-pi, pi), recover the\n original, unwrapped image.\n\n Parameters\n ----------\n image : 1D, 2D or 3D ndarray of floats, optionally a masked array\n The values should be in the range [-pi, pi). If a masked array is\n provided, the masked entries will not be changed, and their values\n will not be used to guide the unwrapping of neighboring, unmasked\n values. Masked 1D arrays are not allowed, and will raise a\n `ValueError`.\n wrap_around : bool or sequence of bool, optional\n When an element of the sequence is `True`, the unwrapping process\n will regard the edges along the corresponding axis of the image to be\n connected and use this connectivity to guide the phase unwrapping\n process. If only a single boolean is given, it will apply to all axes.\n Wrap around is not supported for 1D arrays.\n seed : int, optional\n Unwrapping 2D or 3D images uses random initialization. This sets the\n seed of the PRNG to achieve deterministic behavior.\n\n Returns\n -------\n image_unwrapped : array_like, double\n Unwrapped image of the same shape as the input. If the input `image`\n was a masked array, the mask will be preserved.\n\n Raises\n ------\n ValueError\n If called with a masked 1D array or called with a 1D array and\n ``wrap_around=True``.\n\n Examples\n --------\n >>> c0, c1 = np.ogrid[-1:1:128j, -1:1:128j]\n >>> image = 12 * np.pi * np.exp(-(c0**2 + c1**2))\n >>> image_wrapped = np.angle(np.exp(1j * image))\n >>> image_unwrapped = unwrap_phase(image_wrapped)\n >>> np.std(image_unwrapped - image) < 1e-6 # A constant offset is normal\n True\n\n References\n ----------\n .. 
[1] Miguel Arevallilo Herraez, David R. Burton, Michael J. Lalor,\n and Munther A. Gdeisat, \"Fast two-dimensional phase-unwrapping\n algorithm based on sorting by reliability following a noncontinuous\n path\", Journal Applied Optics, Vol. 41, No. 35 (2002) 7437,\n .. [2] Abdul-Rahman, H., Gdeisat, M., Burton, D., & Lalor, M., \"Fast\n three-dimensional phase-unwrapping algorithm based on sorting by\n reliability following a non-continuous path. In W. Osten,\n C. Gorecki, & E. L. Novak (Eds.), Optical Metrology (2005) 32--40,\n International Society for Optics and Photonics.\n '''\n if image.ndim not in (1, 2, 3):\n raise ValueError('Image must be 1, 2, or 3 dimensional')\n if isinstance(wrap_around, bool):\n wrap_around = [wrap_around] * image.ndim\n elif (hasattr(wrap_around, '__getitem__')\n and not isinstance(wrap_around, string_types)):\n if len(wrap_around) != image.ndim:\n raise ValueError('Length of `wrap_around` must equal the '\n 'dimensionality of image')\n wrap_around = [bool(wa) for wa in wrap_around]\n else:\n raise ValueError('`wrap_around` must be a bool or a sequence with '\n 'length equal to the dimensionality of image')\n if image.ndim == 1:\n if np.ma.isMaskedArray(image):\n raise ValueError('1D masked images cannot be unwrapped')\n if wrap_around[0]:\n raise ValueError('`wrap_around` is not supported for 1D images')\n if image.ndim in (2, 3) and 1 in image.shape:\n warnings.warn('Image has a length 1 dimension. Consider using an '\n 'array of lower dimensionality to use a more efficient '\n 'algorithm')\n\n if np.ma.isMaskedArray(image):\n mask = np.require(image.mask, np.uint8, ['C'])\n image = image.data\n else:\n mask = np.zeros_like(image, dtype=np.uint8, order='C')\n\n image_not_masked = np.asarray(image, dtype=np.double, order='C')\n image_unwrapped = np.empty_like(image, dtype=np.double, order='C')\n\n if image.ndim == 1:\n unwrap_1d(image_not_masked, image_unwrapped)\n elif image.ndim == 2:\n unwrap_2d(image_not_masked, mask, image_unwrapped,\n wrap_around, seed)\n elif image.ndim == 3:\n unwrap_3d(image_not_masked, mask, image_unwrapped,\n wrap_around, seed)\n\n if np.ma.isMaskedArray(image):\n return np.ma.array(image_unwrapped, mask=mask)\n else:\n return image_unwrapped\n", "path": "skimage/restoration/unwrap.py"}], "after_files": [{"content": "import numpy as np\nimport warnings\nfrom six import string_types\n\nfrom ._unwrap_1d import unwrap_1d\nfrom ._unwrap_2d import unwrap_2d\nfrom ._unwrap_3d import unwrap_3d\n\n\ndef unwrap_phase(image, wrap_around=False, seed=None):\n '''Recover the original from a wrapped phase image.\n\n From an image wrapped to lie in the interval [-pi, pi), recover the\n original, unwrapped image.\n\n Parameters\n ----------\n image : 1D, 2D or 3D ndarray of floats, optionally a masked array\n The values should be in the range [-pi, pi). If a masked array is\n provided, the masked entries will not be changed, and their values\n will not be used to guide the unwrapping of neighboring, unmasked\n values. Masked 1D arrays are not allowed, and will raise a\n `ValueError`.\n wrap_around : bool or sequence of bool, optional\n When an element of the sequence is `True`, the unwrapping process\n will regard the edges along the corresponding axis of the image to be\n connected and use this connectivity to guide the phase unwrapping\n process. If only a single boolean is given, it will apply to all axes.\n Wrap around is not supported for 1D arrays.\n seed : int, optional\n Unwrapping 2D or 3D images uses random initialization. 
This sets the\n seed of the PRNG to achieve deterministic behavior.\n\n Returns\n -------\n image_unwrapped : array_like, double\n Unwrapped image of the same shape as the input. If the input `image`\n was a masked array, the mask will be preserved.\n\n Raises\n ------\n ValueError\n If called with a masked 1D array or called with a 1D array and\n ``wrap_around=True``.\n\n Examples\n --------\n >>> c0, c1 = np.ogrid[-1:1:128j, -1:1:128j]\n >>> image = 12 * np.pi * np.exp(-(c0**2 + c1**2))\n >>> image_wrapped = np.angle(np.exp(1j * image))\n >>> image_unwrapped = unwrap_phase(image_wrapped)\n >>> np.std(image_unwrapped - image) < 1e-6 # A constant offset is normal\n True\n\n References\n ----------\n .. [1] Miguel Arevallilo Herraez, David R. Burton, Michael J. Lalor,\n and Munther A. Gdeisat, \"Fast two-dimensional phase-unwrapping\n algorithm based on sorting by reliability following a noncontinuous\n path\", Journal Applied Optics, Vol. 41, No. 35 (2002) 7437,\n .. [2] Abdul-Rahman, H., Gdeisat, M., Burton, D., & Lalor, M., \"Fast\n three-dimensional phase-unwrapping algorithm based on sorting by\n reliability following a non-continuous path. In W. Osten,\n C. Gorecki, & E. L. Novak (Eds.), Optical Metrology (2005) 32--40,\n International Society for Optics and Photonics.\n '''\n if image.ndim not in (1, 2, 3):\n raise ValueError('Image must be 1, 2, or 3 dimensional')\n if isinstance(wrap_around, bool):\n wrap_around = [wrap_around] * image.ndim\n elif (hasattr(wrap_around, '__getitem__')\n and not isinstance(wrap_around, string_types)):\n if len(wrap_around) != image.ndim:\n raise ValueError('Length of `wrap_around` must equal the '\n 'dimensionality of image')\n wrap_around = [bool(wa) for wa in wrap_around]\n else:\n raise ValueError('`wrap_around` must be a bool or a sequence with '\n 'length equal to the dimensionality of image')\n if image.ndim == 1:\n if np.ma.isMaskedArray(image):\n raise ValueError('1D masked images cannot be unwrapped')\n if wrap_around[0]:\n raise ValueError('`wrap_around` is not supported for 1D images')\n if image.ndim in (2, 3) and 1 in image.shape:\n warnings.warn('Image has a length 1 dimension. Consider using an '\n 'array of lower dimensionality to use a more efficient '\n 'algorithm')\n\n if np.ma.isMaskedArray(image):\n mask = np.require(np.ma.getmaskarray(image), np.uint8, ['C'])\n else:\n mask = np.zeros_like(image, dtype=np.uint8, order='C')\n\n image_not_masked = np.asarray(\n np.ma.getdata(image), dtype=np.double, order='C')\n image_unwrapped = np.empty_like(image, dtype=np.double, order='C',\n subok=False)\n\n if image.ndim == 1:\n unwrap_1d(image_not_masked, image_unwrapped)\n elif image.ndim == 2:\n unwrap_2d(image_not_masked, mask, image_unwrapped,\n wrap_around, seed)\n elif image.ndim == 3:\n unwrap_3d(image_not_masked, mask, image_unwrapped,\n wrap_around, seed)\n\n if np.ma.isMaskedArray(image):\n return np.ma.array(image_unwrapped, mask=mask)\n else:\n return image_unwrapped\n", "path": "skimage/restoration/unwrap.py"}]}
| 1,881 | 249 |
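For context on the entry above, a short standalone NumPy sketch (not scikit-image code) of the `np.ma` helpers the accepted patch switches to: unlike the `.mask` and `.data` attributes, `np.ma.getmaskarray` and `np.ma.getdata` always return full-size arrays, both for the fully masked input from the bug report and for arrays whose mask is still the scalar `nomask`.

```python
import numpy as np

image = np.ma.arange(4, dtype='float32')   # masked array with no mask set yet
print(image.mask)                          # False (the scalar np.ma.nomask)
print(np.ma.getmaskarray(image))           # [False False False False]

image[:] = np.ma.masked                    # fully masked, as in the bug report
print(np.ma.getmaskarray(image))           # [ True  True  True  True]
print(np.ma.getdata(image))                # the underlying float data, shape (4,)
```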
gh_patches_debug_50539
|
rasdani/github-patches
|
git_diff
|
mars-project__mars-291
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Chinese document layout has a link error.
<!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
A clear and concise description of what the bug is.
Chinese document layout has a connection error.
doc link:[https://mars-project.readthedocs.io/zh_CN/latest/tensor/overview.html](https://mars-project.readthedocs.io/zh_CN/latest/tensor/overview.html)

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/source/norm_zh.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 # Copyright 1999-2018 Alibaba Group Holding Ltd.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 """
18 This file folds Chinese po files by hacking babel.messages.pofile.normalize
19 using jieba text segment library instead of regex
20 """
21
22 import datetime
23 import os
24
25 from babel.messages import pofile
26 from babel.messages.pofile import escape
27
28
29 def _zh_len(s):
30 """
31 Calculate text length in Chinese
32 """
33 try:
34 return len(s.encode('gb2312'))
35 except ValueError:
36 return len(s)
37
38
39 def _zh_split(s):
40 """
41 Split text length in Chinese
42 """
43 import jieba
44 try:
45 s.encode('ascii')
46 has_zh = False
47 except ValueError:
48 has_zh = True
49
50 if has_zh:
51 return list(jieba.cut(s))
52 else:
53 return pofile.WORD_SEP.split(s)
54
55
56 # code modified from babel.messages.pofile (hash 359ecffca479dfe032d0f7210d5cd8160599c816)
57 def _normalize(string, prefix='', width=76):
58 r"""Convert a string into a format that is appropriate for .po files.
59 >>> print(normalize('''Say:
60 ... "hello, world!"
61 ... ''', width=None))
62 ""
63 "Say:\n"
64 " \"hello, world!\"\n"
65 >>> print(normalize('''Say:
66 ... "Lorem ipsum dolor sit amet, consectetur adipisicing elit, "
67 ... ''', width=32))
68 ""
69 "Say:\n"
70 " \"Lorem ipsum dolor sit "
71 "amet, consectetur adipisicing"
72 " elit, \"\n"
73 :param string: the string to normalize
74 :param prefix: a string that should be prepended to every line
75 :param width: the maximum line width; use `None`, 0, or a negative number
76 to completely disable line wrapping
77 """
78
79 if width and width > 0:
80 prefixlen = _zh_len(prefix)
81 lines = []
82 for line in string.splitlines(True):
83 if _zh_len(escape(line)) + prefixlen > width:
84 chunks = _zh_split(line)
85 chunks.reverse()
86 while chunks:
87 buf = []
88 size = 2
89 while chunks:
90 l = _zh_len(escape(chunks[-1])) - 2 + prefixlen # noqa: E741
91 if size + l < width:
92 buf.append(chunks.pop())
93 size += l
94 else:
95 if not buf:
96 # handle long chunks by putting them on a
97 # separate line
98 buf.append(chunks.pop())
99 break
100 lines.append(u''.join(buf))
101 else:
102 lines.append(line)
103 else:
104 lines = string.splitlines(True)
105
106 if len(lines) <= 1:
107 return escape(string)
108
109 # Remove empty trailing line
110 if lines and not lines[-1]:
111 del lines[-1]
112 lines[-1] += '\n'
113 return u'""\n' + u'\n'.join([(prefix + escape(line)) for line in lines])
114
115
116 def main():
117 try:
118 import jieba # noqa: F401
119 except ImportError:
120 return
121
122 pofile.normalize = _normalize
123 for root, dirs, files in os.walk('.'):
124 if 'zh' not in root:
125 continue
126 for f in files:
127 if not f.endswith('.po'):
128 continue
129 path = os.path.join(root, f)
130
131 # only modify recent-changed files
132 modify_time = datetime.datetime.fromtimestamp(os.path.getmtime(path))
133 if (datetime.datetime.now() - modify_time).total_seconds() > 1800:
134 continue
135
136 with open(path, 'rb') as inpf:
137 catalog = pofile.read_po(inpf)
138 with open(path, 'wb') as outf:
139 pofile.write_po(outf, catalog)
140
141
142 if __name__ == '__main__':
143 main()
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/source/norm_zh.py b/docs/source/norm_zh.py
--- a/docs/source/norm_zh.py
+++ b/docs/source/norm_zh.py
@@ -130,7 +130,7 @@
# only modify recent-changed files
modify_time = datetime.datetime.fromtimestamp(os.path.getmtime(path))
- if (datetime.datetime.now() - modify_time).total_seconds() > 1800:
+ if (datetime.datetime.now() - modify_time).total_seconds() > 120:
continue
with open(path, 'rb') as inpf:
|
{"golden_diff": "diff --git a/docs/source/norm_zh.py b/docs/source/norm_zh.py\n--- a/docs/source/norm_zh.py\n+++ b/docs/source/norm_zh.py\n@@ -130,7 +130,7 @@\n \n # only modify recent-changed files\n modify_time = datetime.datetime.fromtimestamp(os.path.getmtime(path))\n- if (datetime.datetime.now() - modify_time).total_seconds() > 1800:\n+ if (datetime.datetime.now() - modify_time).total_seconds() > 120:\n continue\n \n with open(path, 'rb') as inpf:\n", "issue": "[BUG] Chinese document layout has a link error.\n<!--\r\nThank you for your contribution!\r\n\r\nPlease review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.\r\n-->\r\n\r\n**Describe the bug**\r\nA clear and concise description of what the bug is.\r\n\r\nChinese document layout has a connection error.\r\ndoc link:[https://mars-project.readthedocs.io/zh_CN/latest/tensor/overview.html](https://mars-project.readthedocs.io/zh_CN/latest/tensor/overview.html)\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# Copyright 1999-2018 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nThis file folds Chinese po files by hacking babel.messages.pofile.normalize\nusing jieba text segment library instead of regex\n\"\"\"\n\nimport datetime\nimport os\n\nfrom babel.messages import pofile\nfrom babel.messages.pofile import escape\n\n\ndef _zh_len(s):\n \"\"\"\n Calculate text length in Chinese\n \"\"\"\n try:\n return len(s.encode('gb2312'))\n except ValueError:\n return len(s)\n\n\ndef _zh_split(s):\n \"\"\"\n Split text length in Chinese\n \"\"\"\n import jieba\n try:\n s.encode('ascii')\n has_zh = False\n except ValueError:\n has_zh = True\n\n if has_zh:\n return list(jieba.cut(s))\n else:\n return pofile.WORD_SEP.split(s)\n\n\n# code modified from babel.messages.pofile (hash 359ecffca479dfe032d0f7210d5cd8160599c816)\ndef _normalize(string, prefix='', width=76):\n r\"\"\"Convert a string into a format that is appropriate for .po files.\n >>> print(normalize('''Say:\n ... \"hello, world!\"\n ... ''', width=None))\n \"\"\n \"Say:\\n\"\n \" \\\"hello, world!\\\"\\n\"\n >>> print(normalize('''Say:\n ... \"Lorem ipsum dolor sit amet, consectetur adipisicing elit, \"\n ... 
''', width=32))\n \"\"\n \"Say:\\n\"\n \" \\\"Lorem ipsum dolor sit \"\n \"amet, consectetur adipisicing\"\n \" elit, \\\"\\n\"\n :param string: the string to normalize\n :param prefix: a string that should be prepended to every line\n :param width: the maximum line width; use `None`, 0, or a negative number\n to completely disable line wrapping\n \"\"\"\n\n if width and width > 0:\n prefixlen = _zh_len(prefix)\n lines = []\n for line in string.splitlines(True):\n if _zh_len(escape(line)) + prefixlen > width:\n chunks = _zh_split(line)\n chunks.reverse()\n while chunks:\n buf = []\n size = 2\n while chunks:\n l = _zh_len(escape(chunks[-1])) - 2 + prefixlen # noqa: E741\n if size + l < width:\n buf.append(chunks.pop())\n size += l\n else:\n if not buf:\n # handle long chunks by putting them on a\n # separate line\n buf.append(chunks.pop())\n break\n lines.append(u''.join(buf))\n else:\n lines.append(line)\n else:\n lines = string.splitlines(True)\n\n if len(lines) <= 1:\n return escape(string)\n\n # Remove empty trailing line\n if lines and not lines[-1]:\n del lines[-1]\n lines[-1] += '\\n'\n return u'\"\"\\n' + u'\\n'.join([(prefix + escape(line)) for line in lines])\n\n\ndef main():\n try:\n import jieba # noqa: F401\n except ImportError:\n return\n\n pofile.normalize = _normalize\n for root, dirs, files in os.walk('.'):\n if 'zh' not in root:\n continue\n for f in files:\n if not f.endswith('.po'):\n continue\n path = os.path.join(root, f)\n\n # only modify recent-changed files\n modify_time = datetime.datetime.fromtimestamp(os.path.getmtime(path))\n if (datetime.datetime.now() - modify_time).total_seconds() > 1800:\n continue\n\n with open(path, 'rb') as inpf:\n catalog = pofile.read_po(inpf)\n with open(path, 'wb') as outf:\n pofile.write_po(outf, catalog)\n\n\nif __name__ == '__main__':\n main()\n", "path": "docs/source/norm_zh.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# Copyright 1999-2018 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nThis file folds Chinese po files by hacking babel.messages.pofile.normalize\nusing jieba text segment library instead of regex\n\"\"\"\n\nimport datetime\nimport os\n\nfrom babel.messages import pofile\nfrom babel.messages.pofile import escape\n\n\ndef _zh_len(s):\n \"\"\"\n Calculate text length in Chinese\n \"\"\"\n try:\n return len(s.encode('gb2312'))\n except ValueError:\n return len(s)\n\n\ndef _zh_split(s):\n \"\"\"\n Split text length in Chinese\n \"\"\"\n import jieba\n try:\n s.encode('ascii')\n has_zh = False\n except ValueError:\n has_zh = True\n\n if has_zh:\n return list(jieba.cut(s))\n else:\n return pofile.WORD_SEP.split(s)\n\n\n# code modified from babel.messages.pofile (hash 359ecffca479dfe032d0f7210d5cd8160599c816)\ndef _normalize(string, prefix='', width=76):\n r\"\"\"Convert a string into a format that is appropriate for .po files.\n >>> print(normalize('''Say:\n ... \"hello, world!\"\n ... 
''', width=None))\n \"\"\n \"Say:\\n\"\n \" \\\"hello, world!\\\"\\n\"\n >>> print(normalize('''Say:\n ... \"Lorem ipsum dolor sit amet, consectetur adipisicing elit, \"\n ... ''', width=32))\n \"\"\n \"Say:\\n\"\n \" \\\"Lorem ipsum dolor sit \"\n \"amet, consectetur adipisicing\"\n \" elit, \\\"\\n\"\n :param string: the string to normalize\n :param prefix: a string that should be prepended to every line\n :param width: the maximum line width; use `None`, 0, or a negative number\n to completely disable line wrapping\n \"\"\"\n\n if width and width > 0:\n prefixlen = _zh_len(prefix)\n lines = []\n for line in string.splitlines(True):\n if _zh_len(escape(line)) + prefixlen > width:\n chunks = _zh_split(line)\n chunks.reverse()\n while chunks:\n buf = []\n size = 2\n while chunks:\n l = _zh_len(escape(chunks[-1])) - 2 + prefixlen # noqa: E741\n if size + l < width:\n buf.append(chunks.pop())\n size += l\n else:\n if not buf:\n # handle long chunks by putting them on a\n # separate line\n buf.append(chunks.pop())\n break\n lines.append(u''.join(buf))\n else:\n lines.append(line)\n else:\n lines = string.splitlines(True)\n\n if len(lines) <= 1:\n return escape(string)\n\n # Remove empty trailing line\n if lines and not lines[-1]:\n del lines[-1]\n lines[-1] += '\\n'\n return u'\"\"\\n' + u'\\n'.join([(prefix + escape(line)) for line in lines])\n\n\ndef main():\n try:\n import jieba # noqa: F401\n except ImportError:\n return\n\n pofile.normalize = _normalize\n for root, dirs, files in os.walk('.'):\n if 'zh' not in root:\n continue\n for f in files:\n if not f.endswith('.po'):\n continue\n path = os.path.join(root, f)\n\n # only modify recent-changed files\n modify_time = datetime.datetime.fromtimestamp(os.path.getmtime(path))\n if (datetime.datetime.now() - modify_time).total_seconds() > 120:\n continue\n\n with open(path, 'rb') as inpf:\n catalog = pofile.read_po(inpf)\n with open(path, 'wb') as outf:\n pofile.write_po(outf, catalog)\n\n\nif __name__ == '__main__':\n main()\n", "path": "docs/source/norm_zh.py"}]}
| 1,783 | 136 |
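For context on the `_zh_split` helper in the row above: it falls back to `jieba` word segmentation whenever a message contains non-ASCII text, so wrapped .po lines break on Chinese word boundaries instead of whitespace. A rough illustration (the sample sentence is arbitrary and `jieba` must be installed):

```python
# Rough illustration of the jieba-based splitting used by _zh_split above.
# The sample sentence is arbitrary; exact token boundaries depend on jieba's
# dictionary, so the output shown is only indicative.
import jieba

text = "这是一个用于文档翻译的示例句子"
print(list(jieba.cut(text)))  # e.g. ['这是', '一个', '用于', '文档', '翻译', '的', '示例', '句子']
```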
gh_patches_debug_20191
|
rasdani/github-patches
|
git_diff
|
mathesar-foundation__mathesar-267
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
DB function to edit column names
**Problem**
<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->
The user might want to edit column names.
**Proposed solution**
<!-- A clear and concise description of your proposed solution or feature. -->
We need a `db` function to allow the user to edit the name of a column. We need to check and fix foreign keys and other references to the column.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `db/columns.py`
Content:
```
1 from sqlalchemy import Column, Integer, ForeignKey
2 from db import constants
3
4
5 NULLABLE = "nullable"
6 PRIMARY_KEY = "primary_key"
7 TYPE = "type"
8
9 ID_TYPE = Integer
10 DEFAULT_COLUMNS = {
11 constants.ID: {TYPE: ID_TYPE, PRIMARY_KEY: True, NULLABLE: False}
12 }
13
14
15 class MathesarColumn(Column):
16 """
17 This class constrains the possible arguments, enabling us to include
18 a copy method (which has been deprecated in upstream SQLAlchemy since
19 1.4). The idea is that we can faithfully copy the subset of the
20 column definition that we care about, and this class defines that
21 subset.
22 """
23 def __init__(
24 self,
25 name,
26 sa_type,
27 foreign_keys=set(),
28 primary_key=False,
29 nullable=True,
30 ):
31 """
32 Construct a new ``MathesarColumn`` object.
33
34 Required arguments:
35 name -- String giving the name of the column in the database.
36 sa_type -- the SQLAlchemy type of the column.
37
38 Optional keyword arguments:
39 primary_key -- Boolean giving whether the column is a primary key.
40 """
41 super().__init__(
42 *foreign_keys,
43 name=name,
44 type_=sa_type,
45 primary_key=primary_key,
46 nullable=nullable,
47 )
48
49 @classmethod
50 def from_column(cls, column):
51 """
52 This alternate init method creates a new column (a copy) of the
53 given column. It respects only the properties in the __init__
54 of the MathesarColumn.
55 """
56 fkeys = {ForeignKey(fk.target_fullname) for fk in column.foreign_keys}
57 return cls(
58 column.name,
59 column.type,
60 foreign_keys=fkeys,
61 primary_key=column.primary_key,
62 nullable=column.nullable,
63 )
64
65 @property
66 def is_default(self):
67 default_def = DEFAULT_COLUMNS.get(self.name, False)
68 return (
69 default_def
70 and self.type.python_type == default_def[TYPE]().python_type
71 and self.primary_key == default_def.get(PRIMARY_KEY, False)
72 and self.nullable == default_def.get(NULLABLE, True)
73 )
74
75
76 def get_default_mathesar_column_list():
77 return [
78 MathesarColumn(
79 c,
80 DEFAULT_COLUMNS[c][TYPE],
81 primary_key=DEFAULT_COLUMNS[c][PRIMARY_KEY]
82 )
83 for c in DEFAULT_COLUMNS
84 ]
85
86
87 def init_mathesar_table_column_list_with_defaults(column_list):
88 default_columns = get_default_mathesar_column_list()
89 given_columns = [MathesarColumn.from_column(c) for c in column_list]
90 return default_columns + given_columns
91
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/db/columns.py b/db/columns.py
--- a/db/columns.py
+++ b/db/columns.py
@@ -1,4 +1,4 @@
-from sqlalchemy import Column, Integer, ForeignKey
+from sqlalchemy import Column, Integer, ForeignKey, Table, DDL, MetaData
from db import constants
@@ -88,3 +88,19 @@
default_columns = get_default_mathesar_column_list()
given_columns = [MathesarColumn.from_column(c) for c in column_list]
return default_columns + given_columns
+
+
+def rename_column(schema, table_name, column_name, new_column_name, engine):
+ _preparer = engine.dialect.identifier_preparer
+ with engine.begin() as conn:
+ metadata = MetaData(bind=engine, schema=schema)
+ table = Table(table_name, metadata, schema=schema, autoload_with=engine)
+ column = table.columns[column_name]
+ prepared_table_name = _preparer.format_table(table)
+ prepared_column_name = _preparer.format_column(column)
+ prepared_new_column_name = _preparer.quote(new_column_name)
+ alter_stmt = f"""
+ ALTER TABLE {prepared_table_name}
+ RENAME {prepared_column_name} TO {prepared_new_column_name}
+ """
+ conn.execute(DDL(alter_stmt))
|
{"golden_diff": "diff --git a/db/columns.py b/db/columns.py\n--- a/db/columns.py\n+++ b/db/columns.py\n@@ -1,4 +1,4 @@\n-from sqlalchemy import Column, Integer, ForeignKey\n+from sqlalchemy import Column, Integer, ForeignKey, Table, DDL, MetaData\n from db import constants\n \n \n@@ -88,3 +88,19 @@\n default_columns = get_default_mathesar_column_list()\n given_columns = [MathesarColumn.from_column(c) for c in column_list]\n return default_columns + given_columns\n+\n+\n+def rename_column(schema, table_name, column_name, new_column_name, engine):\n+ _preparer = engine.dialect.identifier_preparer\n+ with engine.begin() as conn:\n+ metadata = MetaData(bind=engine, schema=schema)\n+ table = Table(table_name, metadata, schema=schema, autoload_with=engine)\n+ column = table.columns[column_name]\n+ prepared_table_name = _preparer.format_table(table)\n+ prepared_column_name = _preparer.format_column(column)\n+ prepared_new_column_name = _preparer.quote(new_column_name)\n+ alter_stmt = f\"\"\"\n+ ALTER TABLE {prepared_table_name}\n+ RENAME {prepared_column_name} TO {prepared_new_column_name}\n+ \"\"\"\n+ conn.execute(DDL(alter_stmt))\n", "issue": "DB function to edit column names\n**Problem**\r\n<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->\r\nThe user might want to edit column names.\r\n\r\n**Proposed solution**\r\n<!-- A clear and concise description of your proposed solution or feature. -->\r\nWe need a `db` function to allow the user to edit the name of a column. We need to check and fix foreign keys and other references to the column.\n", "before_files": [{"content": "from sqlalchemy import Column, Integer, ForeignKey\nfrom db import constants\n\n\nNULLABLE = \"nullable\"\nPRIMARY_KEY = \"primary_key\"\nTYPE = \"type\"\n\nID_TYPE = Integer\nDEFAULT_COLUMNS = {\n constants.ID: {TYPE: ID_TYPE, PRIMARY_KEY: True, NULLABLE: False}\n}\n\n\nclass MathesarColumn(Column):\n \"\"\"\n This class constrains the possible arguments, enabling us to include\n a copy method (which has been deprecated in upstream SQLAlchemy since\n 1.4). The idea is that we can faithfully copy the subset of the\n column definition that we care about, and this class defines that\n subset.\n \"\"\"\n def __init__(\n self,\n name,\n sa_type,\n foreign_keys=set(),\n primary_key=False,\n nullable=True,\n ):\n \"\"\"\n Construct a new ``MathesarColumn`` object.\n\n Required arguments:\n name -- String giving the name of the column in the database.\n sa_type -- the SQLAlchemy type of the column.\n\n Optional keyword arguments:\n primary_key -- Boolean giving whether the column is a primary key.\n \"\"\"\n super().__init__(\n *foreign_keys,\n name=name,\n type_=sa_type,\n primary_key=primary_key,\n nullable=nullable,\n )\n\n @classmethod\n def from_column(cls, column):\n \"\"\"\n This alternate init method creates a new column (a copy) of the\n given column. 
It respects only the properties in the __init__\n of the MathesarColumn.\n \"\"\"\n fkeys = {ForeignKey(fk.target_fullname) for fk in column.foreign_keys}\n return cls(\n column.name,\n column.type,\n foreign_keys=fkeys,\n primary_key=column.primary_key,\n nullable=column.nullable,\n )\n\n @property\n def is_default(self):\n default_def = DEFAULT_COLUMNS.get(self.name, False)\n return (\n default_def\n and self.type.python_type == default_def[TYPE]().python_type\n and self.primary_key == default_def.get(PRIMARY_KEY, False)\n and self.nullable == default_def.get(NULLABLE, True)\n )\n\n\ndef get_default_mathesar_column_list():\n return [\n MathesarColumn(\n c,\n DEFAULT_COLUMNS[c][TYPE],\n primary_key=DEFAULT_COLUMNS[c][PRIMARY_KEY]\n )\n for c in DEFAULT_COLUMNS\n ]\n\n\ndef init_mathesar_table_column_list_with_defaults(column_list):\n default_columns = get_default_mathesar_column_list()\n given_columns = [MathesarColumn.from_column(c) for c in column_list]\n return default_columns + given_columns\n", "path": "db/columns.py"}], "after_files": [{"content": "from sqlalchemy import Column, Integer, ForeignKey, Table, DDL, MetaData\nfrom db import constants\n\n\nNULLABLE = \"nullable\"\nPRIMARY_KEY = \"primary_key\"\nTYPE = \"type\"\n\nID_TYPE = Integer\nDEFAULT_COLUMNS = {\n constants.ID: {TYPE: ID_TYPE, PRIMARY_KEY: True, NULLABLE: False}\n}\n\n\nclass MathesarColumn(Column):\n \"\"\"\n This class constrains the possible arguments, enabling us to include\n a copy method (which has been deprecated in upstream SQLAlchemy since\n 1.4). The idea is that we can faithfully copy the subset of the\n column definition that we care about, and this class defines that\n subset.\n \"\"\"\n def __init__(\n self,\n name,\n sa_type,\n foreign_keys=set(),\n primary_key=False,\n nullable=True,\n ):\n \"\"\"\n Construct a new ``MathesarColumn`` object.\n\n Required arguments:\n name -- String giving the name of the column in the database.\n sa_type -- the SQLAlchemy type of the column.\n\n Optional keyword arguments:\n primary_key -- Boolean giving whether the column is a primary key.\n \"\"\"\n super().__init__(\n *foreign_keys,\n name=name,\n type_=sa_type,\n primary_key=primary_key,\n nullable=nullable,\n )\n\n @classmethod\n def from_column(cls, column):\n \"\"\"\n This alternate init method creates a new column (a copy) of the\n given column. 
It respects only the properties in the __init__\n of the MathesarColumn.\n \"\"\"\n fkeys = {ForeignKey(fk.target_fullname) for fk in column.foreign_keys}\n return cls(\n column.name,\n column.type,\n foreign_keys=fkeys,\n primary_key=column.primary_key,\n nullable=column.nullable,\n )\n\n @property\n def is_default(self):\n default_def = DEFAULT_COLUMNS.get(self.name, False)\n return (\n default_def\n and self.type.python_type == default_def[TYPE]().python_type\n and self.primary_key == default_def.get(PRIMARY_KEY, False)\n and self.nullable == default_def.get(NULLABLE, True)\n )\n\n\ndef get_default_mathesar_column_list():\n return [\n MathesarColumn(\n c,\n DEFAULT_COLUMNS[c][TYPE],\n primary_key=DEFAULT_COLUMNS[c][PRIMARY_KEY]\n )\n for c in DEFAULT_COLUMNS\n ]\n\n\ndef init_mathesar_table_column_list_with_defaults(column_list):\n default_columns = get_default_mathesar_column_list()\n given_columns = [MathesarColumn.from_column(c) for c in column_list]\n return default_columns + given_columns\n\n\ndef rename_column(schema, table_name, column_name, new_column_name, engine):\n _preparer = engine.dialect.identifier_preparer\n with engine.begin() as conn:\n metadata = MetaData(bind=engine, schema=schema)\n table = Table(table_name, metadata, schema=schema, autoload_with=engine)\n column = table.columns[column_name]\n prepared_table_name = _preparer.format_table(table)\n prepared_column_name = _preparer.format_column(column)\n prepared_new_column_name = _preparer.quote(new_column_name)\n alter_stmt = f\"\"\"\n ALTER TABLE {prepared_table_name}\n RENAME {prepared_column_name} TO {prepared_new_column_name}\n \"\"\"\n conn.execute(DDL(alter_stmt))\n", "path": "db/columns.py"}]}
| 1,084 | 291 |
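A hypothetical call to the `rename_column` helper introduced by the patch above might look like the sketch below; the connection URL, schema, table, and column names are all placeholders, and a PostgreSQL engine is assumed since the emitted statement is `ALTER TABLE ... RENAME ... TO ...`.

```python
# Hypothetical usage of the rename_column helper added in the patch above.
# The connection URL and every identifier below are placeholders.
from sqlalchemy import create_engine

from db.columns import rename_column

engine = create_engine("postgresql://user:password@localhost:5432/mathesar")

# Renames public.patients.fname to public.patients.first_name; quoting is
# handled via the dialect's identifier preparer inside the helper.
rename_column("public", "patients", "fname", "first_name", engine)
```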
gh_patches_debug_23785
|
rasdani/github-patches
|
git_diff
|
Zeroto521__my-data-toolkit-704
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ENH: New geoaccessor to calculate two coordinates distance on earth
<!--
Thanks for contributing a pull request!
Please follow these standard acronyms to start the commit message:
- ENH: enhancement
- BUG: bug fix
- DOC: documentation
- TYP: type annotations
- TST: addition or modification of tests
- MAINT: maintenance commit (refactoring, typos, etc.)
- BLD: change related to building
- REL: related to releasing
- API: an (incompatible) API change
- DEP: deprecate something, or remove a deprecated object
- DEV: development tool or utility
- REV: revert an earlier commit
- PERF: performance improvement
- BOT: always commit via a bot
- CI: related to CI or CD
- CLN: Code cleanup
-->
- [ ] closes #xxxx
- [ ] whatsnew entry
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dtoolkit/geoaccessor/geoseries/geodistance.py`
Content:
```
1 from __future__ import annotations
2
3 from warnings import warn
4
5 import geopandas as gpd
6 import numpy as np
7 import pandas as pd
8 from shapely.geometry.base import BaseGeometry
9
10 from dtoolkit.geoaccessor.register import register_geoseries_method
11 from dtoolkit.util._exception import find_stack_level
12
13
14 @register_geoseries_method
15 def geodistance(
16 s: gpd.GeoSeries,
17 /,
18 other: BaseGeometry | gpd.GeoSeries | gpd.GeoDataFrame,
19 align: bool = True,
20 radius: float = 6371008.7714150598,
21 ) -> pd.Series:
22 """
23 Returns a ``Series`` containing the `great-circle`__ distance to aligned other.
24
25 __ https://en.wikipedia.org/wiki/Great-circle_distance
26
27 The algorithm uses the Vincenty formula which is more accurate than the Haversine
28 formula.
29
30 .. math::
31
32 D(x, y) = \\arctan[
33 \\frac{
34 \\sqrt{
35 (
36 \\cos(y_1) \\sin(y_2)
37 - \\sin(y_1) \\cos(y_2) \\cos(x_2 - x_1)
38 )^2
39 + (\\cos(y_2) \\sin(x_2 - x_1))^2
40 }
41 }{
42 \\sin(y_1) \\sin(y_2)
43 + \\cos(y_1) \\cos(y_2) \\cos(x_2 - x_1)
44 }
45 ]
46
47 Parameters
48 ----------
49 other : BaseGeometry, GeoSeries, or GeoDataFrame
50
51 align : bool, default True
52 If True, automatically aligns GeoSeries based on their indices. If False,
53 the order of elements is preserved.
54
55 radius : float, default 6371008.7714150598
56 Great-circle distance uses a spherical model of the earth, using the mean earth
57 radius as defined by the International Union of Geodesy and Geophysics,
58 (2\\ *a* + *b*)/3 = 6371008.7714150598 meters for WGS-84.
59
60 Returns
61 -------
62 Series
63 The values are the great-circle distances and its unit is meters.
64
65 Raises
66 ------
67 ValueError
68 If the CRS is not ``ESGP:4326``.
69
70 TypeError
71 If the other is not a ``BaseGeometry``, ``GeoSeries``, or ``GeoDataFrame``.
72
73 See Also
74 --------
75 geopandas.GeoSeries.distance
76 dtoolkit.geoaccessor.geoseries.geodistance
77 dtoolkit.geoaccessor.geoseries.geodistance_matrix
78 dtoolkit.geoaccessor.geodataframe.geodistance
79 dtoolkit.geoaccessor.geodataframe.geodistance_matrix
80
81 Notes
82 -----
83 - Currently, only supports Point geometry.
84 - The geodesic distance is the shortest distance on the surface of an ellipsoidal
85 model of the earth. Resulting in an error of up to about 0.5%.
86
87 Examples
88 --------
89 >>> import dtoolkit.geoaccessor
90 >>> import pandas as pd
91 >>> import geopandas as gpd
92 >>> from shapely.geometry import Point
93 >>> df = (
94 ... pd.DataFrame({"x": [122, 100], "y":[55, 1]})
95 ... .from_xy("x", "y", crs=4326)
96 ... )
97 >>> df
98 x y geometry
99 0 122 55 POINT (122.00000 55.00000)
100 1 100 1 POINT (100.00000 1.00000)
101 >>> df.geodistance(Point(120, 30)) / 1e6
102 0 2.784435
103 1 3.855604
104 dtype: float64
105
106 Calculate the great-circle distance of corresponding points.
107
108 >>> s = gpd.GeoSeries([Point(120, 30), Point(120, 50)], index=[1, 2], crs=4326)
109 >>> s
110 1 POINT (120.00000 30.00000)
111 2 POINT (120.00000 50.00000)
112 dtype: geometry
113 >>> df.geodistance(s) / 1e6
114 0 NaN
115 1 3.855604
116 2 NaN
117 dtype: float64
118 >>> df.geodistance(s, align=False) / 1e6
119 0 2.784435
120 1 5.768885
121 dtype: float64
122 """
123
124 if s.crs != 4326:
125 raise ValueError(f"Only support 'EPSG:4326' CRS, but got {s.crs!r}.")
126 if not isinstance(other, (BaseGeometry, gpd.base.GeoPandasBase)):
127 raise TypeError(f"Unknown type: {type(other).__name__!r}.")
128
129 if isinstance(other, gpd.base.GeoPandasBase):
130 if other.crs != 4326:
131 raise ValueError(f"Only support 'EPSG:4326' CRS, but got {other.crs!r}.")
132
133 s = s.geometry
134 if align and not s.index.equals(other.index):
135 warn("The indices are different.", stacklevel=find_stack_level())
136 s, other = s.align(other)
137 else:
138 other = other.geometry
139
140 return pd.Series(
141 distance(
142 s.geometry.x.to_numpy(),
143 s.geometry.y.to_numpy(),
144 other.x if isinstance(other, BaseGeometry) else other.x.to_numpy(),
145 other.y if isinstance(other, BaseGeometry) else other.y.to_numpy(),
146 radius=radius,
147 ),
148 index=s.index,
149 )
150
151
152 # based on https://github.com/geopy/geopy geopy/distance.py::great_circle.measure
153 def distance(
154 lng1: np.ndarray | float,
155 lat1: np.ndarray | float,
156 lng2: np.ndarray | float,
157 lat2: np.ndarray | float,
158 radius: float,
159 ) -> np.ndarray:
160 lng1, lat1, lng2, lat2 = map(np.radians, (lng1, lat1, lng2, lat2))
161 sin_lat1, cos_lat1 = np.sin(lat1), np.cos(lat1)
162 sin_lat2, cos_lat2 = np.sin(lat2), np.cos(lat2)
163
164 delta_lng = lng2 - lng1
165 cos_delta_lng, sin_delta_lng = np.cos(delta_lng), np.sin(delta_lng)
166
167 return radius * np.arctan2(
168 np.sqrt(
169 (cos_lat1 * sin_lat2 - sin_lat1 * cos_lat2 * cos_delta_lng) ** 2
170 + (cos_lat2 * sin_delta_lng) ** 2,
171 ),
172 sin_lat1 * sin_lat2 + cos_lat1 * cos_lat2 * cos_delta_lng,
173 )
174
```
Path: `dtoolkit/geoaccessor/geoseries/geodistance_matrix.py`
Content:
```
1 from __future__ import annotations
2
3 import geopandas as gpd
4 import numpy as np
5 import pandas as pd
6
7 from dtoolkit.geoaccessor.register import register_geoseries_method
8
9
10 @register_geoseries_method
11 def geodistance_matrix(
12 s: gpd.GeoSeries,
13 /,
14 other: gpd.GeoSeries | gpd.GeoDataFrame | None = None,
15 radius: float = 6371008.7714150598,
16 ) -> pd.DataFrame:
17 """
18 Returns a ``DataFrame`` containing the `great-circle`__ distances matrix between in
19 ``s`` and ``other`` via haversine formula.
20
21 __ https://en.wikipedia.org/wiki/Great-circle_distance
22
23 .. math::
24
25 D(x, y) = 2 \\arcsin [
26 \\sqrt{
27 \\sin^2 ((x_1 - y_1) / 2)
28 + \\cos(x_1) \\cos(y_1) \\sin^2 ((x_2 - y_2) / 2)
29 }
30 ]
31
32 Parameters
33 ----------
34 other : GeoSeries, or GeoDataFrame, default None
35 If None, uses ``other=s``.
36
37 radius : float, default 6371008.7714150598
38 Great-circle distance uses a spherical model of the earth, using the mean earth
39 radius as defined by the International Union of Geodesy and Geophysics,
40 (2\\ *a* + *b*)/3 = 6371008.7714150598 meters for WGS-84.
41
42 Returns
43 -------
44 DataFrame
45 - The index and columns are the same as the index of ``s`` and ``other``.
46 - The values are the great-circle distances and its unit is meters.
47
48 Raises
49 ------
50 ModuleNotFoundError
51 If don't have module named 'sklearn'.
52
53 ValueError
54 If the CRS is not ``ESGP:4326``.
55
56 See Also
57 --------
58 sklearn.metrics.pairwise.haversine_distances
59 dtoolkit.geoaccessor.geoseries.geodistance
60 dtoolkit.geoaccessor.geoseries.geodistance_matrix
61 dtoolkit.geoaccessor.geodataframe.geodistance
62 dtoolkit.geoaccessor.geodataframe.geodistance_matrix
63
64 Notes
65 -----
66 - Currently, only supports Point geometry.
67 - The great-circle distance is the angular distance between two points on the
68 surface of a sphere. As the Earth is nearly spherical, the haversine formula
69 provides a good approximation of the distance between two points of the Earth
70 surface, with a less than 1% error on average.
71
72 Examples
73 --------
74 >>> import dtoolkit.geoaccessor
75 >>> df = pd.DataFrame(
76 ... {
77 ... "x": [120, 122, 100],
78 ... "y":[30, 55, 1],
79 ... },
80 ... ).from_xy("x", "y", crs=4326)
81 >>> df
82 x y geometry
83 0 120 30 POINT (120.00000 30.00000)
84 1 122 55 POINT (122.00000 55.00000)
85 2 100 1 POINT (100.00000 1.00000)
86 >>> other = pd.DataFrame(
87 ... {
88 ... "x": [120, 110],
89 ... "y":[30, 40],
90 ... },
91 ... ).from_xy("x", "y", crs=4326)
92 >>> other
93 x y geometry
94 0 120 30 POINT (120.00000 30.00000)
95 1 110 40 POINT (110.00000 40.00000)
96 >>> df.geodistance_matrix(other)
97 0 1
98 0 0.000000e+00 1.203540e+06
99 1 1.439971e+06 1.511958e+06
100 2 2.418544e+06 1.522752e+06
101 """
102 from sklearn.metrics.pairwise import haversine_distances
103
104 if s.crs != 4326:
105 raise ValueError(f"Only support 'EPSG:4326' CRS, but got {s.crs!r}.")
106 if other is not None and other.crs != 4326:
107 raise ValueError(f"Only support 'EPSG:4326' CRS, but got {other.crs!r}.")
108
109 X = np.radians(np.stack((s.x, s.y), axis=1))
110 Y = np.radians(np.stack((other.x, other.y), axis=1)) if other is not None else other
111 return pd.DataFrame(
112 radius * haversine_distances(X, Y),
113 index=s.index,
114 columns=other.index,
115 )
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/dtoolkit/geoaccessor/geoseries/geodistance.py b/dtoolkit/geoaccessor/geoseries/geodistance.py
--- a/dtoolkit/geoaccessor/geoseries/geodistance.py
+++ b/dtoolkit/geoaccessor/geoseries/geodistance.py
@@ -134,8 +134,9 @@
if align and not s.index.equals(other.index):
warn("The indices are different.", stacklevel=find_stack_level())
s, other = s.align(other)
- else:
- other = other.geometry
+
+ # Force convert to GeoSeries
+ other = other.geometry
return pd.Series(
distance(
diff --git a/dtoolkit/geoaccessor/geoseries/geodistance_matrix.py b/dtoolkit/geoaccessor/geoseries/geodistance_matrix.py
--- a/dtoolkit/geoaccessor/geoseries/geodistance_matrix.py
+++ b/dtoolkit/geoaccessor/geoseries/geodistance_matrix.py
@@ -103,8 +103,13 @@
if s.crs != 4326:
raise ValueError(f"Only support 'EPSG:4326' CRS, but got {s.crs!r}.")
- if other is not None and other.crs != 4326:
- raise ValueError(f"Only support 'EPSG:4326' CRS, but got {other.crs!r}.")
+
+ if isinstance(other, gpd.base.GeoPandasBase):
+ if other.crs != 4326:
+ raise ValueError(f"Only support 'EPSG:4326' CRS, but got {other.crs!r}.")
+
+ # Force convert to GeoSeries
+ other = other.geometry
X = np.radians(np.stack((s.x, s.y), axis=1))
Y = np.radians(np.stack((other.x, other.y), axis=1)) if other is not None else other
|
{"golden_diff": "diff --git a/dtoolkit/geoaccessor/geoseries/geodistance.py b/dtoolkit/geoaccessor/geoseries/geodistance.py\n--- a/dtoolkit/geoaccessor/geoseries/geodistance.py\n+++ b/dtoolkit/geoaccessor/geoseries/geodistance.py\n@@ -134,8 +134,9 @@\n if align and not s.index.equals(other.index):\n warn(\"The indices are different.\", stacklevel=find_stack_level())\n s, other = s.align(other)\n- else:\n- other = other.geometry\n+\n+ # Force convert to GeoSeries\n+ other = other.geometry\n \n return pd.Series(\n distance(\ndiff --git a/dtoolkit/geoaccessor/geoseries/geodistance_matrix.py b/dtoolkit/geoaccessor/geoseries/geodistance_matrix.py\n--- a/dtoolkit/geoaccessor/geoseries/geodistance_matrix.py\n+++ b/dtoolkit/geoaccessor/geoseries/geodistance_matrix.py\n@@ -103,8 +103,13 @@\n \n if s.crs != 4326:\n raise ValueError(f\"Only support 'EPSG:4326' CRS, but got {s.crs!r}.\")\n- if other is not None and other.crs != 4326:\n- raise ValueError(f\"Only support 'EPSG:4326' CRS, but got {other.crs!r}.\")\n+\n+ if isinstance(other, gpd.base.GeoPandasBase):\n+ if other.crs != 4326:\n+ raise ValueError(f\"Only support 'EPSG:4326' CRS, but got {other.crs!r}.\")\n+\n+ # Force convert to GeoSeries\n+ other = other.geometry\n \n X = np.radians(np.stack((s.x, s.y), axis=1))\n Y = np.radians(np.stack((other.x, other.y), axis=1)) if other is not None else other\n", "issue": "ENH: New geoaccessor to calculate two coordinates distance on earth\n<!--\r\nThanks for contributing a pull request!\r\n\r\nPlease follow these standard acronyms to start the commit message:\r\n\r\n- ENH: enhancement\r\n- BUG: bug fix\r\n- DOC: documentation\r\n- TYP: type annotations\r\n- TST: addition or modification of tests\r\n- MAINT: maintenance commit (refactoring, typos, etc.)\r\n- BLD: change related to building\r\n- REL: related to releasing\r\n- API: an (incompatible) API change\r\n- DEP: deprecate something, or remove a deprecated object\r\n- DEV: development tool or utility\r\n- REV: revert an earlier commit\r\n- PERF: performance improvement\r\n- BOT: always commit via a bot\r\n- CI: related to CI or CD\r\n- CLN: Code cleanup\r\n-->\r\n\r\n- [ ] closes #xxxx\r\n- [ ] whatsnew entry\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom warnings import warn\n\nimport geopandas as gpd\nimport numpy as np\nimport pandas as pd\nfrom shapely.geometry.base import BaseGeometry\n\nfrom dtoolkit.geoaccessor.register import register_geoseries_method\nfrom dtoolkit.util._exception import find_stack_level\n\n\n@register_geoseries_method\ndef geodistance(\n s: gpd.GeoSeries,\n /,\n other: BaseGeometry | gpd.GeoSeries | gpd.GeoDataFrame,\n align: bool = True,\n radius: float = 6371008.7714150598,\n) -> pd.Series:\n \"\"\"\n Returns a ``Series`` containing the `great-circle`__ distance to aligned other.\n\n __ https://en.wikipedia.org/wiki/Great-circle_distance\n\n The algorithm uses the Vincenty formula which is more accurate than the Haversine\n formula.\n\n .. math::\n\n D(x, y) = \\\\arctan[\n \\\\frac{\n \\\\sqrt{\n (\n \\\\cos(y_1) \\\\sin(y_2)\n - \\\\sin(y_1) \\\\cos(y_2) \\\\cos(x_2 - x_1)\n )^2\n + (\\\\cos(y_2) \\\\sin(x_2 - x_1))^2\n }\n }{\n \\\\sin(y_1) \\\\sin(y_2)\n + \\\\cos(y_1) \\\\cos(y_2) \\\\cos(x_2 - x_1)\n }\n ]\n\n Parameters\n ----------\n other : BaseGeometry, GeoSeries, or GeoDataFrame\n\n align : bool, default True\n If True, automatically aligns GeoSeries based on their indices. 
If False,\n the order of elements is preserved.\n\n radius : float, default 6371008.7714150598\n Great-circle distance uses a spherical model of the earth, using the mean earth\n radius as defined by the International Union of Geodesy and Geophysics,\n (2\\\\ *a* + *b*)/3 = 6371008.7714150598 meters for WGS-84.\n\n Returns\n -------\n Series\n The values are the great-circle distances and its unit is meters.\n\n Raises\n ------\n ValueError\n If the CRS is not ``ESGP:4326``.\n\n TypeError\n If the other is not a ``BaseGeometry``, ``GeoSeries``, or ``GeoDataFrame``.\n\n See Also\n --------\n geopandas.GeoSeries.distance\n dtoolkit.geoaccessor.geoseries.geodistance\n dtoolkit.geoaccessor.geoseries.geodistance_matrix\n dtoolkit.geoaccessor.geodataframe.geodistance\n dtoolkit.geoaccessor.geodataframe.geodistance_matrix\n\n Notes\n -----\n - Currently, only supports Point geometry.\n - The geodesic distance is the shortest distance on the surface of an ellipsoidal\n model of the earth. Resulting in an error of up to about 0.5%.\n\n Examples\n --------\n >>> import dtoolkit.geoaccessor\n >>> import pandas as pd\n >>> import geopandas as gpd\n >>> from shapely.geometry import Point\n >>> df = (\n ... pd.DataFrame({\"x\": [122, 100], \"y\":[55, 1]})\n ... .from_xy(\"x\", \"y\", crs=4326)\n ... )\n >>> df\n x y geometry\n 0 122 55 POINT (122.00000 55.00000)\n 1 100 1 POINT (100.00000 1.00000)\n >>> df.geodistance(Point(120, 30)) / 1e6\n 0 2.784435\n 1 3.855604\n dtype: float64\n\n Calculate the great-circle distance of corresponding points.\n\n >>> s = gpd.GeoSeries([Point(120, 30), Point(120, 50)], index=[1, 2], crs=4326)\n >>> s\n 1 POINT (120.00000 30.00000)\n 2 POINT (120.00000 50.00000)\n dtype: geometry\n >>> df.geodistance(s) / 1e6\n 0 NaN\n 1 3.855604\n 2 NaN\n dtype: float64\n >>> df.geodistance(s, align=False) / 1e6\n 0 2.784435\n 1 5.768885\n dtype: float64\n \"\"\"\n\n if s.crs != 4326:\n raise ValueError(f\"Only support 'EPSG:4326' CRS, but got {s.crs!r}.\")\n if not isinstance(other, (BaseGeometry, gpd.base.GeoPandasBase)):\n raise TypeError(f\"Unknown type: {type(other).__name__!r}.\")\n\n if isinstance(other, gpd.base.GeoPandasBase):\n if other.crs != 4326:\n raise ValueError(f\"Only support 'EPSG:4326' CRS, but got {other.crs!r}.\")\n\n s = s.geometry\n if align and not s.index.equals(other.index):\n warn(\"The indices are different.\", stacklevel=find_stack_level())\n s, other = s.align(other)\n else:\n other = other.geometry\n\n return pd.Series(\n distance(\n s.geometry.x.to_numpy(),\n s.geometry.y.to_numpy(),\n other.x if isinstance(other, BaseGeometry) else other.x.to_numpy(),\n other.y if isinstance(other, BaseGeometry) else other.y.to_numpy(),\n radius=radius,\n ),\n index=s.index,\n )\n\n\n# based on https://github.com/geopy/geopy geopy/distance.py::great_circle.measure\ndef distance(\n lng1: np.ndarray | float,\n lat1: np.ndarray | float,\n lng2: np.ndarray | float,\n lat2: np.ndarray | float,\n radius: float,\n) -> np.ndarray:\n lng1, lat1, lng2, lat2 = map(np.radians, (lng1, lat1, lng2, lat2))\n sin_lat1, cos_lat1 = np.sin(lat1), np.cos(lat1)\n sin_lat2, cos_lat2 = np.sin(lat2), np.cos(lat2)\n\n delta_lng = lng2 - lng1\n cos_delta_lng, sin_delta_lng = np.cos(delta_lng), np.sin(delta_lng)\n\n return radius * np.arctan2(\n np.sqrt(\n (cos_lat1 * sin_lat2 - sin_lat1 * cos_lat2 * cos_delta_lng) ** 2\n + (cos_lat2 * sin_delta_lng) ** 2,\n ),\n sin_lat1 * sin_lat2 + cos_lat1 * cos_lat2 * cos_delta_lng,\n )\n", "path": "dtoolkit/geoaccessor/geoseries/geodistance.py"}, 
{"content": "from __future__ import annotations\n\nimport geopandas as gpd\nimport numpy as np\nimport pandas as pd\n\nfrom dtoolkit.geoaccessor.register import register_geoseries_method\n\n\n@register_geoseries_method\ndef geodistance_matrix(\n s: gpd.GeoSeries,\n /,\n other: gpd.GeoSeries | gpd.GeoDataFrame | None = None,\n radius: float = 6371008.7714150598,\n) -> pd.DataFrame:\n \"\"\"\n Returns a ``DataFrame`` containing the `great-circle`__ distances matrix between in\n ``s`` and ``other`` via haversine formula.\n\n __ https://en.wikipedia.org/wiki/Great-circle_distance\n\n .. math::\n\n D(x, y) = 2 \\\\arcsin [\n \\\\sqrt{\n \\\\sin^2 ((x_1 - y_1) / 2)\n + \\\\cos(x_1) \\\\cos(y_1) \\\\sin^2 ((x_2 - y_2) / 2)\n }\n ]\n\n Parameters\n ----------\n other : GeoSeries, or GeoDataFrame, default None\n If None, uses ``other=s``.\n\n radius : float, default 6371008.7714150598\n Great-circle distance uses a spherical model of the earth, using the mean earth\n radius as defined by the International Union of Geodesy and Geophysics,\n (2\\\\ *a* + *b*)/3 = 6371008.7714150598 meters for WGS-84.\n\n Returns\n -------\n DataFrame\n - The index and columns are the same as the index of ``s`` and ``other``.\n - The values are the great-circle distances and its unit is meters.\n\n Raises\n ------\n ModuleNotFoundError\n If don't have module named 'sklearn'.\n\n ValueError\n If the CRS is not ``ESGP:4326``.\n\n See Also\n --------\n sklearn.metrics.pairwise.haversine_distances\n dtoolkit.geoaccessor.geoseries.geodistance\n dtoolkit.geoaccessor.geoseries.geodistance_matrix\n dtoolkit.geoaccessor.geodataframe.geodistance\n dtoolkit.geoaccessor.geodataframe.geodistance_matrix\n\n Notes\n -----\n - Currently, only supports Point geometry.\n - The great-circle distance is the angular distance between two points on the\n surface of a sphere. As the Earth is nearly spherical, the haversine formula\n provides a good approximation of the distance between two points of the Earth\n surface, with a less than 1% error on average.\n\n Examples\n --------\n >>> import dtoolkit.geoaccessor\n >>> df = pd.DataFrame(\n ... {\n ... \"x\": [120, 122, 100],\n ... \"y\":[30, 55, 1],\n ... },\n ... ).from_xy(\"x\", \"y\", crs=4326)\n >>> df\n x y geometry\n 0 120 30 POINT (120.00000 30.00000)\n 1 122 55 POINT (122.00000 55.00000)\n 2 100 1 POINT (100.00000 1.00000)\n >>> other = pd.DataFrame(\n ... {\n ... \"x\": [120, 110],\n ... \"y\":[30, 40],\n ... },\n ... 
).from_xy(\"x\", \"y\", crs=4326)\n >>> other\n x y geometry\n 0 120 30 POINT (120.00000 30.00000)\n 1 110 40 POINT (110.00000 40.00000)\n >>> df.geodistance_matrix(other)\n 0 1\n 0 0.000000e+00 1.203540e+06\n 1 1.439971e+06 1.511958e+06\n 2 2.418544e+06 1.522752e+06\n \"\"\"\n from sklearn.metrics.pairwise import haversine_distances\n\n if s.crs != 4326:\n raise ValueError(f\"Only support 'EPSG:4326' CRS, but got {s.crs!r}.\")\n if other is not None and other.crs != 4326:\n raise ValueError(f\"Only support 'EPSG:4326' CRS, but got {other.crs!r}.\")\n\n X = np.radians(np.stack((s.x, s.y), axis=1))\n Y = np.radians(np.stack((other.x, other.y), axis=1)) if other is not None else other\n return pd.DataFrame(\n radius * haversine_distances(X, Y),\n index=s.index,\n columns=other.index,\n )\n", "path": "dtoolkit/geoaccessor/geoseries/geodistance_matrix.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom warnings import warn\n\nimport geopandas as gpd\nimport numpy as np\nimport pandas as pd\nfrom shapely.geometry.base import BaseGeometry\n\nfrom dtoolkit.geoaccessor.register import register_geoseries_method\nfrom dtoolkit.util._exception import find_stack_level\n\n\n@register_geoseries_method\ndef geodistance(\n s: gpd.GeoSeries,\n /,\n other: BaseGeometry | gpd.GeoSeries | gpd.GeoDataFrame,\n align: bool = True,\n radius: float = 6371008.7714150598,\n) -> pd.Series:\n \"\"\"\n Returns a ``Series`` containing the `great-circle`__ distance to aligned other.\n\n __ https://en.wikipedia.org/wiki/Great-circle_distance\n\n The algorithm uses the Vincenty formula which is more accurate than the Haversine\n formula.\n\n .. math::\n\n D(x, y) = \\\\arctan[\n \\\\frac{\n \\\\sqrt{\n (\n \\\\cos(y_1) \\\\sin(y_2)\n - \\\\sin(y_1) \\\\cos(y_2) \\\\cos(x_2 - x_1)\n )^2\n + (\\\\cos(y_2) \\\\sin(x_2 - x_1))^2\n }\n }{\n \\\\sin(y_1) \\\\sin(y_2)\n + \\\\cos(y_1) \\\\cos(y_2) \\\\cos(x_2 - x_1)\n }\n ]\n\n Parameters\n ----------\n other : BaseGeometry, GeoSeries, or GeoDataFrame\n\n align : bool, default True\n If True, automatically aligns GeoSeries based on their indices. If False,\n the order of elements is preserved.\n\n radius : float, default 6371008.7714150598\n Great-circle distance uses a spherical model of the earth, using the mean earth\n radius as defined by the International Union of Geodesy and Geophysics,\n (2\\\\ *a* + *b*)/3 = 6371008.7714150598 meters for WGS-84.\n\n Returns\n -------\n Series\n The values are the great-circle distances and its unit is meters.\n\n Raises\n ------\n ValueError\n If the CRS is not ``ESGP:4326``.\n\n TypeError\n If the other is not a ``BaseGeometry``, ``GeoSeries``, or ``GeoDataFrame``.\n\n See Also\n --------\n geopandas.GeoSeries.distance\n dtoolkit.geoaccessor.geoseries.geodistance\n dtoolkit.geoaccessor.geoseries.geodistance_matrix\n dtoolkit.geoaccessor.geodataframe.geodistance\n dtoolkit.geoaccessor.geodataframe.geodistance_matrix\n\n Notes\n -----\n - Currently, only supports Point geometry.\n - The geodesic distance is the shortest distance on the surface of an ellipsoidal\n model of the earth. Resulting in an error of up to about 0.5%.\n\n Examples\n --------\n >>> import dtoolkit.geoaccessor\n >>> import pandas as pd\n >>> import geopandas as gpd\n >>> from shapely.geometry import Point\n >>> df = (\n ... pd.DataFrame({\"x\": [122, 100], \"y\":[55, 1]})\n ... .from_xy(\"x\", \"y\", crs=4326)\n ... 
)\n >>> df\n x y geometry\n 0 122 55 POINT (122.00000 55.00000)\n 1 100 1 POINT (100.00000 1.00000)\n >>> df.geodistance(Point(120, 30)) / 1e6\n 0 2.784435\n 1 3.855604\n dtype: float64\n\n Calculate the great-circle distance of corresponding points.\n\n >>> s = gpd.GeoSeries([Point(120, 30), Point(120, 50)], index=[1, 2], crs=4326)\n >>> s\n 1 POINT (120.00000 30.00000)\n 2 POINT (120.00000 50.00000)\n dtype: geometry\n >>> df.geodistance(s) / 1e6\n 0 NaN\n 1 3.855604\n 2 NaN\n dtype: float64\n >>> df.geodistance(s, align=False) / 1e6\n 0 2.784435\n 1 5.768885\n dtype: float64\n \"\"\"\n\n if s.crs != 4326:\n raise ValueError(f\"Only support 'EPSG:4326' CRS, but got {s.crs!r}.\")\n if not isinstance(other, (BaseGeometry, gpd.base.GeoPandasBase)):\n raise TypeError(f\"Unknown type: {type(other).__name__!r}.\")\n\n if isinstance(other, gpd.base.GeoPandasBase):\n if other.crs != 4326:\n raise ValueError(f\"Only support 'EPSG:4326' CRS, but got {other.crs!r}.\")\n\n s = s.geometry\n if align and not s.index.equals(other.index):\n warn(\"The indices are different.\", stacklevel=find_stack_level())\n s, other = s.align(other)\n\n # Force convert to GeoSeries\n other = other.geometry\n\n return pd.Series(\n distance(\n s.geometry.x.to_numpy(),\n s.geometry.y.to_numpy(),\n other.x if isinstance(other, BaseGeometry) else other.x.to_numpy(),\n other.y if isinstance(other, BaseGeometry) else other.y.to_numpy(),\n radius=radius,\n ),\n index=s.index,\n )\n\n\n# based on https://github.com/geopy/geopy geopy/distance.py::great_circle.measure\ndef distance(\n lng1: np.ndarray | float,\n lat1: np.ndarray | float,\n lng2: np.ndarray | float,\n lat2: np.ndarray | float,\n radius: float,\n) -> np.ndarray:\n lng1, lat1, lng2, lat2 = map(np.radians, (lng1, lat1, lng2, lat2))\n sin_lat1, cos_lat1 = np.sin(lat1), np.cos(lat1)\n sin_lat2, cos_lat2 = np.sin(lat2), np.cos(lat2)\n\n delta_lng = lng2 - lng1\n cos_delta_lng, sin_delta_lng = np.cos(delta_lng), np.sin(delta_lng)\n\n return radius * np.arctan2(\n np.sqrt(\n (cos_lat1 * sin_lat2 - sin_lat1 * cos_lat2 * cos_delta_lng) ** 2\n + (cos_lat2 * sin_delta_lng) ** 2,\n ),\n sin_lat1 * sin_lat2 + cos_lat1 * cos_lat2 * cos_delta_lng,\n )\n", "path": "dtoolkit/geoaccessor/geoseries/geodistance.py"}, {"content": "from __future__ import annotations\n\nimport geopandas as gpd\nimport numpy as np\nimport pandas as pd\n\nfrom dtoolkit.geoaccessor.register import register_geoseries_method\n\n\n@register_geoseries_method\ndef geodistance_matrix(\n s: gpd.GeoSeries,\n /,\n other: gpd.GeoSeries | gpd.GeoDataFrame | None = None,\n radius: float = 6371008.7714150598,\n) -> pd.DataFrame:\n \"\"\"\n Returns a ``DataFrame`` containing the `great-circle`__ distances matrix between in\n ``s`` and ``other`` via haversine formula.\n\n __ https://en.wikipedia.org/wiki/Great-circle_distance\n\n .. 
math::\n\n D(x, y) = 2 \\\\arcsin [\n \\\\sqrt{\n \\\\sin^2 ((x_1 - y_1) / 2)\n + \\\\cos(x_1) \\\\cos(y_1) \\\\sin^2 ((x_2 - y_2) / 2)\n }\n ]\n\n Parameters\n ----------\n other : GeoSeries, or GeoDataFrame, default None\n If None, uses ``other=s``.\n\n radius : float, default 6371008.7714150598\n Great-circle distance uses a spherical model of the earth, using the mean earth\n radius as defined by the International Union of Geodesy and Geophysics,\n (2\\\\ *a* + *b*)/3 = 6371008.7714150598 meters for WGS-84.\n\n Returns\n -------\n DataFrame\n - The index and columns are the same as the index of ``s`` and ``other``.\n - The values are the great-circle distances and its unit is meters.\n\n Raises\n ------\n ModuleNotFoundError\n If don't have module named 'sklearn'.\n\n ValueError\n If the CRS is not ``ESGP:4326``.\n\n See Also\n --------\n sklearn.metrics.pairwise.haversine_distances\n dtoolkit.geoaccessor.geoseries.geodistance\n dtoolkit.geoaccessor.geoseries.geodistance_matrix\n dtoolkit.geoaccessor.geodataframe.geodistance\n dtoolkit.geoaccessor.geodataframe.geodistance_matrix\n\n Notes\n -----\n - Currently, only supports Point geometry.\n - The great-circle distance is the angular distance between two points on the\n surface of a sphere. As the Earth is nearly spherical, the haversine formula\n provides a good approximation of the distance between two points of the Earth\n surface, with a less than 1% error on average.\n\n Examples\n --------\n >>> import dtoolkit.geoaccessor\n >>> df = pd.DataFrame(\n ... {\n ... \"x\": [120, 122, 100],\n ... \"y\":[30, 55, 1],\n ... },\n ... ).from_xy(\"x\", \"y\", crs=4326)\n >>> df\n x y geometry\n 0 120 30 POINT (120.00000 30.00000)\n 1 122 55 POINT (122.00000 55.00000)\n 2 100 1 POINT (100.00000 1.00000)\n >>> other = pd.DataFrame(\n ... {\n ... \"x\": [120, 110],\n ... \"y\":[30, 40],\n ... },\n ... ).from_xy(\"x\", \"y\", crs=4326)\n >>> other\n x y geometry\n 0 120 30 POINT (120.00000 30.00000)\n 1 110 40 POINT (110.00000 40.00000)\n >>> df.geodistance_matrix(other)\n 0 1\n 0 0.000000e+00 1.203540e+06\n 1 1.439971e+06 1.511958e+06\n 2 2.418544e+06 1.522752e+06\n \"\"\"\n from sklearn.metrics.pairwise import haversine_distances\n\n if s.crs != 4326:\n raise ValueError(f\"Only support 'EPSG:4326' CRS, but got {s.crs!r}.\")\n\n if isinstance(other, gpd.base.GeoPandasBase):\n if other.crs != 4326:\n raise ValueError(f\"Only support 'EPSG:4326' CRS, but got {other.crs!r}.\")\n\n # Force convert to GeoSeries\n other = other.geometry\n\n X = np.radians(np.stack((s.x, s.y), axis=1))\n Y = np.radians(np.stack((other.x, other.y), axis=1)) if other is not None else other\n return pd.DataFrame(\n radius * haversine_distances(X, Y),\n index=s.index,\n columns=other.index,\n )\n", "path": "dtoolkit/geoaccessor/geoseries/geodistance_matrix.py"}]}
| 4,038 | 446 |
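One detail worth noting about the patch above: the docstring example passes a GeoDataFrame (built with `.from_xy`) as `other`, while the pre-patch matrix code read `other.x` directly; the added `other = other.geometry` lines ensure coordinates are always taken from a GeoSeries. A small sketch of that distinction, using plain geopandas rather than the dtoolkit accessor and arbitrary coordinates:

```python
# Sketch of why the patch forces `other = other.geometry` before reading
# coordinates; the geometries below are arbitrary examples.
import geopandas as gpd
from shapely.geometry import Point

other = gpd.GeoDataFrame(geometry=[Point(120, 30), Point(110, 40)], crs=4326)

# Point coordinates are exposed on the GeoSeries, so the patched code reads
# them through .geometry rather than from the frame itself.
print(other.geometry.x.to_numpy())  # [120. 110.]
print(other.geometry.y.to_numpy())  # [ 30.  40.]
```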
gh_patches_debug_1154
|
rasdani/github-patches
|
git_diff
|
cognitedata__cognite-sdk-python-291
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
client.time_series.get_time_series does not return metadata
**Describe the bug**
When executing `client.time_series.get_time_series()` with `include_metadata = True` no metadata is returned.
**To Reproduce**
Runnable code reproducing the error.
```
import cognite
import requests
import os
import numpy as np
import pandas as pd
from datetime import datetime, timedelta
from cognite.client.stable.time_series import TimeSeries
sm_api = os.environ['SM_API_KEY']
client = cognite.CogniteClient(api_key = sm_api)
ts_name = 'Test_tssssss'
my_time_series = [TimeSeries(name=ts_name,
description = 'test_description',
metadata = { 'ASSETSCOPENAME' : 'meta_test_1' })]
client.time_series.post_time_series(my_time_series)
# create dummy data
np.random.seed(1338)
start_time = int((datetime.now()-timedelta(1)).strftime("%s"))
timestamps = [(start_time + i * 10)*1000 for i in np.arange(11)]
df = pd.DataFrame({'timestamp' : timestamps})
df[ts_name] = np.random.random(df.shape[0])
client.datapoints.post_datapoints_frame(df)
# get time_series
ts1 = client.time_series.get_time_series(name = ts_name,
include_metadata = True).to_pandas()
ts1_id = ts1['id'].loc[0]
print(ts1.loc[0])
# no meta data
# requests:
# first with no metadata
r1 = requests.get(url = 'https://api.cognitedata.com/api/0.5/projects/smart-maintenance-sandbox/timeseries/' + str(ts1_id) ,
headers= { 'Api-Key' : sm_api} , params = {"includeMetadata" : False})
print(r1.text.split('\n'))
# then with metadata
r1 = requests.get(url = 'https://api.cognitedata.com/api/0.5/projects/smart-maintenance-sandbox/timeseries/' + str(ts1_id) ,
headers= { 'Api-Key' : sm_api} , params = {"includeMetadata" : True})
print(r1.text.split('\n'))
```
**Expected behavior**
The `client.time_series.get_time_series(name = ts_name,include_metadata = True)` should return the metadata.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cognite/client/stable/time_series.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from copy import deepcopy
3 from typing import List
4 from urllib.parse import quote
5
6 import pandas as pd
7
8 from cognite.client._api_client import APIClient, CogniteCollectionResponse, CogniteResource, CogniteResponse
9
10
11 class TimeSeriesResponse(CogniteResponse):
12 """Time series Response Object"""
13
14 def __init__(self, internal_representation):
15 super().__init__(internal_representation)
16 item = self.to_json()
17 self.id = item.get("id")
18 self.name = item.get("name")
19 self.unit = item.get("unit")
20 self.is_step = item.get("isStep")
21 self.is_string = item.get("isString")
22 self.created_time = item.get("createdTime")
23 self.last_updated_time = item.get("lastUpdatedTime")
24 self.metadata = item.get("metadata")
25 self.asset_id = item.get("assetId")
26 self.description = item.get("description")
27
28 def to_pandas(self):
29 """Returns data as a pandas dataframe"""
30 if len(self.to_json()) > 0:
31 ts = self.to_json().copy()
32 if "metadata" in ts:
33 # Hack to avoid path ending up as first element in dict as from_dict will fail
34 metadata = ts.pop("metadata")
35 df = pd.DataFrame.from_dict(ts, orient="index")
36 df.loc["metadata"] = [metadata]
37 else:
38 df = pd.DataFrame.from_dict(ts, orient="index")
39 return df
40 return pd.DataFrame()
41
42
43 class TimeSeriesListResponse(CogniteCollectionResponse):
44 """Time series Response Object"""
45
46 _RESPONSE_CLASS = TimeSeriesResponse
47
48 def to_pandas(self, include_metadata: bool = False):
49 """Returns data as a pandas dataframe
50
51 Args:
52 include_metadata (bool): Whether or not to include metadata fields in the resulting dataframe
53 """
54 items = deepcopy(self.internal_representation["data"]["items"])
55 if items and items[0].get("metadata") is None:
56 return pd.DataFrame(items)
57 for d in items:
58 if d.get("metadata"):
59 metadata = d.pop("metadata")
60 if include_metadata:
61 d.update(metadata)
62 return pd.DataFrame(items)
63
64
65 class TimeSeries(CogniteResource):
66 """Data Transfer Object for a time series.
67
68 Args:
69 name (str): Unique name of time series.
70 is_string (bool): Whether the time series is string valued or not.
71 metadata (dict): Metadata.
72 unit (str): Physical unit of the time series.
73 asset_id (int): Asset that this time series belongs to.
74 description (str): Description of the time series.
75 security_categories (list(int)): Security categories required in order to access this time series.
76 is_step (bool): Whether or not the time series is a step series.
77
78 """
79
80 def __init__(
81 self,
82 name,
83 is_string=False,
84 metadata=None,
85 unit=None,
86 asset_id=None,
87 description=None,
88 security_categories=None,
89 is_step=None,
90 ):
91 self.name = name
92 self.is_string = is_string
93 self.metadata = metadata
94 self.unit = unit
95 self.asset_id = asset_id
96 self.description = description
97 self.security_categories = security_categories
98 self.is_step = is_step
99
100
101 class TimeSeriesClient(APIClient):
102 def __init__(self, **kwargs):
103 super().__init__(version="0.5", **kwargs)
104
105 def get_time_series(
106 self, prefix=None, description=None, include_metadata=False, asset_id=None, path=None, **kwargs
107 ) -> TimeSeriesListResponse:
108 """Returns an object containing the requested timeseries.
109
110 Args:
111 prefix (str): List timeseries with this prefix in the name.
112
113 description (str): Filter timeseries taht contains this string in its description.
114
115 include_metadata (bool): Decide if the metadata field should be returned or not. Defaults to False.
116
117 asset_id (int): Get timeseries related to this asset.
118
119 path (List[int]): Get timeseries under this asset path branch.
120
121 Keyword Arguments:
122 limit (int): Number of results to return.
123
124 autopaging (bool): Whether or not to automatically page through results. If set to true, limit will be
125 disregarded. Defaults to False.
126
127 Returns:
128 stable.time_series.TimeSeriesListResponse: A data object containing the requested timeseries with several getter methods with different
129 output formats.
130
131 Examples:
132 Get all time series for a given asset::
133
134 client = CogniteClient()
135 res = client.time_series.get_time_series(asset_id=123, autopaging=True)
136 print(res.to_pandas())
137 """
138 autopaging = kwargs.get("autopaging", False)
139 url = "/timeseries"
140 params = {
141 "q": prefix,
142 "description": description,
143 "includeMetadata": include_metadata,
144 "assetId": asset_id,
145 "path": str(path) if path else None,
146 "limit": kwargs.get("limit", self._LIMIT) if not autopaging else self._LIMIT,
147 }
148
149 res = self._get(url=url, params=params, autopaging=autopaging)
150 return TimeSeriesListResponse(res.json())
151
152 def post_time_series(self, time_series: List[TimeSeries]) -> None:
153 """Create a new time series.
154
155 Args:
156 time_series (list[stable.time_series.TimeSeries]): List of time series data transfer objects to create.
157
158 Returns:
159 None
160
161 Examples:
162 Create a new time series::
163
164 from cognite.client.stable.time_series import TimeSeries
165 client = CogniteClient()
166
167 my_time_series = [TimeSeries(name="my_ts_1")]
168
169 client.time_series.post_time_series(my_time_series)
170 """
171 url = "/timeseries"
172 items = [ts.camel_case_dict() for ts in time_series]
173 body = {"items": items}
174 self._post(url, body=body)
175
176 def update_time_series(self, time_series: List[TimeSeries]) -> None:
177 """Update an existing time series.
178
179 For each field that can be updated, a null value indicates that nothing should be done.
180
181 Args:
182 time_series (list[stable.time_series.TimeSeries]): List of time series data transfer objects to update.
183
184 Returns:
185 None
186
187 Examples:
188 Update the unit of a time series::
189
190 from cognite.client.stable.time_series import TimeSeries
191 client = CogniteClient()
192
193 my_time_series = [TimeSeries(name="my_ts_1", unit="celsius")]
194
195 client.time_series.update_time_series(my_time_series)
196 """
197 url = "/timeseries"
198 items = [ts.camel_case_dict() for ts in time_series]
199 body = {"items": items}
200 self._put(url, body=body)
201
202 def delete_time_series(self, name) -> None:
203 """Delete a timeseries.
204
205 Args:
206 name (str): Name of timeseries to delete.
207
208 Returns:
209 None
210
211 Examples:
212 Delete a time series by name::
213
214 client = CogniteClient()
215
216 client.time_series.delete_time_series(name="my_ts_1")
217 """
218 url = "/timeseries/{}".format(quote(name, safe=""))
219 self._delete(url)
220
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/cognite/client/stable/time_series.py b/cognite/client/stable/time_series.py
--- a/cognite/client/stable/time_series.py
+++ b/cognite/client/stable/time_series.py
@@ -45,7 +45,7 @@
_RESPONSE_CLASS = TimeSeriesResponse
- def to_pandas(self, include_metadata: bool = False):
+ def to_pandas(self, include_metadata: bool = True):
"""Returns data as a pandas dataframe
Args:
|
{"golden_diff": "diff --git a/cognite/client/stable/time_series.py b/cognite/client/stable/time_series.py\n--- a/cognite/client/stable/time_series.py\n+++ b/cognite/client/stable/time_series.py\n@@ -45,7 +45,7 @@\n \n _RESPONSE_CLASS = TimeSeriesResponse\n \n- def to_pandas(self, include_metadata: bool = False):\n+ def to_pandas(self, include_metadata: bool = True):\n \"\"\"Returns data as a pandas dataframe\n \n Args:\n", "issue": "client.time_series.get_time_series does not return metadata\n**Describe the bug**\r\nWhen executing `client.time_series.get_time_series()` with `include_metadata = True` no metadata is returned.\r\n\r\n**To Reproduce**\r\nRunnable code reproducing the error.\r\n```\r\nimport cognite\r\nimport requests\r\nimport os\r\nimport numpy as np\r\nimport pandas as pd\r\nfrom datetime import datetime, timedelta\r\n\r\nfrom cognite.client.stable.time_series import TimeSeries\r\nsm_api = os.environ['SM_API_KEY']\r\nclient = cognite.CogniteClient(api_key = sm_api)\r\nts_name = 'Test_tssssss'\r\nmy_time_series = [TimeSeries(name=ts_name, \r\n description = 'test_description',\r\n metadata = { 'ASSETSCOPENAME' : 'meta_test_1' })]\r\nclient.time_series.post_time_series(my_time_series)\r\n\r\n# create dummy data\r\nnp.random.seed(1338)\r\nstart_time = int((datetime.now()-timedelta(1)).strftime(\"%s\"))\r\ntimestamps = [(start_time + i * 10)*1000 for i in np.arange(11)]\r\ndf = pd.DataFrame({'timestamp' : timestamps})\r\ndf[ts_name] = np.random.random(df.shape[0])\r\nclient.datapoints.post_datapoints_frame(df)\r\n# get time_series\r\nts1 = client.time_series.get_time_series(name = ts_name,\r\n include_metadata = True).to_pandas()\r\nts1_id = ts1['id'].loc[0] \r\nprint(ts1.loc[0])\r\n# no meta data\r\n# requests:\r\n# first with no metadata\r\nr1 = requests.get(url = 'https://api.cognitedata.com/api/0.5/projects/smart-maintenance-sandbox/timeseries/' + str(ts1_id) ,\r\n headers= { 'Api-Key' : sm_api} , params = {\"includeMetadata\" : False})\r\nprint(r1.text.split('\\n'))\r\n# then with metadata\r\nr1 = requests.get(url = 'https://api.cognitedata.com/api/0.5/projects/smart-maintenance-sandbox/timeseries/' + str(ts1_id) ,\r\n headers= { 'Api-Key' : sm_api} , params = {\"includeMetadata\" : True})\r\nprint(r1.text.split('\\n'))\r\n```\r\n**Expected behavior**\r\nThe `client.time_series.get_time_series(name = ts_name,include_metadata = True)` should return the metadata.\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom copy import deepcopy\nfrom typing import List\nfrom urllib.parse import quote\n\nimport pandas as pd\n\nfrom cognite.client._api_client import APIClient, CogniteCollectionResponse, CogniteResource, CogniteResponse\n\n\nclass TimeSeriesResponse(CogniteResponse):\n \"\"\"Time series Response Object\"\"\"\n\n def __init__(self, internal_representation):\n super().__init__(internal_representation)\n item = self.to_json()\n self.id = item.get(\"id\")\n self.name = item.get(\"name\")\n self.unit = item.get(\"unit\")\n self.is_step = item.get(\"isStep\")\n self.is_string = item.get(\"isString\")\n self.created_time = item.get(\"createdTime\")\n self.last_updated_time = item.get(\"lastUpdatedTime\")\n self.metadata = item.get(\"metadata\")\n self.asset_id = item.get(\"assetId\")\n self.description = item.get(\"description\")\n\n def to_pandas(self):\n \"\"\"Returns data as a pandas dataframe\"\"\"\n if len(self.to_json()) > 0:\n ts = self.to_json().copy()\n if \"metadata\" in ts:\n # Hack to avoid path ending up as first element in dict as from_dict will 
fail\n metadata = ts.pop(\"metadata\")\n df = pd.DataFrame.from_dict(ts, orient=\"index\")\n df.loc[\"metadata\"] = [metadata]\n else:\n df = pd.DataFrame.from_dict(ts, orient=\"index\")\n return df\n return pd.DataFrame()\n\n\nclass TimeSeriesListResponse(CogniteCollectionResponse):\n \"\"\"Time series Response Object\"\"\"\n\n _RESPONSE_CLASS = TimeSeriesResponse\n\n def to_pandas(self, include_metadata: bool = False):\n \"\"\"Returns data as a pandas dataframe\n\n Args:\n include_metadata (bool): Whether or not to include metadata fields in the resulting dataframe\n \"\"\"\n items = deepcopy(self.internal_representation[\"data\"][\"items\"])\n if items and items[0].get(\"metadata\") is None:\n return pd.DataFrame(items)\n for d in items:\n if d.get(\"metadata\"):\n metadata = d.pop(\"metadata\")\n if include_metadata:\n d.update(metadata)\n return pd.DataFrame(items)\n\n\nclass TimeSeries(CogniteResource):\n \"\"\"Data Transfer Object for a time series.\n\n Args:\n name (str): Unique name of time series.\n is_string (bool): Whether the time series is string valued or not.\n metadata (dict): Metadata.\n unit (str): Physical unit of the time series.\n asset_id (int): Asset that this time series belongs to.\n description (str): Description of the time series.\n security_categories (list(int)): Security categories required in order to access this time series.\n is_step (bool): Whether or not the time series is a step series.\n\n \"\"\"\n\n def __init__(\n self,\n name,\n is_string=False,\n metadata=None,\n unit=None,\n asset_id=None,\n description=None,\n security_categories=None,\n is_step=None,\n ):\n self.name = name\n self.is_string = is_string\n self.metadata = metadata\n self.unit = unit\n self.asset_id = asset_id\n self.description = description\n self.security_categories = security_categories\n self.is_step = is_step\n\n\nclass TimeSeriesClient(APIClient):\n def __init__(self, **kwargs):\n super().__init__(version=\"0.5\", **kwargs)\n\n def get_time_series(\n self, prefix=None, description=None, include_metadata=False, asset_id=None, path=None, **kwargs\n ) -> TimeSeriesListResponse:\n \"\"\"Returns an object containing the requested timeseries.\n\n Args:\n prefix (str): List timeseries with this prefix in the name.\n\n description (str): Filter timeseries taht contains this string in its description.\n\n include_metadata (bool): Decide if the metadata field should be returned or not. Defaults to False.\n\n asset_id (int): Get timeseries related to this asset.\n\n path (List[int]): Get timeseries under this asset path branch.\n\n Keyword Arguments:\n limit (int): Number of results to return.\n\n autopaging (bool): Whether or not to automatically page through results. If set to true, limit will be\n disregarded. 
Defaults to False.\n\n Returns:\n stable.time_series.TimeSeriesListResponse: A data object containing the requested timeseries with several getter methods with different\n output formats.\n\n Examples:\n Get all time series for a given asset::\n\n client = CogniteClient()\n res = client.time_series.get_time_series(asset_id=123, autopaging=True)\n print(res.to_pandas())\n \"\"\"\n autopaging = kwargs.get(\"autopaging\", False)\n url = \"/timeseries\"\n params = {\n \"q\": prefix,\n \"description\": description,\n \"includeMetadata\": include_metadata,\n \"assetId\": asset_id,\n \"path\": str(path) if path else None,\n \"limit\": kwargs.get(\"limit\", self._LIMIT) if not autopaging else self._LIMIT,\n }\n\n res = self._get(url=url, params=params, autopaging=autopaging)\n return TimeSeriesListResponse(res.json())\n\n def post_time_series(self, time_series: List[TimeSeries]) -> None:\n \"\"\"Create a new time series.\n\n Args:\n time_series (list[stable.time_series.TimeSeries]): List of time series data transfer objects to create.\n\n Returns:\n None\n\n Examples:\n Create a new time series::\n\n from cognite.client.stable.time_series import TimeSeries\n client = CogniteClient()\n\n my_time_series = [TimeSeries(name=\"my_ts_1\")]\n\n client.time_series.post_time_series(my_time_series)\n \"\"\"\n url = \"/timeseries\"\n items = [ts.camel_case_dict() for ts in time_series]\n body = {\"items\": items}\n self._post(url, body=body)\n\n def update_time_series(self, time_series: List[TimeSeries]) -> None:\n \"\"\"Update an existing time series.\n\n For each field that can be updated, a null value indicates that nothing should be done.\n\n Args:\n time_series (list[stable.time_series.TimeSeries]): List of time series data transfer objects to update.\n\n Returns:\n None\n\n Examples:\n Update the unit of a time series::\n\n from cognite.client.stable.time_series import TimeSeries\n client = CogniteClient()\n\n my_time_series = [TimeSeries(name=\"my_ts_1\", unit=\"celsius\")]\n\n client.time_series.update_time_series(my_time_series)\n \"\"\"\n url = \"/timeseries\"\n items = [ts.camel_case_dict() for ts in time_series]\n body = {\"items\": items}\n self._put(url, body=body)\n\n def delete_time_series(self, name) -> None:\n \"\"\"Delete a timeseries.\n\n Args:\n name (str): Name of timeseries to delete.\n\n Returns:\n None\n\n Examples:\n Delete a time series by name::\n\n client = CogniteClient()\n\n client.time_series.delete_time_series(name=\"my_ts_1\")\n \"\"\"\n url = \"/timeseries/{}\".format(quote(name, safe=\"\"))\n self._delete(url)\n", "path": "cognite/client/stable/time_series.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom copy import deepcopy\nfrom typing import List\nfrom urllib.parse import quote\n\nimport pandas as pd\n\nfrom cognite.client._api_client import APIClient, CogniteCollectionResponse, CogniteResource, CogniteResponse\n\n\nclass TimeSeriesResponse(CogniteResponse):\n \"\"\"Time series Response Object\"\"\"\n\n def __init__(self, internal_representation):\n super().__init__(internal_representation)\n item = self.to_json()\n self.id = item.get(\"id\")\n self.name = item.get(\"name\")\n self.unit = item.get(\"unit\")\n self.is_step = item.get(\"isStep\")\n self.is_string = item.get(\"isString\")\n self.created_time = item.get(\"createdTime\")\n self.last_updated_time = item.get(\"lastUpdatedTime\")\n self.metadata = item.get(\"metadata\")\n self.asset_id = item.get(\"assetId\")\n self.description = item.get(\"description\")\n\n def to_pandas(self):\n 
\"\"\"Returns data as a pandas dataframe\"\"\"\n if len(self.to_json()) > 0:\n ts = self.to_json().copy()\n if \"metadata\" in ts:\n # Hack to avoid path ending up as first element in dict as from_dict will fail\n metadata = ts.pop(\"metadata\")\n df = pd.DataFrame.from_dict(ts, orient=\"index\")\n df.loc[\"metadata\"] = [metadata]\n else:\n df = pd.DataFrame.from_dict(ts, orient=\"index\")\n return df\n return pd.DataFrame()\n\n\nclass TimeSeriesListResponse(CogniteCollectionResponse):\n \"\"\"Time series Response Object\"\"\"\n\n _RESPONSE_CLASS = TimeSeriesResponse\n\n def to_pandas(self, include_metadata: bool = True):\n \"\"\"Returns data as a pandas dataframe\n\n Args:\n include_metadata (bool): Whether or not to include metadata fields in the resulting dataframe\n \"\"\"\n items = deepcopy(self.internal_representation[\"data\"][\"items\"])\n if items and items[0].get(\"metadata\") is None:\n return pd.DataFrame(items)\n for d in items:\n if d.get(\"metadata\"):\n metadata = d.pop(\"metadata\")\n if include_metadata:\n d.update(metadata)\n return pd.DataFrame(items)\n\n\nclass TimeSeries(CogniteResource):\n \"\"\"Data Transfer Object for a time series.\n\n Args:\n name (str): Unique name of time series.\n is_string (bool): Whether the time series is string valued or not.\n metadata (dict): Metadata.\n unit (str): Physical unit of the time series.\n asset_id (int): Asset that this time series belongs to.\n description (str): Description of the time series.\n security_categories (list(int)): Security categories required in order to access this time series.\n is_step (bool): Whether or not the time series is a step series.\n\n \"\"\"\n\n def __init__(\n self,\n name,\n is_string=False,\n metadata=None,\n unit=None,\n asset_id=None,\n description=None,\n security_categories=None,\n is_step=None,\n ):\n self.name = name\n self.is_string = is_string\n self.metadata = metadata\n self.unit = unit\n self.asset_id = asset_id\n self.description = description\n self.security_categories = security_categories\n self.is_step = is_step\n\n\nclass TimeSeriesClient(APIClient):\n def __init__(self, **kwargs):\n super().__init__(version=\"0.5\", **kwargs)\n\n def get_time_series(\n self, prefix=None, description=None, include_metadata=False, asset_id=None, path=None, **kwargs\n ) -> TimeSeriesListResponse:\n \"\"\"Returns an object containing the requested timeseries.\n\n Args:\n prefix (str): List timeseries with this prefix in the name.\n\n description (str): Filter timeseries taht contains this string in its description.\n\n include_metadata (bool): Decide if the metadata field should be returned or not. Defaults to False.\n\n asset_id (int): Get timeseries related to this asset.\n\n path (List[int]): Get timeseries under this asset path branch.\n\n Keyword Arguments:\n limit (int): Number of results to return.\n\n autopaging (bool): Whether or not to automatically page through results. If set to true, limit will be\n disregarded. 
Defaults to False.\n\n Returns:\n stable.time_series.TimeSeriesListResponse: A data object containing the requested timeseries with several getter methods with different\n output formats.\n\n Examples:\n Get all time series for a given asset::\n\n client = CogniteClient()\n res = client.time_series.get_time_series(asset_id=123, autopaging=True)\n print(res.to_pandas())\n \"\"\"\n autopaging = kwargs.get(\"autopaging\", False)\n url = \"/timeseries\"\n params = {\n \"q\": prefix,\n \"description\": description,\n \"includeMetadata\": include_metadata,\n \"assetId\": asset_id,\n \"path\": str(path) if path else None,\n \"limit\": kwargs.get(\"limit\", self._LIMIT) if not autopaging else self._LIMIT,\n }\n\n res = self._get(url=url, params=params, autopaging=autopaging)\n return TimeSeriesListResponse(res.json())\n\n def post_time_series(self, time_series: List[TimeSeries]) -> None:\n \"\"\"Create a new time series.\n\n Args:\n time_series (list[stable.time_series.TimeSeries]): List of time series data transfer objects to create.\n\n Returns:\n None\n\n Examples:\n Create a new time series::\n\n from cognite.client.stable.time_series import TimeSeries\n client = CogniteClient()\n\n my_time_series = [TimeSeries(name=\"my_ts_1\")]\n\n client.time_series.post_time_series(my_time_series)\n \"\"\"\n url = \"/timeseries\"\n items = [ts.camel_case_dict() for ts in time_series]\n body = {\"items\": items}\n self._post(url, body=body)\n\n def update_time_series(self, time_series: List[TimeSeries]) -> None:\n \"\"\"Update an existing time series.\n\n For each field that can be updated, a null value indicates that nothing should be done.\n\n Args:\n time_series (list[stable.time_series.TimeSeries]): List of time series data transfer objects to update.\n\n Returns:\n None\n\n Examples:\n Update the unit of a time series::\n\n from cognite.client.stable.time_series import TimeSeries\n client = CogniteClient()\n\n my_time_series = [TimeSeries(name=\"my_ts_1\", unit=\"celsius\")]\n\n client.time_series.update_time_series(my_time_series)\n \"\"\"\n url = \"/timeseries\"\n items = [ts.camel_case_dict() for ts in time_series]\n body = {\"items\": items}\n self._put(url, body=body)\n\n def delete_time_series(self, name) -> None:\n \"\"\"Delete a timeseries.\n\n Args:\n name (str): Name of timeseries to delete.\n\n Returns:\n None\n\n Examples:\n Delete a time series by name::\n\n client = CogniteClient()\n\n client.time_series.delete_time_series(name=\"my_ts_1\")\n \"\"\"\n url = \"/timeseries/{}\".format(quote(name, safe=\"\"))\n self._delete(url)\n", "path": "cognite/client/stable/time_series.py"}]}
| 2,920 | 113 |
gh_patches_debug_5030
|
rasdani/github-patches
|
git_diff
|
geopandas__geopandas-1544
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BUG: test_numerical_operations fails under pandas 1.1.0
[Failure](https://travis-ci.org/github/geopandas/geopandas/jobs/712675986#L2140) in `test_numerical_operations` (`idxmin` error type) -> pandas-dev/pandas#32749. Even though it is a known regression in pandas, we should handle it (at least skip tests for now).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `geopandas/_compat.py`
Content:
```
1 from distutils.version import LooseVersion
2 import importlib
3 import os
4 import warnings
5
6 import pandas as pd
7 import shapely
8
9 # -----------------------------------------------------------------------------
10 # pandas compat
11 # -----------------------------------------------------------------------------
12
13 PANDAS_GE_025 = str(pd.__version__) >= LooseVersion("0.25.0")
14 PANDAS_GE_10 = str(pd.__version__) >= LooseVersion("0.26.0.dev")
15 PANDAS_GE_11 = str(pd.__version__) >= LooseVersion("1.1.0.dev")
16
17
18 # -----------------------------------------------------------------------------
19 # Shapely / PyGEOS compat
20 # -----------------------------------------------------------------------------
21
22
23 SHAPELY_GE_17 = str(shapely.__version__) >= LooseVersion("1.7.0")
24
25 HAS_PYGEOS = None
26 USE_PYGEOS = None
27 PYGEOS_SHAPELY_COMPAT = None
28
29 try:
30 import pygeos # noqa
31
32 HAS_PYGEOS = True
33 except ImportError:
34 HAS_PYGEOS = False
35
36
37 def set_use_pygeos(val=None):
38 """
39 Set the global configuration on whether to use PyGEOS or not.
40
41 The default is use PyGEOS if it is installed. This can be overridden
42 with an environment variable USE_PYGEOS (this is only checked at
43 first import, cannot be changed during interactive session).
44
45 Alternatively, pass a value here to force a True/False value.
46 """
47 global USE_PYGEOS
48 global PYGEOS_SHAPELY_COMPAT
49
50 if val is not None:
51 USE_PYGEOS = bool(val)
52 else:
53 if USE_PYGEOS is None:
54
55 USE_PYGEOS = HAS_PYGEOS
56
57 env_use_pygeos = os.getenv("USE_PYGEOS", None)
58 if env_use_pygeos is not None:
59 USE_PYGEOS = bool(int(env_use_pygeos))
60
61 # validate the pygeos version
62 if USE_PYGEOS:
63 try:
64 import pygeos # noqa
65
66 # validate the pygeos version
67 if not str(pygeos.__version__) >= LooseVersion("0.6"):
68 raise ImportError(
69 "PyGEOS >= 0.6 is required, version {0} is installed".format(
70 pygeos.__version__
71 )
72 )
73
74 # Check whether Shapely and PyGEOS use the same GEOS version.
75 # Based on PyGEOS from_shapely implementation.
76
77 from shapely.geos import geos_version_string as shapely_geos_version
78 from pygeos import geos_capi_version_string
79
80 # shapely has something like: "3.6.2-CAPI-1.10.2 4d2925d6"
81 # pygeos has something like: "3.6.2-CAPI-1.10.2"
82 if not shapely_geos_version.startswith(geos_capi_version_string):
83 warnings.warn(
84 "The Shapely GEOS version ({}) is incompatible with the GEOS "
85 "version PyGEOS was compiled with ({}). Conversions between both "
86 "will be slow.".format(
87 shapely_geos_version, geos_capi_version_string
88 )
89 )
90 PYGEOS_SHAPELY_COMPAT = False
91 else:
92 PYGEOS_SHAPELY_COMPAT = True
93
94 except ImportError:
95 raise ImportError(
96 "To use the PyGEOS speed-ups within GeoPandas, you need to install "
97 "PyGEOS: 'conda install pygeos' or 'pip install pygeos'"
98 )
99
100
101 set_use_pygeos()
102
103
104 def import_optional_dependency(name: str, extra: str = ""):
105 """
106 Import an optional dependency.
107
108 Adapted from pandas.compat._optional::import_optional_dependency
109
110 Raises a formatted ImportError if the module is not present.
111
112 Parameters
113 ----------
114 name : str
115 The module name.
116 extra : str
117 Additional text to include in the ImportError message.
118 Returns
119 -------
120 module
121 """
122 msg = """Missing optional dependency '{name}'. {extra} "
123 "Use pip or conda to install {name}.""".format(
124 name=name, extra=extra
125 )
126
127 if not isinstance(name, str):
128 raise ValueError(
129 "Invalid module name: '{name}'; must be a string".format(name=name)
130 )
131
132 try:
133 module = importlib.import_module(name)
134
135 except ImportError:
136 raise ImportError(msg) from None
137
138 return module
139
140
141 # -----------------------------------------------------------------------------
142 # RTree compat
143 # -----------------------------------------------------------------------------
144
145 HAS_RTREE = None
146 RTREE_GE_094 = False
147 try:
148 import rtree # noqa
149
150 HAS_RTREE = True
151 except ImportError:
152 HAS_RTREE = False
153
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/geopandas/_compat.py b/geopandas/_compat.py
--- a/geopandas/_compat.py
+++ b/geopandas/_compat.py
@@ -12,7 +12,7 @@
PANDAS_GE_025 = str(pd.__version__) >= LooseVersion("0.25.0")
PANDAS_GE_10 = str(pd.__version__) >= LooseVersion("0.26.0.dev")
-PANDAS_GE_11 = str(pd.__version__) >= LooseVersion("1.1.0.dev")
+PANDAS_GE_11 = str(pd.__version__) >= LooseVersion("1.1.0")
# -----------------------------------------------------------------------------
|
{"golden_diff": "diff --git a/geopandas/_compat.py b/geopandas/_compat.py\n--- a/geopandas/_compat.py\n+++ b/geopandas/_compat.py\n@@ -12,7 +12,7 @@\n \n PANDAS_GE_025 = str(pd.__version__) >= LooseVersion(\"0.25.0\")\n PANDAS_GE_10 = str(pd.__version__) >= LooseVersion(\"0.26.0.dev\")\n-PANDAS_GE_11 = str(pd.__version__) >= LooseVersion(\"1.1.0.dev\")\n+PANDAS_GE_11 = str(pd.__version__) >= LooseVersion(\"1.1.0\")\n \n \n # -----------------------------------------------------------------------------\n", "issue": "BUG: test_numerical_operations fails under pandas 1.1.0\n[Failure](https://travis-ci.org/github/geopandas/geopandas/jobs/712675986#L2140) in `test_numerical_operations` (`idxmin` error type) -> pandas-dev/pandas#32749. Even though it is a known regression in pandas, we should handle it (at least skip tests for now).\n", "before_files": [{"content": "from distutils.version import LooseVersion\nimport importlib\nimport os\nimport warnings\n\nimport pandas as pd\nimport shapely\n\n# -----------------------------------------------------------------------------\n# pandas compat\n# -----------------------------------------------------------------------------\n\nPANDAS_GE_025 = str(pd.__version__) >= LooseVersion(\"0.25.0\")\nPANDAS_GE_10 = str(pd.__version__) >= LooseVersion(\"0.26.0.dev\")\nPANDAS_GE_11 = str(pd.__version__) >= LooseVersion(\"1.1.0.dev\")\n\n\n# -----------------------------------------------------------------------------\n# Shapely / PyGEOS compat\n# -----------------------------------------------------------------------------\n\n\nSHAPELY_GE_17 = str(shapely.__version__) >= LooseVersion(\"1.7.0\")\n\nHAS_PYGEOS = None\nUSE_PYGEOS = None\nPYGEOS_SHAPELY_COMPAT = None\n\ntry:\n import pygeos # noqa\n\n HAS_PYGEOS = True\nexcept ImportError:\n HAS_PYGEOS = False\n\n\ndef set_use_pygeos(val=None):\n \"\"\"\n Set the global configuration on whether to use PyGEOS or not.\n\n The default is use PyGEOS if it is installed. This can be overridden\n with an environment variable USE_PYGEOS (this is only checked at\n first import, cannot be changed during interactive session).\n\n Alternatively, pass a value here to force a True/False value.\n \"\"\"\n global USE_PYGEOS\n global PYGEOS_SHAPELY_COMPAT\n\n if val is not None:\n USE_PYGEOS = bool(val)\n else:\n if USE_PYGEOS is None:\n\n USE_PYGEOS = HAS_PYGEOS\n\n env_use_pygeos = os.getenv(\"USE_PYGEOS\", None)\n if env_use_pygeos is not None:\n USE_PYGEOS = bool(int(env_use_pygeos))\n\n # validate the pygeos version\n if USE_PYGEOS:\n try:\n import pygeos # noqa\n\n # validate the pygeos version\n if not str(pygeos.__version__) >= LooseVersion(\"0.6\"):\n raise ImportError(\n \"PyGEOS >= 0.6 is required, version {0} is installed\".format(\n pygeos.__version__\n )\n )\n\n # Check whether Shapely and PyGEOS use the same GEOS version.\n # Based on PyGEOS from_shapely implementation.\n\n from shapely.geos import geos_version_string as shapely_geos_version\n from pygeos import geos_capi_version_string\n\n # shapely has something like: \"3.6.2-CAPI-1.10.2 4d2925d6\"\n # pygeos has something like: \"3.6.2-CAPI-1.10.2\"\n if not shapely_geos_version.startswith(geos_capi_version_string):\n warnings.warn(\n \"The Shapely GEOS version ({}) is incompatible with the GEOS \"\n \"version PyGEOS was compiled with ({}). 
Conversions between both \"\n \"will be slow.\".format(\n shapely_geos_version, geos_capi_version_string\n )\n )\n PYGEOS_SHAPELY_COMPAT = False\n else:\n PYGEOS_SHAPELY_COMPAT = True\n\n except ImportError:\n raise ImportError(\n \"To use the PyGEOS speed-ups within GeoPandas, you need to install \"\n \"PyGEOS: 'conda install pygeos' or 'pip install pygeos'\"\n )\n\n\nset_use_pygeos()\n\n\ndef import_optional_dependency(name: str, extra: str = \"\"):\n \"\"\"\n Import an optional dependency.\n\n Adapted from pandas.compat._optional::import_optional_dependency\n\n Raises a formatted ImportError if the module is not present.\n\n Parameters\n ----------\n name : str\n The module name.\n extra : str\n Additional text to include in the ImportError message.\n Returns\n -------\n module\n \"\"\"\n msg = \"\"\"Missing optional dependency '{name}'. {extra} \"\n \"Use pip or conda to install {name}.\"\"\".format(\n name=name, extra=extra\n )\n\n if not isinstance(name, str):\n raise ValueError(\n \"Invalid module name: '{name}'; must be a string\".format(name=name)\n )\n\n try:\n module = importlib.import_module(name)\n\n except ImportError:\n raise ImportError(msg) from None\n\n return module\n\n\n# -----------------------------------------------------------------------------\n# RTree compat\n# -----------------------------------------------------------------------------\n\nHAS_RTREE = None\nRTREE_GE_094 = False\ntry:\n import rtree # noqa\n\n HAS_RTREE = True\nexcept ImportError:\n HAS_RTREE = False\n", "path": "geopandas/_compat.py"}], "after_files": [{"content": "from distutils.version import LooseVersion\nimport importlib\nimport os\nimport warnings\n\nimport pandas as pd\nimport shapely\n\n# -----------------------------------------------------------------------------\n# pandas compat\n# -----------------------------------------------------------------------------\n\nPANDAS_GE_025 = str(pd.__version__) >= LooseVersion(\"0.25.0\")\nPANDAS_GE_10 = str(pd.__version__) >= LooseVersion(\"0.26.0.dev\")\nPANDAS_GE_11 = str(pd.__version__) >= LooseVersion(\"1.1.0\")\n\n\n# -----------------------------------------------------------------------------\n# Shapely / PyGEOS compat\n# -----------------------------------------------------------------------------\n\n\nSHAPELY_GE_17 = str(shapely.__version__) >= LooseVersion(\"1.7.0\")\n\nHAS_PYGEOS = None\nUSE_PYGEOS = None\nPYGEOS_SHAPELY_COMPAT = None\n\ntry:\n import pygeos # noqa\n\n HAS_PYGEOS = True\nexcept ImportError:\n HAS_PYGEOS = False\n\n\ndef set_use_pygeos(val=None):\n \"\"\"\n Set the global configuration on whether to use PyGEOS or not.\n\n The default is use PyGEOS if it is installed. 
This can be overridden\n with an environment variable USE_PYGEOS (this is only checked at\n first import, cannot be changed during interactive session).\n\n Alternatively, pass a value here to force a True/False value.\n \"\"\"\n global USE_PYGEOS\n global PYGEOS_SHAPELY_COMPAT\n\n if val is not None:\n USE_PYGEOS = bool(val)\n else:\n if USE_PYGEOS is None:\n\n USE_PYGEOS = HAS_PYGEOS\n\n env_use_pygeos = os.getenv(\"USE_PYGEOS\", None)\n if env_use_pygeos is not None:\n USE_PYGEOS = bool(int(env_use_pygeos))\n\n # validate the pygeos version\n if USE_PYGEOS:\n try:\n import pygeos # noqa\n\n # validate the pygeos version\n if not str(pygeos.__version__) >= LooseVersion(\"0.6\"):\n raise ImportError(\n \"PyGEOS >= 0.6 is required, version {0} is installed\".format(\n pygeos.__version__\n )\n )\n\n # Check whether Shapely and PyGEOS use the same GEOS version.\n # Based on PyGEOS from_shapely implementation.\n\n from shapely.geos import geos_version_string as shapely_geos_version\n from pygeos import geos_capi_version_string\n\n # shapely has something like: \"3.6.2-CAPI-1.10.2 4d2925d6\"\n # pygeos has something like: \"3.6.2-CAPI-1.10.2\"\n if not shapely_geos_version.startswith(geos_capi_version_string):\n warnings.warn(\n \"The Shapely GEOS version ({}) is incompatible with the GEOS \"\n \"version PyGEOS was compiled with ({}). Conversions between both \"\n \"will be slow.\".format(\n shapely_geos_version, geos_capi_version_string\n )\n )\n PYGEOS_SHAPELY_COMPAT = False\n else:\n PYGEOS_SHAPELY_COMPAT = True\n\n except ImportError:\n raise ImportError(\n \"To use the PyGEOS speed-ups within GeoPandas, you need to install \"\n \"PyGEOS: 'conda install pygeos' or 'pip install pygeos'\"\n )\n\n\nset_use_pygeos()\n\n\ndef import_optional_dependency(name: str, extra: str = \"\"):\n \"\"\"\n Import an optional dependency.\n\n Adapted from pandas.compat._optional::import_optional_dependency\n\n Raises a formatted ImportError if the module is not present.\n\n Parameters\n ----------\n name : str\n The module name.\n extra : str\n Additional text to include in the ImportError message.\n Returns\n -------\n module\n \"\"\"\n msg = \"\"\"Missing optional dependency '{name}'. {extra} \"\n \"Use pip or conda to install {name}.\"\"\".format(\n name=name, extra=extra\n )\n\n if not isinstance(name, str):\n raise ValueError(\n \"Invalid module name: '{name}'; must be a string\".format(name=name)\n )\n\n try:\n module = importlib.import_module(name)\n\n except ImportError:\n raise ImportError(msg) from None\n\n return module\n\n\n# -----------------------------------------------------------------------------\n# RTree compat\n# -----------------------------------------------------------------------------\n\nHAS_RTREE = None\nRTREE_GE_094 = False\ntry:\n import rtree # noqa\n\n HAS_RTREE = True\nexcept ImportError:\n HAS_RTREE = False\n", "path": "geopandas/_compat.py"}]}
| 1,757 | 150 |
gh_patches_debug_18787
|
rasdani/github-patches
|
git_diff
|
plotly__dash-964
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Clientside PreventUpdate and no_update
It would be helpful for clientside callbacks to be able to conditionally update components. This is not supported right now (as far as I can tell). This should be pretty simple; my idea would be to define
```js
window.dash_clientside.no_update = {};
```
then in `updateClientsideOutput` just conditionally block updating by adding
```js
if(window.dash_clientside.no_update && outputValue === window.dash_clientside.no_update)
return;
```
Similarly, we could define `window.dash_clientside.PreventUpdate` and allow the use of
```js
throw window.dash_clientside.PreventUpdate;
```
If this seems reasonable I could put together a PR.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dash/dependencies.py`
Content:
```
1 class DashDependency:
2 # pylint: disable=too-few-public-methods
3 def __init__(self, component_id, component_property):
4 self.component_id = component_id
5 self.component_property = component_property
6
7 def __str__(self):
8 return '{}.{}'.format(
9 self.component_id,
10 self.component_property
11 )
12
13 def __repr__(self):
14 return '<{} `{}`>'.format(self.__class__.__name__, self)
15
16 def __eq__(self, other):
17 return isinstance(other, DashDependency) and str(self) == str(other)
18
19 def __hash__(self):
20 return hash(str(self))
21
22
23 class Output(DashDependency): # pylint: disable=too-few-public-methods
24 """Output of a callback."""
25
26
27 class Input(DashDependency): # pylint: disable=too-few-public-methods
28 """Input of callback trigger an update when it is updated."""
29
30
31 class State(DashDependency): # pylint: disable=too-few-public-methods
32 """Use the value of a state in a callback but don't trigger updates."""
33
34
35 class ClientsideFunction:
36 # pylint: disable=too-few-public-methods
37 def __init__(self, namespace=None, function_name=None):
38 self.namespace = namespace
39 self.function_name = function_name
40
41 def __repr__(self):
42 return 'ClientsideFunction({}, {})'.format(
43 self.namespace,
44 self.function_name
45 )
46
```
Path: `dash/_utils.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from __future__ import unicode_literals
3 import shlex
4 import sys
5 import uuid
6 import hashlib
7 import collections
8 import subprocess
9 import logging
10 from io import open # pylint: disable=redefined-builtin
11 from functools import wraps
12 import future.utils as utils
13
14 logger = logging.getLogger()
15
16
17 def interpolate_str(template, **data):
18 s = template
19 for k, v in data.items():
20 key = "{%" + k + "%}"
21 s = s.replace(key, v)
22 return s
23
24
25 def format_tag(tag_name, attributes, inner="", closed=False, opened=False):
26 tag = "<{tag} {attributes}"
27 if closed:
28 tag += "/>"
29 elif opened:
30 tag += ">"
31 else:
32 tag += ">" + inner + "</{tag}>"
33 return tag.format(
34 tag=tag_name,
35 attributes=" ".join(
36 ['{}="{}"'.format(k, v) for k, v in attributes.items()]
37 ),
38 )
39
40
41 def generate_hash():
42 return str(uuid.uuid4().hex).strip("-")
43
44
45 def get_asset_path(requests_pathname, asset_path, asset_url_path):
46
47 return "/".join(
48 [
49 # Only take the first part of the pathname
50 requests_pathname.rstrip("/"),
51 asset_url_path,
52 asset_path,
53 ]
54 )
55
56
57 # pylint: disable=no-member
58 def patch_collections_abc(member):
59 return getattr(collections if utils.PY2 else collections.abc, member)
60
61
62 class AttributeDict(dict):
63 """Dictionary subclass enabling attribute lookup/assignment of keys/values.
64
65 For example::
66 >>> m = AttributeDict({'foo': 'bar'})
67 >>> m.foo
68 'bar'
69 >>> m.foo = 'not bar'
70 >>> m['foo']
71 'not bar'
72 ``AttributeDict`` objects also provide ``.first()`` which acts like
73 ``.get()`` but accepts multiple keys as arguments, and returns the value of
74 the first hit, e.g.::
75 >>> m = AttributeDict({'foo': 'bar', 'biz': 'baz'})
76 >>> m.first('wrong', 'incorrect', 'foo', 'biz')
77 'bar'
78 """
79
80 def __setattr__(self, key, value):
81 self[key] = value
82
83 def __getattr__(self, key):
84 try:
85 return self[key]
86 except KeyError:
87 pass
88 # to conform with __getattr__ spec
89 # but get out of the except block so it doesn't look like a nested err
90 raise AttributeError(key)
91
92 def set_read_only(self, names, msg="Attribute is read-only"):
93 object.__setattr__(self, "_read_only", names)
94 object.__setattr__(self, "_read_only_msg", msg)
95
96 def finalize(self, msg="Object is final: No new keys may be added."):
97 """Prevent any new keys being set."""
98 object.__setattr__(self, "_final", msg)
99
100 def __setitem__(self, key, val):
101 if key in self.__dict__.get("_read_only", []):
102 raise AttributeError(self._read_only_msg, key)
103
104 final_msg = self.__dict__.get("_final")
105 if final_msg and key not in self:
106 raise AttributeError(final_msg, key)
107
108 return super(AttributeDict, self).__setitem__(key, val)
109
110 # pylint: disable=inconsistent-return-statements
111 def first(self, *names):
112 for name in names:
113 value = self.get(name)
114 if value:
115 return value
116
117
118 def create_callback_id(output):
119 if isinstance(output, (list, tuple)):
120 return "..{}..".format(
121 "...".join(
122 "{}.{}".format(x.component_id, x.component_property)
123 for x in output
124 )
125 )
126
127 return "{}.{}".format(output.component_id, output.component_property)
128
129
130 def run_command_with_process(cmd):
131 proc = subprocess.Popen(shlex.split(cmd, posix=sys.platform != "win32"))
132 proc.wait()
133 if proc.poll() is None:
134 logger.warning("🚨 trying to terminate subprocess in safe way")
135 try:
136 proc.communicate()
137 except Exception: # pylint: disable=broad-except
138 logger.exception("🚨 first try communicate failed")
139 proc.kill()
140 proc.communicate()
141
142
143 def compute_md5(path):
144 with open(path, encoding="utf-8") as fp:
145 return hashlib.md5(fp.read().encode("utf-8")).hexdigest()
146
147
148 def job(msg=""):
149 def wrapper(func):
150 @wraps(func)
151 def _wrapper(*args, **kwargs):
152 logger.info("🏗️ [%s] 🏗️️ - %s", func.__name__, msg)
153 res = func(*args, **kwargs)
154 logger.info("::: 🍻🍻🍻 [%s] job done 🍻🍻🍻 :::", func.__name__)
155 return res
156
157 return _wrapper
158
159 return wrapper
160
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/dash/_utils.py b/dash/_utils.py
--- a/dash/_utils.py
+++ b/dash/_utils.py
@@ -128,7 +128,8 @@
def run_command_with_process(cmd):
- proc = subprocess.Popen(shlex.split(cmd, posix=sys.platform != "win32"))
+ is_win = sys.platform == "win32"
+ proc = subprocess.Popen(shlex.split(cmd, posix=is_win), shell=is_win)
proc.wait()
if proc.poll() is None:
logger.warning("🚨 trying to terminate subprocess in safe way")
diff --git a/dash/dependencies.py b/dash/dependencies.py
--- a/dash/dependencies.py
+++ b/dash/dependencies.py
@@ -35,6 +35,11 @@
class ClientsideFunction:
# pylint: disable=too-few-public-methods
def __init__(self, namespace=None, function_name=None):
+
+ if namespace in ['PreventUpdate', 'no_update']:
+ raise ValueError('"{}" is a forbidden namespace in'
+ ' dash_clientside.'.format(namespace))
+
self.namespace = namespace
self.function_name = function_name
|
{"golden_diff": "diff --git a/dash/_utils.py b/dash/_utils.py\n--- a/dash/_utils.py\n+++ b/dash/_utils.py\n@@ -128,7 +128,8 @@\n \n \n def run_command_with_process(cmd):\n- proc = subprocess.Popen(shlex.split(cmd, posix=sys.platform != \"win32\"))\n+ is_win = sys.platform == \"win32\"\n+ proc = subprocess.Popen(shlex.split(cmd, posix=is_win), shell=is_win)\n proc.wait()\n if proc.poll() is None:\n logger.warning(\"\ud83d\udea8 trying to terminate subprocess in safe way\")\ndiff --git a/dash/dependencies.py b/dash/dependencies.py\n--- a/dash/dependencies.py\n+++ b/dash/dependencies.py\n@@ -35,6 +35,11 @@\n class ClientsideFunction:\n # pylint: disable=too-few-public-methods\n def __init__(self, namespace=None, function_name=None):\n+\n+ if namespace in ['PreventUpdate', 'no_update']:\n+ raise ValueError('\"{}\" is a forbidden namespace in'\n+ ' dash_clientside.'.format(namespace))\n+\n self.namespace = namespace\n self.function_name = function_name\n", "issue": "Clientside PreventUpdate and no_update\nIt would be helpful for clientside callbacks to be able to conditionally update components. This is not supported right now (as far as I can tell). This should be pretty simple, my idea would be to define\r\n\r\n```js\r\nwindow.dash_clientside.no_update = {};\r\n```\r\n\r\nthen in `updateClientsideOutput` just conditionally block updating by adding\r\n\r\n```js\r\nif(window.dash_clientside.no_update && outputValue === window.dash_clientside.no_update)\r\n return;\r\n```\r\n\r\nSimilarly we could define `window.dash_clientside.PreventUpdate` and allow the use of\r\n```js\r\nthrow window.dash_clientside.PreventUpdate;\r\n``` \r\nIf this seems reasonable I could put together a PR.\r\n\n", "before_files": [{"content": "class DashDependency:\n # pylint: disable=too-few-public-methods\n def __init__(self, component_id, component_property):\n self.component_id = component_id\n self.component_property = component_property\n\n def __str__(self):\n return '{}.{}'.format(\n self.component_id,\n self.component_property\n )\n\n def __repr__(self):\n return '<{} `{}`>'.format(self.__class__.__name__, self)\n\n def __eq__(self, other):\n return isinstance(other, DashDependency) and str(self) == str(other)\n\n def __hash__(self):\n return hash(str(self))\n\n\nclass Output(DashDependency): # pylint: disable=too-few-public-methods\n \"\"\"Output of a callback.\"\"\"\n\n\nclass Input(DashDependency): # pylint: disable=too-few-public-methods\n \"\"\"Input of callback trigger an update when it is updated.\"\"\"\n\n\nclass State(DashDependency): # pylint: disable=too-few-public-methods\n \"\"\"Use the value of a state in a callback but don't trigger updates.\"\"\"\n\n\nclass ClientsideFunction:\n # pylint: disable=too-few-public-methods\n def __init__(self, namespace=None, function_name=None):\n self.namespace = namespace\n self.function_name = function_name\n\n def __repr__(self):\n return 'ClientsideFunction({}, {})'.format(\n self.namespace,\n self.function_name\n )\n", "path": "dash/dependencies.py"}, {"content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\nimport shlex\nimport sys\nimport uuid\nimport hashlib\nimport collections\nimport subprocess\nimport logging\nfrom io import open # pylint: disable=redefined-builtin\nfrom functools import wraps\nimport future.utils as utils\n\nlogger = logging.getLogger()\n\n\ndef interpolate_str(template, **data):\n s = template\n for k, v in data.items():\n key = \"{%\" + k + \"%}\"\n s = s.replace(key, v)\n return s\n\n\ndef format_tag(tag_name, 
attributes, inner=\"\", closed=False, opened=False):\n tag = \"<{tag} {attributes}\"\n if closed:\n tag += \"/>\"\n elif opened:\n tag += \">\"\n else:\n tag += \">\" + inner + \"</{tag}>\"\n return tag.format(\n tag=tag_name,\n attributes=\" \".join(\n ['{}=\"{}\"'.format(k, v) for k, v in attributes.items()]\n ),\n )\n\n\ndef generate_hash():\n return str(uuid.uuid4().hex).strip(\"-\")\n\n\ndef get_asset_path(requests_pathname, asset_path, asset_url_path):\n\n return \"/\".join(\n [\n # Only take the first part of the pathname\n requests_pathname.rstrip(\"/\"),\n asset_url_path,\n asset_path,\n ]\n )\n\n\n# pylint: disable=no-member\ndef patch_collections_abc(member):\n return getattr(collections if utils.PY2 else collections.abc, member)\n\n\nclass AttributeDict(dict):\n \"\"\"Dictionary subclass enabling attribute lookup/assignment of keys/values.\n\n For example::\n >>> m = AttributeDict({'foo': 'bar'})\n >>> m.foo\n 'bar'\n >>> m.foo = 'not bar'\n >>> m['foo']\n 'not bar'\n ``AttributeDict`` objects also provide ``.first()`` which acts like\n ``.get()`` but accepts multiple keys as arguments, and returns the value of\n the first hit, e.g.::\n >>> m = AttributeDict({'foo': 'bar', 'biz': 'baz'})\n >>> m.first('wrong', 'incorrect', 'foo', 'biz')\n 'bar'\n \"\"\"\n\n def __setattr__(self, key, value):\n self[key] = value\n\n def __getattr__(self, key):\n try:\n return self[key]\n except KeyError:\n pass\n # to conform with __getattr__ spec\n # but get out of the except block so it doesn't look like a nested err\n raise AttributeError(key)\n\n def set_read_only(self, names, msg=\"Attribute is read-only\"):\n object.__setattr__(self, \"_read_only\", names)\n object.__setattr__(self, \"_read_only_msg\", msg)\n\n def finalize(self, msg=\"Object is final: No new keys may be added.\"):\n \"\"\"Prevent any new keys being set.\"\"\"\n object.__setattr__(self, \"_final\", msg)\n\n def __setitem__(self, key, val):\n if key in self.__dict__.get(\"_read_only\", []):\n raise AttributeError(self._read_only_msg, key)\n\n final_msg = self.__dict__.get(\"_final\")\n if final_msg and key not in self:\n raise AttributeError(final_msg, key)\n\n return super(AttributeDict, self).__setitem__(key, val)\n\n # pylint: disable=inconsistent-return-statements\n def first(self, *names):\n for name in names:\n value = self.get(name)\n if value:\n return value\n\n\ndef create_callback_id(output):\n if isinstance(output, (list, tuple)):\n return \"..{}..\".format(\n \"...\".join(\n \"{}.{}\".format(x.component_id, x.component_property)\n for x in output\n )\n )\n\n return \"{}.{}\".format(output.component_id, output.component_property)\n\n\ndef run_command_with_process(cmd):\n proc = subprocess.Popen(shlex.split(cmd, posix=sys.platform != \"win32\"))\n proc.wait()\n if proc.poll() is None:\n logger.warning(\"\ud83d\udea8 trying to terminate subprocess in safe way\")\n try:\n proc.communicate()\n except Exception: # pylint: disable=broad-except\n logger.exception(\"\ud83d\udea8 first try communicate failed\")\n proc.kill()\n proc.communicate()\n\n\ndef compute_md5(path):\n with open(path, encoding=\"utf-8\") as fp:\n return hashlib.md5(fp.read().encode(\"utf-8\")).hexdigest()\n\n\ndef job(msg=\"\"):\n def wrapper(func):\n @wraps(func)\n def _wrapper(*args, **kwargs):\n logger.info(\"\ud83c\udfd7\ufe0f [%s] \ud83c\udfd7\ufe0f\ufe0f - %s\", func.__name__, msg)\n res = func(*args, **kwargs)\n logger.info(\"::: \ud83c\udf7b\ud83c\udf7b\ud83c\udf7b [%s] job done \ud83c\udf7b\ud83c\udf7b\ud83c\udf7b :::\", func.__name__)\n 
return res\n\n return _wrapper\n\n return wrapper\n", "path": "dash/_utils.py"}], "after_files": [{"content": "class DashDependency:\n # pylint: disable=too-few-public-methods\n def __init__(self, component_id, component_property):\n self.component_id = component_id\n self.component_property = component_property\n\n def __str__(self):\n return '{}.{}'.format(\n self.component_id,\n self.component_property\n )\n\n def __repr__(self):\n return '<{} `{}`>'.format(self.__class__.__name__, self)\n\n def __eq__(self, other):\n return isinstance(other, DashDependency) and str(self) == str(other)\n\n def __hash__(self):\n return hash(str(self))\n\n\nclass Output(DashDependency): # pylint: disable=too-few-public-methods\n \"\"\"Output of a callback.\"\"\"\n\n\nclass Input(DashDependency): # pylint: disable=too-few-public-methods\n \"\"\"Input of callback trigger an update when it is updated.\"\"\"\n\n\nclass State(DashDependency): # pylint: disable=too-few-public-methods\n \"\"\"Use the value of a state in a callback but don't trigger updates.\"\"\"\n\n\nclass ClientsideFunction:\n # pylint: disable=too-few-public-methods\n def __init__(self, namespace=None, function_name=None):\n\n if namespace in ['PreventUpdate', 'no_update']:\n raise ValueError('\"{}\" is a forbidden namespace in'\n ' dash_clientside.'.format(namespace))\n\n self.namespace = namespace\n self.function_name = function_name\n\n def __repr__(self):\n return 'ClientsideFunction({}, {})'.format(\n self.namespace,\n self.function_name\n )\n", "path": "dash/dependencies.py"}, {"content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\nimport shlex\nimport sys\nimport uuid\nimport hashlib\nimport collections\nimport subprocess\nimport logging\nfrom io import open # pylint: disable=redefined-builtin\nfrom functools import wraps\nimport future.utils as utils\n\nlogger = logging.getLogger()\n\n\ndef interpolate_str(template, **data):\n s = template\n for k, v in data.items():\n key = \"{%\" + k + \"%}\"\n s = s.replace(key, v)\n return s\n\n\ndef format_tag(tag_name, attributes, inner=\"\", closed=False, opened=False):\n tag = \"<{tag} {attributes}\"\n if closed:\n tag += \"/>\"\n elif opened:\n tag += \">\"\n else:\n tag += \">\" + inner + \"</{tag}>\"\n return tag.format(\n tag=tag_name,\n attributes=\" \".join(\n ['{}=\"{}\"'.format(k, v) for k, v in attributes.items()]\n ),\n )\n\n\ndef generate_hash():\n return str(uuid.uuid4().hex).strip(\"-\")\n\n\ndef get_asset_path(requests_pathname, asset_path, asset_url_path):\n\n return \"/\".join(\n [\n # Only take the first part of the pathname\n requests_pathname.rstrip(\"/\"),\n asset_url_path,\n asset_path,\n ]\n )\n\n\n# pylint: disable=no-member\ndef patch_collections_abc(member):\n return getattr(collections if utils.PY2 else collections.abc, member)\n\n\nclass AttributeDict(dict):\n \"\"\"Dictionary subclass enabling attribute lookup/assignment of keys/values.\n\n For example::\n >>> m = AttributeDict({'foo': 'bar'})\n >>> m.foo\n 'bar'\n >>> m.foo = 'not bar'\n >>> m['foo']\n 'not bar'\n ``AttributeDict`` objects also provide ``.first()`` which acts like\n ``.get()`` but accepts multiple keys as arguments, and returns the value of\n the first hit, e.g.::\n >>> m = AttributeDict({'foo': 'bar', 'biz': 'baz'})\n >>> m.first('wrong', 'incorrect', 'foo', 'biz')\n 'bar'\n \"\"\"\n\n def __setattr__(self, key, value):\n self[key] = value\n\n def __getattr__(self, key):\n try:\n return self[key]\n except KeyError:\n pass\n # to conform with __getattr__ spec\n # but get out 
of the except block so it doesn't look like a nested err\n raise AttributeError(key)\n\n def set_read_only(self, names, msg=\"Attribute is read-only\"):\n object.__setattr__(self, \"_read_only\", names)\n object.__setattr__(self, \"_read_only_msg\", msg)\n\n def finalize(self, msg=\"Object is final: No new keys may be added.\"):\n \"\"\"Prevent any new keys being set.\"\"\"\n object.__setattr__(self, \"_final\", msg)\n\n def __setitem__(self, key, val):\n if key in self.__dict__.get(\"_read_only\", []):\n raise AttributeError(self._read_only_msg, key)\n\n final_msg = self.__dict__.get(\"_final\")\n if final_msg and key not in self:\n raise AttributeError(final_msg, key)\n\n return super(AttributeDict, self).__setitem__(key, val)\n\n # pylint: disable=inconsistent-return-statements\n def first(self, *names):\n for name in names:\n value = self.get(name)\n if value:\n return value\n\n\ndef create_callback_id(output):\n if isinstance(output, (list, tuple)):\n return \"..{}..\".format(\n \"...\".join(\n \"{}.{}\".format(x.component_id, x.component_property)\n for x in output\n )\n )\n\n return \"{}.{}\".format(output.component_id, output.component_property)\n\n\ndef run_command_with_process(cmd):\n is_win = sys.platform == \"win32\"\n proc = subprocess.Popen(shlex.split(cmd, posix=is_win), shell=is_win)\n proc.wait()\n if proc.poll() is None:\n logger.warning(\"\ud83d\udea8 trying to terminate subprocess in safe way\")\n try:\n proc.communicate()\n except Exception: # pylint: disable=broad-except\n logger.exception(\"\ud83d\udea8 first try communicate failed\")\n proc.kill()\n proc.communicate()\n\n\ndef compute_md5(path):\n with open(path, encoding=\"utf-8\") as fp:\n return hashlib.md5(fp.read().encode(\"utf-8\")).hexdigest()\n\n\ndef job(msg=\"\"):\n def wrapper(func):\n @wraps(func)\n def _wrapper(*args, **kwargs):\n logger.info(\"\ud83c\udfd7\ufe0f [%s] \ud83c\udfd7\ufe0f\ufe0f - %s\", func.__name__, msg)\n res = func(*args, **kwargs)\n logger.info(\"::: \ud83c\udf7b\ud83c\udf7b\ud83c\udf7b [%s] job done \ud83c\udf7b\ud83c\udf7b\ud83c\udf7b :::\", func.__name__)\n return res\n\n return _wrapper\n\n return wrapper\n", "path": "dash/_utils.py"}]}
| 2,277 | 261 |
gh_patches_debug_1883
|
rasdani/github-patches
|
git_diff
|
openstates__openstates-scrapers-2384
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
MN failing since at least 2018-06-15
MN has been failing since 2018-06-15
Based on automated runs it appears that MN has not run successfully in 2 days (2018-06-15).
```
/opt/openstates/venv-pupa/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
02:01:53 CRITICAL pupa: Session(s) 91st Legislature, 2019-2020 were reported by Minnesota.get_session_list() but were not found in Minnesota.legislative_sessions or Minnesota.ignored_scraped_sessions.
loaded Open States pupa settings...
mn (scrape, import)
bills: {}
committees: {}
people: {}
vote_events: {}
```
Visit http://bobsled.openstates.org for more info.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openstates/mn/__init__.py`
Content:
```
1 from pupa.scrape import Jurisdiction, Organization
2
3 from openstates.utils import url_xpath
4
5 from .bills import MNBillScraper
6 from .committees import MNCommitteeScraper
7 from .people import MNPersonScraper
8 from .vote_events import MNVoteScraper
9 # from .events import MNEventScraper
10
11 """
12 Minnesota legislative data can be found at the Office of the Revisor
13 of Statutes:
14 https://www.revisor.mn.gov/
15
16 Votes:
17 There are not detailed vote data for Senate votes, simply yes and no counts.
18 Bill pages have vote counts and links to House details, so it makes more
19 sense to get vote data from the bill pages.
20 """
21
22
23 class Minnesota(Jurisdiction):
24 division_id = "ocd-division/country:us/state:mn"
25 classification = "government"
26 name = "Minnesota"
27 url = "http://state.mn.us/"
28 scrapers = {
29 "bills": MNBillScraper,
30 "committees": MNCommitteeScraper,
31 "people": MNPersonScraper,
32 "vote_events": MNVoteScraper,
33 # "events": MNEventScraper,
34 }
35 legislative_sessions = [
36 {
37 '_scraped_name': '86th Legislature, 2009-2010',
38 'classification': 'primary',
39 'identifier': '2009-2010',
40 'name': '2009-2010 Regular Session'
41 },
42 {
43 '_scraped_name': '86th Legislature, 2010 1st Special Session',
44 'classification': 'special',
45 'identifier': '2010 1st Special Session',
46 'name': '2010, 1st Special Session'
47 },
48 {
49 '_scraped_name': '86th Legislature, 2010 2nd Special Session',
50 'classification': 'special',
51 'identifier': '2010 2nd Special Session',
52 'name': '2010, 2nd Special Session'
53 },
54 {
55 '_scraped_name': '87th Legislature, 2011-2012',
56 'classification': 'primary',
57 'identifier': '2011-2012',
58 'name': '2011-2012 Regular Session'
59 },
60 {
61 '_scraped_name': '87th Legislature, 2011 1st Special Session',
62 'classification': 'special',
63 'identifier': '2011s1',
64 'name': '2011, 1st Special Session'
65 },
66 {
67 '_scraped_name': '87th Legislature, 2012 1st Special Session',
68 'classification': 'special',
69 'identifier': '2012s1',
70 'name': '2012, 1st Special Session'
71 },
72 {
73 '_scraped_name': '88th Legislature, 2013-2014',
74 'classification': 'primary',
75 'identifier': '2013-2014',
76 'name': '2013-2014 Regular Session'
77 },
78 {
79 '_scraped_name': '88th Legislature, 2013 1st Special Session',
80 'classification': 'special',
81 'identifier': '2013s1',
82 'name': '2013, 1st Special Session'
83 },
84 {
85 '_scraped_name': '89th Legislature, 2015-2016',
86 'classification': 'primary',
87 'identifier': '2015-2016',
88 'name': '2015-2016 Regular Session'
89 },
90 {
91 '_scraped_name': '89th Legislature, 2015 1st Special Session',
92 'classification': 'special',
93 'identifier': '2015s1',
94 'name': '2015, 1st Special Session'
95 },
96 {
97 '_scraped_name': '90th Legislature, 2017 1st Special Session',
98 'classification': 'special',
99 'identifier': '2017s1',
100 'name': '2017, 1st Special Session'
101 },
102 {
103 '_scraped_name': '90th Legislature, 2017-2018',
104 'classification': 'primary',
105 'identifier': '2017-2018',
106 'name': '2017-2018 Regular Session',
107 'start_date': '2017-01-03',
108 'end_date': '2018-05-21'
109 },
110 ]
111 ignored_scraped_sessions = [
112 '85th Legislature, 2007-2008',
113 '85th Legislature, 2007 1st Special Session',
114 '84th Legislature, 2005-2006',
115 '84th Legislature, 2005 1st Special Session',
116 '83rd Legislature, 2003-2004',
117 '83rd Legislature, 2003 1st Special Session',
118 '82nd Legislature, 2001-2002',
119 '82nd Legislature, 2002 1st Special Session',
120 '82nd Legislature, 2001 1st Special Session',
121 '81st Legislature, 1999-2000',
122 '80th Legislature, 1997-1998',
123 '80th Legislature, 1998 1st Special Session',
124 '80th Legislature, 1997 3rd Special Session',
125 '80th Legislature, 1997 2nd Special Session',
126 '80th Legislature, 1997 1st Special Session',
127 '79th Legislature, 1995-1996',
128 '79th Legislature, 1995 1st Special Session',
129 '89th Legislature, 2015-2016',
130 ]
131
132 def get_organizations(self):
133 legis = Organization('Minnesota Legislature', classification='legislature')
134
135 upper = Organization('Minnesota Senate', classification='upper',
136 parent_id=legis._id)
137 lower = Organization('Minnesota House of Representatives',
138 classification='lower', parent_id=legis._id)
139
140 for n in range(1, 68):
141 upper.add_post(label=str(n), role='Senator',
142 division_id='ocd-division/country:us/state:mn/sldu:{}'.format(n))
143 lower.add_post(label=str(n) + 'A', role='Representative',
144 division_id='ocd-division/country:us/state:mn/sldl:{}a'.format(n))
145 lower.add_post(label=str(n) + 'B', role='Representative',
146 division_id='ocd-division/country:us/state:mn/sldl:{}b'.format(n))
147
148 yield Organization('Governor of Minnesota', classification='executive')
149 yield legis
150 yield upper
151 yield lower
152
153 def get_session_list(self):
154 return url_xpath('https://www.revisor.mn.gov/bills/'
155 'status_search.php?body=House',
156 '//select[@name="session"]/option/text()', verify=False)
157
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/openstates/mn/__init__.py b/openstates/mn/__init__.py
--- a/openstates/mn/__init__.py
+++ b/openstates/mn/__init__.py
@@ -127,6 +127,7 @@
'79th Legislature, 1995-1996',
'79th Legislature, 1995 1st Special Session',
'89th Legislature, 2015-2016',
+ '91st Legislature, 2019-2020',
]
def get_organizations(self):
|
{"golden_diff": "diff --git a/openstates/mn/__init__.py b/openstates/mn/__init__.py\n--- a/openstates/mn/__init__.py\n+++ b/openstates/mn/__init__.py\n@@ -127,6 +127,7 @@\n '79th Legislature, 1995-1996',\n '79th Legislature, 1995 1st Special Session',\n '89th Legislature, 2015-2016',\n+ '91st Legislature, 2019-2020',\n ]\n \n def get_organizations(self):\n", "issue": "MN failing since at least 2018-06-15\nMN has been failing since 2018-06-15\n\nBased on automated runs it appears that MN has not run successfully in 2 days (2018-06-15).\n\n\n```\n /opt/openstates/venv-pupa/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use \"pip install psycopg2-binary\" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.\n \"\"\")\n02:01:53 CRITICAL pupa: Session(s) 91st Legislature, 2019-2020 were reported by Minnesota.get_session_list() but were not found in Minnesota.legislative_sessions or Minnesota.ignored_scraped_sessions.\nloaded Open States pupa settings...\nmn (scrape, import)\n bills: {}\n committees: {}\n people: {}\n vote_events: {}\n```\n\nVisit http://bobsled.openstates.org for more info.\n\n", "before_files": [{"content": "from pupa.scrape import Jurisdiction, Organization\n\nfrom openstates.utils import url_xpath\n\nfrom .bills import MNBillScraper\nfrom .committees import MNCommitteeScraper\nfrom .people import MNPersonScraper\nfrom .vote_events import MNVoteScraper\n# from .events import MNEventScraper\n\n\"\"\"\nMinnesota legislative data can be found at the Office of the Revisor\nof Statutes:\nhttps://www.revisor.mn.gov/\n\nVotes:\nThere are not detailed vote data for Senate votes, simply yes and no counts.\nBill pages have vote counts and links to House details, so it makes more\nsense to get vote data from the bill pages.\n\"\"\"\n\n\nclass Minnesota(Jurisdiction):\n division_id = \"ocd-division/country:us/state:mn\"\n classification = \"government\"\n name = \"Minnesota\"\n url = \"http://state.mn.us/\"\n scrapers = {\n \"bills\": MNBillScraper,\n \"committees\": MNCommitteeScraper,\n \"people\": MNPersonScraper,\n \"vote_events\": MNVoteScraper,\n # \"events\": MNEventScraper,\n }\n legislative_sessions = [\n {\n '_scraped_name': '86th Legislature, 2009-2010',\n 'classification': 'primary',\n 'identifier': '2009-2010',\n 'name': '2009-2010 Regular Session'\n },\n {\n '_scraped_name': '86th Legislature, 2010 1st Special Session',\n 'classification': 'special',\n 'identifier': '2010 1st Special Session',\n 'name': '2010, 1st Special Session'\n },\n {\n '_scraped_name': '86th Legislature, 2010 2nd Special Session',\n 'classification': 'special',\n 'identifier': '2010 2nd Special Session',\n 'name': '2010, 2nd Special Session'\n },\n {\n '_scraped_name': '87th Legislature, 2011-2012',\n 'classification': 'primary',\n 'identifier': '2011-2012',\n 'name': '2011-2012 Regular Session'\n },\n {\n '_scraped_name': '87th Legislature, 2011 1st Special Session',\n 'classification': 'special',\n 'identifier': '2011s1',\n 'name': '2011, 1st Special Session'\n },\n {\n '_scraped_name': '87th Legislature, 2012 1st Special Session',\n 'classification': 'special',\n 'identifier': '2012s1',\n 'name': '2012, 1st Special Session'\n },\n {\n '_scraped_name': '88th Legislature, 2013-2014',\n 'classification': 'primary',\n 'identifier': '2013-2014',\n 'name': '2013-2014 Regular Session'\n },\n {\n '_scraped_name': '88th Legislature, 2013 
1st Special Session',\n 'classification': 'special',\n 'identifier': '2013s1',\n 'name': '2013, 1st Special Session'\n },\n {\n '_scraped_name': '89th Legislature, 2015-2016',\n 'classification': 'primary',\n 'identifier': '2015-2016',\n 'name': '2015-2016 Regular Session'\n },\n {\n '_scraped_name': '89th Legislature, 2015 1st Special Session',\n 'classification': 'special',\n 'identifier': '2015s1',\n 'name': '2015, 1st Special Session'\n },\n {\n '_scraped_name': '90th Legislature, 2017 1st Special Session',\n 'classification': 'special',\n 'identifier': '2017s1',\n 'name': '2017, 1st Special Session'\n },\n {\n '_scraped_name': '90th Legislature, 2017-2018',\n 'classification': 'primary',\n 'identifier': '2017-2018',\n 'name': '2017-2018 Regular Session',\n 'start_date': '2017-01-03',\n 'end_date': '2018-05-21'\n },\n ]\n ignored_scraped_sessions = [\n '85th Legislature, 2007-2008',\n '85th Legislature, 2007 1st Special Session',\n '84th Legislature, 2005-2006',\n '84th Legislature, 2005 1st Special Session',\n '83rd Legislature, 2003-2004',\n '83rd Legislature, 2003 1st Special Session',\n '82nd Legislature, 2001-2002',\n '82nd Legislature, 2002 1st Special Session',\n '82nd Legislature, 2001 1st Special Session',\n '81st Legislature, 1999-2000',\n '80th Legislature, 1997-1998',\n '80th Legislature, 1998 1st Special Session',\n '80th Legislature, 1997 3rd Special Session',\n '80th Legislature, 1997 2nd Special Session',\n '80th Legislature, 1997 1st Special Session',\n '79th Legislature, 1995-1996',\n '79th Legislature, 1995 1st Special Session',\n '89th Legislature, 2015-2016',\n ]\n\n def get_organizations(self):\n legis = Organization('Minnesota Legislature', classification='legislature')\n\n upper = Organization('Minnesota Senate', classification='upper',\n parent_id=legis._id)\n lower = Organization('Minnesota House of Representatives',\n classification='lower', parent_id=legis._id)\n\n for n in range(1, 68):\n upper.add_post(label=str(n), role='Senator',\n division_id='ocd-division/country:us/state:mn/sldu:{}'.format(n))\n lower.add_post(label=str(n) + 'A', role='Representative',\n division_id='ocd-division/country:us/state:mn/sldl:{}a'.format(n))\n lower.add_post(label=str(n) + 'B', role='Representative',\n division_id='ocd-division/country:us/state:mn/sldl:{}b'.format(n))\n\n yield Organization('Governor of Minnesota', classification='executive')\n yield legis\n yield upper\n yield lower\n\n def get_session_list(self):\n return url_xpath('https://www.revisor.mn.gov/bills/'\n 'status_search.php?body=House',\n '//select[@name=\"session\"]/option/text()', verify=False)\n", "path": "openstates/mn/__init__.py"}], "after_files": [{"content": "from pupa.scrape import Jurisdiction, Organization\n\nfrom openstates.utils import url_xpath\n\nfrom .bills import MNBillScraper\nfrom .committees import MNCommitteeScraper\nfrom .people import MNPersonScraper\nfrom .vote_events import MNVoteScraper\n# from .events import MNEventScraper\n\n\"\"\"\nMinnesota legislative data can be found at the Office of the Revisor\nof Statutes:\nhttps://www.revisor.mn.gov/\n\nVotes:\nThere are not detailed vote data for Senate votes, simply yes and no counts.\nBill pages have vote counts and links to House details, so it makes more\nsense to get vote data from the bill pages.\n\"\"\"\n\n\nclass Minnesota(Jurisdiction):\n division_id = \"ocd-division/country:us/state:mn\"\n classification = \"government\"\n name = \"Minnesota\"\n url = \"http://state.mn.us/\"\n scrapers = {\n \"bills\": MNBillScraper,\n 
\"committees\": MNCommitteeScraper,\n \"people\": MNPersonScraper,\n \"vote_events\": MNVoteScraper,\n # \"events\": MNEventScraper,\n }\n legislative_sessions = [\n {\n '_scraped_name': '86th Legislature, 2009-2010',\n 'classification': 'primary',\n 'identifier': '2009-2010',\n 'name': '2009-2010 Regular Session'\n },\n {\n '_scraped_name': '86th Legislature, 2010 1st Special Session',\n 'classification': 'special',\n 'identifier': '2010 1st Special Session',\n 'name': '2010, 1st Special Session'\n },\n {\n '_scraped_name': '86th Legislature, 2010 2nd Special Session',\n 'classification': 'special',\n 'identifier': '2010 2nd Special Session',\n 'name': '2010, 2nd Special Session'\n },\n {\n '_scraped_name': '87th Legislature, 2011-2012',\n 'classification': 'primary',\n 'identifier': '2011-2012',\n 'name': '2011-2012 Regular Session'\n },\n {\n '_scraped_name': '87th Legislature, 2011 1st Special Session',\n 'classification': 'special',\n 'identifier': '2011s1',\n 'name': '2011, 1st Special Session'\n },\n {\n '_scraped_name': '87th Legislature, 2012 1st Special Session',\n 'classification': 'special',\n 'identifier': '2012s1',\n 'name': '2012, 1st Special Session'\n },\n {\n '_scraped_name': '88th Legislature, 2013-2014',\n 'classification': 'primary',\n 'identifier': '2013-2014',\n 'name': '2013-2014 Regular Session'\n },\n {\n '_scraped_name': '88th Legislature, 2013 1st Special Session',\n 'classification': 'special',\n 'identifier': '2013s1',\n 'name': '2013, 1st Special Session'\n },\n {\n '_scraped_name': '89th Legislature, 2015-2016',\n 'classification': 'primary',\n 'identifier': '2015-2016',\n 'name': '2015-2016 Regular Session'\n },\n {\n '_scraped_name': '89th Legislature, 2015 1st Special Session',\n 'classification': 'special',\n 'identifier': '2015s1',\n 'name': '2015, 1st Special Session'\n },\n {\n '_scraped_name': '90th Legislature, 2017 1st Special Session',\n 'classification': 'special',\n 'identifier': '2017s1',\n 'name': '2017, 1st Special Session'\n },\n {\n '_scraped_name': '90th Legislature, 2017-2018',\n 'classification': 'primary',\n 'identifier': '2017-2018',\n 'name': '2017-2018 Regular Session',\n 'start_date': '2017-01-03',\n 'end_date': '2018-05-21'\n },\n ]\n ignored_scraped_sessions = [\n '85th Legislature, 2007-2008',\n '85th Legislature, 2007 1st Special Session',\n '84th Legislature, 2005-2006',\n '84th Legislature, 2005 1st Special Session',\n '83rd Legislature, 2003-2004',\n '83rd Legislature, 2003 1st Special Session',\n '82nd Legislature, 2001-2002',\n '82nd Legislature, 2002 1st Special Session',\n '82nd Legislature, 2001 1st Special Session',\n '81st Legislature, 1999-2000',\n '80th Legislature, 1997-1998',\n '80th Legislature, 1998 1st Special Session',\n '80th Legislature, 1997 3rd Special Session',\n '80th Legislature, 1997 2nd Special Session',\n '80th Legislature, 1997 1st Special Session',\n '79th Legislature, 1995-1996',\n '79th Legislature, 1995 1st Special Session',\n '89th Legislature, 2015-2016',\n '91st Legislature, 2019-2020',\n ]\n\n def get_organizations(self):\n legis = Organization('Minnesota Legislature', classification='legislature')\n\n upper = Organization('Minnesota Senate', classification='upper',\n parent_id=legis._id)\n lower = Organization('Minnesota House of Representatives',\n classification='lower', parent_id=legis._id)\n\n for n in range(1, 68):\n upper.add_post(label=str(n), role='Senator',\n division_id='ocd-division/country:us/state:mn/sldu:{}'.format(n))\n lower.add_post(label=str(n) + 'A', 
role='Representative',\n division_id='ocd-division/country:us/state:mn/sldl:{}a'.format(n))\n lower.add_post(label=str(n) + 'B', role='Representative',\n division_id='ocd-division/country:us/state:mn/sldl:{}b'.format(n))\n\n yield Organization('Governor of Minnesota', classification='executive')\n yield legis\n yield upper\n yield lower\n\n def get_session_list(self):\n return url_xpath('https://www.revisor.mn.gov/bills/'\n 'status_search.php?body=House',\n '//select[@name=\"session\"]/option/text()', verify=False)\n", "path": "openstates/mn/__init__.py"}]}
| 2,567 | 141 |
gh_patches_debug_32455
|
rasdani/github-patches
|
git_diff
|
PlasmaPy__PlasmaPy-875
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Thumbnails missing/broken on plasmapy documentation webpage home
As the title suggests, and can be seen on the screenshot, two of the three thumbnails
are missing from the PlasmaPy documentation front page at https://docs.plasmapy.org/en/latest/
<img width="598" alt="Untitled" src="https://user-images.githubusercontent.com/7075058/88436433-2b75b680-cdb9-11ea-92a0-83ebd8d67880.png">
Thumbnails missing/broken on plasmapy documentation webpage home
As the title suggests, and can be seen on the screenshot, two of the three thumbnails
are missing from the PlasmaPy documentation front page at https://docs.plasmapy.org/en/latest/
<img width="598" alt="Untitled" src="https://user-images.githubusercontent.com/7075058/88436433-2b75b680-cdb9-11ea-92a0-83ebd8d67880.png">
Thumbnails missing/broken on plasmapy documentation webpage home
As the title suggests, and can be seen on the screenshot, two of the three thumbnails
are missing from the PlasmaPy documentation front page at https://docs.plasmapy.org/en/latest/
<img width="598" alt="Untitled" src="https://user-images.githubusercontent.com/7075058/88436433-2b75b680-cdb9-11ea-92a0-83ebd8d67880.png">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 #!/usr/bin/env python3.6
2 # -*- coding: utf-8 -*-
3 #
4 # PlasmaPy documentation build configuration file, created by
5 # sphinx-quickstart on Wed May 31 18:16:46 2017.
6 #
7 # This file is execfile()d with the current directory set to its
8 # containing dir.
9 #
10 # Note that not all possible configuration values are present in this
11 # autogenerated file.
12 #
13 # All configuration values have a default; values that are commented out
14 # serve to show the default.
15
16 # If extensions (or modules to document with autodoc) are in another directory,
17 # add these directories to sys.path here. If the directory is relative to the
18 # documentation root, use os.path.abspath to make it absolute, like shown here.
19 #
20
21 import os
22 import sys
23
24 from pkg_resources import parse_version
25 from sphinx.application import Sphinx
26
27 sys.path.insert(0, os.path.abspath(".."))
28
29 from plasmapy import __version__ as release
30
31
32 # -- General configuration ------------------------------------------------
33
34 # If your documentation needs a minimal Sphinx version, state it here.
35 #
36 # needs_sphinx = '1.0'
37
38 # Add any Sphinx extension module names here, as strings. They can be
39 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
40 # ones.
41 extensions = [
42 "sphinx.ext.autodoc",
43 "sphinx.ext.intersphinx",
44 "sphinx.ext.graphviz",
45 "sphinx.ext.mathjax",
46 "sphinx.ext.napoleon",
47 "sphinx_automodapi.automodapi",
48 "sphinx_automodapi.smart_resolver",
49 "nbsphinx",
50 "sphinx_copybutton",
51 "sphinx_gallery.load_style",
52 ]
53
54 intersphinx_mapping = {
55 "python": ("https://docs.python.org/3", None),
56 "numpy": ("https://numpy.org/doc/stable/", None),
57 "scipy": ("https://docs.scipy.org/doc/scipy/reference/", None),
58 "pandas": ("http://pandas.pydata.org/pandas-docs/stable/", None),
59 "astropy": ("http://docs.astropy.org/en/stable/", None),
60 }
61 # Add any paths that contain templates here, relative to this directory.
62 templates_path = ["_templates"]
63
64 # The suffix(es) of source filenames.
65 # You can specify multiple suffix as a list of string:
66 #
67 # source_suffix = ['.rst', '.md']
68 source_suffix = ".rst"
69
70 # The master toctree document.
71 master_doc = "index"
72
73 # General information about the project.
74 project = "PlasmaPy"
75 copyright = "2015-2020, PlasmaPy Community"
76 author = "PlasmaPy Community"
77
78
79 # The version info for the project you're documenting, acts as replacement for
80 # |version| and |release|, also used in various other places throughout the
81 # built documents.
82 #
83 # The full version, including alpha/beta/rc tags.
84 # Note: If plasmapy.__version__ can not be defined then it is set to 'unknown'.
85 # However, release needs to be a semantic style version number, so set
86 # the 'unknown' case to ''.
87 release = "" if release == "unknown" else release
88 if release == "unknown":
89 release = version = revision = ""
90 else:
91 pv = parse_version(release)
92 release = pv.public
93 version = ".".join(release.split(".")[:2]) # short X.Y version
94 if pv.local is not None:
95 revision = pv.local[1:] # revision number w/o the leading g
96 else:
97 revision = ""
98
99
100 # The language for content autogenerated by Sphinx. Refer to documentation
101 # for a list of supported languages.
102 #
103 # This is also used if you do content translation via gettext catalogs.
104 # Usually you set "language" from the command line for these cases.
105 language = None
106
107 # List of patterns, relative to source directory, that match files and
108 # directories to ignore when looking for source files.
109 # This patterns also effect to html_static_path and html_extra_path
110 exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
111
112 # The name of the Pygments (syntax highlighting) style to use.
113 pygments_style = "sphinx"
114
115 # If true, `todo` and `todoList` produce output, else they produce nothing.
116 todo_include_todos = False
117
118 default_role = "obj"
119
120 # -- Options for HTML output ----------------------------------------------
121
122 # The theme to use for HTML and HTML Help pages. See the documentation for
123 # a list of builtin themes.
124 #
125 # html_theme = 'alabaster'
126 # html_theme = 'traditional'
127 # html_theme = 'agogo'
128 html_theme = "sphinx_rtd_theme"
129
130 # Theme options are theme-specific and customize the look and feel of a theme
131 # further. For a list of options available for each theme, see the
132 # documentation.
133 #
134 # html_theme_options = {}
135
136 # Add any paths that contain custom static files (such as style sheets) here,
137 # relative to this directory. They are copied after the builtin static files,
138 # so a file named "default.css" will overwrite the builtin "default.css".
139 # html_static_path = ['_static']
140
141
142 # -- Options for HTMLHelp output ------------------------------------------
143
144 # Output file base name for HTML help builder.
145 htmlhelp_basename = "PlasmaPydoc"
146
147
148 # -- Options for LaTeX output ---------------------------------------------
149
150 latex_elements = {
151 # The paper size ('letterpaper' or 'a4paper').
152 # 'papersize': 'letterpaper',
153 #
154 # The font size ('10pt', '11pt' or '12pt').
155 # 'pointsize': '10pt',
156 #
157 # Additional stuff for the LaTeX preamble.
158 # 'preamble': '',
159 #
160 # Latex figure (float) alignment
161 # 'figure_align': 'htbp',
162 }
163
164 # Grouping the document tree into LaTeX files. List of tuples
165 # (source start file, target name, title,
166 # author, documentclass [howto, manual, or own class]).
167 latex_documents = [
168 (
169 master_doc,
170 "PlasmaPy.tex",
171 "PlasmaPy Documentation",
172 "PlasmaPy Community",
173 "manual",
174 )
175 ]
176
177
178 # -- Options for manual page output ---------------------------------------
179
180 # One entry per manual page. List of tuples
181 # (source start file, name, description, authors, manual section).
182 man_pages = [(master_doc, "plasmapy", "PlasmaPy Documentation", [author], 1)]
183
184
185 # -- Options for Texinfo output -------------------------------------------
186
187 # Grouping the document tree into Texinfo files. List of tuples
188 # (source start file, target name, title, author,
189 # dir menu entry, description, category)
190 texinfo_documents = [
191 (
192 master_doc,
193 "PlasmaPy",
194 "PlasmaPy Documentation",
195 author,
196 "PlasmaPy",
197 "Python package for plasma physics",
198 "Miscellaneous",
199 )
200 ]
201
202 html_favicon = "./_static/icon.ico"
203
204
205 # -- NBSphinx options
206
207 nbsphinx_thumbnails = {"notebooks/*": "_images/graphic-circular.png"}
208
209 # adapted from https://github.com/spatialaudio/nbsphinx/blob/58b8034dd9d7349c1b4ac3e7a7d6baa87ab2a6a9/doc/conf.py
210
211 # This is processed by Jinja2 and inserted before each notebook
212 nbsphinx_prolog = r"""
213 {% set docname = 'docs/' + env.doc2path(env.docname, base=None) %}
214 {% set nb_base = 'tree' if env.config.revision else 'blob' %}
215 {% set nb_where = env.config.revision if env.config.revision else 'master' %}
216
217 .. raw:: html
218
219 <div class="admonition note">
220 <p style="margin-bottom:0px">
221 This page was generated by
222 <a href="https://nbsphinx.readthedocs.io/">nbsphinx</a> from
223 <a class="reference external" href="https://github.com/PlasmaPy/PlasmaPy/{{ nb_base|e }}/{{ nb_where|e }}/{{ docname|e }}">{{ docname|e }}</a>.
224 <br>
225 Interactive online version:
226 <a href="https://mybinder.org/v2/gh/PlasmaPy/PlasmaPy/{{ nb_where|e }}/?filepath={{ docname|e }}"><img alt="Binder badge" src="https://mybinder.org/badge_logo.svg" style="vertical-align:text-bottom"></a>.
227 </p>
228 </div>
229
230 .. raw:: latex
231
232 \nbsphinxstartnotebook{\scriptsize\noindent\strut
233 \textcolor{gray}{The following section was generated from
234 \sphinxcode{\sphinxupquote{\strut {{ docname | escape_latex }}}} \dotfill}}
235 """
236
237
238 def setup(app: Sphinx) -> None:
239 app.add_config_value("revision", "", True)
240
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -21,6 +21,7 @@
import os
import sys
+from datetime import datetime
from pkg_resources import parse_version
from sphinx.application import Sphinx
@@ -72,8 +73,8 @@
# General information about the project.
project = "PlasmaPy"
-copyright = "2015-2020, PlasmaPy Community"
author = "PlasmaPy Community"
+copyright = f"2015-{datetime.utcnow().year}, {author}"
# The version info for the project you're documenting, acts as replacement for
@@ -131,13 +132,24 @@
# further. For a list of options available for each theme, see the
# documentation.
#
-# html_theme_options = {}
+html_logo = "./_static/with-text-light-190px.png"
+html_theme_options = {
+ "logo_only": True,
+ #
+ # TOC options
+ # https://sphinx-rtd-theme.readthedocs.io/en/stable/configuring.html#theme-options
+ "includehidden": False,
+}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
-# html_static_path = ['_static']
+html_static_path = ["_static"]
+# A list of prefixes that are ignored for sorting the Python module
+# index (e.g., if this is set to ['foo.'], then foo.bar is shown under
+# B, not F).
+modindex_common_prefix = ["plasmapy."]
# -- Options for HTMLHelp output ------------------------------------------
@@ -237,3 +249,4 @@
def setup(app: Sphinx) -> None:
app.add_config_value("revision", "", True)
+ app.add_stylesheet("rtd_theme_overrides.css")
|
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -21,6 +21,7 @@\n import os\n import sys\n \n+from datetime import datetime\n from pkg_resources import parse_version\n from sphinx.application import Sphinx\n \n@@ -72,8 +73,8 @@\n \n # General information about the project.\n project = \"PlasmaPy\"\n-copyright = \"2015-2020, PlasmaPy Community\"\n author = \"PlasmaPy Community\"\n+copyright = f\"2015-{datetime.utcnow().year}, {author}\"\n \n \n # The version info for the project you're documenting, acts as replacement for\n@@ -131,13 +132,24 @@\n # further. For a list of options available for each theme, see the\n # documentation.\n #\n-# html_theme_options = {}\n+html_logo = \"./_static/with-text-light-190px.png\"\n+html_theme_options = {\n+ \"logo_only\": True,\n+ #\n+ # TOC options\n+ # https://sphinx-rtd-theme.readthedocs.io/en/stable/configuring.html#theme-options\n+ \"includehidden\": False,\n+}\n \n # Add any paths that contain custom static files (such as style sheets) here,\n # relative to this directory. They are copied after the builtin static files,\n # so a file named \"default.css\" will overwrite the builtin \"default.css\".\n-# html_static_path = ['_static']\n+html_static_path = [\"_static\"]\n \n+# A list of prefixes that are ignored for sorting the Python module\n+# index (e.g., if this is set to ['foo.'], then foo.bar is shown under\n+# B, not F).\n+modindex_common_prefix = [\"plasmapy.\"]\n \n # -- Options for HTMLHelp output ------------------------------------------\n \n@@ -237,3 +249,4 @@\n \n def setup(app: Sphinx) -> None:\n app.add_config_value(\"revision\", \"\", True)\n+ app.add_stylesheet(\"rtd_theme_overrides.css\")\n", "issue": "Thumbnails missing/broken on plasmapy documentation webpage home\nAs the title suggests, and can be seen on the screenshot, two of the three thumbnails\r\nare missing from the PlasmaPy documentation front page at https://docs.plasmapy.org/en/latest/\r\n<img width=\"598\" alt=\"Untitled\" src=\"https://user-images.githubusercontent.com/7075058/88436433-2b75b680-cdb9-11ea-92a0-83ebd8d67880.png\">\r\n\nThumbnails missing/broken on plasmapy documentation webpage home\nAs the title suggests, and can be seen on the screenshot, two of the three thumbnails\r\nare missing from the PlasmaPy documentation front page at https://docs.plasmapy.org/en/latest/\r\n<img width=\"598\" alt=\"Untitled\" src=\"https://user-images.githubusercontent.com/7075058/88436433-2b75b680-cdb9-11ea-92a0-83ebd8d67880.png\">\r\n\nThumbnails missing/broken on plasmapy documentation webpage home\nAs the title suggests, and can be seen on the screenshot, two of the three thumbnails\r\nare missing from the PlasmaPy documentation front page at https://docs.plasmapy.org/en/latest/\r\n<img width=\"598\" alt=\"Untitled\" src=\"https://user-images.githubusercontent.com/7075058/88436433-2b75b680-cdb9-11ea-92a0-83ebd8d67880.png\">\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3.6\n# -*- coding: utf-8 -*-\n#\n# PlasmaPy documentation build configuration file, created by\n# sphinx-quickstart on Wed May 31 18:16:46 2017.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to 
sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n\nimport os\nimport sys\n\nfrom pkg_resources import parse_version\nfrom sphinx.application import Sphinx\n\nsys.path.insert(0, os.path.abspath(\"..\"))\n\nfrom plasmapy import __version__ as release\n\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.graphviz\",\n \"sphinx.ext.mathjax\",\n \"sphinx.ext.napoleon\",\n \"sphinx_automodapi.automodapi\",\n \"sphinx_automodapi.smart_resolver\",\n \"nbsphinx\",\n \"sphinx_copybutton\",\n \"sphinx_gallery.load_style\",\n]\n\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/3\", None),\n \"numpy\": (\"https://numpy.org/doc/stable/\", None),\n \"scipy\": (\"https://docs.scipy.org/doc/scipy/reference/\", None),\n \"pandas\": (\"http://pandas.pydata.org/pandas-docs/stable/\", None),\n \"astropy\": (\"http://docs.astropy.org/en/stable/\", None),\n}\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = \".rst\"\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# General information about the project.\nproject = \"PlasmaPy\"\ncopyright = \"2015-2020, PlasmaPy Community\"\nauthor = \"PlasmaPy Community\"\n\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The full version, including alpha/beta/rc tags.\n# Note: If plasmapy.__version__ can not be defined then it is set to 'unknown'.\n# However, release needs to be a semantic style version number, so set\n# the 'unknown' case to ''.\nrelease = \"\" if release == \"unknown\" else release\nif release == \"unknown\":\n release = version = revision = \"\"\nelse:\n pv = parse_version(release)\n release = pv.public\n version = \".\".join(release.split(\".\")[:2]) # short X.Y version\n if pv.local is not None:\n revision = pv.local[1:] # revision number w/o the leading g\n else:\n revision = \"\"\n\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = [\"_build\", \"Thumbs.db\", \".DS_Store\"]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"sphinx\"\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\ndefault_role = \"obj\"\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. 
See the documentation for\n# a list of builtin themes.\n#\n# html_theme = 'alabaster'\n# html_theme = 'traditional'\n# html_theme = 'agogo'\nhtml_theme = \"sphinx_rtd_theme\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\n# html_static_path = ['_static']\n\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"PlasmaPydoc\"\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n # 'papersize': 'letterpaper',\n #\n # The font size ('10pt', '11pt' or '12pt').\n # 'pointsize': '10pt',\n #\n # Additional stuff for the LaTeX preamble.\n # 'preamble': '',\n #\n # Latex figure (float) alignment\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (\n master_doc,\n \"PlasmaPy.tex\",\n \"PlasmaPy Documentation\",\n \"PlasmaPy Community\",\n \"manual\",\n )\n]\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, \"plasmapy\", \"PlasmaPy Documentation\", [author], 1)]\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"PlasmaPy\",\n \"PlasmaPy Documentation\",\n author,\n \"PlasmaPy\",\n \"Python package for plasma physics\",\n \"Miscellaneous\",\n )\n]\n\nhtml_favicon = \"./_static/icon.ico\"\n\n\n# -- NBSphinx options\n\nnbsphinx_thumbnails = {\"notebooks/*\": \"_images/graphic-circular.png\"}\n\n# adapted from https://github.com/spatialaudio/nbsphinx/blob/58b8034dd9d7349c1b4ac3e7a7d6baa87ab2a6a9/doc/conf.py\n\n# This is processed by Jinja2 and inserted before each notebook\nnbsphinx_prolog = r\"\"\"\n{% set docname = 'docs/' + env.doc2path(env.docname, base=None) %}\n{% set nb_base = 'tree' if env.config.revision else 'blob' %}\n{% set nb_where = env.config.revision if env.config.revision else 'master' %}\n\n.. raw:: html\n\n <div class=\"admonition note\">\n <p style=\"margin-bottom:0px\">\n This page was generated by\n <a href=\"https://nbsphinx.readthedocs.io/\">nbsphinx</a> from\n <a class=\"reference external\" href=\"https://github.com/PlasmaPy/PlasmaPy/{{ nb_base|e }}/{{ nb_where|e }}/{{ docname|e }}\">{{ docname|e }}</a>.\n <br>\n Interactive online version:\n <a href=\"https://mybinder.org/v2/gh/PlasmaPy/PlasmaPy/{{ nb_where|e }}/?filepath={{ docname|e }}\"><img alt=\"Binder badge\" src=\"https://mybinder.org/badge_logo.svg\" style=\"vertical-align:text-bottom\"></a>.\n </p>\n </div>\n\n.. 
raw:: latex\n\n \\nbsphinxstartnotebook{\\scriptsize\\noindent\\strut\n \\textcolor{gray}{The following section was generated from\n \\sphinxcode{\\sphinxupquote{\\strut {{ docname | escape_latex }}}} \\dotfill}}\n\"\"\"\n\n\ndef setup(app: Sphinx) -> None:\n app.add_config_value(\"revision\", \"\", True)\n", "path": "docs/conf.py"}], "after_files": [{"content": "#!/usr/bin/env python3.6\n# -*- coding: utf-8 -*-\n#\n# PlasmaPy documentation build configuration file, created by\n# sphinx-quickstart on Wed May 31 18:16:46 2017.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n\nimport os\nimport sys\n\nfrom datetime import datetime\nfrom pkg_resources import parse_version\nfrom sphinx.application import Sphinx\n\nsys.path.insert(0, os.path.abspath(\"..\"))\n\nfrom plasmapy import __version__ as release\n\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.graphviz\",\n \"sphinx.ext.mathjax\",\n \"sphinx.ext.napoleon\",\n \"sphinx_automodapi.automodapi\",\n \"sphinx_automodapi.smart_resolver\",\n \"nbsphinx\",\n \"sphinx_copybutton\",\n \"sphinx_gallery.load_style\",\n]\n\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/3\", None),\n \"numpy\": (\"https://numpy.org/doc/stable/\", None),\n \"scipy\": (\"https://docs.scipy.org/doc/scipy/reference/\", None),\n \"pandas\": (\"http://pandas.pydata.org/pandas-docs/stable/\", None),\n \"astropy\": (\"http://docs.astropy.org/en/stable/\", None),\n}\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = \".rst\"\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# General information about the project.\nproject = \"PlasmaPy\"\nauthor = \"PlasmaPy Community\"\ncopyright = f\"2015-{datetime.utcnow().year}, {author}\"\n\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The full version, including alpha/beta/rc tags.\n# Note: If plasmapy.__version__ can not be defined then it is set to 'unknown'.\n# However, release needs to be a semantic style version number, so set\n# the 'unknown' case to ''.\nrelease = \"\" if release == \"unknown\" else release\nif release == \"unknown\":\n release = version = revision = \"\"\nelse:\n pv = parse_version(release)\n release = pv.public\n version = \".\".join(release.split(\".\")[:2]) # short X.Y version\n if pv.local is not None:\n revision = pv.local[1:] # revision number w/o the leading g\n else:\n revision = 
\"\"\n\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = [\"_build\", \"Thumbs.db\", \".DS_Store\"]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"sphinx\"\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\ndefault_role = \"obj\"\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\n# html_theme = 'alabaster'\n# html_theme = 'traditional'\n# html_theme = 'agogo'\nhtml_theme = \"sphinx_rtd_theme\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\nhtml_logo = \"./_static/with-text-light-190px.png\"\nhtml_theme_options = {\n \"logo_only\": True,\n #\n # TOC options\n # https://sphinx-rtd-theme.readthedocs.io/en/stable/configuring.html#theme-options\n \"includehidden\": False,\n}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n# A list of prefixes that are ignored for sorting the Python module\n# index (e.g., if this is set to ['foo.'], then foo.bar is shown under\n# B, not F).\nmodindex_common_prefix = [\"plasmapy.\"]\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"PlasmaPydoc\"\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n # 'papersize': 'letterpaper',\n #\n # The font size ('10pt', '11pt' or '12pt').\n # 'pointsize': '10pt',\n #\n # Additional stuff for the LaTeX preamble.\n # 'preamble': '',\n #\n # Latex figure (float) alignment\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (\n master_doc,\n \"PlasmaPy.tex\",\n \"PlasmaPy Documentation\",\n \"PlasmaPy Community\",\n \"manual\",\n )\n]\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, \"plasmapy\", \"PlasmaPy Documentation\", [author], 1)]\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"PlasmaPy\",\n \"PlasmaPy Documentation\",\n author,\n \"PlasmaPy\",\n \"Python package for plasma physics\",\n \"Miscellaneous\",\n )\n]\n\nhtml_favicon = \"./_static/icon.ico\"\n\n\n# -- NBSphinx options\n\nnbsphinx_thumbnails = {\"notebooks/*\": \"_images/graphic-circular.png\"}\n\n# adapted from https://github.com/spatialaudio/nbsphinx/blob/58b8034dd9d7349c1b4ac3e7a7d6baa87ab2a6a9/doc/conf.py\n\n# This is processed by Jinja2 and inserted before each notebook\nnbsphinx_prolog = r\"\"\"\n{% set docname = 'docs/' + env.doc2path(env.docname, base=None) %}\n{% set nb_base = 'tree' if env.config.revision else 'blob' %}\n{% set nb_where = env.config.revision if env.config.revision else 'master' %}\n\n.. raw:: html\n\n <div class=\"admonition note\">\n <p style=\"margin-bottom:0px\">\n This page was generated by\n <a href=\"https://nbsphinx.readthedocs.io/\">nbsphinx</a> from\n <a class=\"reference external\" href=\"https://github.com/PlasmaPy/PlasmaPy/{{ nb_base|e }}/{{ nb_where|e }}/{{ docname|e }}\">{{ docname|e }}</a>.\n <br>\n Interactive online version:\n <a href=\"https://mybinder.org/v2/gh/PlasmaPy/PlasmaPy/{{ nb_where|e }}/?filepath={{ docname|e }}\"><img alt=\"Binder badge\" src=\"https://mybinder.org/badge_logo.svg\" style=\"vertical-align:text-bottom\"></a>.\n </p>\n </div>\n\n.. raw:: latex\n\n \\nbsphinxstartnotebook{\\scriptsize\\noindent\\strut\n \\textcolor{gray}{The following section was generated from\n \\sphinxcode{\\sphinxupquote{\\strut {{ docname | escape_latex }}}} \\dotfill}}\n\"\"\"\n\n\ndef setup(app: Sphinx) -> None:\n app.add_config_value(\"revision\", \"\", True)\n app.add_stylesheet(\"rtd_theme_overrides.css\")\n", "path": "docs/conf.py"}]}
| 3,226 | 454 |
gh_patches_debug_4718
|
rasdani/github-patches
|
git_diff
|
opsdroid__opsdroid-379
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error if parsers is empty
# Description
If you leave the `parsers` section of the config as an empty dictionary an error is thrown and startup fails.
## Steps to Reproduce
Uncomment the parsers section of the config but with no actual parsers
```
parsers:
# nothing else
```
## Expected Functionality
The application should start with the default parsers only. The same as if `parsers:` is not in the config.
## Experienced Functionality
```
DEBUG opsdroid.core: Parsing input: hi
DEBUG opsdroid.core: Processing parsers...
ERROR aiohttp.server: Error handling request
Traceback (most recent call last):
File "/Users/jacob/.pyenv/versions/3.5.4/lib/python3.5/site-packages/aiohttp/web_protocol.py", line 416, in start
resp = yield from self._request_handler(request)
File "/Users/jacob/.pyenv/versions/3.5.4/lib/python3.5/site-packages/aiohttp/web.py", line 325, in _handle
resp = yield from handler(request)
File "/Users/jacob/.opsdroid/modules/opsdroid-modules/connector/websocket/__init__.py", line 77, in websocket_handler
await self.opsdroid.parse(message)
File "/Users/jacob/Projects/opsdroid/opsdroid/opsdroid/core.py", line 273, in parse
ranked_skills = await self.get_ranked_skills(message)
File "/Users/jacob/Projects/opsdroid/opsdroid/opsdroid/core.py", line 218, in get_ranked_skills
dialogflow = [p for p in parsers if p["name"] == "dialogflow"
TypeError: 'NoneType' object is not iterable
```
## Versions
- **Opsdroid version:** 0.10.0
- **Python version:** Python 3.5.4
- **OS/Docker version:** macOS 10.13
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opsdroid/core.py`
Content:
```
1 """Core components of OpsDroid."""
2
3 import copy
4 import logging
5 import signal
6 import sys
7 import weakref
8 import asyncio
9
10 from opsdroid.memory import Memory
11 from opsdroid.connector import Connector
12 from opsdroid.database import Database
13 from opsdroid.loader import Loader
14 from opsdroid.parsers.always import parse_always
15 from opsdroid.parsers.regex import parse_regex
16 from opsdroid.parsers.dialogflow import parse_dialogflow
17 from opsdroid.parsers.luisai import parse_luisai
18 from opsdroid.parsers.recastai import parse_recastai
19 from opsdroid.parsers.witai import parse_witai
20 from opsdroid.parsers.crontab import parse_crontab
21 from opsdroid.const import DEFAULT_CONFIG_PATH
22
23
24 _LOGGER = logging.getLogger(__name__)
25
26
27 class OpsDroid():
28 """Root object for opsdroid."""
29
30 # pylint: disable=too-many-instance-attributes
31 # All are reasonable in this case.
32
33 instances = []
34
35 def __init__(self):
36 """Start opsdroid."""
37 self.bot_name = 'opsdroid'
38 self.sys_status = 0
39 self.connectors = []
40 self.connector_tasks = []
41 self.eventloop = asyncio.get_event_loop()
42 for sig in (signal.SIGINT, signal.SIGTERM):
43 self.eventloop.add_signal_handler(sig, self.call_stop)
44 self.skills = []
45 self.memory = Memory()
46 self.loader = Loader(self)
47 self.config = {}
48 self.stats = {
49 "messages_parsed": 0,
50 "webhooks_called": 0,
51 "total_response_time": 0,
52 "total_responses": 0,
53 }
54 self.web_server = None
55 self.should_restart = False
56 self.stored_path = []
57
58 def __enter__(self):
59 """Add self to existing instances."""
60 self.stored_path = copy.copy(sys.path)
61 if not self.__class__.instances:
62 self.__class__.instances.append(weakref.proxy(self))
63 else:
64 self.critical("opsdroid has already been started", 1)
65 return self
66
67 def __exit__(self, exc_type, exc_value, traceback):
68 """Remove self from existing instances."""
69 sys.path = self.stored_path
70 self.__class__.instances = []
71 asyncio.set_event_loop(asyncio.new_event_loop())
72
73 @property
74 def default_connector(self):
75 """Return the default connector."""
76 default_connector = None
77 for connector in self.connectors:
78 if "default" in connector.config and connector.config["default"]:
79 default_connector = connector
80 break
81 if default_connector is None:
82 default_connector = self.connectors[0]
83 return default_connector
84
85 def exit(self):
86 """Exit application."""
87 _LOGGER.info("Exiting application with return code %s",
88 str(self.sys_status))
89 sys.exit(self.sys_status)
90
91 def critical(self, error, code):
92 """Exit due to unrecoverable error."""
93 self.sys_status = code
94 _LOGGER.critical(error)
95 self.exit()
96
97 def restart(self):
98 """Restart opsdroid."""
99 self.should_restart = True
100 self.stop()
101
102 def call_stop(self):
103 """Signal handler to call disconnect and stop."""
104 future = asyncio.ensure_future(self.disconnect())
105 future.add_done_callback(self.stop)
106 return future
107
108 async def disconnect(self):
109 """Disconnect all the connectors."""
110 for connector in self.connectors:
111 await connector.disconnect(self)
112
113 def stop(self, future=None):
114 """Stop the event loop."""
115 pending = asyncio.Task.all_tasks()
116 for task in pending:
117 task.cancel()
118 self.eventloop.stop()
119 print('') # Prints a character return for return to shell
120 _LOGGER.info("Keyboard interrupt, exiting.")
121
122 def load(self):
123 """Load configuration."""
124 self.config = self.loader.load_config_file([
125 "configuration.yaml",
126 DEFAULT_CONFIG_PATH,
127 "/etc/opsdroid/configuration.yaml"
128 ])
129
130 def start_loop(self):
131 """Start the event loop."""
132 connectors, databases, skills = \
133 self.loader.load_modules_from_config(self.config)
134 _LOGGER.debug("Loaded %i skills", len(skills))
135 if databases is not None:
136 self.start_databases(databases)
137 self.setup_skills(skills)
138 self.start_connector_tasks(connectors)
139 self.eventloop.create_task(parse_crontab(self))
140 self.web_server.start()
141 try:
142 pending = asyncio.Task.all_tasks()
143 self.eventloop.run_until_complete(asyncio.gather(*pending))
144 except RuntimeError as error:
145 if str(error) != 'Event loop is closed':
146 raise error
147 finally:
148 self.eventloop.close()
149
150 def setup_skills(self, skills):
151 """Call the setup function on the passed in skills."""
152 for skill in skills:
153 try:
154 skill["module"].setup(self)
155 except AttributeError:
156 pass
157
158 def start_connector_tasks(self, connectors):
159 """Start the connectors."""
160 for connector_module in connectors:
161 for _, cls in connector_module["module"].__dict__.items():
162 if isinstance(cls, type) and \
163 issubclass(cls, Connector) and\
164 cls is not Connector:
165 connector = cls(connector_module["config"])
166 self.connectors.append(connector)
167
168 if connectors:
169 for connector in self.connectors:
170 self.eventloop.run_until_complete(connector.connect(self))
171 for connector in self.connectors:
172 task = self.eventloop.create_task(connector.listen(self))
173 self.connector_tasks.append(task)
174 else:
175 self.critical("All connectors failed to load", 1)
176
177 def start_databases(self, databases):
178 """Start the databases."""
179 if not databases:
180 _LOGGER.debug(databases)
181 _LOGGER.warning("All databases failed to load")
182 for database_module in databases:
183 for name, cls in database_module["module"].__dict__.items():
184 if isinstance(cls, type) and \
185 issubclass(cls, Database) and \
186 cls is not Database:
187 _LOGGER.debug("Adding database: %s", name)
188 database = cls(database_module["config"])
189 self.memory.databases.append(database)
190 self.eventloop.run_until_complete(database.connect(self))
191
192 async def run_skill(self, skill, config, message):
193 """Execute a skill."""
194 # pylint: disable=broad-except
195 # We want to catch all exceptions coming from a skill module and not
196 # halt the application. If a skill throws an exception it just doesn't
197 # give a response to the user, so an error response should be given.
198 try:
199 await skill(self, config, message)
200 except Exception:
201 if message:
202 await message.respond(
203 "Whoops there has been an error")
204 await message.respond(
205 "Check the log for details")
206 _LOGGER.exception("Exception when running skill '%s' ",
207 config["name"])
208
209 async def get_ranked_skills(self, message):
210 """Take a message and return a ranked list of matching skills."""
211 skills = []
212 skills = skills + await parse_regex(self, message)
213
214 if "parsers" in self.config:
215 _LOGGER.debug("Processing parsers...")
216 parsers = self.config["parsers"]
217
218 dialogflow = [p for p in parsers if p["name"] == "dialogflow"
219 or p["name"] == "apiai"]
220
221 # Show deprecation message but parse message
222 # Once it stops working remove this bit
223 apiai = [p for p in parsers if p["name"] == "apiai"]
224 if apiai:
225 _LOGGER.warning("Api.ai is now called Dialogflow. This "
226 "parser will stop working in the future "
227 "please swap: 'name: apiai' for "
228 "'name: dialogflow' in configuration.yaml")
229
230 if len(dialogflow) == 1 and \
231 ("enabled" not in dialogflow[0] or
232 dialogflow[0]["enabled"] is not False):
233 _LOGGER.debug("Checking dialogflow...")
234 skills = skills + \
235 await parse_dialogflow(self, message, dialogflow[0])
236
237 luisai = [p for p in parsers if p["name"] == "luisai"]
238 if len(luisai) == 1 and \
239 ("enabled" not in luisai[0] or
240 luisai[0]["enabled"] is not False):
241 _LOGGER.debug("Checking luisai...")
242 skills = skills + \
243 await parse_luisai(self, message, luisai[0])
244
245 recastai = [p for p in parsers if p["name"] == "recastai"]
246 if len(recastai) == 1 and \
247 ("enabled" not in recastai[0] or
248 recastai[0]["enabled"] is not False):
249 _LOGGER.debug("Checking Recast.AI...")
250 skills = skills + \
251 await parse_recastai(self, message, recastai[0])
252
253 witai = [p for p in parsers if p["name"] == "witai"]
254 if len(witai) == 1 and \
255 ("enabled" not in witai[0] or
256 witai[0]["enabled"] is not False):
257 _LOGGER.debug("Checking wit.ai...")
258 skills = skills + \
259 await parse_witai(self, message, witai[0])
260
261 return sorted(skills, key=lambda k: k["score"], reverse=True)
262
263 async def parse(self, message):
264 """Parse a string against all skills."""
265 self.stats["messages_parsed"] = self.stats["messages_parsed"] + 1
266 tasks = []
267 if message.text.strip() != "":
268 _LOGGER.debug("Parsing input: %s", message.text)
269
270 tasks.append(
271 self.eventloop.create_task(parse_always(self, message)))
272
273 ranked_skills = await self.get_ranked_skills(message)
274 if ranked_skills:
275 tasks.append(
276 self.eventloop.create_task(
277 self.run_skill(ranked_skills[0]["skill"],
278 ranked_skills[0]["config"],
279 message)))
280
281 return tasks
282
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/opsdroid/core.py b/opsdroid/core.py
--- a/opsdroid/core.py
+++ b/opsdroid/core.py
@@ -213,7 +213,7 @@
if "parsers" in self.config:
_LOGGER.debug("Processing parsers...")
- parsers = self.config["parsers"]
+ parsers = self.config["parsers"] or []
dialogflow = [p for p in parsers if p["name"] == "dialogflow"
or p["name"] == "apiai"]
|
{"golden_diff": "diff --git a/opsdroid/core.py b/opsdroid/core.py\n--- a/opsdroid/core.py\n+++ b/opsdroid/core.py\n@@ -213,7 +213,7 @@\n \n if \"parsers\" in self.config:\n _LOGGER.debug(\"Processing parsers...\")\n- parsers = self.config[\"parsers\"]\n+ parsers = self.config[\"parsers\"] or []\n \n dialogflow = [p for p in parsers if p[\"name\"] == \"dialogflow\"\n or p[\"name\"] == \"apiai\"]\n", "issue": "Error if parsers is empty\n# Description\r\nIf you leave the `parsers` section of the config as an empty dictionary an error is thrown and startup fails.\r\n\r\n## Steps to Reproduce\r\nUncomment the parsers section of the config but with no actual parsers\r\n\r\n```\r\nparsers:\r\n\r\n# nothing else\r\n```\r\n\r\n## Expected Functionality\r\nThe application should start with the default parsers only. The same as if `parsers:` is not in the config.\r\n\r\n## Experienced Functionality\r\n\r\n```\r\nDEBUG opsdroid.core: Parsing input: hi\r\nDEBUG opsdroid.core: Processing parsers...\r\nERROR aiohttp.server: Error handling request\r\nTraceback (most recent call last):\r\n File \"/Users/jacob/.pyenv/versions/3.5.4/lib/python3.5/site-packages/aiohttp/web_protocol.py\", line 416, in start\r\n resp = yield from self._request_handler(request)\r\n File \"/Users/jacob/.pyenv/versions/3.5.4/lib/python3.5/site-packages/aiohttp/web.py\", line 325, in _handle\r\n resp = yield from handler(request)\r\n File \"/Users/jacob/.opsdroid/modules/opsdroid-modules/connector/websocket/__init__.py\", line 77, in websocket_handler\r\n await self.opsdroid.parse(message)\r\n File \"/Users/jacob/Projects/opsdroid/opsdroid/opsdroid/core.py\", line 273, in parse\r\n ranked_skills = await self.get_ranked_skills(message)\r\n File \"/Users/jacob/Projects/opsdroid/opsdroid/opsdroid/core.py\", line 218, in get_ranked_skills\r\n dialogflow = [p for p in parsers if p[\"name\"] == \"dialogflow\"\r\nTypeError: 'NoneType' object is not iterable\r\n```\r\n\r\n## Versions\r\n- **Opsdroid version:** 0.10.0\r\n- **Python version:** Python 3.5.4\r\n- **OS/Docker version:** macOS 10.13\n", "before_files": [{"content": "\"\"\"Core components of OpsDroid.\"\"\"\n\nimport copy\nimport logging\nimport signal\nimport sys\nimport weakref\nimport asyncio\n\nfrom opsdroid.memory import Memory\nfrom opsdroid.connector import Connector\nfrom opsdroid.database import Database\nfrom opsdroid.loader import Loader\nfrom opsdroid.parsers.always import parse_always\nfrom opsdroid.parsers.regex import parse_regex\nfrom opsdroid.parsers.dialogflow import parse_dialogflow\nfrom opsdroid.parsers.luisai import parse_luisai\nfrom opsdroid.parsers.recastai import parse_recastai\nfrom opsdroid.parsers.witai import parse_witai\nfrom opsdroid.parsers.crontab import parse_crontab\nfrom opsdroid.const import DEFAULT_CONFIG_PATH\n\n\n_LOGGER = logging.getLogger(__name__)\n\n\nclass OpsDroid():\n \"\"\"Root object for opsdroid.\"\"\"\n\n # pylint: disable=too-many-instance-attributes\n # All are reasonable in this case.\n\n instances = []\n\n def __init__(self):\n \"\"\"Start opsdroid.\"\"\"\n self.bot_name = 'opsdroid'\n self.sys_status = 0\n self.connectors = []\n self.connector_tasks = []\n self.eventloop = asyncio.get_event_loop()\n for sig in (signal.SIGINT, signal.SIGTERM):\n self.eventloop.add_signal_handler(sig, self.call_stop)\n self.skills = []\n self.memory = Memory()\n self.loader = Loader(self)\n self.config = {}\n self.stats = {\n \"messages_parsed\": 0,\n \"webhooks_called\": 0,\n \"total_response_time\": 0,\n \"total_responses\": 0,\n }\n 
self.web_server = None\n self.should_restart = False\n self.stored_path = []\n\n def __enter__(self):\n \"\"\"Add self to existing instances.\"\"\"\n self.stored_path = copy.copy(sys.path)\n if not self.__class__.instances:\n self.__class__.instances.append(weakref.proxy(self))\n else:\n self.critical(\"opsdroid has already been started\", 1)\n return self\n\n def __exit__(self, exc_type, exc_value, traceback):\n \"\"\"Remove self from existing instances.\"\"\"\n sys.path = self.stored_path\n self.__class__.instances = []\n asyncio.set_event_loop(asyncio.new_event_loop())\n\n @property\n def default_connector(self):\n \"\"\"Return the default connector.\"\"\"\n default_connector = None\n for connector in self.connectors:\n if \"default\" in connector.config and connector.config[\"default\"]:\n default_connector = connector\n break\n if default_connector is None:\n default_connector = self.connectors[0]\n return default_connector\n\n def exit(self):\n \"\"\"Exit application.\"\"\"\n _LOGGER.info(\"Exiting application with return code %s\",\n str(self.sys_status))\n sys.exit(self.sys_status)\n\n def critical(self, error, code):\n \"\"\"Exit due to unrecoverable error.\"\"\"\n self.sys_status = code\n _LOGGER.critical(error)\n self.exit()\n\n def restart(self):\n \"\"\"Restart opsdroid.\"\"\"\n self.should_restart = True\n self.stop()\n\n def call_stop(self):\n \"\"\"Signal handler to call disconnect and stop.\"\"\"\n future = asyncio.ensure_future(self.disconnect())\n future.add_done_callback(self.stop)\n return future\n\n async def disconnect(self):\n \"\"\"Disconnect all the connectors.\"\"\"\n for connector in self.connectors:\n await connector.disconnect(self)\n\n def stop(self, future=None):\n \"\"\"Stop the event loop.\"\"\"\n pending = asyncio.Task.all_tasks()\n for task in pending:\n task.cancel()\n self.eventloop.stop()\n print('') # Prints a character return for return to shell\n _LOGGER.info(\"Keyboard interrupt, exiting.\")\n\n def load(self):\n \"\"\"Load configuration.\"\"\"\n self.config = self.loader.load_config_file([\n \"configuration.yaml\",\n DEFAULT_CONFIG_PATH,\n \"/etc/opsdroid/configuration.yaml\"\n ])\n\n def start_loop(self):\n \"\"\"Start the event loop.\"\"\"\n connectors, databases, skills = \\\n self.loader.load_modules_from_config(self.config)\n _LOGGER.debug(\"Loaded %i skills\", len(skills))\n if databases is not None:\n self.start_databases(databases)\n self.setup_skills(skills)\n self.start_connector_tasks(connectors)\n self.eventloop.create_task(parse_crontab(self))\n self.web_server.start()\n try:\n pending = asyncio.Task.all_tasks()\n self.eventloop.run_until_complete(asyncio.gather(*pending))\n except RuntimeError as error:\n if str(error) != 'Event loop is closed':\n raise error\n finally:\n self.eventloop.close()\n\n def setup_skills(self, skills):\n \"\"\"Call the setup function on the passed in skills.\"\"\"\n for skill in skills:\n try:\n skill[\"module\"].setup(self)\n except AttributeError:\n pass\n\n def start_connector_tasks(self, connectors):\n \"\"\"Start the connectors.\"\"\"\n for connector_module in connectors:\n for _, cls in connector_module[\"module\"].__dict__.items():\n if isinstance(cls, type) and \\\n issubclass(cls, Connector) and\\\n cls is not Connector:\n connector = cls(connector_module[\"config\"])\n self.connectors.append(connector)\n\n if connectors:\n for connector in self.connectors:\n self.eventloop.run_until_complete(connector.connect(self))\n for connector in self.connectors:\n task = 
self.eventloop.create_task(connector.listen(self))\n self.connector_tasks.append(task)\n else:\n self.critical(\"All connectors failed to load\", 1)\n\n def start_databases(self, databases):\n \"\"\"Start the databases.\"\"\"\n if not databases:\n _LOGGER.debug(databases)\n _LOGGER.warning(\"All databases failed to load\")\n for database_module in databases:\n for name, cls in database_module[\"module\"].__dict__.items():\n if isinstance(cls, type) and \\\n issubclass(cls, Database) and \\\n cls is not Database:\n _LOGGER.debug(\"Adding database: %s\", name)\n database = cls(database_module[\"config\"])\n self.memory.databases.append(database)\n self.eventloop.run_until_complete(database.connect(self))\n\n async def run_skill(self, skill, config, message):\n \"\"\"Execute a skill.\"\"\"\n # pylint: disable=broad-except\n # We want to catch all exceptions coming from a skill module and not\n # halt the application. If a skill throws an exception it just doesn't\n # give a response to the user, so an error response should be given.\n try:\n await skill(self, config, message)\n except Exception:\n if message:\n await message.respond(\n \"Whoops there has been an error\")\n await message.respond(\n \"Check the log for details\")\n _LOGGER.exception(\"Exception when running skill '%s' \",\n config[\"name\"])\n\n async def get_ranked_skills(self, message):\n \"\"\"Take a message and return a ranked list of matching skills.\"\"\"\n skills = []\n skills = skills + await parse_regex(self, message)\n\n if \"parsers\" in self.config:\n _LOGGER.debug(\"Processing parsers...\")\n parsers = self.config[\"parsers\"]\n\n dialogflow = [p for p in parsers if p[\"name\"] == \"dialogflow\"\n or p[\"name\"] == \"apiai\"]\n\n # Show deprecation message but parse message\n # Once it stops working remove this bit\n apiai = [p for p in parsers if p[\"name\"] == \"apiai\"]\n if apiai:\n _LOGGER.warning(\"Api.ai is now called Dialogflow. 
This \"\n \"parser will stop working in the future \"\n \"please swap: 'name: apiai' for \"\n \"'name: dialogflow' in configuration.yaml\")\n\n if len(dialogflow) == 1 and \\\n (\"enabled\" not in dialogflow[0] or\n dialogflow[0][\"enabled\"] is not False):\n _LOGGER.debug(\"Checking dialogflow...\")\n skills = skills + \\\n await parse_dialogflow(self, message, dialogflow[0])\n\n luisai = [p for p in parsers if p[\"name\"] == \"luisai\"]\n if len(luisai) == 1 and \\\n (\"enabled\" not in luisai[0] or\n luisai[0][\"enabled\"] is not False):\n _LOGGER.debug(\"Checking luisai...\")\n skills = skills + \\\n await parse_luisai(self, message, luisai[0])\n\n recastai = [p for p in parsers if p[\"name\"] == \"recastai\"]\n if len(recastai) == 1 and \\\n (\"enabled\" not in recastai[0] or\n recastai[0][\"enabled\"] is not False):\n _LOGGER.debug(\"Checking Recast.AI...\")\n skills = skills + \\\n await parse_recastai(self, message, recastai[0])\n\n witai = [p for p in parsers if p[\"name\"] == \"witai\"]\n if len(witai) == 1 and \\\n (\"enabled\" not in witai[0] or\n witai[0][\"enabled\"] is not False):\n _LOGGER.debug(\"Checking wit.ai...\")\n skills = skills + \\\n await parse_witai(self, message, witai[0])\n\n return sorted(skills, key=lambda k: k[\"score\"], reverse=True)\n\n async def parse(self, message):\n \"\"\"Parse a string against all skills.\"\"\"\n self.stats[\"messages_parsed\"] = self.stats[\"messages_parsed\"] + 1\n tasks = []\n if message.text.strip() != \"\":\n _LOGGER.debug(\"Parsing input: %s\", message.text)\n\n tasks.append(\n self.eventloop.create_task(parse_always(self, message)))\n\n ranked_skills = await self.get_ranked_skills(message)\n if ranked_skills:\n tasks.append(\n self.eventloop.create_task(\n self.run_skill(ranked_skills[0][\"skill\"],\n ranked_skills[0][\"config\"],\n message)))\n\n return tasks\n", "path": "opsdroid/core.py"}], "after_files": [{"content": "\"\"\"Core components of OpsDroid.\"\"\"\n\nimport copy\nimport logging\nimport signal\nimport sys\nimport weakref\nimport asyncio\n\nfrom opsdroid.memory import Memory\nfrom opsdroid.connector import Connector\nfrom opsdroid.database import Database\nfrom opsdroid.loader import Loader\nfrom opsdroid.parsers.always import parse_always\nfrom opsdroid.parsers.regex import parse_regex\nfrom opsdroid.parsers.dialogflow import parse_dialogflow\nfrom opsdroid.parsers.luisai import parse_luisai\nfrom opsdroid.parsers.recastai import parse_recastai\nfrom opsdroid.parsers.witai import parse_witai\nfrom opsdroid.parsers.crontab import parse_crontab\nfrom opsdroid.const import DEFAULT_CONFIG_PATH\n\n\n_LOGGER = logging.getLogger(__name__)\n\n\nclass OpsDroid():\n \"\"\"Root object for opsdroid.\"\"\"\n\n # pylint: disable=too-many-instance-attributes\n # All are reasonable in this case.\n\n instances = []\n\n def __init__(self):\n \"\"\"Start opsdroid.\"\"\"\n self.bot_name = 'opsdroid'\n self.sys_status = 0\n self.connectors = []\n self.connector_tasks = []\n self.eventloop = asyncio.get_event_loop()\n for sig in (signal.SIGINT, signal.SIGTERM):\n self.eventloop.add_signal_handler(sig, self.call_stop)\n self.skills = []\n self.memory = Memory()\n self.loader = Loader(self)\n self.config = {}\n self.stats = {\n \"messages_parsed\": 0,\n \"webhooks_called\": 0,\n \"total_response_time\": 0,\n \"total_responses\": 0,\n }\n self.web_server = None\n self.should_restart = False\n self.stored_path = []\n\n def __enter__(self):\n \"\"\"Add self to existing instances.\"\"\"\n self.stored_path = copy.copy(sys.path)\n if not 
self.__class__.instances:\n self.__class__.instances.append(weakref.proxy(self))\n else:\n self.critical(\"opsdroid has already been started\", 1)\n return self\n\n def __exit__(self, exc_type, exc_value, traceback):\n \"\"\"Remove self from existing instances.\"\"\"\n sys.path = self.stored_path\n self.__class__.instances = []\n asyncio.set_event_loop(asyncio.new_event_loop())\n\n @property\n def default_connector(self):\n \"\"\"Return the default connector.\"\"\"\n default_connector = None\n for connector in self.connectors:\n if \"default\" in connector.config and connector.config[\"default\"]:\n default_connector = connector\n break\n if default_connector is None:\n default_connector = self.connectors[0]\n return default_connector\n\n def exit(self):\n \"\"\"Exit application.\"\"\"\n _LOGGER.info(\"Exiting application with return code %s\",\n str(self.sys_status))\n sys.exit(self.sys_status)\n\n def critical(self, error, code):\n \"\"\"Exit due to unrecoverable error.\"\"\"\n self.sys_status = code\n _LOGGER.critical(error)\n self.exit()\n\n def restart(self):\n \"\"\"Restart opsdroid.\"\"\"\n self.should_restart = True\n self.stop()\n\n def call_stop(self):\n \"\"\"Signal handler to call disconnect and stop.\"\"\"\n future = asyncio.ensure_future(self.disconnect())\n future.add_done_callback(self.stop)\n return future\n\n async def disconnect(self):\n \"\"\"Disconnect all the connectors.\"\"\"\n for connector in self.connectors:\n await connector.disconnect(self)\n\n def stop(self, future=None):\n \"\"\"Stop the event loop.\"\"\"\n pending = asyncio.Task.all_tasks()\n for task in pending:\n task.cancel()\n self.eventloop.stop()\n print('') # Prints a character return for return to shell\n _LOGGER.info(\"Keyboard interrupt, exiting.\")\n\n def load(self):\n \"\"\"Load configuration.\"\"\"\n self.config = self.loader.load_config_file([\n \"configuration.yaml\",\n DEFAULT_CONFIG_PATH,\n \"/etc/opsdroid/configuration.yaml\"\n ])\n\n def start_loop(self):\n \"\"\"Start the event loop.\"\"\"\n connectors, databases, skills = \\\n self.loader.load_modules_from_config(self.config)\n _LOGGER.debug(\"Loaded %i skills\", len(skills))\n if databases is not None:\n self.start_databases(databases)\n self.setup_skills(skills)\n self.start_connector_tasks(connectors)\n self.eventloop.create_task(parse_crontab(self))\n self.web_server.start()\n try:\n pending = asyncio.Task.all_tasks()\n self.eventloop.run_until_complete(asyncio.gather(*pending))\n except RuntimeError as error:\n if str(error) != 'Event loop is closed':\n raise error\n finally:\n self.eventloop.close()\n\n def setup_skills(self, skills):\n \"\"\"Call the setup function on the passed in skills.\"\"\"\n for skill in skills:\n try:\n skill[\"module\"].setup(self)\n except AttributeError:\n pass\n\n def start_connector_tasks(self, connectors):\n \"\"\"Start the connectors.\"\"\"\n for connector_module in connectors:\n for _, cls in connector_module[\"module\"].__dict__.items():\n if isinstance(cls, type) and \\\n issubclass(cls, Connector) and\\\n cls is not Connector:\n connector = cls(connector_module[\"config\"])\n self.connectors.append(connector)\n\n if connectors:\n for connector in self.connectors:\n self.eventloop.run_until_complete(connector.connect(self))\n for connector in self.connectors:\n task = self.eventloop.create_task(connector.listen(self))\n self.connector_tasks.append(task)\n else:\n self.critical(\"All connectors failed to load\", 1)\n\n def start_databases(self, databases):\n \"\"\"Start the databases.\"\"\"\n if 
not databases:\n _LOGGER.debug(databases)\n _LOGGER.warning(\"All databases failed to load\")\n for database_module in databases:\n for name, cls in database_module[\"module\"].__dict__.items():\n if isinstance(cls, type) and \\\n issubclass(cls, Database) and \\\n cls is not Database:\n _LOGGER.debug(\"Adding database: %s\", name)\n database = cls(database_module[\"config\"])\n self.memory.databases.append(database)\n self.eventloop.run_until_complete(database.connect(self))\n\n async def run_skill(self, skill, config, message):\n \"\"\"Execute a skill.\"\"\"\n # pylint: disable=broad-except\n # We want to catch all exceptions coming from a skill module and not\n # halt the application. If a skill throws an exception it just doesn't\n # give a response to the user, so an error response should be given.\n try:\n await skill(self, config, message)\n except Exception:\n if message:\n await message.respond(\n \"Whoops there has been an error\")\n await message.respond(\n \"Check the log for details\")\n _LOGGER.exception(\"Exception when running skill '%s' \",\n config[\"name\"])\n\n async def get_ranked_skills(self, message):\n \"\"\"Take a message and return a ranked list of matching skills.\"\"\"\n skills = []\n skills = skills + await parse_regex(self, message)\n\n if \"parsers\" in self.config:\n _LOGGER.debug(\"Processing parsers...\")\n parsers = self.config[\"parsers\"] or []\n\n dialogflow = [p for p in parsers if p[\"name\"] == \"dialogflow\"\n or p[\"name\"] == \"apiai\"]\n\n # Show deprecation message but parse message\n # Once it stops working remove this bit\n apiai = [p for p in parsers if p[\"name\"] == \"apiai\"]\n if apiai:\n _LOGGER.warning(\"Api.ai is now called Dialogflow. This \"\n \"parser will stop working in the future \"\n \"please swap: 'name: apiai' for \"\n \"'name: dialogflow' in configuration.yaml\")\n\n if len(dialogflow) == 1 and \\\n (\"enabled\" not in dialogflow[0] or\n dialogflow[0][\"enabled\"] is not False):\n _LOGGER.debug(\"Checking dialogflow...\")\n skills = skills + \\\n await parse_dialogflow(self, message, dialogflow[0])\n\n luisai = [p for p in parsers if p[\"name\"] == \"luisai\"]\n if len(luisai) == 1 and \\\n (\"enabled\" not in luisai[0] or\n luisai[0][\"enabled\"] is not False):\n _LOGGER.debug(\"Checking luisai...\")\n skills = skills + \\\n await parse_luisai(self, message, luisai[0])\n\n recastai = [p for p in parsers if p[\"name\"] == \"recastai\"]\n if len(recastai) == 1 and \\\n (\"enabled\" not in recastai[0] or\n recastai[0][\"enabled\"] is not False):\n _LOGGER.debug(\"Checking Recast.AI...\")\n skills = skills + \\\n await parse_recastai(self, message, recastai[0])\n\n witai = [p for p in parsers if p[\"name\"] == \"witai\"]\n if len(witai) == 1 and \\\n (\"enabled\" not in witai[0] or\n witai[0][\"enabled\"] is not False):\n _LOGGER.debug(\"Checking wit.ai...\")\n skills = skills + \\\n await parse_witai(self, message, witai[0])\n\n return sorted(skills, key=lambda k: k[\"score\"], reverse=True)\n\n async def parse(self, message):\n \"\"\"Parse a string against all skills.\"\"\"\n self.stats[\"messages_parsed\"] = self.stats[\"messages_parsed\"] + 1\n tasks = []\n if message.text.strip() != \"\":\n _LOGGER.debug(\"Parsing input: %s\", message.text)\n\n tasks.append(\n self.eventloop.create_task(parse_always(self, message)))\n\n ranked_skills = await self.get_ranked_skills(message)\n if ranked_skills:\n tasks.append(\n self.eventloop.create_task(\n self.run_skill(ranked_skills[0][\"skill\"],\n ranked_skills[0][\"config\"],\n 
message)))\n\n return tasks\n", "path": "opsdroid/core.py"}]}
| 3,600 | 120 |
gh_patches_debug_3408
|
rasdani/github-patches
|
git_diff
|
conda__conda-9660
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
conda env config (or other inappropriate/incomplete commands) results in AttributeError(possibly related to argparse ?)
<!--
Hi! Read this; it's important.
This is an issue tracker for conda -- the package manager. File feature requests
for conda here, as well as bug reports about something conda has messed up.
Anaconda Community Code of Conduct: https://www.anaconda.com/community-code-of-conduct/
If your issue is a bug report for:
* a specific conda package from Anaconda ('defaults' channel):
==> file at https://github.com/ContinuumIO/anaconda-issues
* a specific conda package from conda-forge:
==> file at the corresponding feedstock under https://github.com/conda-forge
* repo.anaconda.com access and service:
==> file at https://github.com/ContinuumIO/anaconda-issues
* anaconda.org access and service:
==> file at https://anaconda.org/contact/report
* commands under 'conda build':
==> file at https://github.com/conda/conda-build
* commands under 'conda env':
==> please file it here!
* all other conda commands that start with 'conda':
==> please file it here!
If you continue on,
**please include all requested information below.**
If a maintainer determines the information is required to understand
your issue, and if it is not provided, your issue may be
closed automatically.
-->
## Current Behavior
<!-- What actually happens?
If you want to include console output, please use "Steps to Reproduce" below. -->
```conda env config``` (or other options that are either incomplete or inappropriate,
such as ```conda env config vars```)
shows Error message including
```
Traceback (most recent call last):
File "/home/USERNAME/miniconda3/lib/python3.7/site-packages/conda/exceptions.py", line 1078, in __call__
return func(*args, **kwargs)
File "/home/USERNAME/miniconda3/lib/python3.7/site-packages/conda_env/cli/main.py", line 76, in do_call
relative_mod, func_name = args.func.rsplit('.', 1)
AttributeError: 'Namespace' object has no attribute 'func'
```
This seems to be related to argparse behavior?
(p.s. USERNAME is in place for the username currently used, and ROOTUSERNAME is the (main) root username. ANOTHERFOLDER is just another folder.)
### Steps to Reproduce
<!-- If the current behavior is a bug, please provide specific, minimal steps to independently reproduce.
Include the exact conda commands that reproduce the issue and their output between the ticks below. -->
```
conda env config
# >>>>>>>>>>>>>>>>>>>>>> ERROR REPORT <<<<<<<<<<<<<<<<<<<<<<
Traceback (most recent call last):
File "/home/USERNAME/miniconda3/lib/python3.7/site-packages/conda/exceptions.py", line 1078, in __call__
return func(*args, **kwargs)
File "/home/USERNAME/miniconda3/lib/python3.7/site-packages/conda_env/cli/main.py", line 76, in do_call
relative_mod, func_name = args.func.rsplit('.', 1)
AttributeError: 'Namespace' object has no attribute 'func'
`$ /home/USERNAME/miniconda3/bin/conda-env config vars`
environment variables:
CIO_TEST=<not set>
CONDA_AUTO_UPDATE_CONDA=false
CONDA_DEFAULT_ENV=base
CONDA_EXE=/home/USERNAME/miniconda3/bin/conda
CONDA_PREFIX=/home/USERNAME/miniconda3
CONDA_PROMPT_MODIFIER=(base)
CONDA_PYTHON_EXE=/home/USERNAME/miniconda3/bin/python
CONDA_ROOT=/home/USERNAME/miniconda3
CONDA_SHLVL=1
PATH=/home/USERNAME/miniconda3/bin:/home/USERNAME/miniconda3/bin:/home/
USERNAME/miniconda3/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/
usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/lib/jvm/
java-8-openjdk-amd64/bin:/usr/lib/jvm/java-8-openjdk-
amd64/jre/bin:/home/ROOTUSERNAME/ANOTHERFOLDER/apache-maven-3.6.0/bin
REQUESTS_CA_BUNDLE=<not set>
SSL_CERT_FILE=<not set>
active environment : base
active env location : /home/USERNAME/miniconda3
shell level : 1
user config file : /home/USERNAME/.condarc
populated config files : /home/USERNAME/.condarc
conda version : 4.8.1
conda-build version : not installed
python version : 3.7.6.final.0
virtual packages : __cuda=10.1
__glibc=2.27
base environment : /home/USERNAME/miniconda3 (writable)
channel URLs : https://repo.anaconda.com/pkgs/main/linux-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/linux-64
https://repo.anaconda.com/pkgs/r/noarch
package cache : /home/USERNAME/miniconda3/pkgs
/home/USERNAME/.conda/pkgs
envs directories : /home/USERNAME/miniconda3/envs
/home/USERNAME/.conda/envs
platform : linux-64
user-agent : conda/4.8.1 requests/2.22.0 CPython/3.7.6 Linux/4.15.0-74-generic ubuntu/18.04.3 glibc/2.27
UID:GID : 1010:1010
netrc file : None
offline mode : False
```
## Expected Behavior
<!-- What do you think should happen? -->
show error message or --help option equivalent output
## Environment Information
<details open><summary><code>`conda info`</code></summary><p>
<!-- between the ticks below, paste the output of 'conda info' -->
```
active environment : base
active env location : /home/USERNAME/miniconda3
shell level : 1
user config file : /home/USERNAME/.condarc
populated config files : /home/USERNAME/.condarc
conda version : 4.8.1
conda-build version : not installed
python version : 3.7.6.final.0
virtual packages : __cuda=10.1
__glibc=2.27
base environment : /home/USERNAME/miniconda3 (writable)
channel URLs : https://repo.anaconda.com/pkgs/main/linux-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/linux-64
https://repo.anaconda.com/pkgs/r/noarch
package cache : /home/USERNAME/miniconda3/pkgs
/home/USERNAME/.conda/pkgs
envs directories : /home/USERNAME/miniconda3/envs
/home/USERNAME/.conda/envs
platform : linux-64
user-agent : conda/4.8.1 requests/2.22.0 CPython/3.7.6 Linux/4.15.0-74-generic ubuntu/18.04.3 glibc/2.27
UID:GID : 1010:1010
netrc file : None
offline mode : False
```
</p></details>
<details open><summary><code>`conda config --show-sources`</code></summary><p>
<!-- between the ticks below, paste the output of 'conda config --show-sources' -->
```
==> /home/USERNAME/.condarc <==
channels:
- defaults
```
</p></details>
<details><summary><code>`conda list --show-channel-urls`</code></summary><p>
<!-- between the ticks below, paste the output of 'conda list --show-channel-urls' -->
```
# packages in environment at /home/USERNAME/miniconda3:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main defaults
asn1crypto 1.3.0 py37_0 defaults
ca-certificates 2019.11.27 0 defaults
certifi 2019.11.28 py37_0 defaults
cffi 1.13.2 py37h2e261b9_0 defaults
chardet 3.0.4 py37_1003 defaults
conda 4.8.1 py37_0 defaults
conda-package-handling 1.6.0 py37h7b6447c_0 defaults
cryptography 2.8 py37h1ba5d50_0 defaults
idna 2.8 py37_0 defaults
ld_impl_linux-64 2.33.1 h53a641e_7 defaults
libedit 3.1.20181209 hc058e9b_0 defaults
libffi 3.2.1 hd88cf55_4 defaults
libgcc-ng 9.1.0 hdf63c60_0 defaults
libstdcxx-ng 9.1.0 hdf63c60_0 defaults
ncurses 6.1 he6710b0_1 defaults
openssl 1.1.1d h7b6447c_3 defaults
pip 19.3.1 py37_0 defaults
pycosat 0.6.3 py37h7b6447c_0 defaults
pycparser 2.19 py37_0 defaults
pyopenssl 19.1.0 py37_0 defaults
pysocks 1.7.1 py37_0 defaults
python 3.7.6 h0371630_2 defaults
readline 7.0 h7b6447c_5 defaults
requests 2.22.0 py37_1 defaults
ruamel_yaml 0.15.87 py37h7b6447c_0 defaults
setuptools 44.0.0 py37_0 defaults
six 1.13.0 py37_0 defaults
sqlite 3.30.1 h7b6447c_0 defaults
tk 8.6.8 hbc83047_0 defaults
tqdm 4.41.1 py_0 defaults
urllib3 1.25.7 py37_0 defaults
wheel 0.33.6 py37_0 defaults
xz 5.2.4 h14c3975_4 defaults
yaml 0.1.7 had09818_2 defaults
zlib 1.2.11 h7b6447c_3 defaults
```
</p></details>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conda_env/cli/main_config.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright (C) 2012 Anaconda, Inc
3 # SPDX-License-Identifier: BSD-3-Clause
4 from argparse import RawDescriptionHelpFormatter
5
6
7 from .main_vars import configure_parser as configure_vars_parser
8
9 config_description = '''
10 Configure a conda environment
11 '''
12
13 config_example = '''
14 examples:
15 conda env config vars list
16 conda env config --append channels conda-forge
17 '''
18
19 def configure_parser(sub_parsers):
20 config_parser = sub_parsers.add_parser(
21 'config',
22 formatter_class=RawDescriptionHelpFormatter,
23 description=config_description,
24 help=config_description,
25 epilog=config_example,
26 )
27 config_subparser = config_parser.add_subparsers()
28 configure_vars_parser(config_subparser)
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/conda_env/cli/main_config.py b/conda_env/cli/main_config.py
--- a/conda_env/cli/main_config.py
+++ b/conda_env/cli/main_config.py
@@ -24,5 +24,10 @@
help=config_description,
epilog=config_example,
)
+ config_parser.set_defaults(func='.main_config.execute')
config_subparser = config_parser.add_subparsers()
configure_vars_parser(config_subparser)
+
+
+def execute(args, parser):
+ parser.parse_args(['config', '--help'])
|
{"golden_diff": "diff --git a/conda_env/cli/main_config.py b/conda_env/cli/main_config.py\n--- a/conda_env/cli/main_config.py\n+++ b/conda_env/cli/main_config.py\n@@ -24,5 +24,10 @@\n help=config_description,\n epilog=config_example,\n )\n+ config_parser.set_defaults(func='.main_config.execute')\n config_subparser = config_parser.add_subparsers()\n configure_vars_parser(config_subparser)\n+\n+\n+def execute(args, parser):\n+ parser.parse_args(['config', '--help'])\n", "issue": "conda env config (or other inappropriate/incomplete commands) results in AttributeError(possibly related to argparse ?) \n<!--\r\nHi! Read this; it's important.\r\n\r\nThis is an issue tracker for conda -- the package manager. File feature requests\r\nfor conda here, as well as bug reports about something conda has messed up.\r\n\r\nAnaconda Community Code of Conduct: https://www.anaconda.com/community-code-of-conduct/\r\n\r\nIf your issue is a bug report for:\r\n * a specific conda package from Anaconda ('defaults' channel):\r\n ==> file at https://github.com/ContinuumIO/anaconda-issues\r\n * a specific conda package from conda-forge:\r\n ==> file at the corresponding feedstock under https://github.com/conda-forge\r\n * repo.anaconda.com access and service:\r\n ==> file at https://github.com/ContinuumIO/anaconda-issues\r\n * anaconda.org access and service:\r\n ==> file at https://anaconda.org/contact/report\r\n * commands under 'conda build':\r\n ==> file at https://github.com/conda/conda-build\r\n * commands under 'conda env':\r\n ==> please file it here!\r\n * all other conda commands that start with 'conda':\r\n ==> please file it here!\r\n\r\nIf you continue on,\r\n\r\n **please include all requested information below.**\r\n\r\nIf a maintainer determines the information is required to understand \r\nyour issue, and if it is not provided, your issue may be\r\nclosed automatically.\r\n\r\n-->\r\n\r\n## Current Behavior\r\n<!-- What actually happens?\r\n If you want to include console output, please use \"Steps to Reproduce\" below. -->\r\n```conda env config``` (or other options that are either incomplete or inappropriate,\r\nsuch as ```conda env config vars```)\r\nshows Error message including \r\n```\r\n Traceback (most recent call last):\r\n File \"/home/USERNAME/miniconda3/lib/python3.7/site-packages/conda/exceptions.py\", line 1078, in __call__\r\n return func(*args, **kwargs)\r\n File \"/home/USERNAME/miniconda3/lib/python3.7/site-packages/conda_env/cli/main.py\", line 76, in do_call\r\n relative_mod, func_name = args.func.rsplit('.', 1)\r\n AttributeError: 'Namespace' object has no attribute 'func'\r\n```\r\nThis seems to be related to argparse behavior?\r\n(p.s. USERNAME is in place for the username currently used, and ROOTUSERNAME is the (main) root username. ANOTHERFOLDER is just another folder.)\r\n\r\n### Steps to Reproduce\r\n<!-- If the current behavior is a bug, please provide specific, minimal steps to independently reproduce.\r\n Include the exact conda commands that reproduce the issue and their output between the ticks below. 
-->\r\n```\r\nconda env config\r\n\r\n# >>>>>>>>>>>>>>>>>>>>>> ERROR REPORT <<<<<<<<<<<<<<<<<<<<<<\r\n\r\n Traceback (most recent call last):\r\n File \"/home/USERNAME/miniconda3/lib/python3.7/site-packages/conda/exceptions.py\", line 1078, in __call__\r\n return func(*args, **kwargs)\r\n File \"/home/USERNAME/miniconda3/lib/python3.7/site-packages/conda_env/cli/main.py\", line 76, in do_call\r\n relative_mod, func_name = args.func.rsplit('.', 1)\r\n AttributeError: 'Namespace' object has no attribute 'func'\r\n\r\n`$ /home/USERNAME/miniconda3/bin/conda-env config vars`\r\n\r\n environment variables:\r\n CIO_TEST=<not set>\r\n CONDA_AUTO_UPDATE_CONDA=false\r\n CONDA_DEFAULT_ENV=base\r\n CONDA_EXE=/home/USERNAME/miniconda3/bin/conda\r\n CONDA_PREFIX=/home/USERNAME/miniconda3\r\n CONDA_PROMPT_MODIFIER=(base)\r\n CONDA_PYTHON_EXE=/home/USERNAME/miniconda3/bin/python\r\n CONDA_ROOT=/home/USERNAME/miniconda3\r\n CONDA_SHLVL=1\r\n PATH=/home/USERNAME/miniconda3/bin:/home/USERNAME/miniconda3/bin:/home/\r\n USERNAME/miniconda3/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/\r\n usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/lib/jvm/\r\n java-8-openjdk-amd64/bin:/usr/lib/jvm/java-8-openjdk-\r\n amd64/jre/bin:/home/ROOTUSERNAME/ANOTHERFOLDER/apache-maven-3.6.0/bin\r\n REQUESTS_CA_BUNDLE=<not set>\r\n SSL_CERT_FILE=<not set>\r\n\r\n active environment : base\r\n active env location : /home/USERNAME/miniconda3\r\n shell level : 1\r\n user config file : /home/USERNAME/.condarc\r\n populated config files : /home/USERNAME/.condarc\r\n conda version : 4.8.1\r\n conda-build version : not installed\r\n python version : 3.7.6.final.0\r\n virtual packages : __cuda=10.1\r\n __glibc=2.27\r\n base environment : /home/USERNAME/miniconda3 (writable)\r\n channel URLs : https://repo.anaconda.com/pkgs/main/linux-64\r\n https://repo.anaconda.com/pkgs/main/noarch\r\n https://repo.anaconda.com/pkgs/r/linux-64\r\n https://repo.anaconda.com/pkgs/r/noarch\r\n package cache : /home/USERNAME/miniconda3/pkgs\r\n /home/USERNAME/.conda/pkgs\r\n envs directories : /home/USERNAME/miniconda3/envs\r\n /home/USERNAME/.conda/envs\r\n platform : linux-64\r\n user-agent : conda/4.8.1 requests/2.22.0 CPython/3.7.6 Linux/4.15.0-74-generic ubuntu/18.04.3 glibc/2.27\r\n UID:GID : 1010:1010\r\n netrc file : None\r\n offline mode : False\r\n```\r\n\r\n\r\n## Expected Behavior\r\n<!-- What do you think should happen? 
-->\r\nshow error message or --help option equivalent output\r\n\r\n## Environment Information\r\n<details open><summary><code>`conda info`</code></summary><p>\r\n<!-- between the ticks below, paste the output of 'conda info' -->\r\n\r\n```\r\n active environment : base\r\n active env location : /home/USERNAME/miniconda3\r\n shell level : 1\r\n user config file : /home/USERNAME/.condarc\r\n populated config files : /home/USERNAME/.condarc\r\n conda version : 4.8.1\r\n conda-build version : not installed\r\n python version : 3.7.6.final.0\r\n virtual packages : __cuda=10.1\r\n __glibc=2.27\r\n base environment : /home/USERNAME/miniconda3 (writable)\r\n channel URLs : https://repo.anaconda.com/pkgs/main/linux-64\r\n https://repo.anaconda.com/pkgs/main/noarch\r\n https://repo.anaconda.com/pkgs/r/linux-64\r\n https://repo.anaconda.com/pkgs/r/noarch\r\n package cache : /home/USERNAME/miniconda3/pkgs\r\n /home/USERNAME/.conda/pkgs\r\n envs directories : /home/USERNAME/miniconda3/envs\r\n /home/USERNAME/.conda/envs\r\n platform : linux-64\r\n user-agent : conda/4.8.1 requests/2.22.0 CPython/3.7.6 Linux/4.15.0-74-generic ubuntu/18.04.3 glibc/2.27\r\n UID:GID : 1010:1010\r\n netrc file : None\r\n offline mode : False\r\n```\r\n</p></details>\r\n\r\n\r\n<details open><summary><code>`conda config --show-sources`</code></summary><p>\r\n<!-- between the ticks below, paste the output of 'conda config --show-sources' -->\r\n\r\n```\r\n==> /home/USERNAME/.condarc <==\r\nchannels:\r\n - defaults\r\n```\r\n</p></details>\r\n\r\n\r\n<details><summary><code>`conda list --show-channel-urls`</code></summary><p>\r\n<!-- between the ticks below, paste the output of 'conda list --show-channel-urls' -->\r\n\r\n```\r\n# packages in environment at /home/USERNAME/miniconda3:\r\n#\r\n# Name Version Build Channel\r\n_libgcc_mutex 0.1 main defaults\r\nasn1crypto 1.3.0 py37_0 defaults\r\nca-certificates 2019.11.27 0 defaults\r\ncertifi 2019.11.28 py37_0 defaults\r\ncffi 1.13.2 py37h2e261b9_0 defaults\r\nchardet 3.0.4 py37_1003 defaults\r\nconda 4.8.1 py37_0 defaults\r\nconda-package-handling 1.6.0 py37h7b6447c_0 defaults\r\ncryptography 2.8 py37h1ba5d50_0 defaults\r\nidna 2.8 py37_0 defaults\r\nld_impl_linux-64 2.33.1 h53a641e_7 defaults\r\nlibedit 3.1.20181209 hc058e9b_0 defaults\r\nlibffi 3.2.1 hd88cf55_4 defaults\r\nlibgcc-ng 9.1.0 hdf63c60_0 defaults\r\nlibstdcxx-ng 9.1.0 hdf63c60_0 defaults\r\nncurses 6.1 he6710b0_1 defaults\r\nopenssl 1.1.1d h7b6447c_3 defaults\r\npip 19.3.1 py37_0 defaults\r\npycosat 0.6.3 py37h7b6447c_0 defaults\r\npycparser 2.19 py37_0 defaults\r\npyopenssl 19.1.0 py37_0 defaults\r\npysocks 1.7.1 py37_0 defaults\r\npython 3.7.6 h0371630_2 defaults\r\nreadline 7.0 h7b6447c_5 defaults\r\nrequests 2.22.0 py37_1 defaults\r\nruamel_yaml 0.15.87 py37h7b6447c_0 defaults\r\nsetuptools 44.0.0 py37_0 defaults\r\nsix 1.13.0 py37_0 defaults\r\nsqlite 3.30.1 h7b6447c_0 defaults\r\ntk 8.6.8 hbc83047_0 defaults\r\ntqdm 4.41.1 py_0 defaults\r\nurllib3 1.25.7 py37_0 defaults\r\nwheel 0.33.6 py37_0 defaults\r\nxz 5.2.4 h14c3975_4 defaults\r\nyaml 0.1.7 had09818_2 defaults\r\nzlib 1.2.11 h7b6447c_3 defaults\r\n```\r\n</p></details>\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (C) 2012 Anaconda, Inc\n# SPDX-License-Identifier: BSD-3-Clause\nfrom argparse import RawDescriptionHelpFormatter\n\n\nfrom .main_vars import configure_parser as configure_vars_parser\n\nconfig_description = '''\nConfigure a conda environment\n'''\n\nconfig_example = '''\nexamples:\n conda env config vars list\n 
conda env config --append channels conda-forge\n'''\n\ndef configure_parser(sub_parsers):\n config_parser = sub_parsers.add_parser(\n 'config',\n formatter_class=RawDescriptionHelpFormatter,\n description=config_description,\n help=config_description,\n epilog=config_example,\n )\n config_subparser = config_parser.add_subparsers()\n configure_vars_parser(config_subparser)\n", "path": "conda_env/cli/main_config.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (C) 2012 Anaconda, Inc\n# SPDX-License-Identifier: BSD-3-Clause\nfrom argparse import RawDescriptionHelpFormatter\n\n\nfrom .main_vars import configure_parser as configure_vars_parser\n\nconfig_description = '''\nConfigure a conda environment\n'''\n\nconfig_example = '''\nexamples:\n conda env config vars list\n conda env config --append channels conda-forge\n'''\n\ndef configure_parser(sub_parsers):\n config_parser = sub_parsers.add_parser(\n 'config',\n formatter_class=RawDescriptionHelpFormatter,\n description=config_description,\n help=config_description,\n epilog=config_example,\n )\n config_parser.set_defaults(func='.main_config.execute')\n config_subparser = config_parser.add_subparsers()\n configure_vars_parser(config_subparser)\n\n\ndef execute(args, parser):\n parser.parse_args(['config', '--help'])\n", "path": "conda_env/cli/main_config.py"}]}
| 3,116 | 117 |
gh_patches_debug_23203
|
rasdani/github-patches
|
git_diff
|
carpentries__amy-2339
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Server Error when editing community roles
Reported by @ErinBecker at `/workshops/person/<id>/edit/#communityroles` and `/communityroles/role/<another id>/edit/` in production.
Reproduced on test AMY:
```
TypeError at /communityroles/role/25/edit/
the JSON object must be str, bytes or bytearray, not list
Error during template rendering
In template /webapps/test-amy.carpentries.org/repo/amy/templates/bootstrap4/field.html, error at line 39
{% crispy_field field %}
```
Also encountered locally when creating a community role and immediately editing it:
```
Exception Value: | 'NoneType' object is not iterable
/home/eli/amy/amy/communityroles/fields.py, line 20, in get_context
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `amy/communityroles/fields.py`
Content:
```
1 import json
2
3 from django import forms
4 from django.http import QueryDict
5 from django.utils.datastructures import MultiValueDict
6
7
8 class CustomKeysWidget(forms.TextInput):
9 template_name = "widgets/custom_keys_widget.html"
10
11 def __init__(self, *args, **kwargs):
12 self.subwidget_form = kwargs.pop("subwidget_form", forms.TextInput)
13 super().__init__(*args, **kwargs)
14
15 def apply_labels(self, labels: list[str]) -> None:
16 self.labels = labels[:]
17
18 def get_context(self, name: str, value: str, attrs: dict):
19 value_deserialized = json.loads(value)
20 value_deserialized_dict = dict(value_deserialized)
21 default_values = dict([(label, "") for label in self.labels])
22 context_value = default_values | value_deserialized_dict
23
24 context = super().get_context(name, context_value, attrs)
25 final_attrs = context["widget"]["attrs"]
26 id_ = context["widget"]["attrs"].get("id")
27
28 subwidgets = []
29 for index, (label, value) in enumerate(context_value.items()):
30 widget_attrs = final_attrs.copy()
31 if id_:
32 widget_attrs["id"] = "{id_}_{index}".format(id_=id_, index=index)
33
34 widget = self.subwidget_form()
35 subwidget_context = widget.get_context(name, value, widget_attrs)["widget"]
36 subwidgets.append(subwidget_context | {"label": label})
37
38 context["widget"]["subwidgets"] = subwidgets
39 return context
40
41 def value_from_datadict(
42 self, data: QueryDict, files: MultiValueDict, name: str
43 ) -> list[tuple[str, str]]:
44 """Prepare structure stored in database. The structure is tied to
45 `CommunityRole.custom_keys` expected format:
46 [
47 (label1, value1),
48 (label2, value2),
49 ...
50 ]
51 """
52 try:
53 values = data.getlist(name)
54 except AttributeError:
55 values = data.get(name, [])
56 return list(zip(self.labels, values))
57
58 def value_omitted_from_data(
59 self, data: QueryDict, files: MultiValueDict, name: str
60 ) -> bool:
61 return False
62
63
64 class CustomKeysJSONField(forms.JSONField):
65 def __init__(self, **kwargs):
66 kwargs.setdefault("widget", CustomKeysWidget)
67 super().__init__(**kwargs)
68
69 def apply_labels(self, labels: list[str]) -> None:
70 self.labels = labels[:]
71 self.widget.apply_labels(self.labels)
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/amy/communityroles/fields.py b/amy/communityroles/fields.py
--- a/amy/communityroles/fields.py
+++ b/amy/communityroles/fields.py
@@ -1,9 +1,12 @@
import json
+import logging
from django import forms
from django.http import QueryDict
from django.utils.datastructures import MultiValueDict
+logger = logging.getLogger("amy")
+
class CustomKeysWidget(forms.TextInput):
template_name = "widgets/custom_keys_widget.html"
@@ -17,7 +20,14 @@
def get_context(self, name: str, value: str, attrs: dict):
value_deserialized = json.loads(value)
- value_deserialized_dict = dict(value_deserialized)
+ try:
+ value_deserialized_dict = dict(value_deserialized)
+ except (ValueError, TypeError) as e:
+ logger.debug(
+ f"Failed to load custom key values {value_deserialized} to dict: {e}."
+ )
+ logger.debug("Proceeding without custom key values...")
+ value_deserialized_dict = {}
default_values = dict([(label, "") for label in self.labels])
context_value = default_values | value_deserialized_dict
|
{"golden_diff": "diff --git a/amy/communityroles/fields.py b/amy/communityroles/fields.py\n--- a/amy/communityroles/fields.py\n+++ b/amy/communityroles/fields.py\n@@ -1,9 +1,12 @@\n import json\n+import logging\n \n from django import forms\n from django.http import QueryDict\n from django.utils.datastructures import MultiValueDict\n \n+logger = logging.getLogger(\"amy\")\n+\n \n class CustomKeysWidget(forms.TextInput):\n template_name = \"widgets/custom_keys_widget.html\"\n@@ -17,7 +20,14 @@\n \n def get_context(self, name: str, value: str, attrs: dict):\n value_deserialized = json.loads(value)\n- value_deserialized_dict = dict(value_deserialized)\n+ try:\n+ value_deserialized_dict = dict(value_deserialized)\n+ except (ValueError, TypeError) as e:\n+ logger.debug(\n+ f\"Failed to load custom key values {value_deserialized} to dict: {e}.\"\n+ )\n+ logger.debug(\"Proceeding without custom key values...\")\n+ value_deserialized_dict = {}\n default_values = dict([(label, \"\") for label in self.labels])\n context_value = default_values | value_deserialized_dict\n", "issue": "Server Error when editing community roles\nReported by @ErinBecker at `/workshops/person/<id>/edit/#communityroles` and `/communityroles/role/<another id>/edit/` in production.\r\n\r\nReproduced on test AMY:\r\n```\r\nTypeError at /communityroles/role/25/edit/\r\nthe JSON object must be str, bytes or bytearray, not list\r\nError during template rendering\r\nIn template /webapps/test-amy.carpentries.org/repo/amy/templates/bootstrap4/field.html, error at line 39\r\n\r\n{% crispy_field field %}\r\n```\r\n\r\nAlso encountered locally when creating a community role and immediately editing it:\r\n```\r\nException Value: | 'NoneType' object is not iterable\r\n/home/eli/amy/amy/communityroles/fields.py, line 20, in get_context\r\n\r\n```\n", "before_files": [{"content": "import json\n\nfrom django import forms\nfrom django.http import QueryDict\nfrom django.utils.datastructures import MultiValueDict\n\n\nclass CustomKeysWidget(forms.TextInput):\n template_name = \"widgets/custom_keys_widget.html\"\n\n def __init__(self, *args, **kwargs):\n self.subwidget_form = kwargs.pop(\"subwidget_form\", forms.TextInput)\n super().__init__(*args, **kwargs)\n\n def apply_labels(self, labels: list[str]) -> None:\n self.labels = labels[:]\n\n def get_context(self, name: str, value: str, attrs: dict):\n value_deserialized = json.loads(value)\n value_deserialized_dict = dict(value_deserialized)\n default_values = dict([(label, \"\") for label in self.labels])\n context_value = default_values | value_deserialized_dict\n\n context = super().get_context(name, context_value, attrs)\n final_attrs = context[\"widget\"][\"attrs\"]\n id_ = context[\"widget\"][\"attrs\"].get(\"id\")\n\n subwidgets = []\n for index, (label, value) in enumerate(context_value.items()):\n widget_attrs = final_attrs.copy()\n if id_:\n widget_attrs[\"id\"] = \"{id_}_{index}\".format(id_=id_, index=index)\n\n widget = self.subwidget_form()\n subwidget_context = widget.get_context(name, value, widget_attrs)[\"widget\"]\n subwidgets.append(subwidget_context | {\"label\": label})\n\n context[\"widget\"][\"subwidgets\"] = subwidgets\n return context\n\n def value_from_datadict(\n self, data: QueryDict, files: MultiValueDict, name: str\n ) -> list[tuple[str, str]]:\n \"\"\"Prepare structure stored in database. 
The structure is tied to\n `CommunityRole.custom_keys` expected format:\n [\n (label1, value1),\n (label2, value2),\n ...\n ]\n \"\"\"\n try:\n values = data.getlist(name)\n except AttributeError:\n values = data.get(name, [])\n return list(zip(self.labels, values))\n\n def value_omitted_from_data(\n self, data: QueryDict, files: MultiValueDict, name: str\n ) -> bool:\n return False\n\n\nclass CustomKeysJSONField(forms.JSONField):\n def __init__(self, **kwargs):\n kwargs.setdefault(\"widget\", CustomKeysWidget)\n super().__init__(**kwargs)\n\n def apply_labels(self, labels: list[str]) -> None:\n self.labels = labels[:]\n self.widget.apply_labels(self.labels)\n", "path": "amy/communityroles/fields.py"}], "after_files": [{"content": "import json\nimport logging\n\nfrom django import forms\nfrom django.http import QueryDict\nfrom django.utils.datastructures import MultiValueDict\n\nlogger = logging.getLogger(\"amy\")\n\n\nclass CustomKeysWidget(forms.TextInput):\n template_name = \"widgets/custom_keys_widget.html\"\n\n def __init__(self, *args, **kwargs):\n self.subwidget_form = kwargs.pop(\"subwidget_form\", forms.TextInput)\n super().__init__(*args, **kwargs)\n\n def apply_labels(self, labels: list[str]) -> None:\n self.labels = labels[:]\n\n def get_context(self, name: str, value: str, attrs: dict):\n value_deserialized = json.loads(value)\n try:\n value_deserialized_dict = dict(value_deserialized)\n except (ValueError, TypeError) as e:\n logger.debug(\n f\"Failed to load custom key values {value_deserialized} to dict: {e}.\"\n )\n logger.debug(\"Proceeding without custom key values...\")\n value_deserialized_dict = {}\n default_values = dict([(label, \"\") for label in self.labels])\n context_value = default_values | value_deserialized_dict\n\n context = super().get_context(name, context_value, attrs)\n final_attrs = context[\"widget\"][\"attrs\"]\n id_ = context[\"widget\"][\"attrs\"].get(\"id\")\n\n subwidgets = []\n for index, (label, value) in enumerate(context_value.items()):\n widget_attrs = final_attrs.copy()\n if id_:\n widget_attrs[\"id\"] = \"{id_}_{index}\".format(id_=id_, index=index)\n\n widget = self.subwidget_form()\n subwidget_context = widget.get_context(name, value, widget_attrs)[\"widget\"]\n subwidgets.append(subwidget_context | {\"label\": label})\n\n context[\"widget\"][\"subwidgets\"] = subwidgets\n return context\n\n def value_from_datadict(\n self, data: QueryDict, files: MultiValueDict, name: str\n ) -> list[tuple[str, str]]:\n \"\"\"Prepare structure stored in database. The structure is tied to\n `CommunityRole.custom_keys` expected format:\n [\n (label1, value1),\n (label2, value2),\n ...\n ]\n \"\"\"\n try:\n values = data.getlist(name)\n except AttributeError:\n values = data.get(name, [])\n return list(zip(self.labels, values))\n\n def value_omitted_from_data(\n self, data: QueryDict, files: MultiValueDict, name: str\n ) -> bool:\n return False\n\n\nclass CustomKeysJSONField(forms.JSONField):\n def __init__(self, **kwargs):\n kwargs.setdefault(\"widget\", CustomKeysWidget)\n super().__init__(**kwargs)\n\n def apply_labels(self, labels: list[str]) -> None:\n self.labels = labels[:]\n self.widget.apply_labels(self.labels)\n", "path": "amy/communityroles/fields.py"}]}
| 1,115 | 266 |
gh_patches_debug_21536 | rasdani/github-patches | git_diff | pymeasure__pymeasure-81 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Consider installing the tests
Since you decided to embed the test suite inside the package instead of leaving it outside in a common `tests` folder (both of which are totally acceptable and covered by the `pytest` guidelines), why not install the tests with the package? It would give the following benefits:
- Simplify the `setuptools` metadata, by replacing the explicit listing of the modules with a single call to `find_package`,
- Easy testing of the installed package via `python -m pytest --pyargs pymeasure`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #
2 # This file is part of the PyMeasure package.
3 #
4 # Copyright (c) 2013-2017 PyMeasure Developers
5 #
6 # Permission is hereby granted, free of charge, to any person obtaining a copy
7 # of this software and associated documentation files (the "Software"), to deal
8 # in the Software without restriction, including without limitation the rights
9 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
10 # copies of the Software, and to permit persons to whom the Software is
11 # furnished to do so, subject to the following conditions:
12 #
13 # The above copyright notice and this permission notice shall be included in
14 # all copies or substantial portions of the Software.
15 #
16 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
17 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
18 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
19 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
20 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
21 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
22 # THE SOFTWARE.
23 #
24
25
26 from setuptools import setup
27
28 setup(
29 name='PyMeasure',
30 version='0.4.3',
31 author='PyMeasure Developers',
32 packages=[
33 'pymeasure', 'pymeasure.instruments',
34 'pymeasure.adapters', 'pymeasure.display',
35 'pymeasure.experiment',
36 'pymeasure.instruments.agilent',
37 'pymeasure.instruments.ami',
38 'pymeasure.instruments.anritsu',
39 'pymeasure.instruments.danfysik',
40 'pymeasure.instruments.fwbell',
41 'pymeasure.instruments.hp',
42 'pymeasure.instruments.keithley',
43 'pymeasure.instruments.lakeshore',
44 'pymeasure.instruments.newport',
45 'pymeasure.instruments.parker',
46 'pymeasure.instruments.signalrecovery',
47 'pymeasure.instruments.srs',
48 'pymeasure.instruments.thorlabs',
49 'pymeasure.instruments.tektronix',
50 'pymeasure.instruments.yokogawa',
51 ],
52 scripts=[],
53 url='https://github.com/ralph-group/pymeasure',
54 download_url = 'https://github.com/ralph-group/pymeasure/tarball/v0.4.3',
55 license='MIT License',
56 description='Scientific measurement library for instruments, experiments, and live-plotting',
57 long_description=open('README.rst').read() + "\n\n" + open('CHANGES.txt').read(),
58 install_requires=[
59 "Numpy >= 1.6.1",
60 "pandas >= 0.14",
61 "pyvisa >= 1.8",
62 "pyserial >= 2.7",
63 "pyqtgraph >= 0.9.10"
64 ],
65 classifiers=[
66 "Development Status :: 4 - Beta",
67 "Intended Audience :: Science/Research",
68 "License :: OSI Approved :: MIT License",
69 "Operating System :: MacOS",
70 "Operating System :: Microsoft :: Windows",
71 "Operating System :: POSIX",
72 "Operating System :: Unix",
73 "Programming Language :: Python :: 3 :: Only",
74 "Topic :: Scientific/Engineering",
75 ],
76 keywords="measure instrument experiment control automate graph plot"
77 )
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -23,32 +23,13 @@
#
-from setuptools import setup
+from setuptools import setup, find_packages
setup(
name='PyMeasure',
version='0.4.3',
author='PyMeasure Developers',
- packages=[
- 'pymeasure', 'pymeasure.instruments',
- 'pymeasure.adapters', 'pymeasure.display',
- 'pymeasure.experiment',
- 'pymeasure.instruments.agilent',
- 'pymeasure.instruments.ami',
- 'pymeasure.instruments.anritsu',
- 'pymeasure.instruments.danfysik',
- 'pymeasure.instruments.fwbell',
- 'pymeasure.instruments.hp',
- 'pymeasure.instruments.keithley',
- 'pymeasure.instruments.lakeshore',
- 'pymeasure.instruments.newport',
- 'pymeasure.instruments.parker',
- 'pymeasure.instruments.signalrecovery',
- 'pymeasure.instruments.srs',
- 'pymeasure.instruments.thorlabs',
- 'pymeasure.instruments.tektronix',
- 'pymeasure.instruments.yokogawa',
- ],
+ packages=find_packages(),
scripts=[],
url='https://github.com/ralph-group/pymeasure',
download_url = 'https://github.com/ralph-group/pymeasure/tarball/v0.4.3',
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -23,32 +23,13 @@\n #\n \n \n-from setuptools import setup\n+from setuptools import setup, find_packages\n \n setup(\n name='PyMeasure',\n version='0.4.3',\n author='PyMeasure Developers',\n- packages=[\n- 'pymeasure', 'pymeasure.instruments',\n- 'pymeasure.adapters', 'pymeasure.display',\n- 'pymeasure.experiment',\n- 'pymeasure.instruments.agilent',\n- 'pymeasure.instruments.ami',\n- 'pymeasure.instruments.anritsu',\n- 'pymeasure.instruments.danfysik',\n- 'pymeasure.instruments.fwbell',\n- 'pymeasure.instruments.hp',\n- 'pymeasure.instruments.keithley',\n- 'pymeasure.instruments.lakeshore',\n- 'pymeasure.instruments.newport',\n- 'pymeasure.instruments.parker',\n- 'pymeasure.instruments.signalrecovery',\n- 'pymeasure.instruments.srs',\n- 'pymeasure.instruments.thorlabs',\n- 'pymeasure.instruments.tektronix',\n- 'pymeasure.instruments.yokogawa',\n- ],\n+ packages=find_packages(),\n scripts=[],\n url='https://github.com/ralph-group/pymeasure',\n download_url = 'https://github.com/ralph-group/pymeasure/tarball/v0.4.3',\n", "issue": "Consider installing the tests\nSince you decided to embed the test suite inside the package instead of leaving it outside in a common `tests` folder (both of which are totally acceptable and covered by the `pytest` guidelines), why not install the tests with the package? It would give the following benefits:\r\n\r\n- Simplify the `setuptools` metadata, by replacing the explicit listing of the modules with a single call to `find_package`,\r\n- Easy testing of the installed package via `python -m pytest --pyargs pymeasure`.\n", "before_files": [{"content": "#\n# This file is part of the PyMeasure package.\n#\n# Copyright (c) 2013-2017 PyMeasure Developers\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n#\n\n\nfrom setuptools import setup\n\nsetup(\n name='PyMeasure',\n version='0.4.3',\n author='PyMeasure Developers',\n packages=[\n 'pymeasure', 'pymeasure.instruments',\n 'pymeasure.adapters', 'pymeasure.display',\n 'pymeasure.experiment',\n 'pymeasure.instruments.agilent',\n 'pymeasure.instruments.ami',\n 'pymeasure.instruments.anritsu',\n 'pymeasure.instruments.danfysik',\n 'pymeasure.instruments.fwbell',\n 'pymeasure.instruments.hp',\n 'pymeasure.instruments.keithley',\n 'pymeasure.instruments.lakeshore',\n 'pymeasure.instruments.newport',\n 'pymeasure.instruments.parker',\n 'pymeasure.instruments.signalrecovery',\n 'pymeasure.instruments.srs',\n 'pymeasure.instruments.thorlabs',\n 'pymeasure.instruments.tektronix',\n 'pymeasure.instruments.yokogawa',\n ],\n scripts=[],\n url='https://github.com/ralph-group/pymeasure',\n download_url = 'https://github.com/ralph-group/pymeasure/tarball/v0.4.3',\n license='MIT License',\n description='Scientific measurement library for instruments, experiments, and live-plotting',\n long_description=open('README.rst').read() + \"\\n\\n\" + open('CHANGES.txt').read(),\n install_requires=[\n \"Numpy >= 1.6.1\",\n \"pandas >= 0.14\",\n \"pyvisa >= 1.8\",\n \"pyserial >= 2.7\",\n \"pyqtgraph >= 0.9.10\"\n ],\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: MacOS\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: POSIX\",\n \"Operating System :: Unix\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Topic :: Scientific/Engineering\",\n ],\n keywords=\"measure instrument experiment control automate graph plot\"\n)\n", "path": "setup.py"}], "after_files": [{"content": "#\n# This file is part of the PyMeasure package.\n#\n# Copyright (c) 2013-2017 PyMeasure Developers\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n#\n\n\nfrom setuptools import setup, find_packages\n\nsetup(\n name='PyMeasure',\n version='0.4.3',\n author='PyMeasure Developers',\n packages=find_packages(),\n scripts=[],\n url='https://github.com/ralph-group/pymeasure',\n download_url = 'https://github.com/ralph-group/pymeasure/tarball/v0.4.3',\n license='MIT License',\n description='Scientific measurement library for instruments, experiments, and live-plotting',\n long_description=open('README.rst').read() + \"\\n\\n\" + open('CHANGES.txt').read(),\n install_requires=[\n \"Numpy >= 1.6.1\",\n \"pandas >= 0.14\",\n \"pyvisa >= 1.8\",\n \"pyserial >= 2.7\",\n \"pyqtgraph >= 0.9.10\"\n ],\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: MacOS\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: POSIX\",\n \"Operating System :: Unix\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Topic :: Scientific/Engineering\",\n ],\n keywords=\"measure instrument experiment control automate graph plot\"\n)\n", "path": "setup.py"}]}
| 1,252 | 347 |
gh_patches_debug_22658 | rasdani/github-patches | git_diff | DDMAL__CantusDB-942 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
List of Genres: alphabetization should be case-insensitive
The Lists of Feasts (this includes the list of all feasts as well as only the temporale and only the sanctorale) are ordered by alphabetic order of feast name, except that capital letters are currently taking precedence; a capital "Z" comes before a lowercase "a" and that muddles the system somewhat. Behold:
<img width="1119" alt="NewCantus temporale list" src="https://github.com/DDMAL/CantusDB/assets/83373378/2c4f7d2e-df3c-4a57-8c52-7acbfe96b099">
You can see that the alphabet thing is solid until Ad Processionem, after which the third letter becomes lowercase (Ad aquam benedictio) and the alphabet starts all over again. Maybe this is preferable, but OldCantus doesn't take capitalization into account, so I thought I'd mention it.
This is the same list on OldCantus, for reference:
<img width="1158" alt="OldCantus temporale list" src="https://github.com/DDMAL/CantusDB/assets/83373378/598ba519-a37a-478b-884f-a1ff6b07ee34">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django/cantusdb_project/main_app/views/feast.py`
Content:
```
1 from django.views.generic import DetailView, ListView
2 from main_app.models import Feast, Source
3 from extra_views import SearchableListMixin
4
5 # this categorization is not finalized yet
6 # the feastcode on old cantus requires cleaning
7 # for now we just leave this categorization as it is
8 TEMP_PREFIX = [
9 "01",
10 "02",
11 "03",
12 "04",
13 "05",
14 "06",
15 "07",
16 "08",
17 "09",
18 "10",
19 "11",
20 "16",
21 "17",
22 ]
23 SANC_PREFIX = ["12", "13", "14", "15"]
24
25
26 class FeastDetailView(DetailView):
27 model = Feast
28 context_object_name = "feast"
29 template_name = "feast_detail.html"
30
31 def get_context_data(self, **kwargs):
32 context = super().get_context_data(**kwargs)
33
34 display_unpublished = self.request.user.is_authenticated
35
36 chants_in_feast = self.get_object().chant_set
37 if not display_unpublished:
38 chants_in_feast = chants_in_feast.filter(source__published=True)
39
40 cantus_ids = list(
41 chants_in_feast.values_list("cantus_id", flat=True).distinct()
42 )
43
44 counts = []
45 incipits = []
46 genres = []
47
48 for cantus_id in cantus_ids:
49 chants = chants_in_feast.filter(cantus_id=cantus_id)
50 count = chants.count()
51 incipit = chants.first().incipit
52 genre = chants.first().genre
53
54 counts.append(count)
55 incipits.append(incipit)
56 genres.append(genre)
57
58 zipped = zip(cantus_ids, incipits, genres, counts)
59 zipped = sorted(zipped, key=lambda t: t[3], reverse=True)
60 context["frequent_chants_zip"] = zipped
61
62 source_ids = list(
63 chants_in_feast.values_list("source__id", flat=True).distinct()
64 )
65 sources = Source.objects.filter(id__in=source_ids)
66 counts = [chants_in_feast.filter(source=source).count() for source in sources]
67
68 zipped = zip(sources, counts)
69 zipped = sorted(zipped, key=lambda t: t[1], reverse=True)
70 context["sources_zip"] = zipped
71
72 return context
73
74
75 class FeastListView(SearchableListMixin, ListView):
76 """Searchable List view for Feast model
77
78 Accessed by /feasts/
79
80 When passed a ``?q=<query>`` argument in the GET request, it will filter feasts
81 based on the fields defined in ``search_fields`` with the ``icontains`` lookup
82
83 The feasts can also be filtered by `date` (temp/sanc) and `month` and ordered by `sort_by`,
84 which are also passed as GET parameters
85 """
86
87 model = Feast
88 search_fields = ["name", "description", "feast_code"]
89 paginate_by = 100
90 context_object_name = "feasts"
91 template_name = "feast_list.html"
92
93 def get_ordering(self):
94 ordering = self.request.GET.get("sort_by")
95 # feasts can be ordered by name or feast_code,
96 # default to ordering by name if given anything else
97 if ordering not in ["name", "feast_code"]:
98 ordering = "name"
99 return ordering
100
101 def get_queryset(self):
102 queryset = super().get_queryset()
103 date = self.request.GET.get("date")
104 month = self.request.GET.get("month")
105 # temp vs sanc categorization is not finalized yet,
106 # the feastcode needs to be updated by the cantus people
107 if date == "temp":
108 queryset = queryset.filter(prefix__in=TEMP_PREFIX)
109 elif date == "sanc":
110 queryset = queryset.filter(prefix__in=SANC_PREFIX)
111
112 if month and (int(month)) in range(1, 13):
113 month = int(month)
114 queryset = queryset.filter(month=month)
115
116 return queryset
117
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/django/cantusdb_project/main_app/views/feast.py b/django/cantusdb_project/main_app/views/feast.py
--- a/django/cantusdb_project/main_app/views/feast.py
+++ b/django/cantusdb_project/main_app/views/feast.py
@@ -1,4 +1,5 @@
from django.views.generic import DetailView, ListView
+from django.db.models.functions import Lower
from main_app.models import Feast, Source
from extra_views import SearchableListMixin
@@ -90,13 +91,14 @@
context_object_name = "feasts"
template_name = "feast_list.html"
- def get_ordering(self):
+ def get_ordering(self) -> tuple:
ordering = self.request.GET.get("sort_by")
# feasts can be ordered by name or feast_code,
# default to ordering by name if given anything else
if ordering not in ["name", "feast_code"]:
ordering = "name"
- return ordering
+ # case insensitive ordering by name
+ return (Lower(ordering),) if ordering == "name" else (ordering,)
def get_queryset(self):
queryset = super().get_queryset()
|
{"golden_diff": "diff --git a/django/cantusdb_project/main_app/views/feast.py b/django/cantusdb_project/main_app/views/feast.py\n--- a/django/cantusdb_project/main_app/views/feast.py\n+++ b/django/cantusdb_project/main_app/views/feast.py\n@@ -1,4 +1,5 @@\n from django.views.generic import DetailView, ListView\n+from django.db.models.functions import Lower\n from main_app.models import Feast, Source\n from extra_views import SearchableListMixin\n \n@@ -90,13 +91,14 @@\n context_object_name = \"feasts\"\n template_name = \"feast_list.html\"\n \n- def get_ordering(self):\n+ def get_ordering(self) -> tuple:\n ordering = self.request.GET.get(\"sort_by\")\n # feasts can be ordered by name or feast_code,\n # default to ordering by name if given anything else\n if ordering not in [\"name\", \"feast_code\"]:\n ordering = \"name\"\n- return ordering\n+ # case insensitive ordering by name\n+ return (Lower(ordering),) if ordering == \"name\" else (ordering,)\n \n def get_queryset(self):\n queryset = super().get_queryset()\n", "issue": "List of Genres: alphabetization should be case-insensitive\nThe Lists of Feasts (this includes the list of all feasts as well as only the temporale and only the sanctorale) are ordered by alphabetic order of feast name, except that capital letters are currently taking precedence; a capital \"Z\" comes before a lowercase \"a\" and that muddles the system somewhat. Behold:\r\n\r\n<img width=\"1119\" alt=\"NewCantus temporale list\" src=\"https://github.com/DDMAL/CantusDB/assets/83373378/2c4f7d2e-df3c-4a57-8c52-7acbfe96b099\">\r\n\r\nYou can see that the alphabet thing is solid until Ad Processionem, after which the third letter becomes lowercase (Ad aquam benedictio) and the alphabet starts all over again. Maybe this is preferable, but OldCantus doesn't take capitalization into account, so I thought I'd mention it.\r\n\r\nThis is the same list on OldCantus, for reference: \r\n\r\n<img width=\"1158\" alt=\"OldCantus temporale list\" src=\"https://github.com/DDMAL/CantusDB/assets/83373378/598ba519-a37a-478b-884f-a1ff6b07ee34\">\r\n\n", "before_files": [{"content": "from django.views.generic import DetailView, ListView\nfrom main_app.models import Feast, Source\nfrom extra_views import SearchableListMixin\n\n# this categorization is not finalized yet\n# the feastcode on old cantus requires cleaning\n# for now we just leave this categorization as it is\nTEMP_PREFIX = [\n \"01\",\n \"02\",\n \"03\",\n \"04\",\n \"05\",\n \"06\",\n \"07\",\n \"08\",\n \"09\",\n \"10\",\n \"11\",\n \"16\",\n \"17\",\n]\nSANC_PREFIX = [\"12\", \"13\", \"14\", \"15\"]\n\n\nclass FeastDetailView(DetailView):\n model = Feast\n context_object_name = \"feast\"\n template_name = \"feast_detail.html\"\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n\n display_unpublished = self.request.user.is_authenticated\n\n chants_in_feast = self.get_object().chant_set\n if not display_unpublished:\n chants_in_feast = chants_in_feast.filter(source__published=True)\n\n cantus_ids = list(\n chants_in_feast.values_list(\"cantus_id\", flat=True).distinct()\n )\n\n counts = []\n incipits = []\n genres = []\n\n for cantus_id in cantus_ids:\n chants = chants_in_feast.filter(cantus_id=cantus_id)\n count = chants.count()\n incipit = chants.first().incipit\n genre = chants.first().genre\n\n counts.append(count)\n incipits.append(incipit)\n genres.append(genre)\n\n zipped = zip(cantus_ids, incipits, genres, counts)\n zipped = sorted(zipped, key=lambda t: t[3], reverse=True)\n 
context[\"frequent_chants_zip\"] = zipped\n\n source_ids = list(\n chants_in_feast.values_list(\"source__id\", flat=True).distinct()\n )\n sources = Source.objects.filter(id__in=source_ids)\n counts = [chants_in_feast.filter(source=source).count() for source in sources]\n\n zipped = zip(sources, counts)\n zipped = sorted(zipped, key=lambda t: t[1], reverse=True)\n context[\"sources_zip\"] = zipped\n\n return context\n\n\nclass FeastListView(SearchableListMixin, ListView):\n \"\"\"Searchable List view for Feast model\n\n Accessed by /feasts/\n\n When passed a ``?q=<query>`` argument in the GET request, it will filter feasts\n based on the fields defined in ``search_fields`` with the ``icontains`` lookup\n\n The feasts can also be filtered by `date` (temp/sanc) and `month` and ordered by `sort_by`,\n which are also passed as GET parameters\n \"\"\"\n\n model = Feast\n search_fields = [\"name\", \"description\", \"feast_code\"]\n paginate_by = 100\n context_object_name = \"feasts\"\n template_name = \"feast_list.html\"\n\n def get_ordering(self):\n ordering = self.request.GET.get(\"sort_by\")\n # feasts can be ordered by name or feast_code,\n # default to ordering by name if given anything else\n if ordering not in [\"name\", \"feast_code\"]:\n ordering = \"name\"\n return ordering\n\n def get_queryset(self):\n queryset = super().get_queryset()\n date = self.request.GET.get(\"date\")\n month = self.request.GET.get(\"month\")\n # temp vs sanc categorization is not finalized yet,\n # the feastcode needs to be updated by the cantus people\n if date == \"temp\":\n queryset = queryset.filter(prefix__in=TEMP_PREFIX)\n elif date == \"sanc\":\n queryset = queryset.filter(prefix__in=SANC_PREFIX)\n\n if month and (int(month)) in range(1, 13):\n month = int(month)\n queryset = queryset.filter(month=month)\n\n return queryset\n", "path": "django/cantusdb_project/main_app/views/feast.py"}], "after_files": [{"content": "from django.views.generic import DetailView, ListView\nfrom django.db.models.functions import Lower\nfrom main_app.models import Feast, Source\nfrom extra_views import SearchableListMixin\n\n# this categorization is not finalized yet\n# the feastcode on old cantus requires cleaning\n# for now we just leave this categorization as it is\nTEMP_PREFIX = [\n \"01\",\n \"02\",\n \"03\",\n \"04\",\n \"05\",\n \"06\",\n \"07\",\n \"08\",\n \"09\",\n \"10\",\n \"11\",\n \"16\",\n \"17\",\n]\nSANC_PREFIX = [\"12\", \"13\", \"14\", \"15\"]\n\n\nclass FeastDetailView(DetailView):\n model = Feast\n context_object_name = \"feast\"\n template_name = \"feast_detail.html\"\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n\n display_unpublished = self.request.user.is_authenticated\n\n chants_in_feast = self.get_object().chant_set\n if not display_unpublished:\n chants_in_feast = chants_in_feast.filter(source__published=True)\n\n cantus_ids = list(\n chants_in_feast.values_list(\"cantus_id\", flat=True).distinct()\n )\n\n counts = []\n incipits = []\n genres = []\n\n for cantus_id in cantus_ids:\n chants = chants_in_feast.filter(cantus_id=cantus_id)\n count = chants.count()\n incipit = chants.first().incipit\n genre = chants.first().genre\n\n counts.append(count)\n incipits.append(incipit)\n genres.append(genre)\n\n zipped = zip(cantus_ids, incipits, genres, counts)\n zipped = sorted(zipped, key=lambda t: t[3], reverse=True)\n context[\"frequent_chants_zip\"] = zipped\n\n source_ids = list(\n chants_in_feast.values_list(\"source__id\", flat=True).distinct()\n )\n 
sources = Source.objects.filter(id__in=source_ids)\n counts = [chants_in_feast.filter(source=source).count() for source in sources]\n\n zipped = zip(sources, counts)\n zipped = sorted(zipped, key=lambda t: t[1], reverse=True)\n context[\"sources_zip\"] = zipped\n\n return context\n\n\nclass FeastListView(SearchableListMixin, ListView):\n \"\"\"Searchable List view for Feast model\n\n Accessed by /feasts/\n\n When passed a ``?q=<query>`` argument in the GET request, it will filter feasts\n based on the fields defined in ``search_fields`` with the ``icontains`` lookup\n\n The feasts can also be filtered by `date` (temp/sanc) and `month` and ordered by `sort_by`,\n which are also passed as GET parameters\n \"\"\"\n\n model = Feast\n search_fields = [\"name\", \"description\", \"feast_code\"]\n paginate_by = 100\n context_object_name = \"feasts\"\n template_name = \"feast_list.html\"\n\n def get_ordering(self) -> tuple:\n ordering = self.request.GET.get(\"sort_by\")\n # feasts can be ordered by name or feast_code,\n # default to ordering by name if given anything else\n if ordering not in [\"name\", \"feast_code\"]:\n ordering = \"name\"\n # case insensitive ordering by name\n return (Lower(ordering),) if ordering == \"name\" else (ordering,)\n\n def get_queryset(self):\n queryset = super().get_queryset()\n date = self.request.GET.get(\"date\")\n month = self.request.GET.get(\"month\")\n # temp vs sanc categorization is not finalized yet,\n # the feastcode needs to be updated by the cantus people\n if date == \"temp\":\n queryset = queryset.filter(prefix__in=TEMP_PREFIX)\n elif date == \"sanc\":\n queryset = queryset.filter(prefix__in=SANC_PREFIX)\n\n if month and (int(month)) in range(1, 13):\n month = int(month)\n queryset = queryset.filter(month=month)\n\n return queryset\n", "path": "django/cantusdb_project/main_app/views/feast.py"}]}
| 1,718 | 271 |
gh_patches_debug_8438 | rasdani/github-patches | git_diff | cocotb__cocotb-1800 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AvalonSTPkts hexdump needs a minor fix now that the transaction is of type bytes rather than string
When DEBUG logging level is enabled, the `hexdump` call in `_monitor_recv` function of AvalonSTPkts calls hexdump after converting the `pkt` in to `str` which is no longer needed
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cocotb/monitors/avalon.py`
Content:
```
1 # Copyright (c) 2013 Potential Ventures Ltd
2 # Copyright (c) 2013 SolarFlare Communications Inc
3 # All rights reserved.
4 #
5 # Redistribution and use in source and binary forms, with or without
6 # modification, are permitted provided that the following conditions are met:
7 # * Redistributions of source code must retain the above copyright
8 # notice, this list of conditions and the following disclaimer.
9 # * Redistributions in binary form must reproduce the above copyright
10 # notice, this list of conditions and the following disclaimer in the
11 # documentation and/or other materials provided with the distribution.
12 # * Neither the name of Potential Ventures Ltd,
13 # SolarFlare Communications Inc nor the
14 # names of its contributors may be used to endorse or promote products
15 # derived from this software without specific prior written permission.
16 #
17 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
18 # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
19 # WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
20 # DISCLAIMED. IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY
21 # DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
22 # (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
23 # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
24 # ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
25 # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
26 # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
27
28 """Monitors for Intel Avalon interfaces.
29
30 See https://www.intel.com/content/dam/www/programmable/us/en/pdfs/literature/manual/mnl_avalon_spec_1_3.pdf
31
32 NB Currently we only support a very small subset of functionality.
33 """
34
35 import warnings
36
37 from cocotb.utils import hexdump
38 from cocotb.decorators import coroutine
39 from cocotb.monitors import BusMonitor
40 from cocotb.triggers import RisingEdge, ReadOnly
41 from cocotb.binary import BinaryValue
42
43
44 class AvalonProtocolError(Exception):
45 pass
46
47
48 class AvalonST(BusMonitor):
49 """Avalon-ST bus.
50
51 Non-packetized so each valid word is a separate transaction.
52 """
53
54 _signals = ["valid", "data"]
55 _optional_signals = ["ready"]
56
57 _default_config = {"firstSymbolInHighOrderBits": True}
58
59 def __init__(self, entity, name, clock, *, config={}, **kwargs):
60 BusMonitor.__init__(self, entity, name, clock, **kwargs)
61
62 self.config = self._default_config.copy()
63
64 for configoption, value in config.items():
65 self.config[configoption] = value
66 self.log.debug("Setting config option %s to %s", configoption, str(value))
67
68 @coroutine
69 def _monitor_recv(self):
70 """Watch the pins and reconstruct transactions."""
71
72 # Avoid spurious object creation by recycling
73 clkedge = RisingEdge(self.clock)
74 rdonly = ReadOnly()
75
76 def valid():
77 if hasattr(self.bus, "ready"):
78 return self.bus.valid.value and self.bus.ready.value
79 return self.bus.valid.value
80
81 # NB could yield on valid here more efficiently?
82 while True:
83 yield clkedge
84 yield rdonly
85 if valid():
86 vec = self.bus.data.value
87 vec.big_endian = self.config["firstSymbolInHighOrderBits"]
88 self._recv(vec.buff)
89
90
91 class AvalonSTPkts(BusMonitor):
92 """Packetized Avalon-ST bus.
93
94 Args:
95 entity, name, clock: see :class:`BusMonitor`
96 config (dict): bus configuration options
97 report_channel (bool): report channel with data, default is False
98 Setting to True on bus without channel signal will give an error
99 """
100
101 _signals = ["valid", "data", "startofpacket", "endofpacket"]
102 _optional_signals = ["error", "channel", "ready", "empty"]
103
104 _default_config = {
105 "dataBitsPerSymbol" : 8,
106 "firstSymbolInHighOrderBits" : True,
107 "maxChannel" : 0,
108 "readyLatency" : 0,
109 "invalidTimeout" : 0,
110 }
111
112 def __init__(self, entity, name, clock, *, config={}, report_channel=False, **kwargs):
113 BusMonitor.__init__(self, entity, name , clock, **kwargs)
114
115 self.config = self._default_config.copy()
116 self.report_channel = report_channel
117
118 # Set default config maxChannel to max value on channel bus
119 if hasattr(self.bus, 'channel'):
120 self.config['maxChannel'] = (2 ** len(self.bus.channel)) -1
121 else:
122 if report_channel:
123 raise ValueError("Channel reporting asked on bus without channel signal")
124
125 for configoption, value in config.items():
126 self.config[configoption] = value
127 self.log.debug("Setting config option %s to %s",
128 configoption, str(value))
129
130 num_data_symbols = (len(self.bus.data) /
131 self.config["dataBitsPerSymbol"])
132 if (num_data_symbols > 1 and not hasattr(self.bus, 'empty')):
133 raise AttributeError(
134 "%s has %i data symbols, but contains no object named empty" %
135 (self.name, num_data_symbols))
136
137 self.config["useEmpty"] = (num_data_symbols > 1)
138
139 if hasattr(self.bus, 'channel'):
140 if len(self.bus.channel) > 128:
141 raise AttributeError("AvalonST interface specification defines channel width as 1-128. "
142 "%d channel width is %d" %
143 (self.name, len(self.bus.channel)))
144 maxChannel = (2 ** len(self.bus.channel)) -1
145 if self.config['maxChannel'] > maxChannel:
146 raise AttributeError("%s has maxChannel=%d, but can only support a maximum channel of "
147 "(2**channel_width)-1=%d, channel_width=%d" %
148 (self.name, self.config['maxChannel'], maxChannel, len(self.bus.channel)))
149
150 @coroutine
151 def _monitor_recv(self):
152 """Watch the pins and reconstruct transactions."""
153
154 # Avoid spurious object creation by recycling
155 clkedge = RisingEdge(self.clock)
156 rdonly = ReadOnly()
157 pkt = b""
158 in_pkt = False
159 invalid_cyclecount = 0
160 channel = None
161
162 def valid():
163 if hasattr(self.bus, 'ready'):
164 return self.bus.valid.value and self.bus.ready.value
165 return self.bus.valid.value
166
167 while True:
168 yield clkedge
169 yield rdonly
170
171 if self.in_reset:
172 continue
173
174 if valid():
175 invalid_cyclecount = 0
176
177 if self.bus.startofpacket.value:
178 if pkt:
179 raise AvalonProtocolError("Duplicate start-of-packet received on %s" %
180 str(self.bus.startofpacket))
181 pkt = b""
182 in_pkt = True
183
184 if not in_pkt:
185 raise AvalonProtocolError("Data transfer outside of "
186 "packet")
187
188 # Handle empty and X's in empty / data
189 vec = BinaryValue()
190 if not self.bus.endofpacket.value:
191 vec = self.bus.data.value
192 else:
193 value = self.bus.data.value.get_binstr()
194 if self.config["useEmpty"] and self.bus.empty.value.integer:
195 empty = self.bus.empty.value.integer * self.config["dataBitsPerSymbol"]
196 if self.config["firstSymbolInHighOrderBits"]:
197 value = value[:-empty]
198 else:
199 value = value[empty:]
200 vec.assign(value)
201 if not vec.is_resolvable:
202 raise AvalonProtocolError("After empty masking value is still bad? "
203 "Had empty {:d}, got value {:s}".format(empty,
204 self.bus.data.value.get_binstr()))
205
206 vec.big_endian = self.config['firstSymbolInHighOrderBits']
207 pkt += vec.buff
208
209 if hasattr(self.bus, 'channel'):
210 if channel is None:
211 channel = self.bus.channel.value.integer
212 if channel > self.config["maxChannel"]:
213 raise AvalonProtocolError("Channel value (%d) is greater than maxChannel (%d)" %
214 (channel, self.config["maxChannel"]))
215 elif self.bus.channel.value.integer != channel:
216 raise AvalonProtocolError("Channel value changed during packet")
217
218 if self.bus.endofpacket.value:
219 self.log.info("Received a packet of %d bytes", len(pkt))
220 self.log.debug(hexdump(str((pkt))))
221 self.channel = channel
222 if self.report_channel:
223 self._recv({"data": pkt, "channel": channel})
224 else:
225 self._recv(pkt)
226 pkt = b""
227 in_pkt = False
228 channel = None
229 else:
230 if in_pkt:
231 invalid_cyclecount += 1
232 if self.config["invalidTimeout"]:
233 if invalid_cyclecount >= self.config["invalidTimeout"]:
234 raise AvalonProtocolError(
235 "In-Packet Timeout. Didn't receive any valid data for %d cycles!" %
236 invalid_cyclecount)
237
238
239 class AvalonSTPktsWithChannel(AvalonSTPkts):
240 """Packetized AvalonST bus using channel.
241
242 This class is deprecated. Use AvalonSTPkts(..., report_channel=True, ...)
243 """
244
245 def __init__(self, entity, name, clock, **kwargs):
246 warnings.warn(
247 "Use of AvalonSTPktsWithChannel is deprecated\n"
248 "\tUse AvalonSTPkts(..., report_channel=True, ...)",
249 DeprecationWarning, stacklevel=2
250 )
251 AvalonSTPkts.__init__(self, entity, name, clock, report_channel=True, **kwargs)
252
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/cocotb/monitors/avalon.py b/cocotb/monitors/avalon.py
--- a/cocotb/monitors/avalon.py
+++ b/cocotb/monitors/avalon.py
@@ -217,7 +217,7 @@
if self.bus.endofpacket.value:
self.log.info("Received a packet of %d bytes", len(pkt))
- self.log.debug(hexdump(str((pkt))))
+ self.log.debug(hexdump(pkt))
self.channel = channel
if self.report_channel:
self._recv({"data": pkt, "channel": channel})
|
{"golden_diff": "diff --git a/cocotb/monitors/avalon.py b/cocotb/monitors/avalon.py\n--- a/cocotb/monitors/avalon.py\n+++ b/cocotb/monitors/avalon.py\n@@ -217,7 +217,7 @@\n \n if self.bus.endofpacket.value:\n self.log.info(\"Received a packet of %d bytes\", len(pkt))\n- self.log.debug(hexdump(str((pkt))))\n+ self.log.debug(hexdump(pkt))\n self.channel = channel\n if self.report_channel:\n self._recv({\"data\": pkt, \"channel\": channel})\n", "issue": "AvalonSTPkts hexdump needs a minor fix now that the transaction is of type bytes rather than string\nWhen DEBUG logging level is enabled, the `hexdump` call in `_monitor_recv` function of AvalonSTPkts calls hexdump after converting the `pkt` in to `str` which is no longer needed\n", "before_files": [{"content": "# Copyright (c) 2013 Potential Ventures Ltd\n# Copyright (c) 2013 SolarFlare Communications Inc\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# * Redistributions in binary form must reproduce the above copyright\n# notice, this list of conditions and the following disclaimer in the\n# documentation and/or other materials provided with the distribution.\n# * Neither the name of Potential Ventures Ltd,\n# SolarFlare Communications Inc nor the\n# names of its contributors may be used to endorse or promote products\n# derived from this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\n# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\n# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY\n# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND\n# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n\"\"\"Monitors for Intel Avalon interfaces.\n\nSee https://www.intel.com/content/dam/www/programmable/us/en/pdfs/literature/manual/mnl_avalon_spec_1_3.pdf\n\nNB Currently we only support a very small subset of functionality.\n\"\"\"\n\nimport warnings\n\nfrom cocotb.utils import hexdump\nfrom cocotb.decorators import coroutine\nfrom cocotb.monitors import BusMonitor\nfrom cocotb.triggers import RisingEdge, ReadOnly\nfrom cocotb.binary import BinaryValue\n\n\nclass AvalonProtocolError(Exception):\n pass\n\n\nclass AvalonST(BusMonitor):\n \"\"\"Avalon-ST bus.\n\n Non-packetized so each valid word is a separate transaction.\n \"\"\"\n\n _signals = [\"valid\", \"data\"]\n _optional_signals = [\"ready\"]\n\n _default_config = {\"firstSymbolInHighOrderBits\": True}\n\n def __init__(self, entity, name, clock, *, config={}, **kwargs):\n BusMonitor.__init__(self, entity, name, clock, **kwargs)\n\n self.config = self._default_config.copy()\n\n for configoption, value in config.items():\n self.config[configoption] = value\n self.log.debug(\"Setting config option %s to %s\", configoption, str(value))\n\n @coroutine\n def _monitor_recv(self):\n \"\"\"Watch the pins and reconstruct transactions.\"\"\"\n\n # Avoid spurious object creation by recycling\n clkedge = RisingEdge(self.clock)\n rdonly = ReadOnly()\n\n def valid():\n if hasattr(self.bus, \"ready\"):\n return self.bus.valid.value and self.bus.ready.value\n return self.bus.valid.value\n\n # NB could yield on valid here more efficiently?\n while True:\n yield clkedge\n yield rdonly\n if valid():\n vec = self.bus.data.value\n vec.big_endian = self.config[\"firstSymbolInHighOrderBits\"]\n self._recv(vec.buff)\n\n\nclass AvalonSTPkts(BusMonitor):\n \"\"\"Packetized Avalon-ST bus.\n\n Args:\n entity, name, clock: see :class:`BusMonitor`\n config (dict): bus configuration options\n report_channel (bool): report channel with data, default is False\n Setting to True on bus without channel signal will give an error\n \"\"\"\n\n _signals = [\"valid\", \"data\", \"startofpacket\", \"endofpacket\"]\n _optional_signals = [\"error\", \"channel\", \"ready\", \"empty\"]\n\n _default_config = {\n \"dataBitsPerSymbol\" : 8,\n \"firstSymbolInHighOrderBits\" : True,\n \"maxChannel\" : 0,\n \"readyLatency\" : 0,\n \"invalidTimeout\" : 0,\n }\n\n def __init__(self, entity, name, clock, *, config={}, report_channel=False, **kwargs):\n BusMonitor.__init__(self, entity, name , clock, **kwargs)\n\n self.config = self._default_config.copy()\n self.report_channel = report_channel\n\n # Set default config maxChannel to max value on channel bus\n if hasattr(self.bus, 'channel'):\n self.config['maxChannel'] = (2 ** len(self.bus.channel)) -1\n else:\n if report_channel:\n raise ValueError(\"Channel reporting asked on bus without channel signal\")\n\n for configoption, value in config.items():\n self.config[configoption] = value\n self.log.debug(\"Setting config option %s to %s\",\n configoption, str(value))\n\n num_data_symbols = (len(self.bus.data) /\n 
self.config[\"dataBitsPerSymbol\"])\n if (num_data_symbols > 1 and not hasattr(self.bus, 'empty')):\n raise AttributeError(\n \"%s has %i data symbols, but contains no object named empty\" %\n (self.name, num_data_symbols))\n\n self.config[\"useEmpty\"] = (num_data_symbols > 1)\n\n if hasattr(self.bus, 'channel'):\n if len(self.bus.channel) > 128:\n raise AttributeError(\"AvalonST interface specification defines channel width as 1-128. \"\n \"%d channel width is %d\" %\n (self.name, len(self.bus.channel)))\n maxChannel = (2 ** len(self.bus.channel)) -1\n if self.config['maxChannel'] > maxChannel:\n raise AttributeError(\"%s has maxChannel=%d, but can only support a maximum channel of \"\n \"(2**channel_width)-1=%d, channel_width=%d\" %\n (self.name, self.config['maxChannel'], maxChannel, len(self.bus.channel)))\n\n @coroutine\n def _monitor_recv(self):\n \"\"\"Watch the pins and reconstruct transactions.\"\"\"\n\n # Avoid spurious object creation by recycling\n clkedge = RisingEdge(self.clock)\n rdonly = ReadOnly()\n pkt = b\"\"\n in_pkt = False\n invalid_cyclecount = 0\n channel = None\n\n def valid():\n if hasattr(self.bus, 'ready'):\n return self.bus.valid.value and self.bus.ready.value\n return self.bus.valid.value\n\n while True:\n yield clkedge\n yield rdonly\n\n if self.in_reset:\n continue\n\n if valid():\n invalid_cyclecount = 0\n\n if self.bus.startofpacket.value:\n if pkt:\n raise AvalonProtocolError(\"Duplicate start-of-packet received on %s\" %\n str(self.bus.startofpacket))\n pkt = b\"\"\n in_pkt = True\n\n if not in_pkt:\n raise AvalonProtocolError(\"Data transfer outside of \"\n \"packet\")\n\n # Handle empty and X's in empty / data\n vec = BinaryValue()\n if not self.bus.endofpacket.value:\n vec = self.bus.data.value\n else:\n value = self.bus.data.value.get_binstr()\n if self.config[\"useEmpty\"] and self.bus.empty.value.integer:\n empty = self.bus.empty.value.integer * self.config[\"dataBitsPerSymbol\"]\n if self.config[\"firstSymbolInHighOrderBits\"]:\n value = value[:-empty]\n else:\n value = value[empty:]\n vec.assign(value)\n if not vec.is_resolvable:\n raise AvalonProtocolError(\"After empty masking value is still bad? \"\n \"Had empty {:d}, got value {:s}\".format(empty,\n self.bus.data.value.get_binstr()))\n\n vec.big_endian = self.config['firstSymbolInHighOrderBits']\n pkt += vec.buff\n\n if hasattr(self.bus, 'channel'):\n if channel is None:\n channel = self.bus.channel.value.integer\n if channel > self.config[\"maxChannel\"]:\n raise AvalonProtocolError(\"Channel value (%d) is greater than maxChannel (%d)\" %\n (channel, self.config[\"maxChannel\"]))\n elif self.bus.channel.value.integer != channel:\n raise AvalonProtocolError(\"Channel value changed during packet\")\n\n if self.bus.endofpacket.value:\n self.log.info(\"Received a packet of %d bytes\", len(pkt))\n self.log.debug(hexdump(str((pkt))))\n self.channel = channel\n if self.report_channel:\n self._recv({\"data\": pkt, \"channel\": channel})\n else:\n self._recv(pkt)\n pkt = b\"\"\n in_pkt = False\n channel = None\n else:\n if in_pkt:\n invalid_cyclecount += 1\n if self.config[\"invalidTimeout\"]:\n if invalid_cyclecount >= self.config[\"invalidTimeout\"]:\n raise AvalonProtocolError(\n \"In-Packet Timeout. Didn't receive any valid data for %d cycles!\" %\n invalid_cyclecount)\n\n\nclass AvalonSTPktsWithChannel(AvalonSTPkts):\n \"\"\"Packetized AvalonST bus using channel.\n\n This class is deprecated. 
Use AvalonSTPkts(..., report_channel=True, ...)\n \"\"\"\n\n def __init__(self, entity, name, clock, **kwargs):\n warnings.warn(\n \"Use of AvalonSTPktsWithChannel is deprecated\\n\"\n \"\\tUse AvalonSTPkts(..., report_channel=True, ...)\",\n DeprecationWarning, stacklevel=2\n )\n AvalonSTPkts.__init__(self, entity, name, clock, report_channel=True, **kwargs)\n", "path": "cocotb/monitors/avalon.py"}], "after_files": [{"content": "# Copyright (c) 2013 Potential Ventures Ltd\n# Copyright (c) 2013 SolarFlare Communications Inc\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# * Redistributions in binary form must reproduce the above copyright\n# notice, this list of conditions and the following disclaimer in the\n# documentation and/or other materials provided with the distribution.\n# * Neither the name of Potential Ventures Ltd,\n# SolarFlare Communications Inc nor the\n# names of its contributors may be used to endorse or promote products\n# derived from this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\n# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\n# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY\n# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND\n# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n\"\"\"Monitors for Intel Avalon interfaces.\n\nSee https://www.intel.com/content/dam/www/programmable/us/en/pdfs/literature/manual/mnl_avalon_spec_1_3.pdf\n\nNB Currently we only support a very small subset of functionality.\n\"\"\"\n\nimport warnings\n\nfrom cocotb.utils import hexdump\nfrom cocotb.decorators import coroutine\nfrom cocotb.monitors import BusMonitor\nfrom cocotb.triggers import RisingEdge, ReadOnly\nfrom cocotb.binary import BinaryValue\n\n\nclass AvalonProtocolError(Exception):\n pass\n\n\nclass AvalonST(BusMonitor):\n \"\"\"Avalon-ST bus.\n\n Non-packetized so each valid word is a separate transaction.\n \"\"\"\n\n _signals = [\"valid\", \"data\"]\n _optional_signals = [\"ready\"]\n\n _default_config = {\"firstSymbolInHighOrderBits\": True}\n\n def __init__(self, entity, name, clock, *, config={}, **kwargs):\n BusMonitor.__init__(self, entity, name, clock, **kwargs)\n\n self.config = self._default_config.copy()\n\n for configoption, value in config.items():\n self.config[configoption] = value\n self.log.debug(\"Setting config option %s to %s\", configoption, str(value))\n\n @coroutine\n def _monitor_recv(self):\n \"\"\"Watch the pins and reconstruct transactions.\"\"\"\n\n # Avoid spurious object creation by recycling\n clkedge = RisingEdge(self.clock)\n rdonly = ReadOnly()\n\n def valid():\n if hasattr(self.bus, \"ready\"):\n return self.bus.valid.value and self.bus.ready.value\n return self.bus.valid.value\n\n # NB could yield on valid here 
more efficiently?\n while True:\n yield clkedge\n yield rdonly\n if valid():\n vec = self.bus.data.value\n vec.big_endian = self.config[\"firstSymbolInHighOrderBits\"]\n self._recv(vec.buff)\n\n\nclass AvalonSTPkts(BusMonitor):\n \"\"\"Packetized Avalon-ST bus.\n\n Args:\n entity, name, clock: see :class:`BusMonitor`\n config (dict): bus configuration options\n report_channel (bool): report channel with data, default is False\n Setting to True on bus without channel signal will give an error\n \"\"\"\n\n _signals = [\"valid\", \"data\", \"startofpacket\", \"endofpacket\"]\n _optional_signals = [\"error\", \"channel\", \"ready\", \"empty\"]\n\n _default_config = {\n \"dataBitsPerSymbol\" : 8,\n \"firstSymbolInHighOrderBits\" : True,\n \"maxChannel\" : 0,\n \"readyLatency\" : 0,\n \"invalidTimeout\" : 0,\n }\n\n def __init__(self, entity, name, clock, *, config={}, report_channel=False, **kwargs):\n BusMonitor.__init__(self, entity, name , clock, **kwargs)\n\n self.config = self._default_config.copy()\n self.report_channel = report_channel\n\n # Set default config maxChannel to max value on channel bus\n if hasattr(self.bus, 'channel'):\n self.config['maxChannel'] = (2 ** len(self.bus.channel)) -1\n else:\n if report_channel:\n raise ValueError(\"Channel reporting asked on bus without channel signal\")\n\n for configoption, value in config.items():\n self.config[configoption] = value\n self.log.debug(\"Setting config option %s to %s\",\n configoption, str(value))\n\n num_data_symbols = (len(self.bus.data) /\n self.config[\"dataBitsPerSymbol\"])\n if (num_data_symbols > 1 and not hasattr(self.bus, 'empty')):\n raise AttributeError(\n \"%s has %i data symbols, but contains no object named empty\" %\n (self.name, num_data_symbols))\n\n self.config[\"useEmpty\"] = (num_data_symbols > 1)\n\n if hasattr(self.bus, 'channel'):\n if len(self.bus.channel) > 128:\n raise AttributeError(\"AvalonST interface specification defines channel width as 1-128. 
\"\n \"%d channel width is %d\" %\n (self.name, len(self.bus.channel)))\n maxChannel = (2 ** len(self.bus.channel)) -1\n if self.config['maxChannel'] > maxChannel:\n raise AttributeError(\"%s has maxChannel=%d, but can only support a maximum channel of \"\n \"(2**channel_width)-1=%d, channel_width=%d\" %\n (self.name, self.config['maxChannel'], maxChannel, len(self.bus.channel)))\n\n @coroutine\n def _monitor_recv(self):\n \"\"\"Watch the pins and reconstruct transactions.\"\"\"\n\n # Avoid spurious object creation by recycling\n clkedge = RisingEdge(self.clock)\n rdonly = ReadOnly()\n pkt = b\"\"\n in_pkt = False\n invalid_cyclecount = 0\n channel = None\n\n def valid():\n if hasattr(self.bus, 'ready'):\n return self.bus.valid.value and self.bus.ready.value\n return self.bus.valid.value\n\n while True:\n yield clkedge\n yield rdonly\n\n if self.in_reset:\n continue\n\n if valid():\n invalid_cyclecount = 0\n\n if self.bus.startofpacket.value:\n if pkt:\n raise AvalonProtocolError(\"Duplicate start-of-packet received on %s\" %\n str(self.bus.startofpacket))\n pkt = b\"\"\n in_pkt = True\n\n if not in_pkt:\n raise AvalonProtocolError(\"Data transfer outside of \"\n \"packet\")\n\n # Handle empty and X's in empty / data\n vec = BinaryValue()\n if not self.bus.endofpacket.value:\n vec = self.bus.data.value\n else:\n value = self.bus.data.value.get_binstr()\n if self.config[\"useEmpty\"] and self.bus.empty.value.integer:\n empty = self.bus.empty.value.integer * self.config[\"dataBitsPerSymbol\"]\n if self.config[\"firstSymbolInHighOrderBits\"]:\n value = value[:-empty]\n else:\n value = value[empty:]\n vec.assign(value)\n if not vec.is_resolvable:\n raise AvalonProtocolError(\"After empty masking value is still bad? \"\n \"Had empty {:d}, got value {:s}\".format(empty,\n self.bus.data.value.get_binstr()))\n\n vec.big_endian = self.config['firstSymbolInHighOrderBits']\n pkt += vec.buff\n\n if hasattr(self.bus, 'channel'):\n if channel is None:\n channel = self.bus.channel.value.integer\n if channel > self.config[\"maxChannel\"]:\n raise AvalonProtocolError(\"Channel value (%d) is greater than maxChannel (%d)\" %\n (channel, self.config[\"maxChannel\"]))\n elif self.bus.channel.value.integer != channel:\n raise AvalonProtocolError(\"Channel value changed during packet\")\n\n if self.bus.endofpacket.value:\n self.log.info(\"Received a packet of %d bytes\", len(pkt))\n self.log.debug(hexdump(pkt))\n self.channel = channel\n if self.report_channel:\n self._recv({\"data\": pkt, \"channel\": channel})\n else:\n self._recv(pkt)\n pkt = b\"\"\n in_pkt = False\n channel = None\n else:\n if in_pkt:\n invalid_cyclecount += 1\n if self.config[\"invalidTimeout\"]:\n if invalid_cyclecount >= self.config[\"invalidTimeout\"]:\n raise AvalonProtocolError(\n \"In-Packet Timeout. Didn't receive any valid data for %d cycles!\" %\n invalid_cyclecount)\n\n\nclass AvalonSTPktsWithChannel(AvalonSTPkts):\n \"\"\"Packetized AvalonST bus using channel.\n\n This class is deprecated. Use AvalonSTPkts(..., report_channel=True, ...)\n \"\"\"\n\n def __init__(self, entity, name, clock, **kwargs):\n warnings.warn(\n \"Use of AvalonSTPktsWithChannel is deprecated\\n\"\n \"\\tUse AvalonSTPkts(..., report_channel=True, ...)\",\n DeprecationWarning, stacklevel=2\n )\n AvalonSTPkts.__init__(self, entity, name, clock, report_channel=True, **kwargs)\n", "path": "cocotb/monitors/avalon.py"}]}
| 3,119 | 142 |
gh_patches_debug_5544 | rasdani/github-patches | git_diff | mozilla__pontoon-2777 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Drop contributor profile URL with email addresses
We should drop support for accessing contributor profiles by using user email addresses in the URLs. This feature, while not used internally in the app, leaks otherwise hidden user email addressed on sites like on web.archive.org index.
This is particularly problematic for users without any contributions.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pontoon/contributors/urls.py`
Content:
```
1 from django.urls import path, register_converter
2 from django.urls.converters import StringConverter
3 from django.views.generic import RedirectView
4
5 from . import views
6
7
8 class EmailConverter(StringConverter):
9 regex = r"[\w.%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,4}"
10
11
12 class UsernameConverter(StringConverter):
13 regex = r"[\w.@+-]+"
14
15
16 register_converter(EmailConverter, "email")
17 register_converter(UsernameConverter, "username")
18
19 urlpatterns = [
20 # Legacy: Redirect to /contributors/email
21 path(
22 "contributor/<email:email>/",
23 RedirectView.as_view(url="/contributors/%(email)s/", permanent=True),
24 ),
25 # List contributors
26 path(
27 "contributors/",
28 views.ContributorsView.as_view(),
29 name="pontoon.contributors",
30 ),
31 # Contributor profile by email
32 path(
33 "contributors/<email:email>/",
34 views.contributor_email,
35 name="pontoon.contributors.contributor.email",
36 ),
37 # Contributor profile by username
38 path(
39 "contributors/<username:username>/",
40 views.contributor_username,
41 name="pontoon.contributors.contributor.username",
42 ),
43 # Verify email address
44 path(
45 "verify-email-address/<str:token>/",
46 views.verify_email_address,
47 name="pontoon.contributors.verify.email",
48 ),
49 # Current user's profile
50 path("profile/", views.profile, name="pontoon.contributors.profile"),
51 # Current user's settings
52 path("settings/", views.settings, name="pontoon.contributors.settings"),
53 # Current user's notifications
54 path(
55 "notifications/",
56 views.notifications,
57 name="pontoon.contributors.notifications",
58 ),
59 # Mark current user's notifications as read
60 path(
61 "notifications/mark-all-as-read/",
62 views.mark_all_notifications_as_read,
63 name="pontoon.contributors.notifications.mark.all.as.read",
64 ),
65 # API: Toggle user profile attribute
66 path(
67 "api/v1/user/<username:username>/",
68 views.toggle_user_profile_attribute,
69 name="pontoon.contributors.toggle_user_profile_attribute",
70 ),
71 # AJAX: Save custom homepage
72 path(
73 "save-custom-homepage/",
74 views.save_custom_homepage,
75 name="pontoon.contributors.save_custom_homepage",
76 ),
77 # AJAX: Save preferred source locale
78 path(
79 "save-preferred-source-locale/",
80 views.save_preferred_source_locale,
81 name="pontoon.contributors.save_preferred_source_locale",
82 ),
83 # AJAX: Dismiss Add-On Promotion
84 path(
85 "dismiss-addon-promotion/",
86 views.dismiss_addon_promotion,
87 name="pontoon.contributors.dismiss_addon_promotion",
88 ),
89 # AJAX: Update contribution graph
90 path(
91 "update-contribution-graph/",
92 views.update_contribution_graph,
93 name="pontoon.contributors.update_contribution_graph",
94 ),
95 # AJAX: Update contribution timeline
96 path(
97 "update-contribution-timeline/",
98 views.update_contribution_timeline,
99 name="pontoon.contributors.update_contribution_timeline",
100 ),
101 ]
102
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pontoon/contributors/urls.py b/pontoon/contributors/urls.py
--- a/pontoon/contributors/urls.py
+++ b/pontoon/contributors/urls.py
@@ -28,12 +28,6 @@
views.ContributorsView.as_view(),
name="pontoon.contributors",
),
- # Contributor profile by email
- path(
- "contributors/<email:email>/",
- views.contributor_email,
- name="pontoon.contributors.contributor.email",
- ),
# Contributor profile by username
path(
"contributors/<username:username>/",
|
{"golden_diff": "diff --git a/pontoon/contributors/urls.py b/pontoon/contributors/urls.py\n--- a/pontoon/contributors/urls.py\n+++ b/pontoon/contributors/urls.py\n@@ -28,12 +28,6 @@\n views.ContributorsView.as_view(),\n name=\"pontoon.contributors\",\n ),\n- # Contributor profile by email\n- path(\n- \"contributors/<email:email>/\",\n- views.contributor_email,\n- name=\"pontoon.contributors.contributor.email\",\n- ),\n # Contributor profile by username\n path(\n \"contributors/<username:username>/\",\n", "issue": "Drop contributor profile URL with email addresses\nWe should drop support for accessing contributor profiles by using user email addresses in the URLs. This feature, while not used internally in the app, leaks otherwise hidden user email addressed on sites like on web.archive.org index.\r\n\r\nThis is particularly problematic for users without any contributions.\r\n\n", "before_files": [{"content": "from django.urls import path, register_converter\nfrom django.urls.converters import StringConverter\nfrom django.views.generic import RedirectView\n\nfrom . import views\n\n\nclass EmailConverter(StringConverter):\n regex = r\"[\\w.%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,4}\"\n\n\nclass UsernameConverter(StringConverter):\n regex = r\"[\\w.@+-]+\"\n\n\nregister_converter(EmailConverter, \"email\")\nregister_converter(UsernameConverter, \"username\")\n\nurlpatterns = [\n # Legacy: Redirect to /contributors/email\n path(\n \"contributor/<email:email>/\",\n RedirectView.as_view(url=\"/contributors/%(email)s/\", permanent=True),\n ),\n # List contributors\n path(\n \"contributors/\",\n views.ContributorsView.as_view(),\n name=\"pontoon.contributors\",\n ),\n # Contributor profile by email\n path(\n \"contributors/<email:email>/\",\n views.contributor_email,\n name=\"pontoon.contributors.contributor.email\",\n ),\n # Contributor profile by username\n path(\n \"contributors/<username:username>/\",\n views.contributor_username,\n name=\"pontoon.contributors.contributor.username\",\n ),\n # Verify email address\n path(\n \"verify-email-address/<str:token>/\",\n views.verify_email_address,\n name=\"pontoon.contributors.verify.email\",\n ),\n # Current user's profile\n path(\"profile/\", views.profile, name=\"pontoon.contributors.profile\"),\n # Current user's settings\n path(\"settings/\", views.settings, name=\"pontoon.contributors.settings\"),\n # Current user's notifications\n path(\n \"notifications/\",\n views.notifications,\n name=\"pontoon.contributors.notifications\",\n ),\n # Mark current user's notifications as read\n path(\n \"notifications/mark-all-as-read/\",\n views.mark_all_notifications_as_read,\n name=\"pontoon.contributors.notifications.mark.all.as.read\",\n ),\n # API: Toggle user profile attribute\n path(\n \"api/v1/user/<username:username>/\",\n views.toggle_user_profile_attribute,\n name=\"pontoon.contributors.toggle_user_profile_attribute\",\n ),\n # AJAX: Save custom homepage\n path(\n \"save-custom-homepage/\",\n views.save_custom_homepage,\n name=\"pontoon.contributors.save_custom_homepage\",\n ),\n # AJAX: Save preferred source locale\n path(\n \"save-preferred-source-locale/\",\n views.save_preferred_source_locale,\n name=\"pontoon.contributors.save_preferred_source_locale\",\n ),\n # AJAX: Dismiss Add-On Promotion\n path(\n \"dismiss-addon-promotion/\",\n views.dismiss_addon_promotion,\n name=\"pontoon.contributors.dismiss_addon_promotion\",\n ),\n # AJAX: Update contribution graph\n path(\n \"update-contribution-graph/\",\n views.update_contribution_graph,\n 
name=\"pontoon.contributors.update_contribution_graph\",\n ),\n # AJAX: Update contribution timeline\n path(\n \"update-contribution-timeline/\",\n views.update_contribution_timeline,\n name=\"pontoon.contributors.update_contribution_timeline\",\n ),\n]\n", "path": "pontoon/contributors/urls.py"}], "after_files": [{"content": "from django.urls import path, register_converter\nfrom django.urls.converters import StringConverter\nfrom django.views.generic import RedirectView\n\nfrom . import views\n\n\nclass EmailConverter(StringConverter):\n regex = r\"[\\w.%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,4}\"\n\n\nclass UsernameConverter(StringConverter):\n regex = r\"[\\w.@+-]+\"\n\n\nregister_converter(EmailConverter, \"email\")\nregister_converter(UsernameConverter, \"username\")\n\nurlpatterns = [\n # Legacy: Redirect to /contributors/email\n path(\n \"contributor/<email:email>/\",\n RedirectView.as_view(url=\"/contributors/%(email)s/\", permanent=True),\n ),\n # List contributors\n path(\n \"contributors/\",\n views.ContributorsView.as_view(),\n name=\"pontoon.contributors\",\n ),\n # Contributor profile by username\n path(\n \"contributors/<username:username>/\",\n views.contributor_username,\n name=\"pontoon.contributors.contributor.username\",\n ),\n # Verify email address\n path(\n \"verify-email-address/<str:token>/\",\n views.verify_email_address,\n name=\"pontoon.contributors.verify.email\",\n ),\n # Current user's profile\n path(\"profile/\", views.profile, name=\"pontoon.contributors.profile\"),\n # Current user's settings\n path(\"settings/\", views.settings, name=\"pontoon.contributors.settings\"),\n # Current user's notifications\n path(\n \"notifications/\",\n views.notifications,\n name=\"pontoon.contributors.notifications\",\n ),\n # Mark current user's notifications as read\n path(\n \"notifications/mark-all-as-read/\",\n views.mark_all_notifications_as_read,\n name=\"pontoon.contributors.notifications.mark.all.as.read\",\n ),\n # API: Toggle user profile attribute\n path(\n \"api/v1/user/<username:username>/\",\n views.toggle_user_profile_attribute,\n name=\"pontoon.contributors.toggle_user_profile_attribute\",\n ),\n # AJAX: Save custom homepage\n path(\n \"save-custom-homepage/\",\n views.save_custom_homepage,\n name=\"pontoon.contributors.save_custom_homepage\",\n ),\n # AJAX: Save preferred source locale\n path(\n \"save-preferred-source-locale/\",\n views.save_preferred_source_locale,\n name=\"pontoon.contributors.save_preferred_source_locale\",\n ),\n # AJAX: Dismiss Add-On Promotion\n path(\n \"dismiss-addon-promotion/\",\n views.dismiss_addon_promotion,\n name=\"pontoon.contributors.dismiss_addon_promotion\",\n ),\n # AJAX: Update contribution graph\n path(\n \"update-contribution-graph/\",\n views.update_contribution_graph,\n name=\"pontoon.contributors.update_contribution_graph\",\n ),\n # AJAX: Update contribution timeline\n path(\n \"update-contribution-timeline/\",\n views.update_contribution_timeline,\n name=\"pontoon.contributors.update_contribution_timeline\",\n ),\n]\n", "path": "pontoon/contributors/urls.py"}]}
| 1,205 | 145 |
gh_patches_debug_40422
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-python-686
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Setting transaction name in pyramid
Hi is it possible to set transaction name when using PyramidIntegration?
I tried setting `scope.transaction = "mycustomname"` but it seems that integration itself does:
try:
if integration.transaction_style == "route_name":
event["transaction"] = request.matched_route.name
elif integration.transaction_style == "route_pattern":
event["transaction"] = request.matched_route.pattern
except Exception:
pass
And I suspect that overwrites anything I've set up earlier?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sentry_sdk/integrations/pyramid.py`
Content:
```
1 from __future__ import absolute_import
2
3 import os
4 import sys
5 import weakref
6
7 from pyramid.httpexceptions import HTTPException
8 from pyramid.request import Request
9
10 from sentry_sdk.hub import Hub, _should_send_default_pii
11 from sentry_sdk.utils import capture_internal_exceptions, event_from_exception
12 from sentry_sdk._compat import reraise, iteritems
13
14 from sentry_sdk.integrations import Integration
15 from sentry_sdk.integrations._wsgi_common import RequestExtractor
16 from sentry_sdk.integrations.wsgi import SentryWsgiMiddleware
17
18 from sentry_sdk._types import MYPY
19
20 if MYPY:
21 from pyramid.response import Response
22 from typing import Any
23 from sentry_sdk.integrations.wsgi import _ScopedResponse
24 from typing import Callable
25 from typing import Dict
26 from typing import Optional
27 from webob.cookies import RequestCookies # type: ignore
28 from webob.compat import cgi_FieldStorage # type: ignore
29
30 from sentry_sdk.utils import ExcInfo
31 from sentry_sdk._types import EventProcessor
32
33
34 if getattr(Request, "authenticated_userid", None):
35
36 def authenticated_userid(request):
37 # type: (Request) -> Optional[Any]
38 return request.authenticated_userid
39
40
41 else:
42 # bw-compat for pyramid < 1.5
43 from pyramid.security import authenticated_userid # type: ignore
44
45
46 TRANSACTION_STYLE_VALUES = ("route_name", "route_pattern")
47
48
49 class PyramidIntegration(Integration):
50 identifier = "pyramid"
51
52 transaction_style = None
53
54 def __init__(self, transaction_style="route_name"):
55 # type: (str) -> None
56 if transaction_style not in TRANSACTION_STYLE_VALUES:
57 raise ValueError(
58 "Invalid value for transaction_style: %s (must be in %s)"
59 % (transaction_style, TRANSACTION_STYLE_VALUES)
60 )
61 self.transaction_style = transaction_style
62
63 @staticmethod
64 def setup_once():
65 # type: () -> None
66 from pyramid.router import Router
67 from pyramid.request import Request
68
69 old_handle_request = Router.handle_request
70
71 def sentry_patched_handle_request(self, request, *args, **kwargs):
72 # type: (Any, Request, *Any, **Any) -> Response
73 hub = Hub.current
74 integration = hub.get_integration(PyramidIntegration)
75 if integration is not None:
76 with hub.configure_scope() as scope:
77 scope.add_event_processor(
78 _make_event_processor(weakref.ref(request), integration)
79 )
80
81 return old_handle_request(self, request, *args, **kwargs)
82
83 Router.handle_request = sentry_patched_handle_request
84
85 if hasattr(Request, "invoke_exception_view"):
86 old_invoke_exception_view = Request.invoke_exception_view
87
88 def sentry_patched_invoke_exception_view(self, *args, **kwargs):
89 # type: (Request, *Any, **Any) -> Any
90 rv = old_invoke_exception_view(self, *args, **kwargs)
91
92 if (
93 self.exc_info
94 and all(self.exc_info)
95 and rv.status_int == 500
96 and Hub.current.get_integration(PyramidIntegration) is not None
97 ):
98 _capture_exception(self.exc_info)
99
100 return rv
101
102 Request.invoke_exception_view = sentry_patched_invoke_exception_view
103
104 old_wsgi_call = Router.__call__
105
106 def sentry_patched_wsgi_call(self, environ, start_response):
107 # type: (Any, Dict[str, str], Callable[..., Any]) -> _ScopedResponse
108 hub = Hub.current
109 integration = hub.get_integration(PyramidIntegration)
110 if integration is None:
111 return old_wsgi_call(self, environ, start_response)
112
113 def sentry_patched_inner_wsgi_call(environ, start_response):
114 # type: (Dict[str, Any], Callable[..., Any]) -> Any
115 try:
116 return old_wsgi_call(self, environ, start_response)
117 except Exception:
118 einfo = sys.exc_info()
119 _capture_exception(einfo)
120 reraise(*einfo)
121
122 return SentryWsgiMiddleware(sentry_patched_inner_wsgi_call)(
123 environ, start_response
124 )
125
126 Router.__call__ = sentry_patched_wsgi_call
127
128
129 def _capture_exception(exc_info):
130 # type: (ExcInfo) -> None
131 if exc_info[0] is None or issubclass(exc_info[0], HTTPException):
132 return
133 hub = Hub.current
134 if hub.get_integration(PyramidIntegration) is None:
135 return
136
137 # If an integration is there, a client has to be there.
138 client = hub.client # type: Any
139
140 event, hint = event_from_exception(
141 exc_info,
142 client_options=client.options,
143 mechanism={"type": "pyramid", "handled": False},
144 )
145
146 hub.capture_event(event, hint=hint)
147
148
149 class PyramidRequestExtractor(RequestExtractor):
150 def url(self):
151 # type: () -> str
152 return self.request.path_url
153
154 def env(self):
155 # type: () -> Dict[str, str]
156 return self.request.environ
157
158 def cookies(self):
159 # type: () -> RequestCookies
160 return self.request.cookies
161
162 def raw_data(self):
163 # type: () -> str
164 return self.request.text
165
166 def form(self):
167 # type: () -> Dict[str, str]
168 return {
169 key: value
170 for key, value in iteritems(self.request.POST)
171 if not getattr(value, "filename", None)
172 }
173
174 def files(self):
175 # type: () -> Dict[str, cgi_FieldStorage]
176 return {
177 key: value
178 for key, value in iteritems(self.request.POST)
179 if getattr(value, "filename", None)
180 }
181
182 def size_of_file(self, postdata):
183 # type: (cgi_FieldStorage) -> int
184 file = postdata.file
185 try:
186 return os.fstat(file.fileno()).st_size
187 except Exception:
188 return 0
189
190
191 def _make_event_processor(weak_request, integration):
192 # type: (Callable[[], Request], PyramidIntegration) -> EventProcessor
193 def event_processor(event, hint):
194 # type: (Dict[str, Any], Dict[str, Any]) -> Dict[str, Any]
195 request = weak_request()
196 if request is None:
197 return event
198
199 try:
200 if integration.transaction_style == "route_name":
201 event["transaction"] = request.matched_route.name
202 elif integration.transaction_style == "route_pattern":
203 event["transaction"] = request.matched_route.pattern
204 except Exception:
205 pass
206
207 with capture_internal_exceptions():
208 PyramidRequestExtractor(request).extract_into_event(event)
209
210 if _should_send_default_pii():
211 with capture_internal_exceptions():
212 user_info = event.setdefault("user", {})
213 user_info.setdefault("id", authenticated_userid(request))
214
215 return event
216
217 return event_processor
218
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sentry_sdk/integrations/pyramid.py b/sentry_sdk/integrations/pyramid.py
--- a/sentry_sdk/integrations/pyramid.py
+++ b/sentry_sdk/integrations/pyramid.py
@@ -63,24 +63,33 @@
@staticmethod
def setup_once():
# type: () -> None
- from pyramid.router import Router
+ from pyramid import router
from pyramid.request import Request
- old_handle_request = Router.handle_request
+ old_call_view = router._call_view
- def sentry_patched_handle_request(self, request, *args, **kwargs):
+ def sentry_patched_call_view(registry, request, *args, **kwargs):
# type: (Any, Request, *Any, **Any) -> Response
hub = Hub.current
integration = hub.get_integration(PyramidIntegration)
+
if integration is not None:
with hub.configure_scope() as scope:
+ try:
+ if integration.transaction_style == "route_name":
+ scope.transaction = request.matched_route.name
+ elif integration.transaction_style == "route_pattern":
+ scope.transaction = request.matched_route.pattern
+ except Exception:
+ raise
+
scope.add_event_processor(
_make_event_processor(weakref.ref(request), integration)
)
- return old_handle_request(self, request, *args, **kwargs)
+ return old_call_view(registry, request, *args, **kwargs)
- Router.handle_request = sentry_patched_handle_request
+ router._call_view = sentry_patched_call_view
if hasattr(Request, "invoke_exception_view"):
old_invoke_exception_view = Request.invoke_exception_view
@@ -101,7 +110,7 @@
Request.invoke_exception_view = sentry_patched_invoke_exception_view
- old_wsgi_call = Router.__call__
+ old_wsgi_call = router.Router.__call__
def sentry_patched_wsgi_call(self, environ, start_response):
# type: (Any, Dict[str, str], Callable[..., Any]) -> _ScopedResponse
@@ -123,7 +132,7 @@
environ, start_response
)
- Router.__call__ = sentry_patched_wsgi_call
+ router.Router.__call__ = sentry_patched_wsgi_call
def _capture_exception(exc_info):
@@ -196,14 +205,6 @@
if request is None:
return event
- try:
- if integration.transaction_style == "route_name":
- event["transaction"] = request.matched_route.name
- elif integration.transaction_style == "route_pattern":
- event["transaction"] = request.matched_route.pattern
- except Exception:
- pass
-
with capture_internal_exceptions():
PyramidRequestExtractor(request).extract_into_event(event)
|
{"golden_diff": "diff --git a/sentry_sdk/integrations/pyramid.py b/sentry_sdk/integrations/pyramid.py\n--- a/sentry_sdk/integrations/pyramid.py\n+++ b/sentry_sdk/integrations/pyramid.py\n@@ -63,24 +63,33 @@\n @staticmethod\n def setup_once():\n # type: () -> None\n- from pyramid.router import Router\n+ from pyramid import router\n from pyramid.request import Request\n \n- old_handle_request = Router.handle_request\n+ old_call_view = router._call_view\n \n- def sentry_patched_handle_request(self, request, *args, **kwargs):\n+ def sentry_patched_call_view(registry, request, *args, **kwargs):\n # type: (Any, Request, *Any, **Any) -> Response\n hub = Hub.current\n integration = hub.get_integration(PyramidIntegration)\n+\n if integration is not None:\n with hub.configure_scope() as scope:\n+ try:\n+ if integration.transaction_style == \"route_name\":\n+ scope.transaction = request.matched_route.name\n+ elif integration.transaction_style == \"route_pattern\":\n+ scope.transaction = request.matched_route.pattern\n+ except Exception:\n+ raise\n+\n scope.add_event_processor(\n _make_event_processor(weakref.ref(request), integration)\n )\n \n- return old_handle_request(self, request, *args, **kwargs)\n+ return old_call_view(registry, request, *args, **kwargs)\n \n- Router.handle_request = sentry_patched_handle_request\n+ router._call_view = sentry_patched_call_view\n \n if hasattr(Request, \"invoke_exception_view\"):\n old_invoke_exception_view = Request.invoke_exception_view\n@@ -101,7 +110,7 @@\n \n Request.invoke_exception_view = sentry_patched_invoke_exception_view\n \n- old_wsgi_call = Router.__call__\n+ old_wsgi_call = router.Router.__call__\n \n def sentry_patched_wsgi_call(self, environ, start_response):\n # type: (Any, Dict[str, str], Callable[..., Any]) -> _ScopedResponse\n@@ -123,7 +132,7 @@\n environ, start_response\n )\n \n- Router.__call__ = sentry_patched_wsgi_call\n+ router.Router.__call__ = sentry_patched_wsgi_call\n \n \n def _capture_exception(exc_info):\n@@ -196,14 +205,6 @@\n if request is None:\n return event\n \n- try:\n- if integration.transaction_style == \"route_name\":\n- event[\"transaction\"] = request.matched_route.name\n- elif integration.transaction_style == \"route_pattern\":\n- event[\"transaction\"] = request.matched_route.pattern\n- except Exception:\n- pass\n-\n with capture_internal_exceptions():\n PyramidRequestExtractor(request).extract_into_event(event)\n", "issue": "Setting transaction name in pyramid\nHi is it possible to set transaction name when using PyramidIntegration?\r\nI tried setting `scope.transaction = \"mycustomname\"` but it seems that integration itself does:\r\n\r\n try:\r\n if integration.transaction_style == \"route_name\":\r\n event[\"transaction\"] = request.matched_route.name\r\n elif integration.transaction_style == \"route_pattern\":\r\n event[\"transaction\"] = request.matched_route.pattern\r\n except Exception:\r\n pass\r\n\r\nAnd I suspect that overwrites anything I've set up earlier?\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport os\nimport sys\nimport weakref\n\nfrom pyramid.httpexceptions import HTTPException\nfrom pyramid.request import Request\n\nfrom sentry_sdk.hub import Hub, _should_send_default_pii\nfrom sentry_sdk.utils import capture_internal_exceptions, event_from_exception\nfrom sentry_sdk._compat import reraise, iteritems\n\nfrom sentry_sdk.integrations import Integration\nfrom sentry_sdk.integrations._wsgi_common import RequestExtractor\nfrom sentry_sdk.integrations.wsgi import 
SentryWsgiMiddleware\n\nfrom sentry_sdk._types import MYPY\n\nif MYPY:\n from pyramid.response import Response\n from typing import Any\n from sentry_sdk.integrations.wsgi import _ScopedResponse\n from typing import Callable\n from typing import Dict\n from typing import Optional\n from webob.cookies import RequestCookies # type: ignore\n from webob.compat import cgi_FieldStorage # type: ignore\n\n from sentry_sdk.utils import ExcInfo\n from sentry_sdk._types import EventProcessor\n\n\nif getattr(Request, \"authenticated_userid\", None):\n\n def authenticated_userid(request):\n # type: (Request) -> Optional[Any]\n return request.authenticated_userid\n\n\nelse:\n # bw-compat for pyramid < 1.5\n from pyramid.security import authenticated_userid # type: ignore\n\n\nTRANSACTION_STYLE_VALUES = (\"route_name\", \"route_pattern\")\n\n\nclass PyramidIntegration(Integration):\n identifier = \"pyramid\"\n\n transaction_style = None\n\n def __init__(self, transaction_style=\"route_name\"):\n # type: (str) -> None\n if transaction_style not in TRANSACTION_STYLE_VALUES:\n raise ValueError(\n \"Invalid value for transaction_style: %s (must be in %s)\"\n % (transaction_style, TRANSACTION_STYLE_VALUES)\n )\n self.transaction_style = transaction_style\n\n @staticmethod\n def setup_once():\n # type: () -> None\n from pyramid.router import Router\n from pyramid.request import Request\n\n old_handle_request = Router.handle_request\n\n def sentry_patched_handle_request(self, request, *args, **kwargs):\n # type: (Any, Request, *Any, **Any) -> Response\n hub = Hub.current\n integration = hub.get_integration(PyramidIntegration)\n if integration is not None:\n with hub.configure_scope() as scope:\n scope.add_event_processor(\n _make_event_processor(weakref.ref(request), integration)\n )\n\n return old_handle_request(self, request, *args, **kwargs)\n\n Router.handle_request = sentry_patched_handle_request\n\n if hasattr(Request, \"invoke_exception_view\"):\n old_invoke_exception_view = Request.invoke_exception_view\n\n def sentry_patched_invoke_exception_view(self, *args, **kwargs):\n # type: (Request, *Any, **Any) -> Any\n rv = old_invoke_exception_view(self, *args, **kwargs)\n\n if (\n self.exc_info\n and all(self.exc_info)\n and rv.status_int == 500\n and Hub.current.get_integration(PyramidIntegration) is not None\n ):\n _capture_exception(self.exc_info)\n\n return rv\n\n Request.invoke_exception_view = sentry_patched_invoke_exception_view\n\n old_wsgi_call = Router.__call__\n\n def sentry_patched_wsgi_call(self, environ, start_response):\n # type: (Any, Dict[str, str], Callable[..., Any]) -> _ScopedResponse\n hub = Hub.current\n integration = hub.get_integration(PyramidIntegration)\n if integration is None:\n return old_wsgi_call(self, environ, start_response)\n\n def sentry_patched_inner_wsgi_call(environ, start_response):\n # type: (Dict[str, Any], Callable[..., Any]) -> Any\n try:\n return old_wsgi_call(self, environ, start_response)\n except Exception:\n einfo = sys.exc_info()\n _capture_exception(einfo)\n reraise(*einfo)\n\n return SentryWsgiMiddleware(sentry_patched_inner_wsgi_call)(\n environ, start_response\n )\n\n Router.__call__ = sentry_patched_wsgi_call\n\n\ndef _capture_exception(exc_info):\n # type: (ExcInfo) -> None\n if exc_info[0] is None or issubclass(exc_info[0], HTTPException):\n return\n hub = Hub.current\n if hub.get_integration(PyramidIntegration) is None:\n return\n\n # If an integration is there, a client has to be there.\n client = hub.client # type: Any\n\n event, hint = 
event_from_exception(\n exc_info,\n client_options=client.options,\n mechanism={\"type\": \"pyramid\", \"handled\": False},\n )\n\n hub.capture_event(event, hint=hint)\n\n\nclass PyramidRequestExtractor(RequestExtractor):\n def url(self):\n # type: () -> str\n return self.request.path_url\n\n def env(self):\n # type: () -> Dict[str, str]\n return self.request.environ\n\n def cookies(self):\n # type: () -> RequestCookies\n return self.request.cookies\n\n def raw_data(self):\n # type: () -> str\n return self.request.text\n\n def form(self):\n # type: () -> Dict[str, str]\n return {\n key: value\n for key, value in iteritems(self.request.POST)\n if not getattr(value, \"filename\", None)\n }\n\n def files(self):\n # type: () -> Dict[str, cgi_FieldStorage]\n return {\n key: value\n for key, value in iteritems(self.request.POST)\n if getattr(value, \"filename\", None)\n }\n\n def size_of_file(self, postdata):\n # type: (cgi_FieldStorage) -> int\n file = postdata.file\n try:\n return os.fstat(file.fileno()).st_size\n except Exception:\n return 0\n\n\ndef _make_event_processor(weak_request, integration):\n # type: (Callable[[], Request], PyramidIntegration) -> EventProcessor\n def event_processor(event, hint):\n # type: (Dict[str, Any], Dict[str, Any]) -> Dict[str, Any]\n request = weak_request()\n if request is None:\n return event\n\n try:\n if integration.transaction_style == \"route_name\":\n event[\"transaction\"] = request.matched_route.name\n elif integration.transaction_style == \"route_pattern\":\n event[\"transaction\"] = request.matched_route.pattern\n except Exception:\n pass\n\n with capture_internal_exceptions():\n PyramidRequestExtractor(request).extract_into_event(event)\n\n if _should_send_default_pii():\n with capture_internal_exceptions():\n user_info = event.setdefault(\"user\", {})\n user_info.setdefault(\"id\", authenticated_userid(request))\n\n return event\n\n return event_processor\n", "path": "sentry_sdk/integrations/pyramid.py"}], "after_files": [{"content": "from __future__ import absolute_import\n\nimport os\nimport sys\nimport weakref\n\nfrom pyramid.httpexceptions import HTTPException\nfrom pyramid.request import Request\n\nfrom sentry_sdk.hub import Hub, _should_send_default_pii\nfrom sentry_sdk.utils import capture_internal_exceptions, event_from_exception\nfrom sentry_sdk._compat import reraise, iteritems\n\nfrom sentry_sdk.integrations import Integration\nfrom sentry_sdk.integrations._wsgi_common import RequestExtractor\nfrom sentry_sdk.integrations.wsgi import SentryWsgiMiddleware\n\nfrom sentry_sdk._types import MYPY\n\nif MYPY:\n from pyramid.response import Response\n from typing import Any\n from sentry_sdk.integrations.wsgi import _ScopedResponse\n from typing import Callable\n from typing import Dict\n from typing import Optional\n from webob.cookies import RequestCookies # type: ignore\n from webob.compat import cgi_FieldStorage # type: ignore\n\n from sentry_sdk.utils import ExcInfo\n from sentry_sdk._types import EventProcessor\n\n\nif getattr(Request, \"authenticated_userid\", None):\n\n def authenticated_userid(request):\n # type: (Request) -> Optional[Any]\n return request.authenticated_userid\n\n\nelse:\n # bw-compat for pyramid < 1.5\n from pyramid.security import authenticated_userid # type: ignore\n\n\nTRANSACTION_STYLE_VALUES = (\"route_name\", \"route_pattern\")\n\n\nclass PyramidIntegration(Integration):\n identifier = \"pyramid\"\n\n transaction_style = None\n\n def __init__(self, transaction_style=\"route_name\"):\n # type: (str) -> None\n 
if transaction_style not in TRANSACTION_STYLE_VALUES:\n raise ValueError(\n \"Invalid value for transaction_style: %s (must be in %s)\"\n % (transaction_style, TRANSACTION_STYLE_VALUES)\n )\n self.transaction_style = transaction_style\n\n @staticmethod\n def setup_once():\n # type: () -> None\n from pyramid import router\n from pyramid.request import Request\n\n old_call_view = router._call_view\n\n def sentry_patched_call_view(registry, request, *args, **kwargs):\n # type: (Any, Request, *Any, **Any) -> Response\n hub = Hub.current\n integration = hub.get_integration(PyramidIntegration)\n\n if integration is not None:\n with hub.configure_scope() as scope:\n try:\n if integration.transaction_style == \"route_name\":\n scope.transaction = request.matched_route.name\n elif integration.transaction_style == \"route_pattern\":\n scope.transaction = request.matched_route.pattern\n except Exception:\n raise\n\n scope.add_event_processor(\n _make_event_processor(weakref.ref(request), integration)\n )\n\n return old_call_view(registry, request, *args, **kwargs)\n\n router._call_view = sentry_patched_call_view\n\n if hasattr(Request, \"invoke_exception_view\"):\n old_invoke_exception_view = Request.invoke_exception_view\n\n def sentry_patched_invoke_exception_view(self, *args, **kwargs):\n # type: (Request, *Any, **Any) -> Any\n rv = old_invoke_exception_view(self, *args, **kwargs)\n\n if (\n self.exc_info\n and all(self.exc_info)\n and rv.status_int == 500\n and Hub.current.get_integration(PyramidIntegration) is not None\n ):\n _capture_exception(self.exc_info)\n\n return rv\n\n Request.invoke_exception_view = sentry_patched_invoke_exception_view\n\n old_wsgi_call = router.Router.__call__\n\n def sentry_patched_wsgi_call(self, environ, start_response):\n # type: (Any, Dict[str, str], Callable[..., Any]) -> _ScopedResponse\n hub = Hub.current\n integration = hub.get_integration(PyramidIntegration)\n if integration is None:\n return old_wsgi_call(self, environ, start_response)\n\n def sentry_patched_inner_wsgi_call(environ, start_response):\n # type: (Dict[str, Any], Callable[..., Any]) -> Any\n try:\n return old_wsgi_call(self, environ, start_response)\n except Exception:\n einfo = sys.exc_info()\n _capture_exception(einfo)\n reraise(*einfo)\n\n return SentryWsgiMiddleware(sentry_patched_inner_wsgi_call)(\n environ, start_response\n )\n\n router.Router.__call__ = sentry_patched_wsgi_call\n\n\ndef _capture_exception(exc_info):\n # type: (ExcInfo) -> None\n if exc_info[0] is None or issubclass(exc_info[0], HTTPException):\n return\n hub = Hub.current\n if hub.get_integration(PyramidIntegration) is None:\n return\n\n # If an integration is there, a client has to be there.\n client = hub.client # type: Any\n\n event, hint = event_from_exception(\n exc_info,\n client_options=client.options,\n mechanism={\"type\": \"pyramid\", \"handled\": False},\n )\n\n hub.capture_event(event, hint=hint)\n\n\nclass PyramidRequestExtractor(RequestExtractor):\n def url(self):\n # type: () -> str\n return self.request.path_url\n\n def env(self):\n # type: () -> Dict[str, str]\n return self.request.environ\n\n def cookies(self):\n # type: () -> RequestCookies\n return self.request.cookies\n\n def raw_data(self):\n # type: () -> str\n return self.request.text\n\n def form(self):\n # type: () -> Dict[str, str]\n return {\n key: value\n for key, value in iteritems(self.request.POST)\n if not getattr(value, \"filename\", None)\n }\n\n def files(self):\n # type: () -> Dict[str, cgi_FieldStorage]\n return {\n key: value\n for 
key, value in iteritems(self.request.POST)\n if getattr(value, \"filename\", None)\n }\n\n def size_of_file(self, postdata):\n # type: (cgi_FieldStorage) -> int\n file = postdata.file\n try:\n return os.fstat(file.fileno()).st_size\n except Exception:\n return 0\n\n\ndef _make_event_processor(weak_request, integration):\n # type: (Callable[[], Request], PyramidIntegration) -> EventProcessor\n def event_processor(event, hint):\n # type: (Dict[str, Any], Dict[str, Any]) -> Dict[str, Any]\n request = weak_request()\n if request is None:\n return event\n\n with capture_internal_exceptions():\n PyramidRequestExtractor(request).extract_into_event(event)\n\n if _should_send_default_pii():\n with capture_internal_exceptions():\n user_info = event.setdefault(\"user\", {})\n user_info.setdefault(\"id\", authenticated_userid(request))\n\n return event\n\n return event_processor\n", "path": "sentry_sdk/integrations/pyramid.py"}]}
| 2,415 | 630 |
gh_patches_debug_1910
|
rasdani/github-patches
|
git_diff
|
gratipay__gratipay.com-2699
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[email protected] still linked several places
Should be [email protected], right? ;-)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gratipay/exceptions.py`
Content:
```
1 """
2 This module contains exceptions shared across application code.
3 """
4
5 from __future__ import print_function, unicode_literals
6
7
8 class ProblemChangingUsername(Exception):
9 def __str__(self):
10 return self.msg.format(self.args[0])
11
12 class UsernameIsEmpty(ProblemChangingUsername):
13 msg = "You need to provide a username!"
14
15 class UsernameTooLong(ProblemChangingUsername):
16 msg = "The username '{}' is too long."
17
18 class UsernameContainsInvalidCharacters(ProblemChangingUsername):
19 msg = "The username '{}' contains invalid characters."
20
21 class UsernameIsRestricted(ProblemChangingUsername):
22 msg = "The username '{}' is restricted."
23
24 class UsernameAlreadyTaken(ProblemChangingUsername):
25 msg = "The username '{}' is already taken."
26
27
28 class ProblemChangingNumber(Exception):
29 def __str__(self):
30 return self.msg
31
32 class HasBigTips(ProblemChangingNumber):
33 msg = "You receive tips too large for an individual. Please contact [email protected]."
34
35
36 class TooGreedy(Exception): pass
37 class NoSelfTipping(Exception): pass
38 class NoTippee(Exception): pass
39 class BadAmount(Exception): pass
40 class UserDoesntAcceptTips(Exception): pass
41
42 class FailedToReserveUsername(Exception): pass
43
44 class NegativeBalance(Exception):
45 def __str__(self):
46 return "Negative balance not allowed in this context."
47
48 class NotWhitelisted(Exception): pass
49 class NoBalancedCustomerHref(Exception): pass
50
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/gratipay/exceptions.py b/gratipay/exceptions.py
--- a/gratipay/exceptions.py
+++ b/gratipay/exceptions.py
@@ -30,7 +30,7 @@
return self.msg
class HasBigTips(ProblemChangingNumber):
- msg = "You receive tips too large for an individual. Please contact [email protected]."
+ msg = "You receive tips too large for an individual. Please contact [email protected]."
class TooGreedy(Exception): pass
|
{"golden_diff": "diff --git a/gratipay/exceptions.py b/gratipay/exceptions.py\n--- a/gratipay/exceptions.py\n+++ b/gratipay/exceptions.py\n@@ -30,7 +30,7 @@\n return self.msg\n \n class HasBigTips(ProblemChangingNumber):\n- msg = \"You receive tips too large for an individual. Please contact [email protected].\"\n+ msg = \"You receive tips too large for an individual. Please contact [email protected].\"\n \n \n class TooGreedy(Exception): pass\n", "issue": "[email protected] still linked several places\nShould be [email protected], right? ;-)\n\n", "before_files": [{"content": "\"\"\"\nThis module contains exceptions shared across application code.\n\"\"\"\n\nfrom __future__ import print_function, unicode_literals\n\n\nclass ProblemChangingUsername(Exception):\n def __str__(self):\n return self.msg.format(self.args[0])\n\nclass UsernameIsEmpty(ProblemChangingUsername):\n msg = \"You need to provide a username!\"\n\nclass UsernameTooLong(ProblemChangingUsername):\n msg = \"The username '{}' is too long.\"\n\nclass UsernameContainsInvalidCharacters(ProblemChangingUsername):\n msg = \"The username '{}' contains invalid characters.\"\n\nclass UsernameIsRestricted(ProblemChangingUsername):\n msg = \"The username '{}' is restricted.\"\n\nclass UsernameAlreadyTaken(ProblemChangingUsername):\n msg = \"The username '{}' is already taken.\"\n\n\nclass ProblemChangingNumber(Exception):\n def __str__(self):\n return self.msg\n\nclass HasBigTips(ProblemChangingNumber):\n msg = \"You receive tips too large for an individual. Please contact [email protected].\"\n\n\nclass TooGreedy(Exception): pass\nclass NoSelfTipping(Exception): pass\nclass NoTippee(Exception): pass\nclass BadAmount(Exception): pass\nclass UserDoesntAcceptTips(Exception): pass\n\nclass FailedToReserveUsername(Exception): pass\n\nclass NegativeBalance(Exception):\n def __str__(self):\n return \"Negative balance not allowed in this context.\"\n\nclass NotWhitelisted(Exception): pass\nclass NoBalancedCustomerHref(Exception): pass\n", "path": "gratipay/exceptions.py"}], "after_files": [{"content": "\"\"\"\nThis module contains exceptions shared across application code.\n\"\"\"\n\nfrom __future__ import print_function, unicode_literals\n\n\nclass ProblemChangingUsername(Exception):\n def __str__(self):\n return self.msg.format(self.args[0])\n\nclass UsernameIsEmpty(ProblemChangingUsername):\n msg = \"You need to provide a username!\"\n\nclass UsernameTooLong(ProblemChangingUsername):\n msg = \"The username '{}' is too long.\"\n\nclass UsernameContainsInvalidCharacters(ProblemChangingUsername):\n msg = \"The username '{}' contains invalid characters.\"\n\nclass UsernameIsRestricted(ProblemChangingUsername):\n msg = \"The username '{}' is restricted.\"\n\nclass UsernameAlreadyTaken(ProblemChangingUsername):\n msg = \"The username '{}' is already taken.\"\n\n\nclass ProblemChangingNumber(Exception):\n def __str__(self):\n return self.msg\n\nclass HasBigTips(ProblemChangingNumber):\n msg = \"You receive tips too large for an individual. Please contact [email protected].\"\n\n\nclass TooGreedy(Exception): pass\nclass NoSelfTipping(Exception): pass\nclass NoTippee(Exception): pass\nclass BadAmount(Exception): pass\nclass UserDoesntAcceptTips(Exception): pass\n\nclass FailedToReserveUsername(Exception): pass\n\nclass NegativeBalance(Exception):\n def __str__(self):\n return \"Negative balance not allowed in this context.\"\n\nclass NotWhitelisted(Exception): pass\nclass NoBalancedCustomerHref(Exception): pass\n", "path": "gratipay/exceptions.py"}]}
| 676 | 116 |
gh_patches_debug_35123
|
rasdani/github-patches
|
git_diff
|
mozilla__bugbug-722
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add an option not to cleanup URLs for the Duplicate model
In the duplicate case, having two equal URL can be a strong indication that two bugs are the same.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bugbug/models/duplicate.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # This Source Code Form is subject to the terms of the Mozilla Public
3 # License, v. 2.0. If a copy of the MPL was not distributed with this file,
4 # You can obtain one at http://mozilla.org/MPL/2.0/.
5
6 import random
7
8 from sklearn.calibration import CalibratedClassifierCV
9 from sklearn.compose import ColumnTransformer
10 from sklearn.pipeline import Pipeline
11 from sklearn.preprocessing import LabelEncoder
12 from sklearn.svm import LinearSVC
13
14 from bugbug import bug_features, bugzilla, feature_cleanup
15 from bugbug.model import BugCoupleModel
16
17 REPORTERS_TO_IGNORE = {"[email protected]", "[email protected]"}
18
19
20 class LinearSVCWithLabelEncoding(CalibratedClassifierCV):
21 def __init__(self, clf):
22 super().__init__(clf)
23 self._le = LabelEncoder()
24
25 def fit(self, X, y):
26 super().fit(X, y)
27 self._le.fit(y)
28
29
30 class DuplicateModel(BugCoupleModel):
31 def __init__(self, training_size=14000, lemmatization=False):
32 self.num_duplicates = training_size // 2
33 self.num_nondups_nondups = self.num_dup_nondups = training_size // 4
34
35 BugCoupleModel.__init__(self, lemmatization)
36
37 self.calculate_importance = False
38
39 feature_extractors = [bug_features.is_same_product()]
40
41 cleanup_functions = [
42 feature_cleanup.responses(),
43 feature_cleanup.hex(),
44 feature_cleanup.dll(),
45 feature_cleanup.fileref(),
46 feature_cleanup.url(),
47 feature_cleanup.synonyms(),
48 feature_cleanup.crash(),
49 ]
50
51 self.extraction_pipeline = Pipeline(
52 [
53 (
54 "bug_extractor",
55 bug_features.BugExtractor(feature_extractors, cleanup_functions),
56 ),
57 (
58 "union",
59 ColumnTransformer([("text", self.text_vectorizer(), "text")]),
60 ),
61 ]
62 )
63
64 self.clf = LinearSVCWithLabelEncoding(LinearSVC())
65
66 def get_labels(self):
67
68 random.seed(4)
69
70 all_ids = set(
71 bug["id"]
72 for bug in bugzilla.get_bugs()
73 if bug["creator"] not in REPORTERS_TO_IGNORE
74 and "dupeme" not in bug["keywords"]
75 )
76
77 classes = {}
78
79 # Only store ids of bugs that have duplicates or are duplicates
80 duplicate_ids = []
81
82 duplicates_num = 0
83 for bug_data in bugzilla.get_bugs():
84 bug_id = bug_data["id"]
85 if bug_id not in all_ids:
86 continue
87
88 if bug_data["dupe_of"] or len(bug_data["duplicates"]) > 0:
89 duplicate_ids.append(bug_id)
90
91 for duplicate_bug_id in bug_data["duplicates"]:
92 if duplicate_bug_id not in all_ids:
93 continue
94
95 duplicate_ids.append(duplicate_bug_id)
96
97 if duplicates_num < self.num_duplicates:
98 classes[(bug_id, duplicate_bug_id)] = 1
99 duplicates_num += 1
100
101 # Remove duplicate duplicate IDs.
102 duplicate_ids = list(set(duplicate_ids))
103
104 # Store all remaining ids
105 non_duplicate_ids = list(all_ids - set(duplicate_ids))
106
107 print(f"Number of duplicate labels is: {self.num_duplicates}")
108
109 # When the bug has no duplicates, we create dup-nondup labels.
110 dup_nondup_num = 0
111 while dup_nondup_num < self.num_dup_nondups:
112 bug_id1 = random.choice(duplicate_ids)
113 bug_id2 = random.choice(non_duplicate_ids)
114
115 classes[(bug_id1, bug_id2)] = 0
116 dup_nondup_num += 1
117
118 print(f"Number of hybrid labels is: {self.num_dup_nondups}")
119
120 # Now we map non-dup to non-dup bug.
121 nondup_nondup_num = 0
122 while nondup_nondup_num < self.num_nondups_nondups:
123 bug_id1 = random.choice(non_duplicate_ids)
124 bug_id2 = random.choice(non_duplicate_ids)
125 if bug_id1 != bug_id2:
126 classes[(bug_id1, bug_id2)] = 0
127 nondup_nondup_num += 1
128
129 print(f"Number of purely non-duplicate labels is: {self.num_nondups_nondups}")
130
131 return classes, [0, 1]
132
133 def get_feature_names(self):
134 return self.extraction_pipeline.named_steps["union"].get_feature_names()
135
```
Path: `run.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # This Source Code Form is subject to the terms of the Mozilla Public
3 # License, v. 2.0. If a copy of the MPL was not distributed with this file,
4 # You can obtain one at http://mozilla.org/MPL/2.0/.
5
6 import argparse
7 import csv
8 import os
9 import sys
10 from datetime import datetime, timedelta
11
12 import numpy as np
13
14 from bugbug import bugzilla, db, repository
15 from bugbug.models import MODELS, get_model_class
16
17
18 def parse_args(args):
19 parser = argparse.ArgumentParser()
20 parser.add_argument(
21 "--lemmatization",
22 help="Perform lemmatization (using spaCy)",
23 action="store_true",
24 )
25 parser.add_argument(
26 "--training-set-size",
27 nargs="?",
28 default=14000,
29 type=int,
30 help="The size of the training set for the duplicate model",
31 )
32 parser.add_argument("--train", help="Perform training", action="store_true")
33 parser.add_argument(
34 "--goal", help="Goal of the classifier", choices=MODELS.keys(), default="defect"
35 )
36 parser.add_argument(
37 "--classifier",
38 help="Type of the classifier. Only used for component classification.",
39 choices=["default", "nn"],
40 default="default",
41 )
42 parser.add_argument("--classify", help="Perform evaluation", action="store_true")
43 parser.add_argument(
44 "--generate-sheet",
45 help="Perform evaluation on bugs from last week and generate a csv file",
46 action="store_true",
47 )
48 parser.add_argument("--token", help="Bugzilla token", action="store")
49 parser.add_argument(
50 "--historical",
51 help="""Analyze historical bugs. Only used for defect, bugtype,
52 defectenhancementtask and regression tasks.""",
53 action="store_true",
54 )
55 return parser.parse_args(args)
56
57
58 def main(args):
59 model_file_name = "{}{}model".format(
60 args.goal, "" if args.classifier == "default" else args.classifier
61 )
62
63 if args.goal == "component":
64 if args.classifier == "default":
65 model_class_name = "component"
66 else:
67 model_class_name = "component_nn"
68 else:
69 model_class_name = args.goal
70
71 model_class = get_model_class(model_class_name)
72
73 if args.train:
74 db.download(bugzilla.BUGS_DB)
75 db.download(repository.COMMITS_DB)
76
77 historical_supported_tasks = [
78 "defect",
79 "bugtype",
80 "defectenhancementtask",
81 "regression",
82 ]
83
84 if args.goal in historical_supported_tasks:
85 model = model_class(args.lemmatization, args.historical)
86 elif args.goal == "duplicate":
87 model = model_class(args.training_set_size, args.lemmatization)
88 else:
89 model = model_class(args.lemmatization)
90 model.train()
91 else:
92 model = model_class.load(model_file_name)
93
94 if args.classify:
95 for bug in bugzilla.get_bugs():
96 print(
97 f'https://bugzilla.mozilla.org/show_bug.cgi?id={ bug["id"] } - { bug["summary"]} '
98 )
99
100 if model.calculate_importance:
101 probas, importance = model.classify(
102 bug, probabilities=True, importances=True
103 )
104
105 feature_names = model.get_human_readable_feature_names()
106 for i, (importance, index, is_positive) in enumerate(
107 importance["importances"]
108 ):
109 print(
110 f'{i + 1}. \'{feature_names[int(index)]}\' ({"+" if (is_positive) else "-"}{importance})'
111 )
112 else:
113 probas = model.classify(bug, probabilities=True, importances=False)
114
115 if np.argmax(probas) == 1:
116 print(f"Positive! {probas}")
117 else:
118 print(f"Negative! {probas}")
119 input()
120
121 if args.generate_sheet:
122 assert (
123 args.token is not None
124 ), "A Bugzilla token should be set in order to download bugs"
125 today = datetime.utcnow()
126 a_week_ago = today - timedelta(7)
127 bugzilla.set_token(args.token)
128 bug_ids = bugzilla.get_ids_between(a_week_ago, today)
129 bugs = bugzilla.get(bug_ids)
130
131 print(f"Classifying {len(bugs)} bugs...")
132
133 rows = [["Bug", f"{args.goal}(model)", args.goal, "Title"]]
134
135 for bug in bugs.values():
136 p = model.classify(bug, probabilities=True)
137 rows.append(
138 [
139 f'https://bugzilla.mozilla.org/show_bug.cgi?id={bug["id"]}',
140 "y" if p[0][1] >= 0.7 else "n",
141 "",
142 bug["summary"],
143 ]
144 )
145
146 os.makedirs("sheets", exist_ok=True)
147 with open(
148 os.path.join(
149 "sheets",
150 f'{args.goal}-{datetime.utcnow().strftime("%Y-%m-%d")}-labels.csv',
151 ),
152 "w",
153 ) as f:
154 writer = csv.writer(f)
155 writer.writerows(rows)
156
157
158 if __name__ == "__main__":
159 main(parse_args(sys.argv[1:]))
160
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bugbug/models/duplicate.py b/bugbug/models/duplicate.py
--- a/bugbug/models/duplicate.py
+++ b/bugbug/models/duplicate.py
@@ -28,7 +28,7 @@
class DuplicateModel(BugCoupleModel):
- def __init__(self, training_size=14000, lemmatization=False):
+ def __init__(self, training_size=14000, lemmatization=False, cleanup_urls=True):
self.num_duplicates = training_size // 2
self.num_nondups_nondups = self.num_dup_nondups = training_size // 4
@@ -43,11 +43,13 @@
feature_cleanup.hex(),
feature_cleanup.dll(),
feature_cleanup.fileref(),
- feature_cleanup.url(),
feature_cleanup.synonyms(),
feature_cleanup.crash(),
]
+ if cleanup_urls:
+ cleanup_functions.append(feature_cleanup.url())
+
self.extraction_pipeline = Pipeline(
[
(
diff --git a/run.py b/run.py
--- a/run.py
+++ b/run.py
@@ -29,6 +29,13 @@
type=int,
help="The size of the training set for the duplicate model",
)
+ parser.add_argument(
+ "--disable-url-cleanup",
+ help="Don't cleanup urls when training the duplicate model",
+ dest="cleanup_urls",
+ default=True,
+ action="store_false",
+ )
parser.add_argument("--train", help="Perform training", action="store_true")
parser.add_argument(
"--goal", help="Goal of the classifier", choices=MODELS.keys(), default="defect"
@@ -84,7 +91,9 @@
if args.goal in historical_supported_tasks:
model = model_class(args.lemmatization, args.historical)
elif args.goal == "duplicate":
- model = model_class(args.training_set_size, args.lemmatization)
+ model = model_class(
+ args.training_set_size, args.lemmatization, args.cleanup_urls
+ )
else:
model = model_class(args.lemmatization)
model.train()
|
{"golden_diff": "diff --git a/bugbug/models/duplicate.py b/bugbug/models/duplicate.py\n--- a/bugbug/models/duplicate.py\n+++ b/bugbug/models/duplicate.py\n@@ -28,7 +28,7 @@\n \n \n class DuplicateModel(BugCoupleModel):\n- def __init__(self, training_size=14000, lemmatization=False):\n+ def __init__(self, training_size=14000, lemmatization=False, cleanup_urls=True):\n self.num_duplicates = training_size // 2\n self.num_nondups_nondups = self.num_dup_nondups = training_size // 4\n \n@@ -43,11 +43,13 @@\n feature_cleanup.hex(),\n feature_cleanup.dll(),\n feature_cleanup.fileref(),\n- feature_cleanup.url(),\n feature_cleanup.synonyms(),\n feature_cleanup.crash(),\n ]\n \n+ if cleanup_urls:\n+ cleanup_functions.append(feature_cleanup.url())\n+\n self.extraction_pipeline = Pipeline(\n [\n (\ndiff --git a/run.py b/run.py\n--- a/run.py\n+++ b/run.py\n@@ -29,6 +29,13 @@\n type=int,\n help=\"The size of the training set for the duplicate model\",\n )\n+ parser.add_argument(\n+ \"--disable-url-cleanup\",\n+ help=\"Don't cleanup urls when training the duplicate model\",\n+ dest=\"cleanup_urls\",\n+ default=True,\n+ action=\"store_false\",\n+ )\n parser.add_argument(\"--train\", help=\"Perform training\", action=\"store_true\")\n parser.add_argument(\n \"--goal\", help=\"Goal of the classifier\", choices=MODELS.keys(), default=\"defect\"\n@@ -84,7 +91,9 @@\n if args.goal in historical_supported_tasks:\n model = model_class(args.lemmatization, args.historical)\n elif args.goal == \"duplicate\":\n- model = model_class(args.training_set_size, args.lemmatization)\n+ model = model_class(\n+ args.training_set_size, args.lemmatization, args.cleanup_urls\n+ )\n else:\n model = model_class(args.lemmatization)\n model.train()\n", "issue": "Add an option not to cleanup URLs for the Duplicate model\nIn the duplicate case, having two equal URL can be a strong indication that two bugs are the same.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. 
If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport random\n\nfrom sklearn.calibration import CalibratedClassifierCV\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.svm import LinearSVC\n\nfrom bugbug import bug_features, bugzilla, feature_cleanup\nfrom bugbug.model import BugCoupleModel\n\nREPORTERS_TO_IGNORE = {\"[email protected]\", \"[email protected]\"}\n\n\nclass LinearSVCWithLabelEncoding(CalibratedClassifierCV):\n def __init__(self, clf):\n super().__init__(clf)\n self._le = LabelEncoder()\n\n def fit(self, X, y):\n super().fit(X, y)\n self._le.fit(y)\n\n\nclass DuplicateModel(BugCoupleModel):\n def __init__(self, training_size=14000, lemmatization=False):\n self.num_duplicates = training_size // 2\n self.num_nondups_nondups = self.num_dup_nondups = training_size // 4\n\n BugCoupleModel.__init__(self, lemmatization)\n\n self.calculate_importance = False\n\n feature_extractors = [bug_features.is_same_product()]\n\n cleanup_functions = [\n feature_cleanup.responses(),\n feature_cleanup.hex(),\n feature_cleanup.dll(),\n feature_cleanup.fileref(),\n feature_cleanup.url(),\n feature_cleanup.synonyms(),\n feature_cleanup.crash(),\n ]\n\n self.extraction_pipeline = Pipeline(\n [\n (\n \"bug_extractor\",\n bug_features.BugExtractor(feature_extractors, cleanup_functions),\n ),\n (\n \"union\",\n ColumnTransformer([(\"text\", self.text_vectorizer(), \"text\")]),\n ),\n ]\n )\n\n self.clf = LinearSVCWithLabelEncoding(LinearSVC())\n\n def get_labels(self):\n\n random.seed(4)\n\n all_ids = set(\n bug[\"id\"]\n for bug in bugzilla.get_bugs()\n if bug[\"creator\"] not in REPORTERS_TO_IGNORE\n and \"dupeme\" not in bug[\"keywords\"]\n )\n\n classes = {}\n\n # Only store ids of bugs that have duplicates or are duplicates\n duplicate_ids = []\n\n duplicates_num = 0\n for bug_data in bugzilla.get_bugs():\n bug_id = bug_data[\"id\"]\n if bug_id not in all_ids:\n continue\n\n if bug_data[\"dupe_of\"] or len(bug_data[\"duplicates\"]) > 0:\n duplicate_ids.append(bug_id)\n\n for duplicate_bug_id in bug_data[\"duplicates\"]:\n if duplicate_bug_id not in all_ids:\n continue\n\n duplicate_ids.append(duplicate_bug_id)\n\n if duplicates_num < self.num_duplicates:\n classes[(bug_id, duplicate_bug_id)] = 1\n duplicates_num += 1\n\n # Remove duplicate duplicate IDs.\n duplicate_ids = list(set(duplicate_ids))\n\n # Store all remaining ids\n non_duplicate_ids = list(all_ids - set(duplicate_ids))\n\n print(f\"Number of duplicate labels is: {self.num_duplicates}\")\n\n # When the bug has no duplicates, we create dup-nondup labels.\n dup_nondup_num = 0\n while dup_nondup_num < self.num_dup_nondups:\n bug_id1 = random.choice(duplicate_ids)\n bug_id2 = random.choice(non_duplicate_ids)\n\n classes[(bug_id1, bug_id2)] = 0\n dup_nondup_num += 1\n\n print(f\"Number of hybrid labels is: {self.num_dup_nondups}\")\n\n # Now we map non-dup to non-dup bug.\n nondup_nondup_num = 0\n while nondup_nondup_num < self.num_nondups_nondups:\n bug_id1 = random.choice(non_duplicate_ids)\n bug_id2 = random.choice(non_duplicate_ids)\n if bug_id1 != bug_id2:\n classes[(bug_id1, bug_id2)] = 0\n nondup_nondup_num += 1\n\n print(f\"Number of purely non-duplicate labels is: {self.num_nondups_nondups}\")\n\n return classes, [0, 1]\n\n def get_feature_names(self):\n return self.extraction_pipeline.named_steps[\"union\"].get_feature_names()\n", "path": "bugbug/models/duplicate.py"}, 
{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport argparse\nimport csv\nimport os\nimport sys\nfrom datetime import datetime, timedelta\n\nimport numpy as np\n\nfrom bugbug import bugzilla, db, repository\nfrom bugbug.models import MODELS, get_model_class\n\n\ndef parse_args(args):\n parser = argparse.ArgumentParser()\n parser.add_argument(\n \"--lemmatization\",\n help=\"Perform lemmatization (using spaCy)\",\n action=\"store_true\",\n )\n parser.add_argument(\n \"--training-set-size\",\n nargs=\"?\",\n default=14000,\n type=int,\n help=\"The size of the training set for the duplicate model\",\n )\n parser.add_argument(\"--train\", help=\"Perform training\", action=\"store_true\")\n parser.add_argument(\n \"--goal\", help=\"Goal of the classifier\", choices=MODELS.keys(), default=\"defect\"\n )\n parser.add_argument(\n \"--classifier\",\n help=\"Type of the classifier. Only used for component classification.\",\n choices=[\"default\", \"nn\"],\n default=\"default\",\n )\n parser.add_argument(\"--classify\", help=\"Perform evaluation\", action=\"store_true\")\n parser.add_argument(\n \"--generate-sheet\",\n help=\"Perform evaluation on bugs from last week and generate a csv file\",\n action=\"store_true\",\n )\n parser.add_argument(\"--token\", help=\"Bugzilla token\", action=\"store\")\n parser.add_argument(\n \"--historical\",\n help=\"\"\"Analyze historical bugs. Only used for defect, bugtype,\n defectenhancementtask and regression tasks.\"\"\",\n action=\"store_true\",\n )\n return parser.parse_args(args)\n\n\ndef main(args):\n model_file_name = \"{}{}model\".format(\n args.goal, \"\" if args.classifier == \"default\" else args.classifier\n )\n\n if args.goal == \"component\":\n if args.classifier == \"default\":\n model_class_name = \"component\"\n else:\n model_class_name = \"component_nn\"\n else:\n model_class_name = args.goal\n\n model_class = get_model_class(model_class_name)\n\n if args.train:\n db.download(bugzilla.BUGS_DB)\n db.download(repository.COMMITS_DB)\n\n historical_supported_tasks = [\n \"defect\",\n \"bugtype\",\n \"defectenhancementtask\",\n \"regression\",\n ]\n\n if args.goal in historical_supported_tasks:\n model = model_class(args.lemmatization, args.historical)\n elif args.goal == \"duplicate\":\n model = model_class(args.training_set_size, args.lemmatization)\n else:\n model = model_class(args.lemmatization)\n model.train()\n else:\n model = model_class.load(model_file_name)\n\n if args.classify:\n for bug in bugzilla.get_bugs():\n print(\n f'https://bugzilla.mozilla.org/show_bug.cgi?id={ bug[\"id\"] } - { bug[\"summary\"]} '\n )\n\n if model.calculate_importance:\n probas, importance = model.classify(\n bug, probabilities=True, importances=True\n )\n\n feature_names = model.get_human_readable_feature_names()\n for i, (importance, index, is_positive) in enumerate(\n importance[\"importances\"]\n ):\n print(\n f'{i + 1}. \\'{feature_names[int(index)]}\\' ({\"+\" if (is_positive) else \"-\"}{importance})'\n )\n else:\n probas = model.classify(bug, probabilities=True, importances=False)\n\n if np.argmax(probas) == 1:\n print(f\"Positive! {probas}\")\n else:\n print(f\"Negative! 
{probas}\")\n input()\n\n if args.generate_sheet:\n assert (\n args.token is not None\n ), \"A Bugzilla token should be set in order to download bugs\"\n today = datetime.utcnow()\n a_week_ago = today - timedelta(7)\n bugzilla.set_token(args.token)\n bug_ids = bugzilla.get_ids_between(a_week_ago, today)\n bugs = bugzilla.get(bug_ids)\n\n print(f\"Classifying {len(bugs)} bugs...\")\n\n rows = [[\"Bug\", f\"{args.goal}(model)\", args.goal, \"Title\"]]\n\n for bug in bugs.values():\n p = model.classify(bug, probabilities=True)\n rows.append(\n [\n f'https://bugzilla.mozilla.org/show_bug.cgi?id={bug[\"id\"]}',\n \"y\" if p[0][1] >= 0.7 else \"n\",\n \"\",\n bug[\"summary\"],\n ]\n )\n\n os.makedirs(\"sheets\", exist_ok=True)\n with open(\n os.path.join(\n \"sheets\",\n f'{args.goal}-{datetime.utcnow().strftime(\"%Y-%m-%d\")}-labels.csv',\n ),\n \"w\",\n ) as f:\n writer = csv.writer(f)\n writer.writerows(rows)\n\n\nif __name__ == \"__main__\":\n main(parse_args(sys.argv[1:]))\n", "path": "run.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport random\n\nfrom sklearn.calibration import CalibratedClassifierCV\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.svm import LinearSVC\n\nfrom bugbug import bug_features, bugzilla, feature_cleanup\nfrom bugbug.model import BugCoupleModel\n\nREPORTERS_TO_IGNORE = {\"[email protected]\", \"[email protected]\"}\n\n\nclass LinearSVCWithLabelEncoding(CalibratedClassifierCV):\n def __init__(self, clf):\n super().__init__(clf)\n self._le = LabelEncoder()\n\n def fit(self, X, y):\n super().fit(X, y)\n self._le.fit(y)\n\n\nclass DuplicateModel(BugCoupleModel):\n def __init__(self, training_size=14000, lemmatization=False, cleanup_urls=True):\n self.num_duplicates = training_size // 2\n self.num_nondups_nondups = self.num_dup_nondups = training_size // 4\n\n BugCoupleModel.__init__(self, lemmatization)\n\n self.calculate_importance = False\n\n feature_extractors = [bug_features.is_same_product()]\n\n cleanup_functions = [\n feature_cleanup.responses(),\n feature_cleanup.hex(),\n feature_cleanup.dll(),\n feature_cleanup.fileref(),\n feature_cleanup.synonyms(),\n feature_cleanup.crash(),\n ]\n\n if cleanup_urls:\n cleanup_functions.append(feature_cleanup.url())\n\n self.extraction_pipeline = Pipeline(\n [\n (\n \"bug_extractor\",\n bug_features.BugExtractor(feature_extractors, cleanup_functions),\n ),\n (\n \"union\",\n ColumnTransformer([(\"text\", self.text_vectorizer(), \"text\")]),\n ),\n ]\n )\n\n self.clf = LinearSVCWithLabelEncoding(LinearSVC())\n\n def get_labels(self):\n\n random.seed(4)\n\n all_ids = set(\n bug[\"id\"]\n for bug in bugzilla.get_bugs()\n if bug[\"creator\"] not in REPORTERS_TO_IGNORE\n and \"dupeme\" not in bug[\"keywords\"]\n )\n\n classes = {}\n\n # Only store ids of bugs that have duplicates or are duplicates\n duplicate_ids = []\n\n duplicates_num = 0\n for bug_data in bugzilla.get_bugs():\n bug_id = bug_data[\"id\"]\n if bug_id not in all_ids:\n continue\n\n if bug_data[\"dupe_of\"] or len(bug_data[\"duplicates\"]) > 0:\n duplicate_ids.append(bug_id)\n\n for duplicate_bug_id in bug_data[\"duplicates\"]:\n if duplicate_bug_id not in all_ids:\n continue\n\n duplicate_ids.append(duplicate_bug_id)\n\n if duplicates_num < 
self.num_duplicates:\n classes[(bug_id, duplicate_bug_id)] = 1\n duplicates_num += 1\n\n # Remove duplicate duplicate IDs.\n duplicate_ids = list(set(duplicate_ids))\n\n # Store all remaining ids\n non_duplicate_ids = list(all_ids - set(duplicate_ids))\n\n print(f\"Number of duplicate labels is: {self.num_duplicates}\")\n\n # When the bug has no duplicates, we create dup-nondup labels.\n dup_nondup_num = 0\n while dup_nondup_num < self.num_dup_nondups:\n bug_id1 = random.choice(duplicate_ids)\n bug_id2 = random.choice(non_duplicate_ids)\n\n classes[(bug_id1, bug_id2)] = 0\n dup_nondup_num += 1\n\n print(f\"Number of hybrid labels is: {self.num_dup_nondups}\")\n\n # Now we map non-dup to non-dup bug.\n nondup_nondup_num = 0\n while nondup_nondup_num < self.num_nondups_nondups:\n bug_id1 = random.choice(non_duplicate_ids)\n bug_id2 = random.choice(non_duplicate_ids)\n if bug_id1 != bug_id2:\n classes[(bug_id1, bug_id2)] = 0\n nondup_nondup_num += 1\n\n print(f\"Number of purely non-duplicate labels is: {self.num_nondups_nondups}\")\n\n return classes, [0, 1]\n\n def get_feature_names(self):\n return self.extraction_pipeline.named_steps[\"union\"].get_feature_names()\n", "path": "bugbug/models/duplicate.py"}, {"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport argparse\nimport csv\nimport os\nimport sys\nfrom datetime import datetime, timedelta\n\nimport numpy as np\n\nfrom bugbug import bugzilla, db, repository\nfrom bugbug.models import MODELS, get_model_class\n\n\ndef parse_args(args):\n parser = argparse.ArgumentParser()\n parser.add_argument(\n \"--lemmatization\",\n help=\"Perform lemmatization (using spaCy)\",\n action=\"store_true\",\n )\n parser.add_argument(\n \"--training-set-size\",\n nargs=\"?\",\n default=14000,\n type=int,\n help=\"The size of the training set for the duplicate model\",\n )\n parser.add_argument(\n \"--disable-url-cleanup\",\n help=\"Don't cleanup urls when training the duplicate model\",\n dest=\"cleanup_urls\",\n default=True,\n action=\"store_false\",\n )\n parser.add_argument(\"--train\", help=\"Perform training\", action=\"store_true\")\n parser.add_argument(\n \"--goal\", help=\"Goal of the classifier\", choices=MODELS.keys(), default=\"defect\"\n )\n parser.add_argument(\n \"--classifier\",\n help=\"Type of the classifier. Only used for component classification.\",\n choices=[\"default\", \"nn\"],\n default=\"default\",\n )\n parser.add_argument(\"--classify\", help=\"Perform evaluation\", action=\"store_true\")\n parser.add_argument(\n \"--generate-sheet\",\n help=\"Perform evaluation on bugs from last week and generate a csv file\",\n action=\"store_true\",\n )\n parser.add_argument(\"--token\", help=\"Bugzilla token\", action=\"store\")\n parser.add_argument(\n \"--historical\",\n help=\"\"\"Analyze historical bugs. 
Only used for defect, bugtype,\n defectenhancementtask and regression tasks.\"\"\",\n action=\"store_true\",\n )\n return parser.parse_args(args)\n\n\ndef main(args):\n model_file_name = \"{}{}model\".format(\n args.goal, \"\" if args.classifier == \"default\" else args.classifier\n )\n\n if args.goal == \"component\":\n if args.classifier == \"default\":\n model_class_name = \"component\"\n else:\n model_class_name = \"component_nn\"\n else:\n model_class_name = args.goal\n\n model_class = get_model_class(model_class_name)\n\n if args.train:\n db.download(bugzilla.BUGS_DB)\n db.download(repository.COMMITS_DB)\n\n historical_supported_tasks = [\n \"defect\",\n \"bugtype\",\n \"defectenhancementtask\",\n \"regression\",\n ]\n\n if args.goal in historical_supported_tasks:\n model = model_class(args.lemmatization, args.historical)\n elif args.goal == \"duplicate\":\n model = model_class(\n args.training_set_size, args.lemmatization, args.cleanup_urls\n )\n else:\n model = model_class(args.lemmatization)\n model.train()\n else:\n model = model_class.load(model_file_name)\n\n if args.classify:\n for bug in bugzilla.get_bugs():\n print(\n f'https://bugzilla.mozilla.org/show_bug.cgi?id={ bug[\"id\"] } - { bug[\"summary\"]} '\n )\n\n if model.calculate_importance:\n probas, importance = model.classify(\n bug, probabilities=True, importances=True\n )\n\n feature_names = model.get_human_readable_feature_names()\n for i, (importance, index, is_positive) in enumerate(\n importance[\"importances\"]\n ):\n print(\n f'{i + 1}. \\'{feature_names[int(index)]}\\' ({\"+\" if (is_positive) else \"-\"}{importance})'\n )\n else:\n probas = model.classify(bug, probabilities=True, importances=False)\n\n if np.argmax(probas) == 1:\n print(f\"Positive! {probas}\")\n else:\n print(f\"Negative! {probas}\")\n input()\n\n if args.generate_sheet:\n assert (\n args.token is not None\n ), \"A Bugzilla token should be set in order to download bugs\"\n today = datetime.utcnow()\n a_week_ago = today - timedelta(7)\n bugzilla.set_token(args.token)\n bug_ids = bugzilla.get_ids_between(a_week_ago, today)\n bugs = bugzilla.get(bug_ids)\n\n print(f\"Classifying {len(bugs)} bugs...\")\n\n rows = [[\"Bug\", f\"{args.goal}(model)\", args.goal, \"Title\"]]\n\n for bug in bugs.values():\n p = model.classify(bug, probabilities=True)\n rows.append(\n [\n f'https://bugzilla.mozilla.org/show_bug.cgi?id={bug[\"id\"]}',\n \"y\" if p[0][1] >= 0.7 else \"n\",\n \"\",\n bug[\"summary\"],\n ]\n )\n\n os.makedirs(\"sheets\", exist_ok=True)\n with open(\n os.path.join(\n \"sheets\",\n f'{args.goal}-{datetime.utcnow().strftime(\"%Y-%m-%d\")}-labels.csv',\n ),\n \"w\",\n ) as f:\n writer = csv.writer(f)\n writer.writerows(rows)\n\n\nif __name__ == \"__main__\":\n main(parse_args(sys.argv[1:]))\n", "path": "run.py"}]}
| 3,109 | 478 |
gh_patches_debug_40189
|
rasdani/github-patches
|
git_diff
|
mne-tools__mne-python-3423
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Memory consumption in plot_brainstorm_phantom_elekta
Circle is failing because it's using so much memory.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tutorials/plot_brainstorm_phantom_elekta.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 ==========================================
4 Brainstorm Elekta phantom tutorial dataset
5 ==========================================
6
7 Here we compute the evoked from raw for the Brainstorm Elekta phantom
8 tutorial dataset. For comparison, see [1]_ and:
9
10 http://neuroimage.usc.edu/brainstorm/Tutorials/PhantomElekta
11
12 References
13 ----------
14 .. [1] Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM.
15 Brainstorm: A User-Friendly Application for MEG/EEG Analysis.
16 Computational Intelligence and Neuroscience, vol. 2011, Article ID
17 879716, 13 pages, 2011. doi:10.1155/2011/879716
18 """
19
20 # Authors: Eric Larson <[email protected]>
21 #
22 # License: BSD (3-clause)
23
24 import os.path as op
25 import numpy as np
26
27 import mne
28 from mne import find_events, fit_dipole
29 from mne.datasets.brainstorm import bst_phantom_elekta
30 from mne.io import read_raw_fif
31
32 print(__doc__)
33
34 ###############################################################################
35 # The data were collected with an Elekta Neuromag VectorView system at 1000 Hz
36 # and low-pass filtered at 330 Hz. Here the medium-amplitude (200 nAm) data
37 # are read to construct instances of :class:`mne.io.Raw`.
38 data_path = bst_phantom_elekta.data_path()
39
40 raw_fname = op.join(data_path, 'kojak_all_200nAm_pp_no_chpi_no_ms_raw.fif')
41 raw = read_raw_fif(raw_fname, preload=True)
42
43 ###############################################################################
44 # Data channel array consisted of 204 MEG planor gradiometers,
45 # 102 axial magnetometers, and 3 stimulus channels. Let's get the events
46 # for the phantom, where each dipole (1-32) gets its own event:
47
48 events = find_events(raw, 'STI201')
49 raw.plot(events=events)
50 raw.info['bads'] = ['MEG2421']
51
52 ###############################################################################
53 # The data have strong line frequency (60 Hz and harmonics) and cHPI coil
54 # noise (five peaks around 300 Hz):
55
56 raw.plot_psd()
57
58 ###############################################################################
59 # We know our phantom produces sinusoidal bursts below 25 Hz, so let's filter.
60
61 raw.filter(None, 40., h_trans_bandwidth=10., filter_length='1s')
62 raw.plot_psd()
63
64 ###############################################################################
65 # The data are still a bit noisy, so let's use Maxwell filtering to clean it.
66 # Ideally we would have the fine calibration and cross-talk information
67 # for the site of interest, but we don't, so we just do:
68
69 raw.fix_mag_coil_types()
70 raw = mne.preprocessing.maxwell_filter(raw, origin=(0., 0., 0.))
71 raw.plot(events=events)
72
73 ###############################################################################
74 # Now we epoch our data, average it, and look at the first dipole response.
75 # The first peak appears around 3 ms.
76
77 tmin, tmax = -0.2, 0.2
78 event_id = list(range(1, 33))
79 epochs = mne.Epochs(raw, events, event_id, tmin, tmax, baseline=(None, -0.01))
80 epochs['1'].average().plot()
81
82 ###############################################################################
83 # Let's do some dipole fits. The phantom is properly modeled by a single-shell
84 # sphere with origin (0., 0., 0.). We compute covariance, then do the fits.
85
86 t_peak = 0.061 # 3 MS at largest peak
87 sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=None)
88 cov = mne.compute_covariance(epochs, tmax=0)
89 data = []
90 for ii in range(1, 33):
91 evoked = epochs[str(ii)].average().crop(t_peak, t_peak)
92 data.append(evoked.data[:, 0])
93 evoked = mne.EvokedArray(np.array(data).T, evoked.info, tmin=0.)
94 dip = fit_dipole(evoked, cov, sphere, n_jobs=2)[0]
95
96 ###############################################################################
97 # Now we can compare to the actual locations, taking the difference in mm:
98
99 actual_pos = mne.dipole.get_phantom_dipoles(kind='122')[0]
100 diffs = 1000 * np.sqrt(np.sum((dip.pos - actual_pos) ** 2, axis=-1))
101 print('Differences (mm):\n%s' % diffs[:, np.newaxis])
102
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/tutorials/plot_brainstorm_phantom_elekta.py b/tutorials/plot_brainstorm_phantom_elekta.py
--- a/tutorials/plot_brainstorm_phantom_elekta.py
+++ b/tutorials/plot_brainstorm_phantom_elekta.py
@@ -38,7 +38,7 @@
data_path = bst_phantom_elekta.data_path()
raw_fname = op.join(data_path, 'kojak_all_200nAm_pp_no_chpi_no_ms_raw.fif')
-raw = read_raw_fif(raw_fname, preload=True)
+raw = read_raw_fif(raw_fname)
###############################################################################
# Data channel array consisted of 204 MEG planor gradiometers,
@@ -51,39 +51,41 @@
###############################################################################
# The data have strong line frequency (60 Hz and harmonics) and cHPI coil
-# noise (five peaks around 300 Hz):
+# noise (five peaks around 300 Hz). Here we plot only out to 60 seconds
+# to save memory:
-raw.plot_psd()
+raw.plot_psd(tmax=60.)
###############################################################################
-# We know our phantom produces sinusoidal bursts below 25 Hz, so let's filter.
-
-raw.filter(None, 40., h_trans_bandwidth=10., filter_length='1s')
-raw.plot_psd()
-
-###############################################################################
-# The data are still a bit noisy, so let's use Maxwell filtering to clean it.
+# Let's use Maxwell filtering to clean the data a bit.
# Ideally we would have the fine calibration and cross-talk information
# for the site of interest, but we don't, so we just do:
raw.fix_mag_coil_types()
raw = mne.preprocessing.maxwell_filter(raw, origin=(0., 0., 0.))
+
+###############################################################################
+# We know our phantom produces sinusoidal bursts below 25 Hz, so let's filter.
+
+raw.filter(None, 40., h_trans_bandwidth=10., filter_length='1s')
raw.plot(events=events)
###############################################################################
# Now we epoch our data, average it, and look at the first dipole response.
-# The first peak appears around 3 ms.
+# The first peak appears around 3 ms. Because we low-passed at 40 Hz,
+# we can also decimate our data to save memory.
-tmin, tmax = -0.2, 0.2
+tmin, tmax = -0.1, 0.1
event_id = list(range(1, 33))
-epochs = mne.Epochs(raw, events, event_id, tmin, tmax, baseline=(None, -0.01))
+epochs = mne.Epochs(raw, events, event_id, tmin, tmax, baseline=(None, -0.01),
+ decim=5)
epochs['1'].average().plot()
###############################################################################
# Let's do some dipole fits. The phantom is properly modeled by a single-shell
# sphere with origin (0., 0., 0.). We compute covariance, then do the fits.
-t_peak = 0.061 # 3 MS at largest peak
+t_peak = 60e-3 # ~60 MS at largest peak
sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=None)
cov = mne.compute_covariance(epochs, tmax=0)
data = []
|
{"golden_diff": "diff --git a/tutorials/plot_brainstorm_phantom_elekta.py b/tutorials/plot_brainstorm_phantom_elekta.py\n--- a/tutorials/plot_brainstorm_phantom_elekta.py\n+++ b/tutorials/plot_brainstorm_phantom_elekta.py\n@@ -38,7 +38,7 @@\n data_path = bst_phantom_elekta.data_path()\n \n raw_fname = op.join(data_path, 'kojak_all_200nAm_pp_no_chpi_no_ms_raw.fif')\n-raw = read_raw_fif(raw_fname, preload=True)\n+raw = read_raw_fif(raw_fname)\n \n ###############################################################################\n # Data channel array consisted of 204 MEG planor gradiometers,\n@@ -51,39 +51,41 @@\n \n ###############################################################################\n # The data have strong line frequency (60 Hz and harmonics) and cHPI coil\n-# noise (five peaks around 300 Hz):\n+# noise (five peaks around 300 Hz). Here we plot only out to 60 seconds\n+# to save memory:\n \n-raw.plot_psd()\n+raw.plot_psd(tmax=60.)\n \n ###############################################################################\n-# We know our phantom produces sinusoidal bursts below 25 Hz, so let's filter.\n-\n-raw.filter(None, 40., h_trans_bandwidth=10., filter_length='1s')\n-raw.plot_psd()\n-\n-###############################################################################\n-# The data are still a bit noisy, so let's use Maxwell filtering to clean it.\n+# Let's use Maxwell filtering to clean the data a bit.\n # Ideally we would have the fine calibration and cross-talk information\n # for the site of interest, but we don't, so we just do:\n \n raw.fix_mag_coil_types()\n raw = mne.preprocessing.maxwell_filter(raw, origin=(0., 0., 0.))\n+\n+###############################################################################\n+# We know our phantom produces sinusoidal bursts below 25 Hz, so let's filter.\n+\n+raw.filter(None, 40., h_trans_bandwidth=10., filter_length='1s')\n raw.plot(events=events)\n \n ###############################################################################\n # Now we epoch our data, average it, and look at the first dipole response.\n-# The first peak appears around 3 ms.\n+# The first peak appears around 3 ms. Because we low-passed at 40 Hz,\n+# we can also decimate our data to save memory.\n \n-tmin, tmax = -0.2, 0.2\n+tmin, tmax = -0.1, 0.1\n event_id = list(range(1, 33))\n-epochs = mne.Epochs(raw, events, event_id, tmin, tmax, baseline=(None, -0.01))\n+epochs = mne.Epochs(raw, events, event_id, tmin, tmax, baseline=(None, -0.01),\n+ decim=5)\n epochs['1'].average().plot()\n \n ###############################################################################\n # Let's do some dipole fits. The phantom is properly modeled by a single-shell\n # sphere with origin (0., 0., 0.). We compute covariance, then do the fits.\n \n-t_peak = 0.061 # 3 MS at largest peak\n+t_peak = 60e-3 # ~60 MS at largest peak\n sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=None)\n cov = mne.compute_covariance(epochs, tmax=0)\n data = []\n", "issue": "Memory consumption in plot_brainstorm_phantom_elekta\nCircle is failing because it's using so much memory.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\n==========================================\nBrainstorm Elekta phantom tutorial dataset\n==========================================\n\nHere we compute the evoked from raw for the Brainstorm Elekta phantom\ntutorial dataset. For comparison, see [1]_ and:\n\n http://neuroimage.usc.edu/brainstorm/Tutorials/PhantomElekta\n\nReferences\n----------\n.. 
[1] Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM.\n Brainstorm: A User-Friendly Application for MEG/EEG Analysis.\n Computational Intelligence and Neuroscience, vol. 2011, Article ID\n 879716, 13 pages, 2011. doi:10.1155/2011/879716\n\"\"\"\n\n# Authors: Eric Larson <[email protected]>\n#\n# License: BSD (3-clause)\n\nimport os.path as op\nimport numpy as np\n\nimport mne\nfrom mne import find_events, fit_dipole\nfrom mne.datasets.brainstorm import bst_phantom_elekta\nfrom mne.io import read_raw_fif\n\nprint(__doc__)\n\n###############################################################################\n# The data were collected with an Elekta Neuromag VectorView system at 1000 Hz\n# and low-pass filtered at 330 Hz. Here the medium-amplitude (200 nAm) data\n# are read to construct instances of :class:`mne.io.Raw`.\ndata_path = bst_phantom_elekta.data_path()\n\nraw_fname = op.join(data_path, 'kojak_all_200nAm_pp_no_chpi_no_ms_raw.fif')\nraw = read_raw_fif(raw_fname, preload=True)\n\n###############################################################################\n# Data channel array consisted of 204 MEG planor gradiometers,\n# 102 axial magnetometers, and 3 stimulus channels. Let's get the events\n# for the phantom, where each dipole (1-32) gets its own event:\n\nevents = find_events(raw, 'STI201')\nraw.plot(events=events)\nraw.info['bads'] = ['MEG2421']\n\n###############################################################################\n# The data have strong line frequency (60 Hz and harmonics) and cHPI coil\n# noise (five peaks around 300 Hz):\n\nraw.plot_psd()\n\n###############################################################################\n# We know our phantom produces sinusoidal bursts below 25 Hz, so let's filter.\n\nraw.filter(None, 40., h_trans_bandwidth=10., filter_length='1s')\nraw.plot_psd()\n\n###############################################################################\n# The data are still a bit noisy, so let's use Maxwell filtering to clean it.\n# Ideally we would have the fine calibration and cross-talk information\n# for the site of interest, but we don't, so we just do:\n\nraw.fix_mag_coil_types()\nraw = mne.preprocessing.maxwell_filter(raw, origin=(0., 0., 0.))\nraw.plot(events=events)\n\n###############################################################################\n# Now we epoch our data, average it, and look at the first dipole response.\n# The first peak appears around 3 ms.\n\ntmin, tmax = -0.2, 0.2\nevent_id = list(range(1, 33))\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, baseline=(None, -0.01))\nepochs['1'].average().plot()\n\n###############################################################################\n# Let's do some dipole fits. The phantom is properly modeled by a single-shell\n# sphere with origin (0., 0., 0.). 
We compute covariance, then do the fits.\n\nt_peak = 0.061 # 3 MS at largest peak\nsphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=None)\ncov = mne.compute_covariance(epochs, tmax=0)\ndata = []\nfor ii in range(1, 33):\n evoked = epochs[str(ii)].average().crop(t_peak, t_peak)\n data.append(evoked.data[:, 0])\nevoked = mne.EvokedArray(np.array(data).T, evoked.info, tmin=0.)\ndip = fit_dipole(evoked, cov, sphere, n_jobs=2)[0]\n\n###############################################################################\n# Now we can compare to the actual locations, taking the difference in mm:\n\nactual_pos = mne.dipole.get_phantom_dipoles(kind='122')[0]\ndiffs = 1000 * np.sqrt(np.sum((dip.pos - actual_pos) ** 2, axis=-1))\nprint('Differences (mm):\\n%s' % diffs[:, np.newaxis])\n", "path": "tutorials/plot_brainstorm_phantom_elekta.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\n==========================================\nBrainstorm Elekta phantom tutorial dataset\n==========================================\n\nHere we compute the evoked from raw for the Brainstorm Elekta phantom\ntutorial dataset. For comparison, see [1]_ and:\n\n http://neuroimage.usc.edu/brainstorm/Tutorials/PhantomElekta\n\nReferences\n----------\n.. [1] Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM.\n Brainstorm: A User-Friendly Application for MEG/EEG Analysis.\n Computational Intelligence and Neuroscience, vol. 2011, Article ID\n 879716, 13 pages, 2011. doi:10.1155/2011/879716\n\"\"\"\n\n# Authors: Eric Larson <[email protected]>\n#\n# License: BSD (3-clause)\n\nimport os.path as op\nimport numpy as np\n\nimport mne\nfrom mne import find_events, fit_dipole\nfrom mne.datasets.brainstorm import bst_phantom_elekta\nfrom mne.io import read_raw_fif\n\nprint(__doc__)\n\n###############################################################################\n# The data were collected with an Elekta Neuromag VectorView system at 1000 Hz\n# and low-pass filtered at 330 Hz. Here the medium-amplitude (200 nAm) data\n# are read to construct instances of :class:`mne.io.Raw`.\ndata_path = bst_phantom_elekta.data_path()\n\nraw_fname = op.join(data_path, 'kojak_all_200nAm_pp_no_chpi_no_ms_raw.fif')\nraw = read_raw_fif(raw_fname)\n\n###############################################################################\n# Data channel array consisted of 204 MEG planor gradiometers,\n# 102 axial magnetometers, and 3 stimulus channels. Let's get the events\n# for the phantom, where each dipole (1-32) gets its own event:\n\nevents = find_events(raw, 'STI201')\nraw.plot(events=events)\nraw.info['bads'] = ['MEG2421']\n\n###############################################################################\n# The data have strong line frequency (60 Hz and harmonics) and cHPI coil\n# noise (five peaks around 300 Hz). 
Here we plot only out to 60 seconds\n# to save memory:\n\nraw.plot_psd(tmax=60.)\n\n###############################################################################\n# Let's use Maxwell filtering to clean the data a bit.\n# Ideally we would have the fine calibration and cross-talk information\n# for the site of interest, but we don't, so we just do:\n\nraw.fix_mag_coil_types()\nraw = mne.preprocessing.maxwell_filter(raw, origin=(0., 0., 0.))\n\n###############################################################################\n# We know our phantom produces sinusoidal bursts below 25 Hz, so let's filter.\n\nraw.filter(None, 40., h_trans_bandwidth=10., filter_length='1s')\nraw.plot(events=events)\n\n###############################################################################\n# Now we epoch our data, average it, and look at the first dipole response.\n# The first peak appears around 3 ms. Because we low-passed at 40 Hz,\n# we can also decimate our data to save memory.\n\ntmin, tmax = -0.1, 0.1\nevent_id = list(range(1, 33))\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, baseline=(None, -0.01),\n decim=5)\nepochs['1'].average().plot()\n\n###############################################################################\n# Let's do some dipole fits. The phantom is properly modeled by a single-shell\n# sphere with origin (0., 0., 0.). We compute covariance, then do the fits.\n\nt_peak = 60e-3 # ~60 MS at largest peak\nsphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=None)\ncov = mne.compute_covariance(epochs, tmax=0)\ndata = []\nfor ii in range(1, 33):\n evoked = epochs[str(ii)].average().crop(t_peak, t_peak)\n data.append(evoked.data[:, 0])\nevoked = mne.EvokedArray(np.array(data).T, evoked.info, tmin=0.)\ndip = fit_dipole(evoked, cov, sphere, n_jobs=2)[0]\n\n###############################################################################\n# Now we can compare to the actual locations, taking the difference in mm:\n\nactual_pos = mne.dipole.get_phantom_dipoles(kind='122')[0]\ndiffs = 1000 * np.sqrt(np.sum((dip.pos - actual_pos) ** 2, axis=-1))\nprint('Differences (mm):\\n%s' % diffs[:, np.newaxis])\n", "path": "tutorials/plot_brainstorm_phantom_elekta.py"}]}
| 1,523 | 772 |
gh_patches_debug_13164
|
rasdani/github-patches
|
git_diff
|
Kinto__kinto-1245
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
JSON validation crash on missing array minItems property
```
File "/home/ubuntu/venvs/kinto/local/lib/python3.5/site-packages/pyramid/viewderivers.py", line 147, in _requestonly_view
response = view(request)
File "/home/ubuntu/venvs/kinto/local/lib/python3.5/site-packages/cornice/service.py", line 491, in wrapper
response = view_()
File "/home/ubuntu/venvs/kinto/local/lib/python3.5/site-packages/kinto/core/resource/__init__.py", line 460, in put
new_record = self.process_record(post_record, old=existing)
File "/home/ubuntu/venvs/kinto/local/lib/python3.5/site-packages/kinto/views/records.py", line 66, in process_record
jsonschema.validate(stripped, schema)
File "/home/ubuntu/venvs/kinto/local/lib/python3.5/site-packages/jsonschema/validators.py", line 540, in validate
cls.check_schema(schema)
File "/home/ubuntu/venvs/kinto/local/lib/python3.5/site-packages/jsonschema/validators.py", line 83, in check_schema
raise SchemaError.create_from(error)
jsonschema.exceptions.SchemaError: [] is too short
Failed validating 'minItems' in schema['properties']['required']:
{'items': {'type': 'string'},
'minItems': 1,
'type': 'array',
'uniqueItems': True}
On instance['required']:
[]
```
JSON validation crash on missing array minItems property
```
File "/home/ubuntu/venvs/kinto/local/lib/python3.5/site-packages/pyramid/viewderivers.py", line 147, in _requestonly_view
response = view(request)
File "/home/ubuntu/venvs/kinto/local/lib/python3.5/site-packages/cornice/service.py", line 491, in wrapper
response = view_()
File "/home/ubuntu/venvs/kinto/local/lib/python3.5/site-packages/kinto/core/resource/__init__.py", line 460, in put
new_record = self.process_record(post_record, old=existing)
File "/home/ubuntu/venvs/kinto/local/lib/python3.5/site-packages/kinto/views/records.py", line 66, in process_record
jsonschema.validate(stripped, schema)
File "/home/ubuntu/venvs/kinto/local/lib/python3.5/site-packages/jsonschema/validators.py", line 540, in validate
cls.check_schema(schema)
File "/home/ubuntu/venvs/kinto/local/lib/python3.5/site-packages/jsonschema/validators.py", line 83, in check_schema
raise SchemaError.create_from(error)
jsonschema.exceptions.SchemaError: [] is too short
Failed validating 'minItems' in schema['properties']['required']:
{'items': {'type': 'string'},
'minItems': 1,
'type': 'array',
'uniqueItems': True}
On instance['required']:
[]
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kinto/views/records.py`
Content:
```
1 import copy
2
3 import jsonschema
4 from kinto.core import resource, utils
5 from kinto.core.errors import raise_invalid
6 from jsonschema import exceptions as jsonschema_exceptions
7 from pyramid.security import Authenticated
8 from pyramid.settings import asbool
9
10 from kinto.views import object_exists_or_404
11
12
13 _parent_path = '/buckets/{{bucket_id}}/collections/{{collection_id}}'
14
15
16 @resource.register(name='record',
17 collection_path=_parent_path + '/records',
18 record_path=_parent_path + '/records/{{id}}')
19 class Record(resource.ShareableResource):
20
21 schema_field = 'schema'
22
23 def __init__(self, request, **kwargs):
24 # Before all, first check that the parent collection exists.
25 # Check if already fetched before (in batch).
26 collections = request.bound_data.setdefault('collections', {})
27 collection_uri = self.get_parent_id(request)
28 if collection_uri not in collections:
29 # Unknown yet, fetch from storage.
30 collection_parent_id = utils.instance_uri(request, 'bucket',
31 id=self.bucket_id)
32 collection = object_exists_or_404(request,
33 collection_id='collection',
34 parent_id=collection_parent_id,
35 object_id=self.collection_id)
36 collections[collection_uri] = collection
37
38 super().__init__(request, **kwargs)
39 self._collection = collections[collection_uri]
40
41 def get_parent_id(self, request):
42 self.bucket_id = request.matchdict['bucket_id']
43 self.collection_id = request.matchdict['collection_id']
44 return utils.instance_uri(request, 'collection',
45 bucket_id=self.bucket_id,
46 id=self.collection_id)
47
48 def process_record(self, new, old=None):
49 """Validate records against collection schema, if any."""
50 new = super().process_record(new, old)
51
52 schema = self._collection.get('schema')
53 settings = self.request.registry.settings
54 schema_validation = 'experimental_collection_schema_validation'
55 if not schema or not asbool(settings.get(schema_validation)):
56 return new
57
58 collection_timestamp = self._collection[self.model.modified_field]
59
60 try:
61 stripped = copy.deepcopy(new)
62 stripped.pop(self.model.id_field, None)
63 stripped.pop(self.model.modified_field, None)
64 stripped.pop(self.model.permissions_field, None)
65 stripped.pop(self.schema_field, None)
66 jsonschema.validate(stripped, schema)
67 except jsonschema_exceptions.ValidationError as e:
68 try:
69 field = e.path.pop() if e.path else e.validator_value.pop()
70 except AttributeError:
71 field = None
72 raise_invalid(self.request, name=field, description=e.message)
73
74 new[self.schema_field] = collection_timestamp
75 return new
76
77 def collection_get(self):
78 result = super().collection_get()
79 self._handle_cache_expires(self.request.response)
80 return result
81
82 def get(self):
83 result = super().get()
84 self._handle_cache_expires(self.request.response)
85 return result
86
87 def _handle_cache_expires(self, response):
88 """If the parent collection defines a ``cache_expires`` attribute,
89 then cache-control response headers are sent.
90
91 .. note::
92
93 Those headers are also sent if the
94 ``kinto.record_cache_expires_seconds`` setting is defined.
95 """
96 is_anonymous = Authenticated not in self.request.effective_principals
97 if not is_anonymous:
98 return
99
100 cache_expires = self._collection.get('cache_expires')
101 if cache_expires is None:
102 by_bucket = '{}_record_cache_expires_seconds'.format(self.bucket_id)
103 by_collection = '{}_{}_record_cache_expires_seconds'.format(
104 self.bucket_id, self.collection_id)
105 settings = self.request.registry.settings
106 cache_expires = settings.get(by_collection,
107 settings.get(by_bucket))
108
109 if cache_expires is not None:
110 response.cache_expires(seconds=int(cache_expires))
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kinto/views/records.py b/kinto/views/records.py
--- a/kinto/views/records.py
+++ b/kinto/views/records.py
@@ -65,10 +65,10 @@
stripped.pop(self.schema_field, None)
jsonschema.validate(stripped, schema)
except jsonschema_exceptions.ValidationError as e:
- try:
- field = e.path.pop() if e.path else e.validator_value.pop()
- except AttributeError:
- field = None
+ if e.validator_value:
+ field = e.validator_value[-1]
+ else:
+ field = e.schema_path[-1]
raise_invalid(self.request, name=field, description=e.message)
new[self.schema_field] = collection_timestamp
|
{"golden_diff": "diff --git a/kinto/views/records.py b/kinto/views/records.py\n--- a/kinto/views/records.py\n+++ b/kinto/views/records.py\n@@ -65,10 +65,10 @@\n stripped.pop(self.schema_field, None)\n jsonschema.validate(stripped, schema)\n except jsonschema_exceptions.ValidationError as e:\n- try:\n- field = e.path.pop() if e.path else e.validator_value.pop()\n- except AttributeError:\n- field = None\n+ if e.validator_value:\n+ field = e.validator_value[-1]\n+ else:\n+ field = e.schema_path[-1]\n raise_invalid(self.request, name=field, description=e.message)\n \n new[self.schema_field] = collection_timestamp\n", "issue": "JSON validation crash on missing array minItems property\n```\r\n File \"/home/ubuntu/venvs/kinto/local/lib/python3.5/site-packages/pyramid/viewderivers.py\", line 147, in _requestonly_view\r\n response = view(request)\r\n File \"/home/ubuntu/venvs/kinto/local/lib/python3.5/site-packages/cornice/service.py\", line 491, in wrapper\r\n response = view_()\r\n File \"/home/ubuntu/venvs/kinto/local/lib/python3.5/site-packages/kinto/core/resource/__init__.py\", line 460, in put\r\n new_record = self.process_record(post_record, old=existing)\r\n File \"/home/ubuntu/venvs/kinto/local/lib/python3.5/site-packages/kinto/views/records.py\", line 66, in process_record\r\n jsonschema.validate(stripped, schema)\r\n File \"/home/ubuntu/venvs/kinto/local/lib/python3.5/site-packages/jsonschema/validators.py\", line 540, in validate\r\n cls.check_schema(schema)\r\n File \"/home/ubuntu/venvs/kinto/local/lib/python3.5/site-packages/jsonschema/validators.py\", line 83, in check_schema\r\n raise SchemaError.create_from(error)\r\njsonschema.exceptions.SchemaError: [] is too short\r\n\r\nFailed validating 'minItems' in schema['properties']['required']:\r\n {'items': {'type': 'string'},\r\n 'minItems': 1,\r\n 'type': 'array',\r\n 'uniqueItems': True}\r\n\r\nOn instance['required']:\r\n []\r\n```\nJSON validation crash on missing array minItems property\n```\r\n File \"/home/ubuntu/venvs/kinto/local/lib/python3.5/site-packages/pyramid/viewderivers.py\", line 147, in _requestonly_view\r\n response = view(request)\r\n File \"/home/ubuntu/venvs/kinto/local/lib/python3.5/site-packages/cornice/service.py\", line 491, in wrapper\r\n response = view_()\r\n File \"/home/ubuntu/venvs/kinto/local/lib/python3.5/site-packages/kinto/core/resource/__init__.py\", line 460, in put\r\n new_record = self.process_record(post_record, old=existing)\r\n File \"/home/ubuntu/venvs/kinto/local/lib/python3.5/site-packages/kinto/views/records.py\", line 66, in process_record\r\n jsonschema.validate(stripped, schema)\r\n File \"/home/ubuntu/venvs/kinto/local/lib/python3.5/site-packages/jsonschema/validators.py\", line 540, in validate\r\n cls.check_schema(schema)\r\n File \"/home/ubuntu/venvs/kinto/local/lib/python3.5/site-packages/jsonschema/validators.py\", line 83, in check_schema\r\n raise SchemaError.create_from(error)\r\njsonschema.exceptions.SchemaError: [] is too short\r\n\r\nFailed validating 'minItems' in schema['properties']['required']:\r\n {'items': {'type': 'string'},\r\n 'minItems': 1,\r\n 'type': 'array',\r\n 'uniqueItems': True}\r\n\r\nOn instance['required']:\r\n []\r\n```\n", "before_files": [{"content": "import copy\n\nimport jsonschema\nfrom kinto.core import resource, utils\nfrom kinto.core.errors import raise_invalid\nfrom jsonschema import exceptions as jsonschema_exceptions\nfrom pyramid.security import Authenticated\nfrom pyramid.settings import asbool\n\nfrom kinto.views import 
object_exists_or_404\n\n\n_parent_path = '/buckets/{{bucket_id}}/collections/{{collection_id}}'\n\n\[email protected](name='record',\n collection_path=_parent_path + '/records',\n record_path=_parent_path + '/records/{{id}}')\nclass Record(resource.ShareableResource):\n\n schema_field = 'schema'\n\n def __init__(self, request, **kwargs):\n # Before all, first check that the parent collection exists.\n # Check if already fetched before (in batch).\n collections = request.bound_data.setdefault('collections', {})\n collection_uri = self.get_parent_id(request)\n if collection_uri not in collections:\n # Unknown yet, fetch from storage.\n collection_parent_id = utils.instance_uri(request, 'bucket',\n id=self.bucket_id)\n collection = object_exists_or_404(request,\n collection_id='collection',\n parent_id=collection_parent_id,\n object_id=self.collection_id)\n collections[collection_uri] = collection\n\n super().__init__(request, **kwargs)\n self._collection = collections[collection_uri]\n\n def get_parent_id(self, request):\n self.bucket_id = request.matchdict['bucket_id']\n self.collection_id = request.matchdict['collection_id']\n return utils.instance_uri(request, 'collection',\n bucket_id=self.bucket_id,\n id=self.collection_id)\n\n def process_record(self, new, old=None):\n \"\"\"Validate records against collection schema, if any.\"\"\"\n new = super().process_record(new, old)\n\n schema = self._collection.get('schema')\n settings = self.request.registry.settings\n schema_validation = 'experimental_collection_schema_validation'\n if not schema or not asbool(settings.get(schema_validation)):\n return new\n\n collection_timestamp = self._collection[self.model.modified_field]\n\n try:\n stripped = copy.deepcopy(new)\n stripped.pop(self.model.id_field, None)\n stripped.pop(self.model.modified_field, None)\n stripped.pop(self.model.permissions_field, None)\n stripped.pop(self.schema_field, None)\n jsonschema.validate(stripped, schema)\n except jsonschema_exceptions.ValidationError as e:\n try:\n field = e.path.pop() if e.path else e.validator_value.pop()\n except AttributeError:\n field = None\n raise_invalid(self.request, name=field, description=e.message)\n\n new[self.schema_field] = collection_timestamp\n return new\n\n def collection_get(self):\n result = super().collection_get()\n self._handle_cache_expires(self.request.response)\n return result\n\n def get(self):\n result = super().get()\n self._handle_cache_expires(self.request.response)\n return result\n\n def _handle_cache_expires(self, response):\n \"\"\"If the parent collection defines a ``cache_expires`` attribute,\n then cache-control response headers are sent.\n\n .. 
note::\n\n Those headers are also sent if the\n ``kinto.record_cache_expires_seconds`` setting is defined.\n \"\"\"\n is_anonymous = Authenticated not in self.request.effective_principals\n if not is_anonymous:\n return\n\n cache_expires = self._collection.get('cache_expires')\n if cache_expires is None:\n by_bucket = '{}_record_cache_expires_seconds'.format(self.bucket_id)\n by_collection = '{}_{}_record_cache_expires_seconds'.format(\n self.bucket_id, self.collection_id)\n settings = self.request.registry.settings\n cache_expires = settings.get(by_collection,\n settings.get(by_bucket))\n\n if cache_expires is not None:\n response.cache_expires(seconds=int(cache_expires))\n", "path": "kinto/views/records.py"}], "after_files": [{"content": "import copy\n\nimport jsonschema\nfrom kinto.core import resource, utils\nfrom kinto.core.errors import raise_invalid\nfrom jsonschema import exceptions as jsonschema_exceptions\nfrom pyramid.security import Authenticated\nfrom pyramid.settings import asbool\n\nfrom kinto.views import object_exists_or_404\n\n\n_parent_path = '/buckets/{{bucket_id}}/collections/{{collection_id}}'\n\n\[email protected](name='record',\n collection_path=_parent_path + '/records',\n record_path=_parent_path + '/records/{{id}}')\nclass Record(resource.ShareableResource):\n\n schema_field = 'schema'\n\n def __init__(self, request, **kwargs):\n # Before all, first check that the parent collection exists.\n # Check if already fetched before (in batch).\n collections = request.bound_data.setdefault('collections', {})\n collection_uri = self.get_parent_id(request)\n if collection_uri not in collections:\n # Unknown yet, fetch from storage.\n collection_parent_id = utils.instance_uri(request, 'bucket',\n id=self.bucket_id)\n collection = object_exists_or_404(request,\n collection_id='collection',\n parent_id=collection_parent_id,\n object_id=self.collection_id)\n collections[collection_uri] = collection\n\n super().__init__(request, **kwargs)\n self._collection = collections[collection_uri]\n\n def get_parent_id(self, request):\n self.bucket_id = request.matchdict['bucket_id']\n self.collection_id = request.matchdict['collection_id']\n return utils.instance_uri(request, 'collection',\n bucket_id=self.bucket_id,\n id=self.collection_id)\n\n def process_record(self, new, old=None):\n \"\"\"Validate records against collection schema, if any.\"\"\"\n new = super().process_record(new, old)\n\n schema = self._collection.get('schema')\n settings = self.request.registry.settings\n schema_validation = 'experimental_collection_schema_validation'\n if not schema or not asbool(settings.get(schema_validation)):\n return new\n\n collection_timestamp = self._collection[self.model.modified_field]\n\n try:\n stripped = copy.deepcopy(new)\n stripped.pop(self.model.id_field, None)\n stripped.pop(self.model.modified_field, None)\n stripped.pop(self.model.permissions_field, None)\n stripped.pop(self.schema_field, None)\n jsonschema.validate(stripped, schema)\n except jsonschema_exceptions.ValidationError as e:\n if e.validator_value:\n field = e.validator_value[-1]\n else:\n field = e.schema_path[-1]\n raise_invalid(self.request, name=field, description=e.message)\n\n new[self.schema_field] = collection_timestamp\n return new\n\n def collection_get(self):\n result = super().collection_get()\n self._handle_cache_expires(self.request.response)\n return result\n\n def get(self):\n result = super().get()\n self._handle_cache_expires(self.request.response)\n return result\n\n def _handle_cache_expires(self, 
response):\n \"\"\"If the parent collection defines a ``cache_expires`` attribute,\n then cache-control response headers are sent.\n\n .. note::\n\n Those headers are also sent if the\n ``kinto.record_cache_expires_seconds`` setting is defined.\n \"\"\"\n is_anonymous = Authenticated not in self.request.effective_principals\n if not is_anonymous:\n return\n\n cache_expires = self._collection.get('cache_expires')\n if cache_expires is None:\n by_bucket = '{}_record_cache_expires_seconds'.format(self.bucket_id)\n by_collection = '{}_{}_record_cache_expires_seconds'.format(\n self.bucket_id, self.collection_id)\n settings = self.request.registry.settings\n cache_expires = settings.get(by_collection,\n settings.get(by_bucket))\n\n if cache_expires is not None:\n response.cache_expires(seconds=int(cache_expires))\n", "path": "kinto/views/records.py"}]}
| 1,981 | 166 |
gh_patches_debug_18792
|
rasdani/github-patches
|
git_diff
|
nonebot__nonebot2-334
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: Logger record name is always nonebot
**Describe the problem:**

In [v2.0.0a11](https://github.com/nonebot/nonebot2/tree/v2.0.0a11) and earlier versions, calling `loguru.logger` from an event-handler file or package produced log output whose record name was the name of that file or package.

In [v2.0.0a13.post1](https://github.com/nonebot/nonebot2/tree/v2.0.0a13.post1), logs emitted in the same way always report nonebot as the record name.
**Expected result**
The log record name should be the name of the file or package that emitted it.
**Environment:**
- OS: windows-latest
- Python Version: 3.9.4
- Nonebot Version: v2.0.0a13.post1
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nonebot/log.py`
Content:
```
1 """
2 日志
3 ====
4
5 NoneBot 使用 `loguru`_ 来记录日志信息。
6
7 自定义 logger 请参考 `loguru`_ 文档。
8
9 .. _loguru:
10 https://github.com/Delgan/loguru
11 """
12
13 import sys
14 import logging
15 from typing import Union
16
17 from loguru import logger as logger_
18
19 # logger = logging.getLogger("nonebot")
20 logger = logger_
21 """
22 :说明:
23
24 NoneBot 日志记录器对象。
25
26 :默认信息:
27
28 * 格式: ``[%(asctime)s %(name)s] %(levelname)s: %(message)s``
29 * 等级: ``DEBUG`` / ``INFO`` ,根据 config 配置改变
30 * 输出: 输出至 stdout
31
32 :用法:
33
34 .. code-block:: python
35
36 from nonebot.log import logger
37 """
38
39 # default_handler = logging.StreamHandler(sys.stdout)
40 # default_handler.setFormatter(
41 # logging.Formatter("[%(asctime)s %(name)s] %(levelname)s: %(message)s"))
42 # logger.addHandler(default_handler)
43
44
45 class Filter:
46
47 def __init__(self) -> None:
48 self.level: Union[int, str] = "DEBUG"
49
50 def __call__(self, record):
51 record["name"] = record["name"].split(".")[0]
52 levelno = logger.level(self.level).no if isinstance(self.level,
53 str) else self.level
54 return record["level"].no >= levelno
55
56
57 class LoguruHandler(logging.Handler):
58
59 def emit(self, record):
60 try:
61 level = logger.level(record.levelname).name
62 except ValueError:
63 level = record.levelno
64
65 frame, depth = logging.currentframe(), 2
66 while frame.f_code.co_filename == logging.__file__:
67 frame = frame.f_back
68 depth += 1
69
70 logger.opt(depth=depth,
71 exception=record.exc_info).log(level, record.getMessage())
72
73
74 logger.remove()
75 default_filter = Filter()
76 default_format = (
77 "<g>{time:MM-DD HH:mm:ss}</g> "
78 "[<lvl>{level}</lvl>] "
79 "<c><u>{name}</u></c> | "
80 # "<c>{function}:{line}</c>| "
81 "{message}")
82 logger_id = logger.add(sys.stdout,
83 colorize=True,
84 diagnose=False,
85 filter=default_filter,
86 format=default_format)
87
```
Path: `nonebot/plugin/manager.py`
Content:
```
1 import sys
2 import uuid
3 import pkgutil
4 import importlib
5 from hashlib import md5
6 from types import ModuleType
7 from collections import Counter
8 from contextvars import ContextVar
9 from importlib.abc import MetaPathFinder
10 from typing import Set, List, Optional, Iterable
11 from importlib.machinery import PathFinder, SourceFileLoader
12
13 from .export import _export, Export
14
15 _current_plugin: ContextVar[Optional[str]] = ContextVar("_current_plugin",
16 default=None)
17
18 _internal_space = ModuleType(__name__ + "._internal")
19 _internal_space.__path__ = [] # type: ignore
20 sys.modules[_internal_space.__name__] = _internal_space
21
22 _manager_stack: List["PluginManager"] = []
23
24
25 class _NamespaceModule(ModuleType):
26 """Simple namespace module to store plugins."""
27
28 @property
29 def __path__(self):
30 return []
31
32 def __getattr__(self, name: str):
33 try:
34 return super().__getattr__(name) # type: ignore
35 except AttributeError:
36 if name.startswith("__"):
37 raise
38 raise RuntimeError("Plugin manager not activated!")
39
40
41 class _InternalModule(ModuleType):
42 """Internal module for each plugin manager."""
43
44 def __init__(self, prefix: str, plugin_manager: "PluginManager"):
45 super().__init__(f"{prefix}.{plugin_manager.internal_id}")
46 self.__plugin_manager__ = plugin_manager
47
48 @property
49 def __path__(self) -> List[str]:
50 return list(self.__plugin_manager__.search_path)
51
52
53 class PluginManager:
54
55 def __init__(self,
56 namespace: Optional[str] = None,
57 plugins: Optional[Iterable[str]] = None,
58 search_path: Optional[Iterable[str]] = None,
59 *,
60 id: Optional[str] = None):
61 self.namespace: Optional[str] = namespace
62 self.namespace_module: Optional[ModuleType] = self._setup_namespace(
63 namespace)
64
65 self.id: str = id or str(uuid.uuid4())
66 self.internal_id: str = md5(
67 ((self.namespace or "") + self.id).encode()).hexdigest()
68 self.internal_module = self._setup_internal_module(self.internal_id)
69
70 # simple plugin not in search path
71 self.plugins: Set[str] = set(plugins or [])
72 self.search_path: Set[str] = set(search_path or [])
73 # ensure can be loaded
74 self.list_plugins()
75
76 def _setup_namespace(self,
77 namespace: Optional[str] = None
78 ) -> Optional[ModuleType]:
79 if not namespace:
80 return None
81
82 try:
83 module = importlib.import_module(namespace)
84 except ImportError:
85 module = _NamespaceModule(namespace)
86 if "." in namespace:
87 parent = importlib.import_module(namespace.rsplit(".", 1)[0])
88 setattr(parent, namespace.rsplit(".", 1)[1], module)
89
90 sys.modules[namespace] = module
91 return module
92
93 def _setup_internal_module(self, internal_id: str) -> ModuleType:
94 if hasattr(_internal_space, internal_id):
95 raise RuntimeError("Plugin manager already exists!")
96
97 prefix = sys._getframe(3).f_globals.get(
98 "__name__") or _internal_space.__name__
99 if not prefix.startswith(_internal_space.__name__):
100 prefix = _internal_space.__name__
101 module = _InternalModule(prefix, self)
102 sys.modules[module.__name__] = module
103 setattr(_internal_space, internal_id, module)
104 return module
105
106 def __enter__(self):
107 if self in _manager_stack:
108 raise RuntimeError("Plugin manager already activated!")
109 _manager_stack.append(self)
110 return self
111
112 def __exit__(self, exc_type, exc_value, traceback):
113 try:
114 _manager_stack.pop()
115 except IndexError:
116 pass
117
118 def search_plugins(self) -> List[str]:
119 return [
120 module_info.name
121 for module_info in pkgutil.iter_modules(self.search_path)
122 ]
123
124 def list_plugins(self) -> Set[str]:
125 _pre_managers: List[PluginManager]
126 if self in _manager_stack:
127 _pre_managers = _manager_stack[:_manager_stack.index(self)]
128 else:
129 _pre_managers = _manager_stack[:]
130
131 _search_path: Set[str] = set()
132 for manager in _pre_managers:
133 _search_path |= manager.search_path
134 if _search_path & self.search_path:
135 raise RuntimeError("Duplicate plugin search path!")
136
137 _search_plugins = self.search_plugins()
138 c = Counter([*_search_plugins, *self.plugins])
139 conflict = [name for name, num in c.items() if num > 1]
140 if conflict:
141 raise RuntimeError(
142 f"More than one plugin named {' / '.join(conflict)}!")
143 return set(_search_plugins) | self.plugins
144
145 def load_plugin(self, name) -> ModuleType:
146 if name in self.plugins:
147 with self:
148 return importlib.import_module(name)
149
150 if "." in name:
151 raise ValueError("Plugin name cannot contain '.'")
152
153 with self:
154 return importlib.import_module(f"{self.namespace}.{name}")
155
156 def load_all_plugins(self) -> List[ModuleType]:
157 return [self.load_plugin(name) for name in self.list_plugins()]
158
159 def _rewrite_module_name(self, module_name) -> Optional[str]:
160 prefix = f"{self.internal_module.__name__}."
161 if module_name.startswith(self.namespace + "."):
162 path = module_name.split(".")
163 length = self.namespace.count(".") + 1
164 return f"{prefix}{'.'.join(path[length:])}"
165 elif module_name in self.plugins or module_name.startswith(prefix):
166 return module_name
167 elif module_name in self.search_plugins():
168 return f"{prefix}{module_name}"
169 return None
170
171
172 class PluginFinder(MetaPathFinder):
173
174 def find_spec(self, fullname: str, path, target):
175 if _manager_stack:
176 index = -1
177 while -index <= len(_manager_stack):
178 manager = _manager_stack[index]
179 newname = manager._rewrite_module_name(fullname)
180 if newname:
181 spec = PathFinder.find_spec(
182 newname, [*manager.search_path, *(path or sys.path)],
183 target)
184 if spec:
185 spec.loader = PluginLoader(manager, newname,
186 spec.origin)
187 return spec
188 index -= 1
189 return None
190
191
192 class PluginLoader(SourceFileLoader):
193
194 def __init__(self, manager: PluginManager, fullname: str, path) -> None:
195 self.manager = manager
196 self.loaded = False
197 self._plugin_token = None
198 self._export_token = None
199 super().__init__(fullname, path)
200
201 def create_module(self, spec) -> Optional[ModuleType]:
202 if self.name in sys.modules:
203 self.loaded = True
204 return sys.modules[self.name]
205 prefix = self.manager.internal_module.__name__
206 plugin_name = self.name[len(prefix):] if self.name.startswith(
207 prefix) else self.name
208 self._plugin_token = _current_plugin.set(plugin_name.lstrip("."))
209 self._export_token = _export.set(Export())
210 # return None to use default module creation
211 return super().create_module(spec)
212
213 def exec_module(self, module: ModuleType) -> None:
214 if self.loaded:
215 return
216 # really need?
217 # setattr(module, "__manager__", self.manager)
218 if self._export_token:
219 setattr(module, "__export__", _export.get())
220
221 super().exec_module(module)
222
223 if self._plugin_token:
224 _current_plugin.reset(self._plugin_token)
225 if self._export_token:
226 _export.reset(self._export_token)
227 return
228
229
230 sys.meta_path.insert(0, PluginFinder())
231
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/nonebot/log.py b/nonebot/log.py
--- a/nonebot/log.py
+++ b/nonebot/log.py
@@ -47,6 +47,10 @@
self.level = "DEBUG"
def __call__(self, record):
+ module = sys.modules.get(record["name"])
+ if module:
+ plugin_name = getattr(module, "__plugin_name__", record["name"])
+ record["name"] = plugin_name
record["name"] = record["name"].split(".")[0]
levelno = logger.level(self.level).no
return record["level"].no >= levelno
diff --git a/nonebot/plugin/manager.py b/nonebot/plugin/manager.py
--- a/nonebot/plugin/manager.py
+++ b/nonebot/plugin/manager.py
@@ -215,6 +215,9 @@
return
# really need?
# setattr(module, "__manager__", self.manager)
+ if self._plugin_token:
+ setattr(module, "__plugin_name__",
+ _current_plugin.get(self._plugin_token))
if self._export_token:
setattr(module, "__export__", _export.get())
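The `log.py` hunk above relies on loguru filters receiving a mutable `record` dict whose `name` key holds the emitting module's `__name__`. A minimal standalone sketch of that pattern (not the project's actual code, assuming loguru is installed):

```python
import sys
from loguru import logger

def plugin_name_filter(record):
    # record["name"] is the __name__ of the module that emitted the message;
    # prefer an explicit __plugin_name__ attribute when that module defines one.
    module = sys.modules.get(record["name"])
    if module:
        record["name"] = getattr(module, "__plugin_name__", record["name"])
    return True

logger.remove()
logger.add(sys.stdout, filter=plugin_name_filter, format="[{name}] {message}")
logger.info("hello")  # logs "[__main__] hello" unless __plugin_name__ is set
```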
|
{"golden_diff": "diff --git a/nonebot/log.py b/nonebot/log.py\n--- a/nonebot/log.py\n+++ b/nonebot/log.py\n@@ -47,6 +47,10 @@\n self.level = \"DEBUG\"\n \n def __call__(self, record):\n+ module = sys.modules.get(record[\"name\"])\n+ if module:\n+ plugin_name = getattr(module, \"__plugin_name__\", record[\"name\"])\n+ record[\"name\"] = plugin_name\n record[\"name\"] = record[\"name\"].split(\".\")[0]\n levelno = logger.level(self.level).no\n return record[\"level\"].no >= levelno\ndiff --git a/nonebot/plugin/manager.py b/nonebot/plugin/manager.py\n--- a/nonebot/plugin/manager.py\n+++ b/nonebot/plugin/manager.py\n@@ -215,6 +215,9 @@\n return\n # really need?\n # setattr(module, \"__manager__\", self.manager)\n+ if self._plugin_token:\n+ setattr(module, \"__plugin_name__\",\n+ _current_plugin.get(self._plugin_token))\n if self._export_token:\n setattr(module, \"__export__\", _export.get())\n", "issue": "Bug: Logger \u53d1\u8d77\u5bf9\u8c61\u540d\u4e00\u5f8b\u4e3a nonebot\n**\u63cf\u8ff0\u95ee\u9898\uff1a**\r\n\r\n\u5728 [v2.0.0a11](https://github.com/nonebot/nonebot2/tree/v2.0.0a11) \u53ca\u66f4\u65e9\u7248\u672c\u4e2d\uff0c\u901a\u8fc7\u5728\u4e8b\u4ef6\u5904\u7406\u7684\u6587\u4ef6\u6216\u8005\u5305\u4e2d\u8c03\u7528 `loguru.logger` \u5176\u8f93\u51fa\u7684\u65e5\u5fd7\u4e4b\u53d1\u8d77\u5bf9\u8c61\u540d\u4e3a\u8be5\u6587\u4ef6\u6216\u5305\u4e4b\u540d\u79f0\r\n\r\n\u5728 [v2.0.0a13.post1](https://github.com/nonebot/nonebot2/tree/v2.0.0a13.post1) \u4e2d\uff0c\u4f7f\u7528\u76f8\u540c\u65b9\u5f0f\u8f93\u51fa\u7684\u65e5\u5fd7\uff0c\u5176\u53d1\u8d77\u5bf9\u8c61\u540d\u4e00\u5f8b\u4e3a nonebot\r\n\r\n**\u671f\u671b\u7684\u7ed3\u679c**\r\n\r\n\u65e5\u5fd7\u4e4b\u53d1\u8d77\u5bf9\u8c61\u540d\u4e3a\u8be5\u6587\u4ef6\u6216\u5305\u4e4b\u540d\u79f0\r\n\r\n**\u73af\u5883\u4fe1\u606f\uff1a**\r\n\r\n - OS: windows-latest\r\n - Python Version: 3.9.4\r\n - Nonebot Version: v2.0.0a13.post1\n", "before_files": [{"content": "\"\"\"\n\u65e5\u5fd7\n====\n\nNoneBot \u4f7f\u7528 `loguru`_ \u6765\u8bb0\u5f55\u65e5\u5fd7\u4fe1\u606f\u3002\n\n\u81ea\u5b9a\u4e49 logger \u8bf7\u53c2\u8003 `loguru`_ \u6587\u6863\u3002\n\n.. _loguru:\n https://github.com/Delgan/loguru\n\"\"\"\n\nimport sys\nimport logging\nfrom typing import Union\n\nfrom loguru import logger as logger_\n\n# logger = logging.getLogger(\"nonebot\")\nlogger = logger_\n\"\"\"\n:\u8bf4\u660e:\n\n NoneBot \u65e5\u5fd7\u8bb0\u5f55\u5668\u5bf9\u8c61\u3002\n\n:\u9ed8\u8ba4\u4fe1\u606f:\n\n * \u683c\u5f0f: ``[%(asctime)s %(name)s] %(levelname)s: %(message)s``\n * \u7b49\u7ea7: ``DEBUG`` / ``INFO`` \uff0c\u6839\u636e config \u914d\u7f6e\u6539\u53d8\n * \u8f93\u51fa: \u8f93\u51fa\u81f3 stdout\n\n:\u7528\u6cd5:\n\n.. 
code-block:: python\n\n from nonebot.log import logger\n\"\"\"\n\n# default_handler = logging.StreamHandler(sys.stdout)\n# default_handler.setFormatter(\n# logging.Formatter(\"[%(asctime)s %(name)s] %(levelname)s: %(message)s\"))\n# logger.addHandler(default_handler)\n\n\nclass Filter:\n\n def __init__(self) -> None:\n self.level: Union[int, str] = \"DEBUG\"\n\n def __call__(self, record):\n record[\"name\"] = record[\"name\"].split(\".\")[0]\n levelno = logger.level(self.level).no if isinstance(self.level,\n str) else self.level\n return record[\"level\"].no >= levelno\n\n\nclass LoguruHandler(logging.Handler):\n\n def emit(self, record):\n try:\n level = logger.level(record.levelname).name\n except ValueError:\n level = record.levelno\n\n frame, depth = logging.currentframe(), 2\n while frame.f_code.co_filename == logging.__file__:\n frame = frame.f_back\n depth += 1\n\n logger.opt(depth=depth,\n exception=record.exc_info).log(level, record.getMessage())\n\n\nlogger.remove()\ndefault_filter = Filter()\ndefault_format = (\n \"<g>{time:MM-DD HH:mm:ss}</g> \"\n \"[<lvl>{level}</lvl>] \"\n \"<c><u>{name}</u></c> | \"\n # \"<c>{function}:{line}</c>| \"\n \"{message}\")\nlogger_id = logger.add(sys.stdout,\n colorize=True,\n diagnose=False,\n filter=default_filter,\n format=default_format)\n", "path": "nonebot/log.py"}, {"content": "import sys\nimport uuid\nimport pkgutil\nimport importlib\nfrom hashlib import md5\nfrom types import ModuleType\nfrom collections import Counter\nfrom contextvars import ContextVar\nfrom importlib.abc import MetaPathFinder\nfrom typing import Set, List, Optional, Iterable\nfrom importlib.machinery import PathFinder, SourceFileLoader\n\nfrom .export import _export, Export\n\n_current_plugin: ContextVar[Optional[str]] = ContextVar(\"_current_plugin\",\n default=None)\n\n_internal_space = ModuleType(__name__ + \"._internal\")\n_internal_space.__path__ = [] # type: ignore\nsys.modules[_internal_space.__name__] = _internal_space\n\n_manager_stack: List[\"PluginManager\"] = []\n\n\nclass _NamespaceModule(ModuleType):\n \"\"\"Simple namespace module to store plugins.\"\"\"\n\n @property\n def __path__(self):\n return []\n\n def __getattr__(self, name: str):\n try:\n return super().__getattr__(name) # type: ignore\n except AttributeError:\n if name.startswith(\"__\"):\n raise\n raise RuntimeError(\"Plugin manager not activated!\")\n\n\nclass _InternalModule(ModuleType):\n \"\"\"Internal module for each plugin manager.\"\"\"\n\n def __init__(self, prefix: str, plugin_manager: \"PluginManager\"):\n super().__init__(f\"{prefix}.{plugin_manager.internal_id}\")\n self.__plugin_manager__ = plugin_manager\n\n @property\n def __path__(self) -> List[str]:\n return list(self.__plugin_manager__.search_path)\n\n\nclass PluginManager:\n\n def __init__(self,\n namespace: Optional[str] = None,\n plugins: Optional[Iterable[str]] = None,\n search_path: Optional[Iterable[str]] = None,\n *,\n id: Optional[str] = None):\n self.namespace: Optional[str] = namespace\n self.namespace_module: Optional[ModuleType] = self._setup_namespace(\n namespace)\n\n self.id: str = id or str(uuid.uuid4())\n self.internal_id: str = md5(\n ((self.namespace or \"\") + self.id).encode()).hexdigest()\n self.internal_module = self._setup_internal_module(self.internal_id)\n\n # simple plugin not in search path\n self.plugins: Set[str] = set(plugins or [])\n self.search_path: Set[str] = set(search_path or [])\n # ensure can be loaded\n self.list_plugins()\n\n def _setup_namespace(self,\n namespace: Optional[str] = 
None\n ) -> Optional[ModuleType]:\n if not namespace:\n return None\n\n try:\n module = importlib.import_module(namespace)\n except ImportError:\n module = _NamespaceModule(namespace)\n if \".\" in namespace:\n parent = importlib.import_module(namespace.rsplit(\".\", 1)[0])\n setattr(parent, namespace.rsplit(\".\", 1)[1], module)\n\n sys.modules[namespace] = module\n return module\n\n def _setup_internal_module(self, internal_id: str) -> ModuleType:\n if hasattr(_internal_space, internal_id):\n raise RuntimeError(\"Plugin manager already exists!\")\n\n prefix = sys._getframe(3).f_globals.get(\n \"__name__\") or _internal_space.__name__\n if not prefix.startswith(_internal_space.__name__):\n prefix = _internal_space.__name__\n module = _InternalModule(prefix, self)\n sys.modules[module.__name__] = module\n setattr(_internal_space, internal_id, module)\n return module\n\n def __enter__(self):\n if self in _manager_stack:\n raise RuntimeError(\"Plugin manager already activated!\")\n _manager_stack.append(self)\n return self\n\n def __exit__(self, exc_type, exc_value, traceback):\n try:\n _manager_stack.pop()\n except IndexError:\n pass\n\n def search_plugins(self) -> List[str]:\n return [\n module_info.name\n for module_info in pkgutil.iter_modules(self.search_path)\n ]\n\n def list_plugins(self) -> Set[str]:\n _pre_managers: List[PluginManager]\n if self in _manager_stack:\n _pre_managers = _manager_stack[:_manager_stack.index(self)]\n else:\n _pre_managers = _manager_stack[:]\n\n _search_path: Set[str] = set()\n for manager in _pre_managers:\n _search_path |= manager.search_path\n if _search_path & self.search_path:\n raise RuntimeError(\"Duplicate plugin search path!\")\n\n _search_plugins = self.search_plugins()\n c = Counter([*_search_plugins, *self.plugins])\n conflict = [name for name, num in c.items() if num > 1]\n if conflict:\n raise RuntimeError(\n f\"More than one plugin named {' / '.join(conflict)}!\")\n return set(_search_plugins) | self.plugins\n\n def load_plugin(self, name) -> ModuleType:\n if name in self.plugins:\n with self:\n return importlib.import_module(name)\n\n if \".\" in name:\n raise ValueError(\"Plugin name cannot contain '.'\")\n\n with self:\n return importlib.import_module(f\"{self.namespace}.{name}\")\n\n def load_all_plugins(self) -> List[ModuleType]:\n return [self.load_plugin(name) for name in self.list_plugins()]\n\n def _rewrite_module_name(self, module_name) -> Optional[str]:\n prefix = f\"{self.internal_module.__name__}.\"\n if module_name.startswith(self.namespace + \".\"):\n path = module_name.split(\".\")\n length = self.namespace.count(\".\") + 1\n return f\"{prefix}{'.'.join(path[length:])}\"\n elif module_name in self.plugins or module_name.startswith(prefix):\n return module_name\n elif module_name in self.search_plugins():\n return f\"{prefix}{module_name}\"\n return None\n\n\nclass PluginFinder(MetaPathFinder):\n\n def find_spec(self, fullname: str, path, target):\n if _manager_stack:\n index = -1\n while -index <= len(_manager_stack):\n manager = _manager_stack[index]\n newname = manager._rewrite_module_name(fullname)\n if newname:\n spec = PathFinder.find_spec(\n newname, [*manager.search_path, *(path or sys.path)],\n target)\n if spec:\n spec.loader = PluginLoader(manager, newname,\n spec.origin)\n return spec\n index -= 1\n return None\n\n\nclass PluginLoader(SourceFileLoader):\n\n def __init__(self, manager: PluginManager, fullname: str, path) -> None:\n self.manager = manager\n self.loaded = False\n self._plugin_token = None\n 
self._export_token = None\n super().__init__(fullname, path)\n\n def create_module(self, spec) -> Optional[ModuleType]:\n if self.name in sys.modules:\n self.loaded = True\n return sys.modules[self.name]\n prefix = self.manager.internal_module.__name__\n plugin_name = self.name[len(prefix):] if self.name.startswith(\n prefix) else self.name\n self._plugin_token = _current_plugin.set(plugin_name.lstrip(\".\"))\n self._export_token = _export.set(Export())\n # return None to use default module creation\n return super().create_module(spec)\n\n def exec_module(self, module: ModuleType) -> None:\n if self.loaded:\n return\n # really need?\n # setattr(module, \"__manager__\", self.manager)\n if self._export_token:\n setattr(module, \"__export__\", _export.get())\n\n super().exec_module(module)\n\n if self._plugin_token:\n _current_plugin.reset(self._plugin_token)\n if self._export_token:\n _export.reset(self._export_token)\n return\n\n\nsys.meta_path.insert(0, PluginFinder())\n", "path": "nonebot/plugin/manager.py"}], "after_files": [{"content": "\"\"\"\n\u65e5\u5fd7\n====\n\nNoneBot \u4f7f\u7528 `loguru`_ \u6765\u8bb0\u5f55\u65e5\u5fd7\u4fe1\u606f\u3002\n\n\u81ea\u5b9a\u4e49 logger \u8bf7\u53c2\u8003 `loguru`_ \u6587\u6863\u3002\n\n.. _loguru:\n https://github.com/Delgan/loguru\n\"\"\"\n\nimport sys\nimport logging\n\nfrom loguru import logger as logger_\n\n# logger = logging.getLogger(\"nonebot\")\nlogger = logger_\n\"\"\"\n:\u8bf4\u660e:\n\n NoneBot \u65e5\u5fd7\u8bb0\u5f55\u5668\u5bf9\u8c61\u3002\n\n:\u9ed8\u8ba4\u4fe1\u606f:\n\n * \u683c\u5f0f: ``[%(asctime)s %(name)s] %(levelname)s: %(message)s``\n * \u7b49\u7ea7: ``DEBUG`` / ``INFO`` \uff0c\u6839\u636e config \u914d\u7f6e\u6539\u53d8\n * \u8f93\u51fa: \u8f93\u51fa\u81f3 stdout\n\n:\u7528\u6cd5:\n\n.. 
code-block:: python\n\n from nonebot.log import logger\n\"\"\"\n\n# default_handler = logging.StreamHandler(sys.stdout)\n# default_handler.setFormatter(\n# logging.Formatter(\"[%(asctime)s %(name)s] %(levelname)s: %(message)s\"))\n# logger.addHandler(default_handler)\n\n\nclass Filter:\n\n def __init__(self) -> None:\n self.level = \"DEBUG\"\n\n def __call__(self, record):\n module = sys.modules.get(record[\"name\"])\n if module:\n plugin_name = getattr(module, \"__plugin_name__\", record[\"name\"])\n record[\"name\"] = plugin_name\n record[\"name\"] = record[\"name\"].split(\".\")[0]\n levelno = logger.level(self.level).no\n return record[\"level\"].no >= levelno\n\n\nclass LoguruHandler(logging.Handler):\n\n def emit(self, record):\n try:\n level = logger.level(record.levelname).name\n except ValueError:\n level = record.levelno\n\n frame, depth = logging.currentframe(), 2\n while frame.f_code.co_filename == logging.__file__:\n frame = frame.f_back\n depth += 1\n\n logger.opt(depth=depth,\n exception=record.exc_info).log(level, record.getMessage())\n\n\nlogger.remove()\ndefault_filter = Filter()\ndefault_format = (\n \"<g>{time:MM-DD HH:mm:ss}</g> \"\n \"[<lvl>{level}</lvl>] \"\n \"<c><u>{name}</u></c> | \"\n # \"<c>{function}:{line}</c>| \"\n \"{message}\")\nlogger_id = logger.add(sys.stdout,\n colorize=True,\n diagnose=False,\n filter=default_filter,\n format=default_format)\n", "path": "nonebot/log.py"}, {"content": "import sys\nimport uuid\nimport pkgutil\nimport importlib\nfrom hashlib import md5\nfrom types import ModuleType\nfrom collections import Counter\nfrom contextvars import ContextVar\nfrom importlib.abc import MetaPathFinder\nfrom typing import Set, List, Optional, Iterable\nfrom importlib.machinery import PathFinder, SourceFileLoader\n\nfrom .export import _export, Export\n\n_current_plugin: ContextVar[Optional[str]] = ContextVar(\"_current_plugin\",\n default=None)\n\n_internal_space = ModuleType(__name__ + \"._internal\")\n_internal_space.__path__ = [] # type: ignore\nsys.modules[_internal_space.__name__] = _internal_space\n\n_manager_stack: List[\"PluginManager\"] = []\n\n\nclass _NamespaceModule(ModuleType):\n \"\"\"Simple namespace module to store plugins.\"\"\"\n\n @property\n def __path__(self):\n return []\n\n def __getattr__(self, name: str):\n try:\n return super().__getattr__(name) # type: ignore\n except AttributeError:\n if name.startswith(\"__\"):\n raise\n raise RuntimeError(\"Plugin manager not activated!\")\n\n\nclass _InternalModule(ModuleType):\n \"\"\"Internal module for each plugin manager.\"\"\"\n\n def __init__(self, prefix: str, plugin_manager: \"PluginManager\"):\n super().__init__(f\"{prefix}.{plugin_manager.internal_id}\")\n self.__plugin_manager__ = plugin_manager\n\n @property\n def __path__(self) -> List[str]:\n return list(self.__plugin_manager__.search_path)\n\n\nclass PluginManager:\n\n def __init__(self,\n namespace: Optional[str] = None,\n plugins: Optional[Iterable[str]] = None,\n search_path: Optional[Iterable[str]] = None,\n *,\n id: Optional[str] = None):\n self.namespace: Optional[str] = namespace\n self.namespace_module: Optional[ModuleType] = self._setup_namespace(\n namespace)\n\n self.id: str = id or str(uuid.uuid4())\n self.internal_id: str = md5(\n ((self.namespace or \"\") + self.id).encode()).hexdigest()\n self.internal_module = self._setup_internal_module(self.internal_id)\n\n # simple plugin not in search path\n self.plugins: Set[str] = set(plugins or [])\n self.search_path: Set[str] = set(search_path or [])\n # ensure can 
be loaded\n self.list_plugins()\n\n def _setup_namespace(self,\n namespace: Optional[str] = None\n ) -> Optional[ModuleType]:\n if not namespace:\n return None\n\n try:\n module = importlib.import_module(namespace)\n except ImportError:\n module = _NamespaceModule(namespace)\n if \".\" in namespace:\n parent = importlib.import_module(namespace.rsplit(\".\", 1)[0])\n setattr(parent, namespace.rsplit(\".\", 1)[1], module)\n\n sys.modules[namespace] = module\n return module\n\n def _setup_internal_module(self, internal_id: str) -> ModuleType:\n if hasattr(_internal_space, internal_id):\n raise RuntimeError(\"Plugin manager already exists!\")\n\n prefix = sys._getframe(3).f_globals.get(\n \"__name__\") or _internal_space.__name__\n if not prefix.startswith(_internal_space.__name__):\n prefix = _internal_space.__name__\n module = _InternalModule(prefix, self)\n sys.modules[module.__name__] = module\n setattr(_internal_space, internal_id, module)\n return module\n\n def __enter__(self):\n if self in _manager_stack:\n raise RuntimeError(\"Plugin manager already activated!\")\n _manager_stack.append(self)\n return self\n\n def __exit__(self, exc_type, exc_value, traceback):\n try:\n _manager_stack.pop()\n except IndexError:\n pass\n\n def search_plugins(self) -> List[str]:\n return [\n module_info.name\n for module_info in pkgutil.iter_modules(self.search_path)\n ]\n\n def list_plugins(self) -> Set[str]:\n _pre_managers: List[PluginManager]\n if self in _manager_stack:\n _pre_managers = _manager_stack[:_manager_stack.index(self)]\n else:\n _pre_managers = _manager_stack[:]\n\n _search_path: Set[str] = set()\n for manager in _pre_managers:\n _search_path |= manager.search_path\n if _search_path & self.search_path:\n raise RuntimeError(\"Duplicate plugin search path!\")\n\n _search_plugins = self.search_plugins()\n c = Counter([*_search_plugins, *self.plugins])\n conflict = [name for name, num in c.items() if num > 1]\n if conflict:\n raise RuntimeError(\n f\"More than one plugin named {' / '.join(conflict)}!\")\n return set(_search_plugins) | self.plugins\n\n def load_plugin(self, name) -> ModuleType:\n if name in self.plugins:\n with self:\n return importlib.import_module(name)\n\n if \".\" in name:\n raise ValueError(\"Plugin name cannot contain '.'\")\n\n with self:\n return importlib.import_module(f\"{self.namespace}.{name}\")\n\n def load_all_plugins(self) -> List[ModuleType]:\n return [self.load_plugin(name) for name in self.list_plugins()]\n\n def _rewrite_module_name(self, module_name) -> Optional[str]:\n prefix = f\"{self.internal_module.__name__}.\"\n if module_name.startswith(self.namespace + \".\"):\n path = module_name.split(\".\")\n length = self.namespace.count(\".\") + 1\n return f\"{prefix}{'.'.join(path[length:])}\"\n elif module_name in self.plugins or module_name.startswith(prefix):\n return module_name\n elif module_name in self.search_plugins():\n return f\"{prefix}{module_name}\"\n return None\n\n\nclass PluginFinder(MetaPathFinder):\n\n def find_spec(self, fullname: str, path, target):\n if _manager_stack:\n index = -1\n while -index <= len(_manager_stack):\n manager = _manager_stack[index]\n newname = manager._rewrite_module_name(fullname)\n if newname:\n spec = PathFinder.find_spec(\n newname, [*manager.search_path, *(path or sys.path)],\n target)\n if spec:\n spec.loader = PluginLoader(manager, newname,\n spec.origin)\n return spec\n index -= 1\n return None\n\n\nclass PluginLoader(SourceFileLoader):\n\n def __init__(self, manager: PluginManager, fullname: str, path) -> 
None:\n self.manager = manager\n self.loaded = False\n self._plugin_token = None\n self._export_token = None\n super().__init__(fullname, path)\n\n def create_module(self, spec) -> Optional[ModuleType]:\n if self.name in sys.modules:\n self.loaded = True\n return sys.modules[self.name]\n prefix = self.manager.internal_module.__name__\n plugin_name = self.name[len(prefix):] if self.name.startswith(\n prefix) else self.name\n self._plugin_token = _current_plugin.set(plugin_name.lstrip(\".\"))\n self._export_token = _export.set(Export())\n # return None to use default module creation\n return super().create_module(spec)\n\n def exec_module(self, module: ModuleType) -> None:\n if self.loaded:\n return\n # really need?\n # setattr(module, \"__manager__\", self.manager)\n if self._plugin_token:\n setattr(module, \"__plugin_name__\",\n _current_plugin.get(self._plugin_token))\n if self._export_token:\n setattr(module, \"__export__\", _export.get())\n\n super().exec_module(module)\n\n if self._plugin_token:\n _current_plugin.reset(self._plugin_token)\n if self._export_token:\n _export.reset(self._export_token)\n return\n\n\nsys.meta_path.insert(0, PluginFinder())\n", "path": "nonebot/plugin/manager.py"}]}
| 3,537 | 264 |
gh_patches_debug_57
|
rasdani/github-patches
|
git_diff
|
Anselmoo__spectrafit-695
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Docs]: Update labeled criteria for CHANGELOG
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Missing Information in the Docs
Update the labeler criteria for avoiding double labels in `CHANGELOG.md`
### Anything else?
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `spectrafit/__init__.py`
Content:
```
1 """SpectraFit, fast command line tool for fitting data."""
2 __version__ = "1.0.0a7"
3
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/spectrafit/__init__.py b/spectrafit/__init__.py
--- a/spectrafit/__init__.py
+++ b/spectrafit/__init__.py
@@ -1,2 +1,2 @@
"""SpectraFit, fast command line tool for fitting data."""
-__version__ = "1.0.0a7"
+__version__ = "1.0.0a8"
|
{"golden_diff": "diff --git a/spectrafit/__init__.py b/spectrafit/__init__.py\n--- a/spectrafit/__init__.py\n+++ b/spectrafit/__init__.py\n@@ -1,2 +1,2 @@\n \"\"\"SpectraFit, fast command line tool for fitting data.\"\"\"\n-__version__ = \"1.0.0a7\"\n+__version__ = \"1.0.0a8\"\n", "issue": "[Docs]: Update labeled criteria for CHANGELOG\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Missing Information in the Docs\n\nUpdate the labeler criteria for avoiding double labels in `CHANGELOG.md`\n\n### Anything else?\n\n_No response_\n\n### Code of Conduct\n\n- [X] I agree to follow this project's Code of Conduct\n", "before_files": [{"content": "\"\"\"SpectraFit, fast command line tool for fitting data.\"\"\"\n__version__ = \"1.0.0a7\"\n", "path": "spectrafit/__init__.py"}], "after_files": [{"content": "\"\"\"SpectraFit, fast command line tool for fitting data.\"\"\"\n__version__ = \"1.0.0a8\"\n", "path": "spectrafit/__init__.py"}]}
| 371 | 96 |
gh_patches_debug_53981
|
rasdani/github-patches
|
git_diff
|
sanic-org__sanic-1222
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Sanic `Request` object is falsey
```python
@app.route('/myroute')
async def someroute(request):
if request:
return 'some data'
raise Exception("Woops")
```
This code will raise the exception because `bool(request)` is `False`.
--- END ISSUE ---
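The falsiness described in the issue above follows from `Request` subclassing `dict` (see `sanic/request.py` below) while storing its state in attributes rather than dict items, so the underlying mapping stays empty and Python's default truthiness for dicts kicks in. A minimal sketch, independent of Sanic itself:

```python
# Minimal sketch (not Sanic's actual class): a dict subclass that keeps its
# state in attributes leaves the underlying mapping empty, so bool() is False.
class FakeRequest(dict):
    def __init__(self, method, path):
        super().__init__()
        self.method = method   # stored as an attribute, not as a dict item
        self.path = path

req = FakeRequest("GET", "/myroute")
print(bool(req))        # False -- no dict items
req["key"] = "value"
print(bool(req))        # True only once dict items exist
```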
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sanic/request.py`
Content:
```
1 import sys
2 import json
3 import socket
4 from cgi import parse_header
5 from collections import namedtuple
6 from http.cookies import SimpleCookie
7 from httptools import parse_url
8 from urllib.parse import parse_qs, urlunparse
9
10 try:
11 from ujson import loads as json_loads
12 except ImportError:
13 if sys.version_info[:2] == (3, 5):
14 def json_loads(data):
15 # on Python 3.5 json.loads only supports str not bytes
16 return json.loads(data.decode())
17 else:
18 json_loads = json.loads
19
20 from sanic.exceptions import InvalidUsage
21 from sanic.log import error_logger, logger
22
23 DEFAULT_HTTP_CONTENT_TYPE = "application/octet-stream"
24
25
26 # HTTP/1.1: https://www.w3.org/Protocols/rfc2616/rfc2616-sec7.html#sec7.2.1
27 # > If the media type remains unknown, the recipient SHOULD treat it
28 # > as type "application/octet-stream"
29
30
31 class RequestParameters(dict):
32 """Hosts a dict with lists as values where get returns the first
33 value of the list and getlist returns the whole shebang
34 """
35
36 def get(self, name, default=None):
37 """Return the first value, either the default or actual"""
38 return super().get(name, [default])[0]
39
40 def getlist(self, name, default=None):
41 """Return the entire list"""
42 return super().get(name, default)
43
44
45 class Request(dict):
46 """Properties of an HTTP request such as URL, headers, etc."""
47 __slots__ = (
48 'app', 'headers', 'version', 'method', '_cookies', 'transport',
49 'body', 'parsed_json', 'parsed_args', 'parsed_form', 'parsed_files',
50 '_ip', '_parsed_url', 'uri_template', 'stream', '_remote_addr',
51 '_socket', '_port', '__weakref__'
52 )
53
54 def __init__(self, url_bytes, headers, version, method, transport):
55 # TODO: Content-Encoding detection
56 self._parsed_url = parse_url(url_bytes)
57 self.app = None
58
59 self.headers = headers
60 self.version = version
61 self.method = method
62 self.transport = transport
63
64 # Init but do not inhale
65 self.body = []
66 self.parsed_json = None
67 self.parsed_form = None
68 self.parsed_files = None
69 self.parsed_args = None
70 self.uri_template = None
71 self._cookies = None
72 self.stream = None
73
74 def __repr__(self):
75 if self.method is None or not self.path:
76 return '<{0}>'.format(self.__class__.__name__)
77 return '<{0}: {1} {2}>'.format(self.__class__.__name__,
78 self.method,
79 self.path)
80
81 @property
82 def json(self):
83 if self.parsed_json is None:
84 self.load_json()
85
86 return self.parsed_json
87
88 def load_json(self, loads=json_loads):
89 try:
90 self.parsed_json = loads(self.body)
91 except Exception:
92 if not self.body:
93 return None
94 raise InvalidUsage("Failed when parsing body as json")
95
96 return self.parsed_json
97
98 @property
99 def token(self):
100 """Attempt to return the auth header token.
101
102 :return: token related to request
103 """
104 prefixes = ('Bearer', 'Token')
105 auth_header = self.headers.get('Authorization')
106
107 if auth_header is not None:
108 for prefix in prefixes:
109 if prefix in auth_header:
110 return auth_header.partition(prefix)[-1].strip()
111
112 return auth_header
113
114 @property
115 def form(self):
116 if self.parsed_form is None:
117 self.parsed_form = RequestParameters()
118 self.parsed_files = RequestParameters()
119 content_type = self.headers.get(
120 'Content-Type', DEFAULT_HTTP_CONTENT_TYPE)
121 content_type, parameters = parse_header(content_type)
122 try:
123 if content_type == 'application/x-www-form-urlencoded':
124 self.parsed_form = RequestParameters(
125 parse_qs(self.body.decode('utf-8')))
126 elif content_type == 'multipart/form-data':
127 # TODO: Stream this instead of reading to/from memory
128 boundary = parameters['boundary'].encode('utf-8')
129 self.parsed_form, self.parsed_files = (
130 parse_multipart_form(self.body, boundary))
131 except Exception:
132 error_logger.exception("Failed when parsing form")
133
134 return self.parsed_form
135
136 @property
137 def files(self):
138 if self.parsed_files is None:
139 self.form # compute form to get files
140
141 return self.parsed_files
142
143 @property
144 def args(self):
145 if self.parsed_args is None:
146 if self.query_string:
147 self.parsed_args = RequestParameters(
148 parse_qs(self.query_string))
149 else:
150 self.parsed_args = RequestParameters()
151 return self.parsed_args
152
153 @property
154 def raw_args(self):
155 return {k: v[0] for k, v in self.args.items()}
156
157 @property
158 def cookies(self):
159 if self._cookies is None:
160 cookie = self.headers.get('Cookie')
161 if cookie is not None:
162 cookies = SimpleCookie()
163 cookies.load(cookie)
164 self._cookies = {name: cookie.value
165 for name, cookie in cookies.items()}
166 else:
167 self._cookies = {}
168 return self._cookies
169
170 @property
171 def ip(self):
172 if not hasattr(self, '_socket'):
173 self._get_address()
174 return self._ip
175
176 @property
177 def port(self):
178 if not hasattr(self, '_socket'):
179 self._get_address()
180 return self._port
181
182 @property
183 def socket(self):
184 if not hasattr(self, '_socket'):
185 self._get_address()
186 return self._socket
187
188 def _get_address(self):
189 sock = self.transport.get_extra_info('socket')
190
191 if sock.family == socket.AF_INET:
192 self._socket = (self.transport.get_extra_info('peername') or
193 (None, None))
194 self._ip, self._port = self._socket
195 elif sock.family == socket.AF_INET6:
196 self._socket = (self.transport.get_extra_info('peername') or
197 (None, None, None, None))
198 self._ip, self._port, *_ = self._socket
199 else:
200 self._ip, self._port = (None, None)
201
202 @property
203 def remote_addr(self):
204 """Attempt to return the original client ip based on X-Forwarded-For.
205
206 :return: original client ip.
207 """
208 if not hasattr(self, '_remote_addr'):
209 forwarded_for = self.headers.get('X-Forwarded-For', '').split(',')
210 remote_addrs = [
211 addr for addr in [
212 addr.strip() for addr in forwarded_for
213 ] if addr
214 ]
215 if len(remote_addrs) > 0:
216 self._remote_addr = remote_addrs[0]
217 else:
218 self._remote_addr = ''
219 return self._remote_addr
220
221 @property
222 def scheme(self):
223 if self.app.websocket_enabled \
224 and self.headers.get('upgrade') == 'websocket':
225 scheme = 'ws'
226 else:
227 scheme = 'http'
228
229 if self.transport.get_extra_info('sslcontext'):
230 scheme += 's'
231
232 return scheme
233
234 @property
235 def host(self):
236 # it appears that httptools doesn't return the host
237 # so pull it from the headers
238 return self.headers.get('Host', '')
239
240 @property
241 def content_type(self):
242 return self.headers.get('Content-Type', DEFAULT_HTTP_CONTENT_TYPE)
243
244 @property
245 def match_info(self):
246 """return matched info after resolving route"""
247 return self.app.router.get(self)[2]
248
249 @property
250 def path(self):
251 return self._parsed_url.path.decode('utf-8')
252
253 @property
254 def query_string(self):
255 if self._parsed_url.query:
256 return self._parsed_url.query.decode('utf-8')
257 else:
258 return ''
259
260 @property
261 def url(self):
262 return urlunparse((
263 self.scheme,
264 self.host,
265 self.path,
266 None,
267 self.query_string,
268 None))
269
270
271 File = namedtuple('File', ['type', 'body', 'name'])
272
273
274 def parse_multipart_form(body, boundary):
275 """Parse a request body and returns fields and files
276
277 :param body: bytes request body
278 :param boundary: bytes multipart boundary
279 :return: fields (RequestParameters), files (RequestParameters)
280 """
281 files = RequestParameters()
282 fields = RequestParameters()
283
284 form_parts = body.split(boundary)
285 for form_part in form_parts[1:-1]:
286 file_name = None
287 content_type = 'text/plain'
288 content_charset = 'utf-8'
289 field_name = None
290 line_index = 2
291 line_end_index = 0
292 while not line_end_index == -1:
293 line_end_index = form_part.find(b'\r\n', line_index)
294 form_line = form_part[line_index:line_end_index].decode('utf-8')
295 line_index = line_end_index + 2
296
297 if not form_line:
298 break
299
300 colon_index = form_line.index(':')
301 form_header_field = form_line[0:colon_index].lower()
302 form_header_value, form_parameters = parse_header(
303 form_line[colon_index + 2:])
304
305 if form_header_field == 'content-disposition':
306 file_name = form_parameters.get('filename')
307 field_name = form_parameters.get('name')
308 elif form_header_field == 'content-type':
309 content_type = form_header_value
310 content_charset = form_parameters.get('charset', 'utf-8')
311
312 if field_name:
313 post_data = form_part[line_index:-4]
314 if file_name:
315 form_file = File(type=content_type,
316 name=file_name,
317 body=post_data)
318 if field_name in files:
319 files[field_name].append(form_file)
320 else:
321 files[field_name] = [form_file]
322 else:
323 value = post_data.decode(content_charset)
324 if field_name in fields:
325 fields[field_name].append(value)
326 else:
327 fields[field_name] = [value]
328 else:
329 logger.debug('Form-data field does not have a \'name\' parameter \
330 in the Content-Disposition header')
331
332 return fields, files
333
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sanic/request.py b/sanic/request.py
--- a/sanic/request.py
+++ b/sanic/request.py
@@ -78,6 +78,11 @@
self.method,
self.path)
+ def __bool__(self):
+ if self.transport:
+ return True
+ return False
+
@property
def json(self):
if self.parsed_json is None:
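With the `__bool__` added above, truthiness is tied to the presence of a transport rather than to dict contents; a rough, hypothetical stand-in illustrating the intended behavior (not the real class):

```python
# Rough stand-in for the patched behavior: truthiness now follows transport.
class PatchedRequest(dict):
    def __init__(self, transport=None):
        super().__init__()
        self.transport = transport

    def __bool__(self):
        if self.transport:
            return True
        return False

assert PatchedRequest(transport=object())   # truthy even with no dict items
assert not PatchedRequest(transport=None)   # falsey without a transport
```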
|
{"golden_diff": "diff --git a/sanic/request.py b/sanic/request.py\n--- a/sanic/request.py\n+++ b/sanic/request.py\n@@ -78,6 +78,11 @@\n self.method,\n self.path)\n \n+ def __bool__(self):\n+ if self.transport:\n+ return True\n+ return False\n+\n @property\n def json(self):\n if self.parsed_json is None:\n", "issue": "Sanic `Request` object is falsey\n```python\r\[email protected]('/myroute')\r\nasync def someroute(request):\r\n if request:\r\n return 'some data'\r\n raise Exception(\"Woops\")\r\n```\r\n\r\nThis code will raise the exception because `bool(request)` is `False`. \n", "before_files": [{"content": "import sys\nimport json\nimport socket\nfrom cgi import parse_header\nfrom collections import namedtuple\nfrom http.cookies import SimpleCookie\nfrom httptools import parse_url\nfrom urllib.parse import parse_qs, urlunparse\n\ntry:\n from ujson import loads as json_loads\nexcept ImportError:\n if sys.version_info[:2] == (3, 5):\n def json_loads(data):\n # on Python 3.5 json.loads only supports str not bytes\n return json.loads(data.decode())\n else:\n json_loads = json.loads\n\nfrom sanic.exceptions import InvalidUsage\nfrom sanic.log import error_logger, logger\n\nDEFAULT_HTTP_CONTENT_TYPE = \"application/octet-stream\"\n\n\n# HTTP/1.1: https://www.w3.org/Protocols/rfc2616/rfc2616-sec7.html#sec7.2.1\n# > If the media type remains unknown, the recipient SHOULD treat it\n# > as type \"application/octet-stream\"\n\n\nclass RequestParameters(dict):\n \"\"\"Hosts a dict with lists as values where get returns the first\n value of the list and getlist returns the whole shebang\n \"\"\"\n\n def get(self, name, default=None):\n \"\"\"Return the first value, either the default or actual\"\"\"\n return super().get(name, [default])[0]\n\n def getlist(self, name, default=None):\n \"\"\"Return the entire list\"\"\"\n return super().get(name, default)\n\n\nclass Request(dict):\n \"\"\"Properties of an HTTP request such as URL, headers, etc.\"\"\"\n __slots__ = (\n 'app', 'headers', 'version', 'method', '_cookies', 'transport',\n 'body', 'parsed_json', 'parsed_args', 'parsed_form', 'parsed_files',\n '_ip', '_parsed_url', 'uri_template', 'stream', '_remote_addr',\n '_socket', '_port', '__weakref__'\n )\n\n def __init__(self, url_bytes, headers, version, method, transport):\n # TODO: Content-Encoding detection\n self._parsed_url = parse_url(url_bytes)\n self.app = None\n\n self.headers = headers\n self.version = version\n self.method = method\n self.transport = transport\n\n # Init but do not inhale\n self.body = []\n self.parsed_json = None\n self.parsed_form = None\n self.parsed_files = None\n self.parsed_args = None\n self.uri_template = None\n self._cookies = None\n self.stream = None\n\n def __repr__(self):\n if self.method is None or not self.path:\n return '<{0}>'.format(self.__class__.__name__)\n return '<{0}: {1} {2}>'.format(self.__class__.__name__,\n self.method,\n self.path)\n\n @property\n def json(self):\n if self.parsed_json is None:\n self.load_json()\n\n return self.parsed_json\n\n def load_json(self, loads=json_loads):\n try:\n self.parsed_json = loads(self.body)\n except Exception:\n if not self.body:\n return None\n raise InvalidUsage(\"Failed when parsing body as json\")\n\n return self.parsed_json\n\n @property\n def token(self):\n \"\"\"Attempt to return the auth header token.\n\n :return: token related to request\n \"\"\"\n prefixes = ('Bearer', 'Token')\n auth_header = self.headers.get('Authorization')\n\n if auth_header is not None:\n for prefix in prefixes:\n if prefix in 
auth_header:\n return auth_header.partition(prefix)[-1].strip()\n\n return auth_header\n\n @property\n def form(self):\n if self.parsed_form is None:\n self.parsed_form = RequestParameters()\n self.parsed_files = RequestParameters()\n content_type = self.headers.get(\n 'Content-Type', DEFAULT_HTTP_CONTENT_TYPE)\n content_type, parameters = parse_header(content_type)\n try:\n if content_type == 'application/x-www-form-urlencoded':\n self.parsed_form = RequestParameters(\n parse_qs(self.body.decode('utf-8')))\n elif content_type == 'multipart/form-data':\n # TODO: Stream this instead of reading to/from memory\n boundary = parameters['boundary'].encode('utf-8')\n self.parsed_form, self.parsed_files = (\n parse_multipart_form(self.body, boundary))\n except Exception:\n error_logger.exception(\"Failed when parsing form\")\n\n return self.parsed_form\n\n @property\n def files(self):\n if self.parsed_files is None:\n self.form # compute form to get files\n\n return self.parsed_files\n\n @property\n def args(self):\n if self.parsed_args is None:\n if self.query_string:\n self.parsed_args = RequestParameters(\n parse_qs(self.query_string))\n else:\n self.parsed_args = RequestParameters()\n return self.parsed_args\n\n @property\n def raw_args(self):\n return {k: v[0] for k, v in self.args.items()}\n\n @property\n def cookies(self):\n if self._cookies is None:\n cookie = self.headers.get('Cookie')\n if cookie is not None:\n cookies = SimpleCookie()\n cookies.load(cookie)\n self._cookies = {name: cookie.value\n for name, cookie in cookies.items()}\n else:\n self._cookies = {}\n return self._cookies\n\n @property\n def ip(self):\n if not hasattr(self, '_socket'):\n self._get_address()\n return self._ip\n\n @property\n def port(self):\n if not hasattr(self, '_socket'):\n self._get_address()\n return self._port\n\n @property\n def socket(self):\n if not hasattr(self, '_socket'):\n self._get_address()\n return self._socket\n\n def _get_address(self):\n sock = self.transport.get_extra_info('socket')\n\n if sock.family == socket.AF_INET:\n self._socket = (self.transport.get_extra_info('peername') or\n (None, None))\n self._ip, self._port = self._socket\n elif sock.family == socket.AF_INET6:\n self._socket = (self.transport.get_extra_info('peername') or\n (None, None, None, None))\n self._ip, self._port, *_ = self._socket\n else:\n self._ip, self._port = (None, None)\n\n @property\n def remote_addr(self):\n \"\"\"Attempt to return the original client ip based on X-Forwarded-For.\n\n :return: original client ip.\n \"\"\"\n if not hasattr(self, '_remote_addr'):\n forwarded_for = self.headers.get('X-Forwarded-For', '').split(',')\n remote_addrs = [\n addr for addr in [\n addr.strip() for addr in forwarded_for\n ] if addr\n ]\n if len(remote_addrs) > 0:\n self._remote_addr = remote_addrs[0]\n else:\n self._remote_addr = ''\n return self._remote_addr\n\n @property\n def scheme(self):\n if self.app.websocket_enabled \\\n and self.headers.get('upgrade') == 'websocket':\n scheme = 'ws'\n else:\n scheme = 'http'\n\n if self.transport.get_extra_info('sslcontext'):\n scheme += 's'\n\n return scheme\n\n @property\n def host(self):\n # it appears that httptools doesn't return the host\n # so pull it from the headers\n return self.headers.get('Host', '')\n\n @property\n def content_type(self):\n return self.headers.get('Content-Type', DEFAULT_HTTP_CONTENT_TYPE)\n\n @property\n def match_info(self):\n \"\"\"return matched info after resolving route\"\"\"\n return self.app.router.get(self)[2]\n\n @property\n def 
path(self):\n return self._parsed_url.path.decode('utf-8')\n\n @property\n def query_string(self):\n if self._parsed_url.query:\n return self._parsed_url.query.decode('utf-8')\n else:\n return ''\n\n @property\n def url(self):\n return urlunparse((\n self.scheme,\n self.host,\n self.path,\n None,\n self.query_string,\n None))\n\n\nFile = namedtuple('File', ['type', 'body', 'name'])\n\n\ndef parse_multipart_form(body, boundary):\n \"\"\"Parse a request body and returns fields and files\n\n :param body: bytes request body\n :param boundary: bytes multipart boundary\n :return: fields (RequestParameters), files (RequestParameters)\n \"\"\"\n files = RequestParameters()\n fields = RequestParameters()\n\n form_parts = body.split(boundary)\n for form_part in form_parts[1:-1]:\n file_name = None\n content_type = 'text/plain'\n content_charset = 'utf-8'\n field_name = None\n line_index = 2\n line_end_index = 0\n while not line_end_index == -1:\n line_end_index = form_part.find(b'\\r\\n', line_index)\n form_line = form_part[line_index:line_end_index].decode('utf-8')\n line_index = line_end_index + 2\n\n if not form_line:\n break\n\n colon_index = form_line.index(':')\n form_header_field = form_line[0:colon_index].lower()\n form_header_value, form_parameters = parse_header(\n form_line[colon_index + 2:])\n\n if form_header_field == 'content-disposition':\n file_name = form_parameters.get('filename')\n field_name = form_parameters.get('name')\n elif form_header_field == 'content-type':\n content_type = form_header_value\n content_charset = form_parameters.get('charset', 'utf-8')\n\n if field_name:\n post_data = form_part[line_index:-4]\n if file_name:\n form_file = File(type=content_type,\n name=file_name,\n body=post_data)\n if field_name in files:\n files[field_name].append(form_file)\n else:\n files[field_name] = [form_file]\n else:\n value = post_data.decode(content_charset)\n if field_name in fields:\n fields[field_name].append(value)\n else:\n fields[field_name] = [value]\n else:\n logger.debug('Form-data field does not have a \\'name\\' parameter \\\n in the Content-Disposition header')\n\n return fields, files\n", "path": "sanic/request.py"}], "after_files": [{"content": "import sys\nimport json\nimport socket\nfrom cgi import parse_header\nfrom collections import namedtuple\nfrom http.cookies import SimpleCookie\nfrom httptools import parse_url\nfrom urllib.parse import parse_qs, urlunparse\n\ntry:\n from ujson import loads as json_loads\nexcept ImportError:\n if sys.version_info[:2] == (3, 5):\n def json_loads(data):\n # on Python 3.5 json.loads only supports str not bytes\n return json.loads(data.decode())\n else:\n json_loads = json.loads\n\nfrom sanic.exceptions import InvalidUsage\nfrom sanic.log import error_logger, logger\n\nDEFAULT_HTTP_CONTENT_TYPE = \"application/octet-stream\"\n\n\n# HTTP/1.1: https://www.w3.org/Protocols/rfc2616/rfc2616-sec7.html#sec7.2.1\n# > If the media type remains unknown, the recipient SHOULD treat it\n# > as type \"application/octet-stream\"\n\n\nclass RequestParameters(dict):\n \"\"\"Hosts a dict with lists as values where get returns the first\n value of the list and getlist returns the whole shebang\n \"\"\"\n\n def get(self, name, default=None):\n \"\"\"Return the first value, either the default or actual\"\"\"\n return super().get(name, [default])[0]\n\n def getlist(self, name, default=None):\n \"\"\"Return the entire list\"\"\"\n return super().get(name, default)\n\n\nclass Request(dict):\n \"\"\"Properties of an HTTP request such as URL, headers, 
etc.\"\"\"\n __slots__ = (\n 'app', 'headers', 'version', 'method', '_cookies', 'transport',\n 'body', 'parsed_json', 'parsed_args', 'parsed_form', 'parsed_files',\n '_ip', '_parsed_url', 'uri_template', 'stream', '_remote_addr',\n '_socket', '_port', '__weakref__'\n )\n\n def __init__(self, url_bytes, headers, version, method, transport):\n # TODO: Content-Encoding detection\n self._parsed_url = parse_url(url_bytes)\n self.app = None\n\n self.headers = headers\n self.version = version\n self.method = method\n self.transport = transport\n\n # Init but do not inhale\n self.body = []\n self.parsed_json = None\n self.parsed_form = None\n self.parsed_files = None\n self.parsed_args = None\n self.uri_template = None\n self._cookies = None\n self.stream = None\n\n def __repr__(self):\n if self.method is None or not self.path:\n return '<{0}>'.format(self.__class__.__name__)\n return '<{0}: {1} {2}>'.format(self.__class__.__name__,\n self.method,\n self.path)\n\n def __bool__(self):\n if self.transport:\n return True\n return False\n\n @property\n def json(self):\n if self.parsed_json is None:\n self.load_json()\n\n return self.parsed_json\n\n def load_json(self, loads=json_loads):\n try:\n self.parsed_json = loads(self.body)\n except Exception:\n if not self.body:\n return None\n raise InvalidUsage(\"Failed when parsing body as json\")\n\n return self.parsed_json\n\n @property\n def token(self):\n \"\"\"Attempt to return the auth header token.\n\n :return: token related to request\n \"\"\"\n prefixes = ('Bearer', 'Token')\n auth_header = self.headers.get('Authorization')\n\n if auth_header is not None:\n for prefix in prefixes:\n if prefix in auth_header:\n return auth_header.partition(prefix)[-1].strip()\n\n return auth_header\n\n @property\n def form(self):\n if self.parsed_form is None:\n self.parsed_form = RequestParameters()\n self.parsed_files = RequestParameters()\n content_type = self.headers.get(\n 'Content-Type', DEFAULT_HTTP_CONTENT_TYPE)\n content_type, parameters = parse_header(content_type)\n try:\n if content_type == 'application/x-www-form-urlencoded':\n self.parsed_form = RequestParameters(\n parse_qs(self.body.decode('utf-8')))\n elif content_type == 'multipart/form-data':\n # TODO: Stream this instead of reading to/from memory\n boundary = parameters['boundary'].encode('utf-8')\n self.parsed_form, self.parsed_files = (\n parse_multipart_form(self.body, boundary))\n except Exception:\n error_logger.exception(\"Failed when parsing form\")\n\n return self.parsed_form\n\n @property\n def files(self):\n if self.parsed_files is None:\n self.form # compute form to get files\n\n return self.parsed_files\n\n @property\n def args(self):\n if self.parsed_args is None:\n if self.query_string:\n self.parsed_args = RequestParameters(\n parse_qs(self.query_string))\n else:\n self.parsed_args = RequestParameters()\n return self.parsed_args\n\n @property\n def raw_args(self):\n return {k: v[0] for k, v in self.args.items()}\n\n @property\n def cookies(self):\n if self._cookies is None:\n cookie = self.headers.get('Cookie')\n if cookie is not None:\n cookies = SimpleCookie()\n cookies.load(cookie)\n self._cookies = {name: cookie.value\n for name, cookie in cookies.items()}\n else:\n self._cookies = {}\n return self._cookies\n\n @property\n def ip(self):\n if not hasattr(self, '_socket'):\n self._get_address()\n return self._ip\n\n @property\n def port(self):\n if not hasattr(self, '_socket'):\n self._get_address()\n return self._port\n\n @property\n def socket(self):\n if not hasattr(self, 
'_socket'):\n self._get_address()\n return self._socket\n\n def _get_address(self):\n sock = self.transport.get_extra_info('socket')\n\n if sock.family == socket.AF_INET:\n self._socket = (self.transport.get_extra_info('peername') or\n (None, None))\n self._ip, self._port = self._socket\n elif sock.family == socket.AF_INET6:\n self._socket = (self.transport.get_extra_info('peername') or\n (None, None, None, None))\n self._ip, self._port, *_ = self._socket\n else:\n self._ip, self._port = (None, None)\n\n @property\n def remote_addr(self):\n \"\"\"Attempt to return the original client ip based on X-Forwarded-For.\n\n :return: original client ip.\n \"\"\"\n if not hasattr(self, '_remote_addr'):\n forwarded_for = self.headers.get('X-Forwarded-For', '').split(',')\n remote_addrs = [\n addr for addr in [\n addr.strip() for addr in forwarded_for\n ] if addr\n ]\n if len(remote_addrs) > 0:\n self._remote_addr = remote_addrs[0]\n else:\n self._remote_addr = ''\n return self._remote_addr\n\n @property\n def scheme(self):\n if self.app.websocket_enabled \\\n and self.headers.get('upgrade') == 'websocket':\n scheme = 'ws'\n else:\n scheme = 'http'\n\n if self.transport.get_extra_info('sslcontext'):\n scheme += 's'\n\n return scheme\n\n @property\n def host(self):\n # it appears that httptools doesn't return the host\n # so pull it from the headers\n return self.headers.get('Host', '')\n\n @property\n def content_type(self):\n return self.headers.get('Content-Type', DEFAULT_HTTP_CONTENT_TYPE)\n\n @property\n def match_info(self):\n \"\"\"return matched info after resolving route\"\"\"\n return self.app.router.get(self)[2]\n\n @property\n def path(self):\n return self._parsed_url.path.decode('utf-8')\n\n @property\n def query_string(self):\n if self._parsed_url.query:\n return self._parsed_url.query.decode('utf-8')\n else:\n return ''\n\n @property\n def url(self):\n return urlunparse((\n self.scheme,\n self.host,\n self.path,\n None,\n self.query_string,\n None))\n\n\nFile = namedtuple('File', ['type', 'body', 'name'])\n\n\ndef parse_multipart_form(body, boundary):\n \"\"\"Parse a request body and returns fields and files\n\n :param body: bytes request body\n :param boundary: bytes multipart boundary\n :return: fields (RequestParameters), files (RequestParameters)\n \"\"\"\n files = RequestParameters()\n fields = RequestParameters()\n\n form_parts = body.split(boundary)\n for form_part in form_parts[1:-1]:\n file_name = None\n content_type = 'text/plain'\n content_charset = 'utf-8'\n field_name = None\n line_index = 2\n line_end_index = 0\n while not line_end_index == -1:\n line_end_index = form_part.find(b'\\r\\n', line_index)\n form_line = form_part[line_index:line_end_index].decode('utf-8')\n line_index = line_end_index + 2\n\n if not form_line:\n break\n\n colon_index = form_line.index(':')\n form_header_field = form_line[0:colon_index].lower()\n form_header_value, form_parameters = parse_header(\n form_line[colon_index + 2:])\n\n if form_header_field == 'content-disposition':\n file_name = form_parameters.get('filename')\n field_name = form_parameters.get('name')\n elif form_header_field == 'content-type':\n content_type = form_header_value\n content_charset = form_parameters.get('charset', 'utf-8')\n\n if field_name:\n post_data = form_part[line_index:-4]\n if file_name:\n form_file = File(type=content_type,\n name=file_name,\n body=post_data)\n if field_name in files:\n files[field_name].append(form_file)\n else:\n files[field_name] = [form_file]\n else:\n value = 
post_data.decode(content_charset)\n if field_name in fields:\n fields[field_name].append(value)\n else:\n fields[field_name] = [value]\n else:\n logger.debug('Form-data field does not have a \\'name\\' parameter \\\n in the Content-Disposition header')\n\n return fields, files\n", "path": "sanic/request.py"}]}
| 3,492 | 93 |
gh_patches_debug_3952
|
rasdani/github-patches
|
git_diff
|
WeblateOrg__weblate-10868
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Backup restore locks projects and shows no existing components
### Describe the issue
I was testing the borg backup feature in Weblate, and upon restoring the backup I noticed that the projects had the label indicating that the project is locked (see image).
I see that the information is all there in the backup but it is not shown in the app. For example, after opening the projects, no components are found.
I have found little to no information online regarding such an issue.
### I already tried
- [X] I've read and searched [the documentation](https://docs.weblate.org/).
- [X] I've searched for similar filed issues in this repository.
### Steps to reproduce the behavior
The issue happened after following the directions for a borg backup and restore in the Weblate documentation.
### Expected behavior
_No response_
### Screenshots

### Exception traceback
_No response_
### How do you run Weblate?
Docker container
### Weblate versions
weblate/weblate:edge
Hash: sha256:780310018d21eba2ef5e06b66acab7bf2e86b3ed1778640707ad47d0525cb213.
### Weblate deploy checks
_No response_
### Additional context
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `weblate/utils/tasks.py`
Content:
```
1 # Copyright © Michal Čihař <[email protected]>
2 #
3 # SPDX-License-Identifier: GPL-3.0-or-later
4
5 import gzip
6 import os
7 import shutil
8 import subprocess
9 import sys
10 import time
11 from importlib import import_module
12 from shutil import copyfile
13
14 from celery.schedules import crontab
15 from django.conf import settings
16 from django.core.cache import cache
17 from django.core.management.commands import diffsettings
18 from ruamel.yaml import YAML
19
20 import weblate.utils.version
21 from weblate.formats.models import FILE_FORMATS
22 from weblate.logger import LOGGER
23 from weblate.machinery.models import MACHINERY
24 from weblate.trans.models import Component, Translation
25 from weblate.trans.util import get_clean_env
26 from weblate.utils.backup import backup_lock
27 from weblate.utils.celery import app
28 from weblate.utils.data import data_dir
29 from weblate.utils.db import using_postgresql
30 from weblate.utils.errors import add_breadcrumb, report_error
31 from weblate.utils.lock import WeblateLockTimeoutError
32 from weblate.vcs.models import VCS_REGISTRY
33
34
35 @app.task(trail=False)
36 def ping():
37 return {
38 "version": weblate.utils.version.GIT_VERSION,
39 "vcs": sorted(VCS_REGISTRY.keys()),
40 "formats": sorted(FILE_FORMATS.keys()),
41 "mt_services": sorted(MACHINERY.keys()),
42 "encoding": [sys.getfilesystemencoding(), sys.getdefaultencoding()],
43 "uid": os.getuid(),
44 }
45
46
47 @app.task(trail=False)
48 def heartbeat():
49 cache.set("celery_loaded", time.time())
50 cache.set("celery_heartbeat", time.time())
51 cache.set(
52 "celery_encoding", [sys.getfilesystemencoding(), sys.getdefaultencoding()]
53 )
54
55
56 @app.task(trail=False, autoretry_for=(WeblateLockTimeoutError,))
57 def settings_backup():
58 with backup_lock():
59 # Expand settings in case it contains non-trivial code
60 command = diffsettings.Command()
61 kwargs = {"default": None, "all": False, "output": "hash"}
62 with open(data_dir("backups", "settings-expanded.py"), "w") as handle:
63 handle.write(command.handle(**kwargs))
64
65 # Backup original settings
66 if settings.SETTINGS_MODULE:
67 settings_mod = import_module(settings.SETTINGS_MODULE)
68 copyfile(settings_mod.__file__, data_dir("backups", "settings.py"))
69
70 # Backup environment (to make restoring Docker easier)
71 with open(data_dir("backups", "environment.yml"), "w") as handle:
72 yaml = YAML()
73 yaml.dump(dict(os.environ), handle)
74
75
76 @app.task(trail=False)
77 def update_translation_stats_parents(pk: int):
78 translation = Translation.objects.get(pk=pk)
79 translation.stats.update_parents()
80
81
82 @app.task(trail=False)
83 def update_language_stats_parents(pk: int):
84 component = Component.objects.get(pk=pk)
85 component.stats.update_language_stats_parents()
86
87
88 @app.task(trail=False, autoretry_for=(WeblateLockTimeoutError,))
89 def database_backup():
90 if settings.DATABASE_BACKUP == "none":
91 return
92 with backup_lock():
93 database = settings.DATABASES["default"]
94 env = get_clean_env()
95 compress = settings.DATABASE_BACKUP == "compressed"
96
97 out_compressed = data_dir("backups", "database.sql.gz")
98 out_text = data_dir("backups", "database.sql")
99
100 if using_postgresql():
101 cmd = [
102 "pg_dump",
103 # Superuser only, crashes on Alibaba Cloud Database PolarDB
104 "--no-subscriptions",
105 "--dbname",
106 database["NAME"],
107 ]
108
109 if database["HOST"]:
110 cmd.extend(["--host", database["HOST"]])
111 if database["PORT"]:
112 cmd.extend(["--port", database["PORT"]])
113 if database["USER"]:
114 cmd.extend(["--username", database["USER"]])
115 if settings.DATABASE_BACKUP == "compressed":
116 cmd.extend(["--file", out_compressed])
117 cmd.extend(["--compress", "6"])
118 compress = False
119 else:
120 cmd.extend(["--file", out_text])
121
122 env["PGPASSWORD"] = database["PASSWORD"]
123 else:
124 cmd = [
125 "mysqldump",
126 "--result-file",
127 out_text,
128 "--single-transaction",
129 "--skip-lock-tables",
130 ]
131
132 if database["HOST"]:
133 cmd.extend(["--host", database["HOST"]])
134 if database["PORT"]:
135 cmd.extend(["--port", database["PORT"]])
136 if database["USER"]:
137 cmd.extend(["--user", database["USER"]])
138
139 cmd.extend(["--databases", database["NAME"]])
140
141 env["MYSQL_PWD"] = database["PASSWORD"]
142
143 try:
144 subprocess.run(
145 cmd,
146 env=env,
147 capture_output=True,
148 stdin=subprocess.DEVNULL,
149 check=True,
150 text=True,
151 )
152 except subprocess.CalledProcessError as error:
153 add_breadcrumb(
154 category="backup",
155 message="database dump output",
156 stdout=error.stdout,
157 stderr=error.stderr,
158 )
159 LOGGER.error("failed database backup: %s", error.stderr)
160 report_error()
161 raise
162
163 if compress:
164 with open(out_text, "rb") as f_in, gzip.open(out_compressed, "wb") as f_out:
165 shutil.copyfileobj(f_in, f_out)
166 os.unlink(out_text)
167
168
169 @app.on_after_finalize.connect
170 def setup_periodic_tasks(sender, **kwargs):
171 cache.set("celery_loaded", time.time())
172 sender.add_periodic_task(
173 crontab(hour=1, minute=0), settings_backup.s(), name="settings-backup"
174 )
175 sender.add_periodic_task(
176 crontab(hour=1, minute=30), database_backup.s(), name="database-backup"
177 )
178 sender.add_periodic_task(60, heartbeat.s(), name="heartbeat")
179
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/weblate/utils/tasks.py b/weblate/utils/tasks.py
--- a/weblate/utils/tasks.py
+++ b/weblate/utils/tasks.py
@@ -102,6 +102,8 @@
"pg_dump",
# Superuser only, crashes on Alibaba Cloud Database PolarDB
"--no-subscriptions",
+ "--clean",
+ "--if-exists",
"--dbname",
database["NAME"],
]
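
Why this helps (a hedged reading of the issue): without `--clean`, the dump only creates objects, so restoring it into a database that already contains data — for example a freshly migrated Weblate instance — can leave a mix of old and restored rows, which matches the "locked projects with no components" symptom. With `--clean --if-exists` the dump drops each object before recreating it, and the `IF EXISTS` form keeps those drops from failing on an empty database. A minimal sketch of the patched invocation (database name and file path are placeholders):

```python
# Sketch only: effect of the two added pg_dump flags.
cmd = [
    "pg_dump",
    "--no-subscriptions",
    "--clean",      # emit a DROP for each object before its CREATE
    "--if-exists",  # use DROP ... IF EXISTS so the drops do not error on a first restore
    "--dbname", "weblate",               # placeholder
    "--file", "/backups/database.sql",   # placeholder
]
```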
|
{"golden_diff": "diff --git a/weblate/utils/tasks.py b/weblate/utils/tasks.py\n--- a/weblate/utils/tasks.py\n+++ b/weblate/utils/tasks.py\n@@ -102,6 +102,8 @@\n \"pg_dump\",\n # Superuser only, crashes on Alibaba Cloud Database PolarDB\n \"--no-subscriptions\",\n+ \"--clean\",\n+ \"--if-exists\",\n \"--dbname\",\n database[\"NAME\"],\n ]\n", "issue": "Backup restore locks projects and shows no existing components\n### Describe the issue\r\n\r\nI was testing the borg backup feature in weblate and upon restoring the backup I noticed that the projects had the label that indicated that the project is locked (See Image). \r\n\r\nI see that the information is all there in the backup but it is not shown in the app. For example, after opening the projects, no components are found.\r\n\r\nI have found little to no information online regarding such an issue.\r\n\r\n\r\n\r\n\r\n\r\n### I already tried\r\n\r\n- [X] I've read and searched [the documentation](https://docs.weblate.org/).\r\n- [X] I've searched for similar filed issues in this repository.\r\n\r\n### Steps to reproduce the behavior\r\n\r\nThe issue happened after following the directions for a borg backup and restore in the weblate documentation.\r\n\r\n### Expected behavior\r\n\r\n_No response_\r\n\r\n### Screenshots\r\n\r\n\r\n\r\n\r\n### Exception traceback\r\n\r\n_No response_\r\n\r\n### How do you run Weblate?\r\n\r\nDocker container\r\n\r\n### Weblate versions\r\n\r\n weblate/weblate:edge\r\nHash: sha256:780310018d21eba2ef5e06b66acab7bf2e86b3ed1778640707ad47d0525cb213.\r\n\r\n### Weblate deploy checks\r\n\r\n_No response_\r\n\r\n### Additional context\r\n\r\n_No response_\n", "before_files": [{"content": "# Copyright \u00a9 Michal \u010ciha\u0159 <[email protected]>\n#\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nimport gzip\nimport os\nimport shutil\nimport subprocess\nimport sys\nimport time\nfrom importlib import import_module\nfrom shutil import copyfile\n\nfrom celery.schedules import crontab\nfrom django.conf import settings\nfrom django.core.cache import cache\nfrom django.core.management.commands import diffsettings\nfrom ruamel.yaml import YAML\n\nimport weblate.utils.version\nfrom weblate.formats.models import FILE_FORMATS\nfrom weblate.logger import LOGGER\nfrom weblate.machinery.models import MACHINERY\nfrom weblate.trans.models import Component, Translation\nfrom weblate.trans.util import get_clean_env\nfrom weblate.utils.backup import backup_lock\nfrom weblate.utils.celery import app\nfrom weblate.utils.data import data_dir\nfrom weblate.utils.db import using_postgresql\nfrom weblate.utils.errors import add_breadcrumb, report_error\nfrom weblate.utils.lock import WeblateLockTimeoutError\nfrom weblate.vcs.models import VCS_REGISTRY\n\n\[email protected](trail=False)\ndef ping():\n return {\n \"version\": weblate.utils.version.GIT_VERSION,\n \"vcs\": sorted(VCS_REGISTRY.keys()),\n \"formats\": sorted(FILE_FORMATS.keys()),\n \"mt_services\": sorted(MACHINERY.keys()),\n \"encoding\": [sys.getfilesystemencoding(), sys.getdefaultencoding()],\n \"uid\": os.getuid(),\n }\n\n\[email protected](trail=False)\ndef heartbeat():\n cache.set(\"celery_loaded\", time.time())\n cache.set(\"celery_heartbeat\", time.time())\n cache.set(\n \"celery_encoding\", [sys.getfilesystemencoding(), sys.getdefaultencoding()]\n )\n\n\[email protected](trail=False, autoretry_for=(WeblateLockTimeoutError,))\ndef settings_backup():\n with backup_lock():\n # Expand settings in case it contains non-trivial code\n command = diffsettings.Command()\n 
kwargs = {\"default\": None, \"all\": False, \"output\": \"hash\"}\n with open(data_dir(\"backups\", \"settings-expanded.py\"), \"w\") as handle:\n handle.write(command.handle(**kwargs))\n\n # Backup original settings\n if settings.SETTINGS_MODULE:\n settings_mod = import_module(settings.SETTINGS_MODULE)\n copyfile(settings_mod.__file__, data_dir(\"backups\", \"settings.py\"))\n\n # Backup environment (to make restoring Docker easier)\n with open(data_dir(\"backups\", \"environment.yml\"), \"w\") as handle:\n yaml = YAML()\n yaml.dump(dict(os.environ), handle)\n\n\[email protected](trail=False)\ndef update_translation_stats_parents(pk: int):\n translation = Translation.objects.get(pk=pk)\n translation.stats.update_parents()\n\n\[email protected](trail=False)\ndef update_language_stats_parents(pk: int):\n component = Component.objects.get(pk=pk)\n component.stats.update_language_stats_parents()\n\n\[email protected](trail=False, autoretry_for=(WeblateLockTimeoutError,))\ndef database_backup():\n if settings.DATABASE_BACKUP == \"none\":\n return\n with backup_lock():\n database = settings.DATABASES[\"default\"]\n env = get_clean_env()\n compress = settings.DATABASE_BACKUP == \"compressed\"\n\n out_compressed = data_dir(\"backups\", \"database.sql.gz\")\n out_text = data_dir(\"backups\", \"database.sql\")\n\n if using_postgresql():\n cmd = [\n \"pg_dump\",\n # Superuser only, crashes on Alibaba Cloud Database PolarDB\n \"--no-subscriptions\",\n \"--dbname\",\n database[\"NAME\"],\n ]\n\n if database[\"HOST\"]:\n cmd.extend([\"--host\", database[\"HOST\"]])\n if database[\"PORT\"]:\n cmd.extend([\"--port\", database[\"PORT\"]])\n if database[\"USER\"]:\n cmd.extend([\"--username\", database[\"USER\"]])\n if settings.DATABASE_BACKUP == \"compressed\":\n cmd.extend([\"--file\", out_compressed])\n cmd.extend([\"--compress\", \"6\"])\n compress = False\n else:\n cmd.extend([\"--file\", out_text])\n\n env[\"PGPASSWORD\"] = database[\"PASSWORD\"]\n else:\n cmd = [\n \"mysqldump\",\n \"--result-file\",\n out_text,\n \"--single-transaction\",\n \"--skip-lock-tables\",\n ]\n\n if database[\"HOST\"]:\n cmd.extend([\"--host\", database[\"HOST\"]])\n if database[\"PORT\"]:\n cmd.extend([\"--port\", database[\"PORT\"]])\n if database[\"USER\"]:\n cmd.extend([\"--user\", database[\"USER\"]])\n\n cmd.extend([\"--databases\", database[\"NAME\"]])\n\n env[\"MYSQL_PWD\"] = database[\"PASSWORD\"]\n\n try:\n subprocess.run(\n cmd,\n env=env,\n capture_output=True,\n stdin=subprocess.DEVNULL,\n check=True,\n text=True,\n )\n except subprocess.CalledProcessError as error:\n add_breadcrumb(\n category=\"backup\",\n message=\"database dump output\",\n stdout=error.stdout,\n stderr=error.stderr,\n )\n LOGGER.error(\"failed database backup: %s\", error.stderr)\n report_error()\n raise\n\n if compress:\n with open(out_text, \"rb\") as f_in, gzip.open(out_compressed, \"wb\") as f_out:\n shutil.copyfileobj(f_in, f_out)\n os.unlink(out_text)\n\n\[email protected]_after_finalize.connect\ndef setup_periodic_tasks(sender, **kwargs):\n cache.set(\"celery_loaded\", time.time())\n sender.add_periodic_task(\n crontab(hour=1, minute=0), settings_backup.s(), name=\"settings-backup\"\n )\n sender.add_periodic_task(\n crontab(hour=1, minute=30), database_backup.s(), name=\"database-backup\"\n )\n sender.add_periodic_task(60, heartbeat.s(), name=\"heartbeat\")\n", "path": "weblate/utils/tasks.py"}], "after_files": [{"content": "# Copyright \u00a9 Michal \u010ciha\u0159 <[email protected]>\n#\n# SPDX-License-Identifier: 
GPL-3.0-or-later\n\nimport gzip\nimport os\nimport shutil\nimport subprocess\nimport sys\nimport time\nfrom importlib import import_module\nfrom shutil import copyfile\n\nfrom celery.schedules import crontab\nfrom django.conf import settings\nfrom django.core.cache import cache\nfrom django.core.management.commands import diffsettings\nfrom ruamel.yaml import YAML\n\nimport weblate.utils.version\nfrom weblate.formats.models import FILE_FORMATS\nfrom weblate.logger import LOGGER\nfrom weblate.machinery.models import MACHINERY\nfrom weblate.trans.models import Component, Translation\nfrom weblate.trans.util import get_clean_env\nfrom weblate.utils.backup import backup_lock\nfrom weblate.utils.celery import app\nfrom weblate.utils.data import data_dir\nfrom weblate.utils.db import using_postgresql\nfrom weblate.utils.errors import add_breadcrumb, report_error\nfrom weblate.utils.lock import WeblateLockTimeoutError\nfrom weblate.vcs.models import VCS_REGISTRY\n\n\[email protected](trail=False)\ndef ping():\n return {\n \"version\": weblate.utils.version.GIT_VERSION,\n \"vcs\": sorted(VCS_REGISTRY.keys()),\n \"formats\": sorted(FILE_FORMATS.keys()),\n \"mt_services\": sorted(MACHINERY.keys()),\n \"encoding\": [sys.getfilesystemencoding(), sys.getdefaultencoding()],\n \"uid\": os.getuid(),\n }\n\n\[email protected](trail=False)\ndef heartbeat():\n cache.set(\"celery_loaded\", time.time())\n cache.set(\"celery_heartbeat\", time.time())\n cache.set(\n \"celery_encoding\", [sys.getfilesystemencoding(), sys.getdefaultencoding()]\n )\n\n\[email protected](trail=False, autoretry_for=(WeblateLockTimeoutError,))\ndef settings_backup():\n with backup_lock():\n # Expand settings in case it contains non-trivial code\n command = diffsettings.Command()\n kwargs = {\"default\": None, \"all\": False, \"output\": \"hash\"}\n with open(data_dir(\"backups\", \"settings-expanded.py\"), \"w\") as handle:\n handle.write(command.handle(**kwargs))\n\n # Backup original settings\n if settings.SETTINGS_MODULE:\n settings_mod = import_module(settings.SETTINGS_MODULE)\n copyfile(settings_mod.__file__, data_dir(\"backups\", \"settings.py\"))\n\n # Backup environment (to make restoring Docker easier)\n with open(data_dir(\"backups\", \"environment.yml\"), \"w\") as handle:\n yaml = YAML()\n yaml.dump(dict(os.environ), handle)\n\n\[email protected](trail=False)\ndef update_translation_stats_parents(pk: int):\n translation = Translation.objects.get(pk=pk)\n translation.stats.update_parents()\n\n\[email protected](trail=False)\ndef update_language_stats_parents(pk: int):\n component = Component.objects.get(pk=pk)\n component.stats.update_language_stats_parents()\n\n\[email protected](trail=False, autoretry_for=(WeblateLockTimeoutError,))\ndef database_backup():\n if settings.DATABASE_BACKUP == \"none\":\n return\n with backup_lock():\n database = settings.DATABASES[\"default\"]\n env = get_clean_env()\n compress = settings.DATABASE_BACKUP == \"compressed\"\n\n out_compressed = data_dir(\"backups\", \"database.sql.gz\")\n out_text = data_dir(\"backups\", \"database.sql\")\n\n if using_postgresql():\n cmd = [\n \"pg_dump\",\n # Superuser only, crashes on Alibaba Cloud Database PolarDB\n \"--no-subscriptions\",\n \"--clean\",\n \"--if-exists\",\n \"--dbname\",\n database[\"NAME\"],\n ]\n\n if database[\"HOST\"]:\n cmd.extend([\"--host\", database[\"HOST\"]])\n if database[\"PORT\"]:\n cmd.extend([\"--port\", database[\"PORT\"]])\n if database[\"USER\"]:\n cmd.extend([\"--username\", database[\"USER\"]])\n if 
settings.DATABASE_BACKUP == \"compressed\":\n cmd.extend([\"--file\", out_compressed])\n cmd.extend([\"--compress\", \"6\"])\n compress = False\n else:\n cmd.extend([\"--file\", out_text])\n\n env[\"PGPASSWORD\"] = database[\"PASSWORD\"]\n else:\n cmd = [\n \"mysqldump\",\n \"--result-file\",\n out_text,\n \"--single-transaction\",\n \"--skip-lock-tables\",\n ]\n\n if database[\"HOST\"]:\n cmd.extend([\"--host\", database[\"HOST\"]])\n if database[\"PORT\"]:\n cmd.extend([\"--port\", database[\"PORT\"]])\n if database[\"USER\"]:\n cmd.extend([\"--user\", database[\"USER\"]])\n\n cmd.extend([\"--databases\", database[\"NAME\"]])\n\n env[\"MYSQL_PWD\"] = database[\"PASSWORD\"]\n\n try:\n subprocess.run(\n cmd,\n env=env,\n capture_output=True,\n stdin=subprocess.DEVNULL,\n check=True,\n text=True,\n )\n except subprocess.CalledProcessError as error:\n add_breadcrumb(\n category=\"backup\",\n message=\"database dump output\",\n stdout=error.stdout,\n stderr=error.stderr,\n )\n LOGGER.error(\"failed database backup: %s\", error.stderr)\n report_error()\n raise\n\n if compress:\n with open(out_text, \"rb\") as f_in, gzip.open(out_compressed, \"wb\") as f_out:\n shutil.copyfileobj(f_in, f_out)\n os.unlink(out_text)\n\n\[email protected]_after_finalize.connect\ndef setup_periodic_tasks(sender, **kwargs):\n cache.set(\"celery_loaded\", time.time())\n sender.add_periodic_task(\n crontab(hour=1, minute=0), settings_backup.s(), name=\"settings-backup\"\n )\n sender.add_periodic_task(\n crontab(hour=1, minute=30), database_backup.s(), name=\"database-backup\"\n )\n sender.add_periodic_task(60, heartbeat.s(), name=\"heartbeat\")\n", "path": "weblate/utils/tasks.py"}]}
| 2,357 | 98 |
gh_patches_debug_3383
|
rasdani/github-patches
|
git_diff
|
techmatters__terraso-backend-722
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
harddelete fails with AttributeError: 'NoneType' object has no attribute 'delete'
## Description
The `harddelete` command fails with this backtrace:
```
Aug 5 12:01:39 AM {"extra": {"landscape_id": "UUID('8483fbd9-307a-4b1f-a6f2-2eafaf6526b9')"}, "event": "Landscape has no default group, but it must have", "timestamp": "2023-08-05T04:01:39.275313Z", "logger": "apps.core.models.landscapes", "level": "error"}
Aug 5 12:01:39 AM Traceback (most recent call last):
Aug 5 12:01:39 AM File "/app/terraso_backend/manage.py", line 34, in <module>
Aug 5 12:01:39 AM main()
Aug 5 12:01:39 AM File "/app/terraso_backend/manage.py", line 30, in main
Aug 5 12:01:39 AM execute_from_command_line(sys.argv)
Aug 5 12:01:39 AM File "/home/terraso/.local/lib/python3.11/site-packages/django/core/management/__init__.py", line 442, in execute_from_command_line
Aug 5 12:01:39 AM utility.execute()
Aug 5 12:01:39 AM File "/home/terraso/.local/lib/python3.11/site-packages/django/core/management/__init__.py", line 436, in execute
Aug 5 12:01:39 AM self.fetch_command(subcommand).run_from_argv(self.argv)
Aug 5 12:01:39 AM File "/home/terraso/.local/lib/python3.11/site-packages/django/core/management/base.py", line 412, in run_from_argv
Aug 5 12:01:39 AM self.execute(*args, **cmd_options)
Aug 5 12:01:39 AM File "/home/terraso/.local/lib/python3.11/site-packages/django/core/management/base.py", line 458, in execute
Aug 5 12:01:39 AM output = self.handle(*args, **options)
Aug 5 12:01:39 AM ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Aug 5 12:01:39 AM File "/app/terraso_backend/apps/core/management/commands/harddelete.py", line 62, in handle
Aug 5 12:01:39 AM obj.delete(force_policy=HARD_DELETE)
Aug 5 12:01:39 AM File "/app/terraso_backend/apps/core/models/landscapes.py", line 129, in delete
Aug 5 12:01:39 AM default_group.delete()
Aug 5 12:01:39 AM ^^^^^^^^^^^^^^^^^^^^
Aug 5 12:01:39 AM AttributeError: 'NoneType' object has no attribute 'delete'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `terraso_backend/apps/core/models/landscapes.py`
Content:
```
1 # Copyright © 2021-2023 Technology Matters
2 #
3 # This program is free software: you can redistribute it and/or modify
4 # it under the terms of the GNU Affero General Public License as published
5 # by the Free Software Foundation, either version 3 of the License, or
6 # (at your option) any later version.
7 #
8 # This program is distributed in the hope that it will be useful,
9 # but WITHOUT ANY WARRANTY; without even the implied warranty of
10 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
11 # GNU Affero General Public License for more details.
12 #
13 # You should have received a copy of the GNU Affero General Public License
14 # along with this program. If not, see https://www.gnu.org/licenses/.
15
16 import structlog
17 from dirtyfields import DirtyFieldsMixin
18 from django.db import models, transaction
19
20 from apps.core import permission_rules as perm_rules
21 from apps.core.gis.utils import (
22 calculate_geojson_centroid,
23 calculate_geojson_feature_area,
24 )
25 from apps.core.models.taxonomy_terms import TaxonomyTerm
26
27 from .commons import BaseModel, SlugModel, validate_name
28 from .groups import Group
29 from .users import User
30
31 logger = structlog.get_logger(__name__)
32
33
34 class Landscape(SlugModel, DirtyFieldsMixin):
35 """
36 This model represents a Landscape on Terraso platform.
37
38 A Landscape is a socio-ecological system that consists of natural
39 and/or human-modified ecosystems. Defined by its stakeholds, a
40 Landscape usually has geographical boundaries. It may correspond to,
41 or be a combination of, natural boundaries, distinct land features,
42 socially defined areas such as indigenous territories, and/or
43 jurisdictional and administrative boundaries. The boundaries of a
44 Landscape can cross several countries.
45 """
46
47 fields_to_trim = ["name", "description"]
48
49 name = models.CharField(max_length=128, validators=[validate_name])
50 description = models.TextField(blank=True, default="")
51 website = models.URLField(max_length=500, blank=True, default="")
52 location = models.CharField(max_length=128, blank=True, default="")
53 area_polygon = models.JSONField(blank=True, null=True)
54 email = models.EmailField(blank=True, default="")
55 area_scalar_m2 = models.FloatField(blank=True, null=True)
56
57 created_by = models.ForeignKey(
58 User,
59 blank=True,
60 null=True,
61 on_delete=models.PROTECT,
62 related_name="created_landscapes",
63 )
64 groups = models.ManyToManyField(Group, through="LandscapeGroup")
65
66 area_types = models.JSONField(blank=True, null=True)
67 taxonomy_terms = models.ManyToManyField(TaxonomyTerm, blank=True)
68 population = models.IntegerField(blank=True, null=True)
69
70 PARTNERSHIP_STATUS_NONE = ""
71 PARTNERSHIP_STATUS_NO = "no"
72 PARTNERSHIP_STATUS_IN_PROGRESS = "in-progress"
73 PARTNERSHIP_STATUS_YES = "yes"
74
75 MEMBERSHIP_TYPES = (
76 (PARTNERSHIP_STATUS_NONE, "None"),
77 (PARTNERSHIP_STATUS_NO, "No"),
78 (PARTNERSHIP_STATUS_IN_PROGRESS, "In Progress"),
79 (PARTNERSHIP_STATUS_YES, "Yes"),
80 )
81 partnership_status = models.CharField(
82 max_length=32, choices=MEMBERSHIP_TYPES, blank=True, default=PARTNERSHIP_STATUS_NONE
83 )
84 profile_image = models.URLField(blank=True, default="")
85 profile_image_description = models.TextField(blank=True, default="")
86 center_coordinates = models.JSONField(blank=True, null=True)
87
88 field_to_slug = "name"
89
90 class Meta(SlugModel.Meta):
91 rules_permissions = {
92 "change": perm_rules.allowed_to_change_landscape,
93 "delete": perm_rules.allowed_to_delete_landscape,
94 }
95 _unique_fields = ["name"]
96 abstract = False
97
98 def save(self, *args, **kwargs):
99 dirty_fields = self.get_dirty_fields()
100 if self.area_polygon and "area_polygon" in dirty_fields:
101 area_scalar_m2 = calculate_geojson_feature_area(self.area_polygon)
102 if area_scalar_m2 is not None:
103 self.area_scalar_m2 = round(area_scalar_m2, 3)
104 self.center_coordinates = calculate_geojson_centroid(self.area_polygon)
105
106 with transaction.atomic():
107 creating = not Landscape.objects.filter(pk=self.pk).exists()
108
109 super().save(*args, **kwargs)
110
111 if creating and self.created_by:
112 group = Group(
113 name="Group {}".format(self.slug),
114 description="",
115 created_by=self.created_by,
116 )
117 group.save()
118 landscape_group = LandscapeGroup(
119 group=group, landscape=self, is_default_landscape_group=True
120 )
121 landscape_group.save()
122
123 def delete(self, *args, **kwargs):
124 default_group = self.get_default_group()
125
126 with transaction.atomic():
127 ret = super().delete(*args, **kwargs)
128 # default group should be deleted as well
129 default_group.delete()
130
131 return ret
132
133 def get_default_group(self):
134 """
135 A default Group in a Landscape is that Group where any
136 individual (associated or not with other Groups) is added when
137 associating directly with a Landscape.
138 """
139 try:
140 # associated_groups is the related_name defined on
141 # LandscapeGroup relationship with Landscape. It returns a
142 # queryset of LandscapeGroup
143 landscape_group = self.associated_groups.get(is_default_landscape_group=True)
144 except LandscapeGroup.DoesNotExist:
145 logger.error(
146 "Landscape has no default group, but it must have", extra={"landscape_id": self.pk}
147 )
148 return None
149
150 return landscape_group.group
151
152 def __str__(self):
153 return self.name
154
155
156 class LandscapeDevelopmentStrategy(BaseModel):
157 objectives = models.TextField(blank=True, default="")
158 opportunities = models.TextField(blank=True, default="")
159 problem_situtation = models.TextField(blank=True, default="")
160 intervention_strategy = models.TextField(blank=True, default="")
161 landscape = models.ForeignKey(
162 Landscape, on_delete=models.CASCADE, related_name="associated_development_strategy"
163 )
164
165
166 class LandscapeGroup(BaseModel):
167 """
168 This model represents the association between a Landscape and a Group on
169 Terraso platform.
170 """
171
172 landscape = models.ForeignKey(
173 Landscape, on_delete=models.CASCADE, related_name="associated_groups"
174 )
175 group = models.ForeignKey(Group, on_delete=models.CASCADE, related_name="associated_landscapes")
176
177 is_default_landscape_group = models.BooleanField(blank=True, default=False)
178 is_partnership = models.BooleanField(blank=True, default=False)
179 partnership_year = models.IntegerField(blank=True, null=True)
180
181 class Meta:
182 rules_permissions = {
183 "add": perm_rules.allowed_to_add_landscape_group,
184 "delete": perm_rules.allowed_to_delete_landscape_group,
185 }
186 constraints = (
187 models.UniqueConstraint(
188 fields=("group", "landscape", "is_partnership"),
189 condition=models.Q(deleted_at__isnull=True),
190 name="unique_active_landscape_group",
191 ),
192 )
193
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/terraso_backend/apps/core/models/landscapes.py b/terraso_backend/apps/core/models/landscapes.py
--- a/terraso_backend/apps/core/models/landscapes.py
+++ b/terraso_backend/apps/core/models/landscapes.py
@@ -126,7 +126,8 @@
with transaction.atomic():
ret = super().delete(*args, **kwargs)
# default group should be deleted as well
- default_group.delete()
+ if default_group is not None:
+ default_group.delete()
return ret
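
The guard mirrors what `get_default_group()` already promises: it logs an error and returns `None` when a landscape has no default group, so the hard-delete path has to tolerate a missing group instead of assuming one. A minimal sketch of the patched method:

```python
def delete(self, *args, **kwargs):
    default_group = self.get_default_group()  # may be None; the error is already logged inside

    with transaction.atomic():
        ret = super().delete(*args, **kwargs)
        # the default group should be deleted as well, but only if it exists
        if default_group is not None:
            default_group.delete()

    return ret
```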
|
{"golden_diff": "diff --git a/terraso_backend/apps/core/models/landscapes.py b/terraso_backend/apps/core/models/landscapes.py\n--- a/terraso_backend/apps/core/models/landscapes.py\n+++ b/terraso_backend/apps/core/models/landscapes.py\n@@ -126,7 +126,8 @@\n with transaction.atomic():\n ret = super().delete(*args, **kwargs)\n # default group should be deleted as well\n- default_group.delete()\n+ if default_group is not None:\n+ default_group.delete()\n \n return ret\n", "issue": "harddelete fails with AttributeError: 'NoneType' object has no attribute 'delete'\n## Description\r\nThe `harddelete` command fails with this backtrace:\r\n```\r\nAug 5 12:01:39 AM {\"extra\": {\"landscape_id\": \"UUID('8483fbd9-307a-4b1f-a6f2-2eafaf6526b9')\"}, \"event\": \"Landscape has no default group, but it must have\", \"timestamp\": \"2023-08-05T04:01:39.275313Z\", \"logger\": \"apps.core.models.landscapes\", \"level\": \"error\"}\r\nAug 5 12:01:39 AM Traceback (most recent call last):\r\nAug 5 12:01:39 AM File \"/app/terraso_backend/manage.py\", line 34, in <module>\r\nAug 5 12:01:39 AM main()\r\nAug 5 12:01:39 AM File \"/app/terraso_backend/manage.py\", line 30, in main\r\nAug 5 12:01:39 AM execute_from_command_line(sys.argv)\r\nAug 5 12:01:39 AM File \"/home/terraso/.local/lib/python3.11/site-packages/django/core/management/__init__.py\", line 442, in execute_from_command_line\r\nAug 5 12:01:39 AM utility.execute()\r\nAug 5 12:01:39 AM File \"/home/terraso/.local/lib/python3.11/site-packages/django/core/management/__init__.py\", line 436, in execute\r\nAug 5 12:01:39 AM self.fetch_command(subcommand).run_from_argv(self.argv)\r\nAug 5 12:01:39 AM File \"/home/terraso/.local/lib/python3.11/site-packages/django/core/management/base.py\", line 412, in run_from_argv\r\nAug 5 12:01:39 AM self.execute(*args, **cmd_options)\r\nAug 5 12:01:39 AM File \"/home/terraso/.local/lib/python3.11/site-packages/django/core/management/base.py\", line 458, in execute\r\nAug 5 12:01:39 AM output = self.handle(*args, **options)\r\nAug 5 12:01:39 AM ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nAug 5 12:01:39 AM File \"/app/terraso_backend/apps/core/management/commands/harddelete.py\", line 62, in handle\r\nAug 5 12:01:39 AM obj.delete(force_policy=HARD_DELETE)\r\nAug 5 12:01:39 AM File \"/app/terraso_backend/apps/core/models/landscapes.py\", line 129, in delete\r\nAug 5 12:01:39 AM default_group.delete()\r\nAug 5 12:01:39 AM ^^^^^^^^^^^^^^^^^^^^\r\nAug 5 12:01:39 AM AttributeError: 'NoneType' object has no attribute 'delete'\r\n```\n", "before_files": [{"content": "# Copyright \u00a9 2021-2023 Technology Matters\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Affero General Public License as published\n# by the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Affero General Public License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program. 
If not, see https://www.gnu.org/licenses/.\n\nimport structlog\nfrom dirtyfields import DirtyFieldsMixin\nfrom django.db import models, transaction\n\nfrom apps.core import permission_rules as perm_rules\nfrom apps.core.gis.utils import (\n calculate_geojson_centroid,\n calculate_geojson_feature_area,\n)\nfrom apps.core.models.taxonomy_terms import TaxonomyTerm\n\nfrom .commons import BaseModel, SlugModel, validate_name\nfrom .groups import Group\nfrom .users import User\n\nlogger = structlog.get_logger(__name__)\n\n\nclass Landscape(SlugModel, DirtyFieldsMixin):\n \"\"\"\n This model represents a Landscape on Terraso platform.\n\n A Landscape is a socio-ecological system that consists of natural\n and/or human-modified ecosystems. Defined by its stakeholds, a\n Landscape usually has geographical boundaries. It may correspond to,\n or be a combination of, natural boundaries, distinct land features,\n socially defined areas such as indigenous territories, and/or\n jurisdictional and administrative boundaries. The boundaries of a\n Landscape can cross several countries.\n \"\"\"\n\n fields_to_trim = [\"name\", \"description\"]\n\n name = models.CharField(max_length=128, validators=[validate_name])\n description = models.TextField(blank=True, default=\"\")\n website = models.URLField(max_length=500, blank=True, default=\"\")\n location = models.CharField(max_length=128, blank=True, default=\"\")\n area_polygon = models.JSONField(blank=True, null=True)\n email = models.EmailField(blank=True, default=\"\")\n area_scalar_m2 = models.FloatField(blank=True, null=True)\n\n created_by = models.ForeignKey(\n User,\n blank=True,\n null=True,\n on_delete=models.PROTECT,\n related_name=\"created_landscapes\",\n )\n groups = models.ManyToManyField(Group, through=\"LandscapeGroup\")\n\n area_types = models.JSONField(blank=True, null=True)\n taxonomy_terms = models.ManyToManyField(TaxonomyTerm, blank=True)\n population = models.IntegerField(blank=True, null=True)\n\n PARTNERSHIP_STATUS_NONE = \"\"\n PARTNERSHIP_STATUS_NO = \"no\"\n PARTNERSHIP_STATUS_IN_PROGRESS = \"in-progress\"\n PARTNERSHIP_STATUS_YES = \"yes\"\n\n MEMBERSHIP_TYPES = (\n (PARTNERSHIP_STATUS_NONE, \"None\"),\n (PARTNERSHIP_STATUS_NO, \"No\"),\n (PARTNERSHIP_STATUS_IN_PROGRESS, \"In Progress\"),\n (PARTNERSHIP_STATUS_YES, \"Yes\"),\n )\n partnership_status = models.CharField(\n max_length=32, choices=MEMBERSHIP_TYPES, blank=True, default=PARTNERSHIP_STATUS_NONE\n )\n profile_image = models.URLField(blank=True, default=\"\")\n profile_image_description = models.TextField(blank=True, default=\"\")\n center_coordinates = models.JSONField(blank=True, null=True)\n\n field_to_slug = \"name\"\n\n class Meta(SlugModel.Meta):\n rules_permissions = {\n \"change\": perm_rules.allowed_to_change_landscape,\n \"delete\": perm_rules.allowed_to_delete_landscape,\n }\n _unique_fields = [\"name\"]\n abstract = False\n\n def save(self, *args, **kwargs):\n dirty_fields = self.get_dirty_fields()\n if self.area_polygon and \"area_polygon\" in dirty_fields:\n area_scalar_m2 = calculate_geojson_feature_area(self.area_polygon)\n if area_scalar_m2 is not None:\n self.area_scalar_m2 = round(area_scalar_m2, 3)\n self.center_coordinates = calculate_geojson_centroid(self.area_polygon)\n\n with transaction.atomic():\n creating = not Landscape.objects.filter(pk=self.pk).exists()\n\n super().save(*args, **kwargs)\n\n if creating and self.created_by:\n group = Group(\n name=\"Group {}\".format(self.slug),\n description=\"\",\n created_by=self.created_by,\n )\n 
group.save()\n landscape_group = LandscapeGroup(\n group=group, landscape=self, is_default_landscape_group=True\n )\n landscape_group.save()\n\n def delete(self, *args, **kwargs):\n default_group = self.get_default_group()\n\n with transaction.atomic():\n ret = super().delete(*args, **kwargs)\n # default group should be deleted as well\n default_group.delete()\n\n return ret\n\n def get_default_group(self):\n \"\"\"\n A default Group in a Landscape is that Group where any\n individual (associated or not with other Groups) is added when\n associating directly with a Landscape.\n \"\"\"\n try:\n # associated_groups is the related_name defined on\n # LandscapeGroup relationship with Landscape. It returns a\n # queryset of LandscapeGroup\n landscape_group = self.associated_groups.get(is_default_landscape_group=True)\n except LandscapeGroup.DoesNotExist:\n logger.error(\n \"Landscape has no default group, but it must have\", extra={\"landscape_id\": self.pk}\n )\n return None\n\n return landscape_group.group\n\n def __str__(self):\n return self.name\n\n\nclass LandscapeDevelopmentStrategy(BaseModel):\n objectives = models.TextField(blank=True, default=\"\")\n opportunities = models.TextField(blank=True, default=\"\")\n problem_situtation = models.TextField(blank=True, default=\"\")\n intervention_strategy = models.TextField(blank=True, default=\"\")\n landscape = models.ForeignKey(\n Landscape, on_delete=models.CASCADE, related_name=\"associated_development_strategy\"\n )\n\n\nclass LandscapeGroup(BaseModel):\n \"\"\"\n This model represents the association between a Landscape and a Group on\n Terraso platform.\n \"\"\"\n\n landscape = models.ForeignKey(\n Landscape, on_delete=models.CASCADE, related_name=\"associated_groups\"\n )\n group = models.ForeignKey(Group, on_delete=models.CASCADE, related_name=\"associated_landscapes\")\n\n is_default_landscape_group = models.BooleanField(blank=True, default=False)\n is_partnership = models.BooleanField(blank=True, default=False)\n partnership_year = models.IntegerField(blank=True, null=True)\n\n class Meta:\n rules_permissions = {\n \"add\": perm_rules.allowed_to_add_landscape_group,\n \"delete\": perm_rules.allowed_to_delete_landscape_group,\n }\n constraints = (\n models.UniqueConstraint(\n fields=(\"group\", \"landscape\", \"is_partnership\"),\n condition=models.Q(deleted_at__isnull=True),\n name=\"unique_active_landscape_group\",\n ),\n )\n", "path": "terraso_backend/apps/core/models/landscapes.py"}], "after_files": [{"content": "# Copyright \u00a9 2021-2023 Technology Matters\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Affero General Public License as published\n# by the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Affero General Public License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program. 
If not, see https://www.gnu.org/licenses/.\n\nimport structlog\nfrom dirtyfields import DirtyFieldsMixin\nfrom django.db import models, transaction\n\nfrom apps.core import permission_rules as perm_rules\nfrom apps.core.gis.utils import (\n calculate_geojson_centroid,\n calculate_geojson_feature_area,\n)\nfrom apps.core.models.taxonomy_terms import TaxonomyTerm\n\nfrom .commons import BaseModel, SlugModel, validate_name\nfrom .groups import Group\nfrom .users import User\n\nlogger = structlog.get_logger(__name__)\n\n\nclass Landscape(SlugModel, DirtyFieldsMixin):\n \"\"\"\n This model represents a Landscape on Terraso platform.\n\n A Landscape is a socio-ecological system that consists of natural\n and/or human-modified ecosystems. Defined by its stakeholds, a\n Landscape usually has geographical boundaries. It may correspond to,\n or be a combination of, natural boundaries, distinct land features,\n socially defined areas such as indigenous territories, and/or\n jurisdictional and administrative boundaries. The boundaries of a\n Landscape can cross several countries.\n \"\"\"\n\n fields_to_trim = [\"name\", \"description\"]\n\n name = models.CharField(max_length=128, validators=[validate_name])\n description = models.TextField(blank=True, default=\"\")\n website = models.URLField(max_length=500, blank=True, default=\"\")\n location = models.CharField(max_length=128, blank=True, default=\"\")\n area_polygon = models.JSONField(blank=True, null=True)\n email = models.EmailField(blank=True, default=\"\")\n area_scalar_m2 = models.FloatField(blank=True, null=True)\n\n created_by = models.ForeignKey(\n User,\n blank=True,\n null=True,\n on_delete=models.PROTECT,\n related_name=\"created_landscapes\",\n )\n groups = models.ManyToManyField(Group, through=\"LandscapeGroup\")\n\n area_types = models.JSONField(blank=True, null=True)\n taxonomy_terms = models.ManyToManyField(TaxonomyTerm, blank=True)\n population = models.IntegerField(blank=True, null=True)\n\n PARTNERSHIP_STATUS_NONE = \"\"\n PARTNERSHIP_STATUS_NO = \"no\"\n PARTNERSHIP_STATUS_IN_PROGRESS = \"in-progress\"\n PARTNERSHIP_STATUS_YES = \"yes\"\n\n MEMBERSHIP_TYPES = (\n (PARTNERSHIP_STATUS_NONE, \"None\"),\n (PARTNERSHIP_STATUS_NO, \"No\"),\n (PARTNERSHIP_STATUS_IN_PROGRESS, \"In Progress\"),\n (PARTNERSHIP_STATUS_YES, \"Yes\"),\n )\n partnership_status = models.CharField(\n max_length=32, choices=MEMBERSHIP_TYPES, blank=True, default=PARTNERSHIP_STATUS_NONE\n )\n profile_image = models.URLField(blank=True, default=\"\")\n profile_image_description = models.TextField(blank=True, default=\"\")\n center_coordinates = models.JSONField(blank=True, null=True)\n\n field_to_slug = \"name\"\n\n class Meta(SlugModel.Meta):\n rules_permissions = {\n \"change\": perm_rules.allowed_to_change_landscape,\n \"delete\": perm_rules.allowed_to_delete_landscape,\n }\n _unique_fields = [\"name\"]\n abstract = False\n\n def save(self, *args, **kwargs):\n dirty_fields = self.get_dirty_fields()\n if self.area_polygon and \"area_polygon\" in dirty_fields:\n area_scalar_m2 = calculate_geojson_feature_area(self.area_polygon)\n if area_scalar_m2 is not None:\n self.area_scalar_m2 = round(area_scalar_m2, 3)\n self.center_coordinates = calculate_geojson_centroid(self.area_polygon)\n\n with transaction.atomic():\n creating = not Landscape.objects.filter(pk=self.pk).exists()\n\n super().save(*args, **kwargs)\n\n if creating and self.created_by:\n group = Group(\n name=\"Group {}\".format(self.slug),\n description=\"\",\n created_by=self.created_by,\n )\n 
group.save()\n landscape_group = LandscapeGroup(\n group=group, landscape=self, is_default_landscape_group=True\n )\n landscape_group.save()\n\n def delete(self, *args, **kwargs):\n default_group = self.get_default_group()\n\n with transaction.atomic():\n ret = super().delete(*args, **kwargs)\n # default group should be deleted as well\n if default_group is not None:\n default_group.delete()\n\n return ret\n\n def get_default_group(self):\n \"\"\"\n A default Group in a Landscape is that Group where any\n individual (associated or not with other Groups) is added when\n associating directly with a Landscape.\n \"\"\"\n try:\n # associated_groups is the related_name defined on\n # LandscapeGroup relationship with Landscape. It returns a\n # queryset of LandscapeGroup\n landscape_group = self.associated_groups.get(is_default_landscape_group=True)\n except LandscapeGroup.DoesNotExist:\n logger.error(\n \"Landscape has no default group, but it must have\", extra={\"landscape_id\": self.pk}\n )\n return None\n\n return landscape_group.group\n\n def __str__(self):\n return self.name\n\n\nclass LandscapeDevelopmentStrategy(BaseModel):\n objectives = models.TextField(blank=True, default=\"\")\n opportunities = models.TextField(blank=True, default=\"\")\n problem_situtation = models.TextField(blank=True, default=\"\")\n intervention_strategy = models.TextField(blank=True, default=\"\")\n landscape = models.ForeignKey(\n Landscape, on_delete=models.CASCADE, related_name=\"associated_development_strategy\"\n )\n\n\nclass LandscapeGroup(BaseModel):\n \"\"\"\n This model represents the association between a Landscape and a Group on\n Terraso platform.\n \"\"\"\n\n landscape = models.ForeignKey(\n Landscape, on_delete=models.CASCADE, related_name=\"associated_groups\"\n )\n group = models.ForeignKey(Group, on_delete=models.CASCADE, related_name=\"associated_landscapes\")\n\n is_default_landscape_group = models.BooleanField(blank=True, default=False)\n is_partnership = models.BooleanField(blank=True, default=False)\n partnership_year = models.IntegerField(blank=True, null=True)\n\n class Meta:\n rules_permissions = {\n \"add\": perm_rules.allowed_to_add_landscape_group,\n \"delete\": perm_rules.allowed_to_delete_landscape_group,\n }\n constraints = (\n models.UniqueConstraint(\n fields=(\"group\", \"landscape\", \"is_partnership\"),\n condition=models.Q(deleted_at__isnull=True),\n name=\"unique_active_landscape_group\",\n ),\n )\n", "path": "terraso_backend/apps/core/models/landscapes.py"}]}
| 3,005 | 131 |
gh_patches_debug_22029
|
rasdani/github-patches
|
git_diff
|
microsoft__PubSec-Info-Assistant-447
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Chunks number not correctly reported in statuscontainer
**Describe the bug**
Chunks number for non-PDF files not correctly reported in statuscontainer
**To Reproduce**
Steps to reproduce the behavior:
1. Upload a non-PDF file
2. Wait for the chunking process to complete
3. Go to CosmosDB -> statuscontainer -> item relative to the uploaded non-PDF file
4. Check the "status" field which will display something like:
"FileLayoutParsingOther - chunking complete. <built-in method count of list object at 0x7713c4227440> chunks created"
**Expected behavior**
It should show a number instead of "<built-in method count of list object at 0x7713c4227440>"
**Desktop (please complete the following information):**
- OS: Windows 11
- Browser: Edge
- Version: 120.0.2210.121 (Official build) (64-bit)
**Alpha version details**
- GitHub branch: main
**Additional context**
Minor problem
probably line 166, functions/FileLayoutParsingOther/__init__.py
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `functions/FileLayoutParsingOther/__init__.py`
Content:
```
1 # Copyright (c) Microsoft Corporation.
2 # Licensed under the MIT license.
3
4 import logging
5 import os
6 import json
7 from enum import Enum
8 from io import BytesIO
9 import azure.functions as func
10 from azure.storage.blob import generate_blob_sas
11 from azure.storage.queue import QueueClient, TextBase64EncodePolicy
12 from shared_code.status_log import StatusLog, State, StatusClassification
13 from shared_code.utilities import Utilities, MediaType
14
15 import requests
16
17 azure_blob_storage_account = os.environ["BLOB_STORAGE_ACCOUNT"]
18 azure_blob_storage_endpoint = os.environ["BLOB_STORAGE_ACCOUNT_ENDPOINT"]
19 azure_blob_drop_storage_container = os.environ["BLOB_STORAGE_ACCOUNT_UPLOAD_CONTAINER_NAME"]
20 azure_blob_content_storage_container = os.environ["BLOB_STORAGE_ACCOUNT_OUTPUT_CONTAINER_NAME"]
21 azure_blob_storage_key = os.environ["AZURE_BLOB_STORAGE_KEY"]
22 azure_blob_connection_string = os.environ["BLOB_CONNECTION_STRING"]
23 azure_blob_log_storage_container = os.environ["BLOB_STORAGE_ACCOUNT_LOG_CONTAINER_NAME"]
24 cosmosdb_url = os.environ["COSMOSDB_URL"]
25 cosmosdb_key = os.environ["COSMOSDB_KEY"]
26 cosmosdb_log_database_name = os.environ["COSMOSDB_LOG_DATABASE_NAME"]
27 cosmosdb_log_container_name = os.environ["COSMOSDB_LOG_CONTAINER_NAME"]
28 non_pdf_submit_queue = os.environ["NON_PDF_SUBMIT_QUEUE"]
29 pdf_polling_queue = os.environ["PDF_POLLING_QUEUE"]
30 pdf_submit_queue = os.environ["PDF_SUBMIT_QUEUE"]
31 text_enrichment_queue = os.environ["TEXT_ENRICHMENT_QUEUE"]
32 CHUNK_TARGET_SIZE = int(os.environ["CHUNK_TARGET_SIZE"])
33
34 NEW_AFTER_N_CHARS = 1500
35 COMBINE_UNDER_N_CHARS = 500
36 MAX_CHARACTERS = 1500
37
38
39 utilities = Utilities(azure_blob_storage_account, azure_blob_storage_endpoint, azure_blob_drop_storage_container, azure_blob_content_storage_container, azure_blob_storage_key)
40 function_name = "FileLayoutParsingOther"
41
42 class UnstructuredError(Exception):
43 pass
44
45 def PartitionFile(file_extension: str, file_url: str):
46 """ uses the unstructured.io libraries to analyse a document
47 Returns:
48 elements: A list of available models
49 """
50 # Send a GET request to the URL to download the file
51 response = requests.get(file_url)
52 bytes_io = BytesIO(response.content)
53 response.close()
54 metadata = []
55 elements = None
56 try:
57 if file_extension == '.csv':
58 from unstructured.partition.csv import partition_csv
59 elements = partition_csv(file=bytes_io)
60
61 elif file_extension == '.doc':
62 from unstructured.partition.doc import partition_doc
63 elements = partition_doc(file=bytes_io)
64
65 elif file_extension == '.docx':
66 from unstructured.partition.docx import partition_docx
67 elements = partition_docx(file=bytes_io)
68
69 elif file_extension == '.eml' or file_extension == '.msg':
70 if file_extension == '.msg':
71 from unstructured.partition.msg import partition_msg
72 elements = partition_msg(file=bytes_io)
73 else:
74 from unstructured.partition.email import partition_email
75 elements = partition_email(file=bytes_io)
76 metadata.append(f'Subject: {elements[0].metadata.subject}')
77 metadata.append(f'From: {elements[0].metadata.sent_from[0]}')
78 sent_to_str = 'To: '
79 for sent_to in elements[0].metadata.sent_to:
80 sent_to_str = sent_to_str + " " + sent_to
81 metadata.append(sent_to_str)
82
83 elif file_extension == '.html' or file_extension == '.htm':
84 from unstructured.partition.html import partition_html
85 elements = partition_html(file=bytes_io)
86
87 elif file_extension == '.md':
88 from unstructured.partition.md import partition_md
89 elements = partition_md(file=bytes_io)
90
91 elif file_extension == '.ppt':
92 from unstructured.partition.ppt import partition_ppt
93 elements = partition_ppt(file=bytes_io)
94
95 elif file_extension == '.pptx':
96 from unstructured.partition.pptx import partition_pptx
97 elements = partition_pptx(file=bytes_io)
98
99 elif any(file_extension in x for x in ['.txt', '.json']):
100 from unstructured.partition.text import partition_text
101 elements = partition_text(file=bytes_io)
102
103 elif file_extension == '.xlsx':
104 from unstructured.partition.xlsx import partition_xlsx
105 elements = partition_xlsx(file=bytes_io)
106
107 elif file_extension == '.xml':
108 from unstructured.partition.xml import partition_xml
109 elements = partition_xml(file=bytes_io)
110
111 except Exception as e:
112 raise UnstructuredError(f"An error occurred trying to parse the file: {str(e)}") from e
113
114 return elements, metadata
115
116
117
118 def main(msg: func.QueueMessage) -> None:
119 try:
120 statusLog = StatusLog(cosmosdb_url, cosmosdb_key, cosmosdb_log_database_name, cosmosdb_log_container_name)
121 logging.info('Python queue trigger function processed a queue item: %s',
122 msg.get_body().decode('utf-8'))
123
124 # Receive message from the queue
125 message_body = msg.get_body().decode('utf-8')
126 message_json = json.loads(message_body)
127 blob_name = message_json['blob_name']
128 blob_uri = message_json['blob_uri']
129 statusLog.upsert_document(blob_name, f'{function_name} - Starting to parse the non-PDF file', StatusClassification.INFO, State.PROCESSING)
130 statusLog.upsert_document(blob_name, f'{function_name} - Message received from non-pdf submit queue', StatusClassification.DEBUG)
131
132 # construct blob url
133 blob_path_plus_sas = utilities.get_blob_and_sas(blob_name)
134 statusLog.upsert_document(blob_name, f'{function_name} - SAS token generated to access the file', StatusClassification.DEBUG)
135
136 file_name, file_extension, file_directory = utilities.get_filename_and_extension(blob_name)
137
138 response = requests.get(blob_path_plus_sas)
139 response.raise_for_status()
140
141
142 # Partition the file dependent on file extension
143 elements, metadata = PartitionFile(file_extension, blob_path_plus_sas)
144 metdata_text = ''
145 for metadata_value in metadata:
146 metdata_text += metadata_value + '\n'
147 statusLog.upsert_document(blob_name, f'{function_name} - partitioning complete', StatusClassification.DEBUG)
148
149 title = ''
150 # Capture the file title
151 try:
152 for i, element in enumerate(elements):
153 if title == '' and element.category == 'Title':
154 # capture the first title
155 title = element.text
156 break
157 except:
158 # if this type of eleemnt does not include title, then process with emty value
159 pass
160
161 # Chunk the file
162 from unstructured.chunking.title import chunk_by_title
163 # chunks = chunk_by_title(elements, multipage_sections=True, new_after_n_chars=NEW_AFTER_N_CHARS, combine_under_n_chars=COMBINE_UNDER_N_CHARS)
164 # chunks = chunk_by_title(elements, multipage_sections=True, new_after_n_chars=NEW_AFTER_N_CHARS, combine_under_n_chars=COMBINE_UNDER_N_CHARS, max_characters=MAX_CHARACTERS)
165 chunks = chunk_by_title(elements, multipage_sections=True, new_after_n_chars=NEW_AFTER_N_CHARS, combine_text_under_n_chars=COMBINE_UNDER_N_CHARS)
166 statusLog.upsert_document(blob_name, f'{function_name} - chunking complete. {str(chunks.count)} chunks created', StatusClassification.DEBUG)
167
168 subtitle_name = ''
169 section_name = ''
170 # Complete and write chunks
171 for i, chunk in enumerate(chunks):
172 if chunk.metadata.page_number == None:
173 page_list = [1]
174 else:
175 page_list = [chunk.metadata.page_number]
176 # substitute html if text is a table
177 if chunk.category == 'Table':
178 chunk_text = chunk.metadata.text_as_html
179 else:
180 chunk_text = chunk.text
181 # add filetype specific metadata as chunk text header
182 chunk_text = metdata_text + chunk_text
183 utilities.write_chunk(blob_name, blob_uri,
184 f"{i}",
185 utilities.token_count(chunk.text),
186 chunk_text, page_list,
187 section_name, title, subtitle_name,
188 MediaType.TEXT
189 )
190
191 statusLog.upsert_document(blob_name, f'{function_name} - chunking stored.', StatusClassification.DEBUG)
192
193 # submit message to the text enrichment queue to continue processing
194 queue_client = QueueClient.from_connection_string(azure_blob_connection_string, queue_name=text_enrichment_queue, message_encode_policy=TextBase64EncodePolicy())
195 message_json["text_enrichment_queued_count"] = 1
196 message_string = json.dumps(message_json)
197 queue_client.send_message(message_string)
198 statusLog.upsert_document(blob_name, f"{function_name} - message sent to enrichment queue", StatusClassification.DEBUG, State.QUEUED)
199
200 except Exception as e:
201 statusLog.upsert_document(blob_name, f"{function_name} - An error occurred - {str(e)}", StatusClassification.ERROR, State.ERROR)
202
203 statusLog.save_document(blob_name)
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/functions/FileLayoutParsingOther/__init__.py b/functions/FileLayoutParsingOther/__init__.py
--- a/functions/FileLayoutParsingOther/__init__.py
+++ b/functions/FileLayoutParsingOther/__init__.py
@@ -163,7 +163,7 @@
# chunks = chunk_by_title(elements, multipage_sections=True, new_after_n_chars=NEW_AFTER_N_CHARS, combine_under_n_chars=COMBINE_UNDER_N_CHARS)
# chunks = chunk_by_title(elements, multipage_sections=True, new_after_n_chars=NEW_AFTER_N_CHARS, combine_under_n_chars=COMBINE_UNDER_N_CHARS, max_characters=MAX_CHARACTERS)
chunks = chunk_by_title(elements, multipage_sections=True, new_after_n_chars=NEW_AFTER_N_CHARS, combine_text_under_n_chars=COMBINE_UNDER_N_CHARS)
- statusLog.upsert_document(blob_name, f'{function_name} - chunking complete. {str(chunks.count)} chunks created', StatusClassification.DEBUG)
+ statusLog.upsert_document(blob_name, f'{function_name} - chunking complete. {len(chunks)} chunks created', StatusClassification.DEBUG)
subtitle_name = ''
section_name = ''
|
{"golden_diff": "diff --git a/functions/FileLayoutParsingOther/__init__.py b/functions/FileLayoutParsingOther/__init__.py\n--- a/functions/FileLayoutParsingOther/__init__.py\n+++ b/functions/FileLayoutParsingOther/__init__.py\n@@ -163,7 +163,7 @@\n # chunks = chunk_by_title(elements, multipage_sections=True, new_after_n_chars=NEW_AFTER_N_CHARS, combine_under_n_chars=COMBINE_UNDER_N_CHARS)\r\n # chunks = chunk_by_title(elements, multipage_sections=True, new_after_n_chars=NEW_AFTER_N_CHARS, combine_under_n_chars=COMBINE_UNDER_N_CHARS, max_characters=MAX_CHARACTERS) \r\n chunks = chunk_by_title(elements, multipage_sections=True, new_after_n_chars=NEW_AFTER_N_CHARS, combine_text_under_n_chars=COMBINE_UNDER_N_CHARS)\r\n- statusLog.upsert_document(blob_name, f'{function_name} - chunking complete. {str(chunks.count)} chunks created', StatusClassification.DEBUG)\r\n+ statusLog.upsert_document(blob_name, f'{function_name} - chunking complete. {len(chunks)} chunks created', StatusClassification.DEBUG)\r\n \r\n subtitle_name = ''\r\n section_name = ''\n", "issue": "Chunks number not correctly reported in statuscontainer\n**Describe the bug**\r\nChunks number for non-PDF files not correctly reported in statuscontainer\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Upload a non-PDF file\r\n2. Wait for the chunking process to complete\r\n3. Go to CosmosDB -> statuscontainer -> item relative to the uploaded non-PDF file\r\n4. Check the \"status\" field which will display something like: \r\n\"FileLayoutParsingOther - chunking complete. <built-in method count of list object at 0x7713c4227440> chunks created\" \r\n\r\n**Expected behavior**\r\nIt should show a number instead of \"<built-in method count of list object at 0x7713c4227440>\"\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: Windows 11\r\n - Browser: Edge\r\n - Version: 120.0.2210.121 (Official build) (64-bit)\r\n\r\n**Alpha version details**\r\n - GitHub branch: main\r\n\r\n**Additional context**\r\nMinor problem\r\nprobably line 166, functions/FileLayoutParsingOther/__init__.py\r\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation.\r\n# Licensed under the MIT license.\r\n\r\nimport logging\r\nimport os\r\nimport json\r\nfrom enum import Enum\r\nfrom io import BytesIO\r\nimport azure.functions as func\r\nfrom azure.storage.blob import generate_blob_sas\r\nfrom azure.storage.queue import QueueClient, TextBase64EncodePolicy\r\nfrom shared_code.status_log import StatusLog, State, StatusClassification\r\nfrom shared_code.utilities import Utilities, MediaType\r\n\r\nimport requests\r\n\r\nazure_blob_storage_account = os.environ[\"BLOB_STORAGE_ACCOUNT\"]\r\nazure_blob_storage_endpoint = os.environ[\"BLOB_STORAGE_ACCOUNT_ENDPOINT\"]\r\nazure_blob_drop_storage_container = os.environ[\"BLOB_STORAGE_ACCOUNT_UPLOAD_CONTAINER_NAME\"]\r\nazure_blob_content_storage_container = os.environ[\"BLOB_STORAGE_ACCOUNT_OUTPUT_CONTAINER_NAME\"]\r\nazure_blob_storage_key = os.environ[\"AZURE_BLOB_STORAGE_KEY\"]\r\nazure_blob_connection_string = os.environ[\"BLOB_CONNECTION_STRING\"]\r\nazure_blob_log_storage_container = os.environ[\"BLOB_STORAGE_ACCOUNT_LOG_CONTAINER_NAME\"]\r\ncosmosdb_url = os.environ[\"COSMOSDB_URL\"]\r\ncosmosdb_key = os.environ[\"COSMOSDB_KEY\"]\r\ncosmosdb_log_database_name = os.environ[\"COSMOSDB_LOG_DATABASE_NAME\"]\r\ncosmosdb_log_container_name = os.environ[\"COSMOSDB_LOG_CONTAINER_NAME\"]\r\nnon_pdf_submit_queue = os.environ[\"NON_PDF_SUBMIT_QUEUE\"]\r\npdf_polling_queue = 
os.environ[\"PDF_POLLING_QUEUE\"]\r\npdf_submit_queue = os.environ[\"PDF_SUBMIT_QUEUE\"]\r\ntext_enrichment_queue = os.environ[\"TEXT_ENRICHMENT_QUEUE\"]\r\nCHUNK_TARGET_SIZE = int(os.environ[\"CHUNK_TARGET_SIZE\"])\r\n\r\nNEW_AFTER_N_CHARS = 1500\r\nCOMBINE_UNDER_N_CHARS = 500\r\nMAX_CHARACTERS = 1500\r\n\r\n\r\nutilities = Utilities(azure_blob_storage_account, azure_blob_storage_endpoint, azure_blob_drop_storage_container, azure_blob_content_storage_container, azure_blob_storage_key)\r\nfunction_name = \"FileLayoutParsingOther\"\r\n\r\nclass UnstructuredError(Exception):\r\n pass\r\n\r\ndef PartitionFile(file_extension: str, file_url: str): \r\n \"\"\" uses the unstructured.io libraries to analyse a document\r\n Returns:\r\n elements: A list of available models\r\n \"\"\" \r\n # Send a GET request to the URL to download the file\r\n response = requests.get(file_url)\r\n bytes_io = BytesIO(response.content)\r\n response.close() \r\n metadata = [] \r\n elements = None\r\n try: \r\n if file_extension == '.csv':\r\n from unstructured.partition.csv import partition_csv\r\n elements = partition_csv(file=bytes_io) \r\n \r\n elif file_extension == '.doc':\r\n from unstructured.partition.doc import partition_doc\r\n elements = partition_doc(file=bytes_io) \r\n \r\n elif file_extension == '.docx':\r\n from unstructured.partition.docx import partition_docx\r\n elements = partition_docx(file=bytes_io)\r\n \r\n elif file_extension == '.eml' or file_extension == '.msg':\r\n if file_extension == '.msg':\r\n from unstructured.partition.msg import partition_msg\r\n elements = partition_msg(file=bytes_io) \r\n else: \r\n from unstructured.partition.email import partition_email\r\n elements = partition_email(file=bytes_io)\r\n metadata.append(f'Subject: {elements[0].metadata.subject}')\r\n metadata.append(f'From: {elements[0].metadata.sent_from[0]}')\r\n sent_to_str = 'To: '\r\n for sent_to in elements[0].metadata.sent_to:\r\n sent_to_str = sent_to_str + \" \" + sent_to\r\n metadata.append(sent_to_str)\r\n \r\n elif file_extension == '.html' or file_extension == '.htm': \r\n from unstructured.partition.html import partition_html\r\n elements = partition_html(file=bytes_io) \r\n \r\n elif file_extension == '.md':\r\n from unstructured.partition.md import partition_md\r\n elements = partition_md(file=bytes_io)\r\n \r\n elif file_extension == '.ppt':\r\n from unstructured.partition.ppt import partition_ppt\r\n elements = partition_ppt(file=bytes_io)\r\n \r\n elif file_extension == '.pptx': \r\n from unstructured.partition.pptx import partition_pptx\r\n elements = partition_pptx(file=bytes_io)\r\n \r\n elif any(file_extension in x for x in ['.txt', '.json']):\r\n from unstructured.partition.text import partition_text\r\n elements = partition_text(file=bytes_io)\r\n \r\n elif file_extension == '.xlsx':\r\n from unstructured.partition.xlsx import partition_xlsx\r\n elements = partition_xlsx(file=bytes_io)\r\n \r\n elif file_extension == '.xml':\r\n from unstructured.partition.xml import partition_xml\r\n elements = partition_xml(file=bytes_io)\r\n \r\n except Exception as e:\r\n raise UnstructuredError(f\"An error occurred trying to parse the file: {str(e)}\") from e\r\n \r\n return elements, metadata\r\n \r\n \r\n\r\ndef main(msg: func.QueueMessage) -> None:\r\n try:\r\n statusLog = StatusLog(cosmosdb_url, cosmosdb_key, cosmosdb_log_database_name, cosmosdb_log_container_name)\r\n logging.info('Python queue trigger function processed a queue item: %s',\r\n msg.get_body().decode('utf-8'))\r\n\r\n # Receive 
message from the queue\r\n message_body = msg.get_body().decode('utf-8')\r\n message_json = json.loads(message_body)\r\n blob_name = message_json['blob_name']\r\n blob_uri = message_json['blob_uri']\r\n statusLog.upsert_document(blob_name, f'{function_name} - Starting to parse the non-PDF file', StatusClassification.INFO, State.PROCESSING)\r\n statusLog.upsert_document(blob_name, f'{function_name} - Message received from non-pdf submit queue', StatusClassification.DEBUG)\r\n\r\n # construct blob url\r\n blob_path_plus_sas = utilities.get_blob_and_sas(blob_name)\r\n statusLog.upsert_document(blob_name, f'{function_name} - SAS token generated to access the file', StatusClassification.DEBUG)\r\n\r\n file_name, file_extension, file_directory = utilities.get_filename_and_extension(blob_name)\r\n\r\n response = requests.get(blob_path_plus_sas)\r\n response.raise_for_status()\r\n \r\n \r\n # Partition the file dependent on file extension\r\n elements, metadata = PartitionFile(file_extension, blob_path_plus_sas)\r\n metdata_text = ''\r\n for metadata_value in metadata:\r\n metdata_text += metadata_value + '\\n' \r\n statusLog.upsert_document(blob_name, f'{function_name} - partitioning complete', StatusClassification.DEBUG)\r\n \r\n title = ''\r\n # Capture the file title\r\n try:\r\n for i, element in enumerate(elements):\r\n if title == '' and element.category == 'Title':\r\n # capture the first title\r\n title = element.text\r\n break\r\n except:\r\n # if this type of eleemnt does not include title, then process with emty value\r\n pass\r\n \r\n # Chunk the file \r\n from unstructured.chunking.title import chunk_by_title\r\n # chunks = chunk_by_title(elements, multipage_sections=True, new_after_n_chars=NEW_AFTER_N_CHARS, combine_under_n_chars=COMBINE_UNDER_N_CHARS)\r\n # chunks = chunk_by_title(elements, multipage_sections=True, new_after_n_chars=NEW_AFTER_N_CHARS, combine_under_n_chars=COMBINE_UNDER_N_CHARS, max_characters=MAX_CHARACTERS) \r\n chunks = chunk_by_title(elements, multipage_sections=True, new_after_n_chars=NEW_AFTER_N_CHARS, combine_text_under_n_chars=COMBINE_UNDER_N_CHARS)\r\n statusLog.upsert_document(blob_name, f'{function_name} - chunking complete. 
{str(chunks.count)} chunks created', StatusClassification.DEBUG)\r\n \r\n subtitle_name = ''\r\n section_name = ''\r\n # Complete and write chunks\r\n for i, chunk in enumerate(chunks): \r\n if chunk.metadata.page_number == None:\r\n page_list = [1]\r\n else:\r\n page_list = [chunk.metadata.page_number] \r\n # substitute html if text is a table \r\n if chunk.category == 'Table':\r\n chunk_text = chunk.metadata.text_as_html\r\n else:\r\n chunk_text = chunk.text\r\n # add filetype specific metadata as chunk text header\r\n chunk_text = metdata_text + chunk_text \r\n utilities.write_chunk(blob_name, blob_uri,\r\n f\"{i}\",\r\n utilities.token_count(chunk.text),\r\n chunk_text, page_list,\r\n section_name, title, subtitle_name,\r\n MediaType.TEXT\r\n )\r\n \r\n statusLog.upsert_document(blob_name, f'{function_name} - chunking stored.', StatusClassification.DEBUG) \r\n \r\n # submit message to the text enrichment queue to continue processing \r\n queue_client = QueueClient.from_connection_string(azure_blob_connection_string, queue_name=text_enrichment_queue, message_encode_policy=TextBase64EncodePolicy())\r\n message_json[\"text_enrichment_queued_count\"] = 1\r\n message_string = json.dumps(message_json)\r\n queue_client.send_message(message_string)\r\n statusLog.upsert_document(blob_name, f\"{function_name} - message sent to enrichment queue\", StatusClassification.DEBUG, State.QUEUED) \r\n \r\n except Exception as e:\r\n statusLog.upsert_document(blob_name, f\"{function_name} - An error occurred - {str(e)}\", StatusClassification.ERROR, State.ERROR)\r\n\r\n statusLog.save_document(blob_name)", "path": "functions/FileLayoutParsingOther/__init__.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation.\r\n# Licensed under the MIT license.\r\n\r\nimport logging\r\nimport os\r\nimport json\r\nfrom enum import Enum\r\nfrom io import BytesIO\r\nimport azure.functions as func\r\nfrom azure.storage.blob import generate_blob_sas\r\nfrom azure.storage.queue import QueueClient, TextBase64EncodePolicy\r\nfrom shared_code.status_log import StatusLog, State, StatusClassification\r\nfrom shared_code.utilities import Utilities, MediaType\r\n\r\nimport requests\r\n\r\nazure_blob_storage_account = os.environ[\"BLOB_STORAGE_ACCOUNT\"]\r\nazure_blob_storage_endpoint = os.environ[\"BLOB_STORAGE_ACCOUNT_ENDPOINT\"]\r\nazure_blob_drop_storage_container = os.environ[\"BLOB_STORAGE_ACCOUNT_UPLOAD_CONTAINER_NAME\"]\r\nazure_blob_content_storage_container = os.environ[\"BLOB_STORAGE_ACCOUNT_OUTPUT_CONTAINER_NAME\"]\r\nazure_blob_storage_key = os.environ[\"AZURE_BLOB_STORAGE_KEY\"]\r\nazure_blob_connection_string = os.environ[\"BLOB_CONNECTION_STRING\"]\r\nazure_blob_log_storage_container = os.environ[\"BLOB_STORAGE_ACCOUNT_LOG_CONTAINER_NAME\"]\r\ncosmosdb_url = os.environ[\"COSMOSDB_URL\"]\r\ncosmosdb_key = os.environ[\"COSMOSDB_KEY\"]\r\ncosmosdb_log_database_name = os.environ[\"COSMOSDB_LOG_DATABASE_NAME\"]\r\ncosmosdb_log_container_name = os.environ[\"COSMOSDB_LOG_CONTAINER_NAME\"]\r\nnon_pdf_submit_queue = os.environ[\"NON_PDF_SUBMIT_QUEUE\"]\r\npdf_polling_queue = os.environ[\"PDF_POLLING_QUEUE\"]\r\npdf_submit_queue = os.environ[\"PDF_SUBMIT_QUEUE\"]\r\ntext_enrichment_queue = os.environ[\"TEXT_ENRICHMENT_QUEUE\"]\r\nCHUNK_TARGET_SIZE = int(os.environ[\"CHUNK_TARGET_SIZE\"])\r\n\r\nNEW_AFTER_N_CHARS = 1500\r\nCOMBINE_UNDER_N_CHARS = 500\r\nMAX_CHARACTERS = 1500\r\n\r\n\r\nutilities = Utilities(azure_blob_storage_account, azure_blob_storage_endpoint, azure_blob_drop_storage_container, 
azure_blob_content_storage_container, azure_blob_storage_key)\r\nfunction_name = \"FileLayoutParsingOther\"\r\n\r\nclass UnstructuredError(Exception):\r\n pass\r\n\r\ndef PartitionFile(file_extension: str, file_url: str): \r\n \"\"\" uses the unstructured.io libraries to analyse a document\r\n Returns:\r\n elements: A list of available models\r\n \"\"\" \r\n # Send a GET request to the URL to download the file\r\n response = requests.get(file_url)\r\n bytes_io = BytesIO(response.content)\r\n response.close() \r\n metadata = [] \r\n elements = None\r\n try: \r\n if file_extension == '.csv':\r\n from unstructured.partition.csv import partition_csv\r\n elements = partition_csv(file=bytes_io) \r\n \r\n elif file_extension == '.doc':\r\n from unstructured.partition.doc import partition_doc\r\n elements = partition_doc(file=bytes_io) \r\n \r\n elif file_extension == '.docx':\r\n from unstructured.partition.docx import partition_docx\r\n elements = partition_docx(file=bytes_io)\r\n \r\n elif file_extension == '.eml' or file_extension == '.msg':\r\n if file_extension == '.msg':\r\n from unstructured.partition.msg import partition_msg\r\n elements = partition_msg(file=bytes_io) \r\n else: \r\n from unstructured.partition.email import partition_email\r\n elements = partition_email(file=bytes_io)\r\n metadata.append(f'Subject: {elements[0].metadata.subject}')\r\n metadata.append(f'From: {elements[0].metadata.sent_from[0]}')\r\n sent_to_str = 'To: '\r\n for sent_to in elements[0].metadata.sent_to:\r\n sent_to_str = sent_to_str + \" \" + sent_to\r\n metadata.append(sent_to_str)\r\n \r\n elif file_extension == '.html' or file_extension == '.htm': \r\n from unstructured.partition.html import partition_html\r\n elements = partition_html(file=bytes_io) \r\n \r\n elif file_extension == '.md':\r\n from unstructured.partition.md import partition_md\r\n elements = partition_md(file=bytes_io)\r\n \r\n elif file_extension == '.ppt':\r\n from unstructured.partition.ppt import partition_ppt\r\n elements = partition_ppt(file=bytes_io)\r\n \r\n elif file_extension == '.pptx': \r\n from unstructured.partition.pptx import partition_pptx\r\n elements = partition_pptx(file=bytes_io)\r\n \r\n elif any(file_extension in x for x in ['.txt', '.json']):\r\n from unstructured.partition.text import partition_text\r\n elements = partition_text(file=bytes_io)\r\n \r\n elif file_extension == '.xlsx':\r\n from unstructured.partition.xlsx import partition_xlsx\r\n elements = partition_xlsx(file=bytes_io)\r\n \r\n elif file_extension == '.xml':\r\n from unstructured.partition.xml import partition_xml\r\n elements = partition_xml(file=bytes_io)\r\n \r\n except Exception as e:\r\n raise UnstructuredError(f\"An error occurred trying to parse the file: {str(e)}\") from e\r\n \r\n return elements, metadata\r\n \r\n \r\n\r\ndef main(msg: func.QueueMessage) -> None:\r\n try:\r\n statusLog = StatusLog(cosmosdb_url, cosmosdb_key, cosmosdb_log_database_name, cosmosdb_log_container_name)\r\n logging.info('Python queue trigger function processed a queue item: %s',\r\n msg.get_body().decode('utf-8'))\r\n\r\n # Receive message from the queue\r\n message_body = msg.get_body().decode('utf-8')\r\n message_json = json.loads(message_body)\r\n blob_name = message_json['blob_name']\r\n blob_uri = message_json['blob_uri']\r\n statusLog.upsert_document(blob_name, f'{function_name} - Starting to parse the non-PDF file', StatusClassification.INFO, State.PROCESSING)\r\n statusLog.upsert_document(blob_name, f'{function_name} - Message received from non-pdf 
submit queue', StatusClassification.DEBUG)\r\n\r\n # construct blob url\r\n blob_path_plus_sas = utilities.get_blob_and_sas(blob_name)\r\n statusLog.upsert_document(blob_name, f'{function_name} - SAS token generated to access the file', StatusClassification.DEBUG)\r\n\r\n file_name, file_extension, file_directory = utilities.get_filename_and_extension(blob_name)\r\n\r\n response = requests.get(blob_path_plus_sas)\r\n response.raise_for_status()\r\n \r\n \r\n # Partition the file dependent on file extension\r\n elements, metadata = PartitionFile(file_extension, blob_path_plus_sas)\r\n metdata_text = ''\r\n for metadata_value in metadata:\r\n metdata_text += metadata_value + '\\n' \r\n statusLog.upsert_document(blob_name, f'{function_name} - partitioning complete', StatusClassification.DEBUG)\r\n \r\n title = ''\r\n # Capture the file title\r\n try:\r\n for i, element in enumerate(elements):\r\n if title == '' and element.category == 'Title':\r\n # capture the first title\r\n title = element.text\r\n break\r\n except:\r\n # if this type of eleemnt does not include title, then process with emty value\r\n pass\r\n \r\n # Chunk the file \r\n from unstructured.chunking.title import chunk_by_title\r\n # chunks = chunk_by_title(elements, multipage_sections=True, new_after_n_chars=NEW_AFTER_N_CHARS, combine_under_n_chars=COMBINE_UNDER_N_CHARS)\r\n # chunks = chunk_by_title(elements, multipage_sections=True, new_after_n_chars=NEW_AFTER_N_CHARS, combine_under_n_chars=COMBINE_UNDER_N_CHARS, max_characters=MAX_CHARACTERS) \r\n chunks = chunk_by_title(elements, multipage_sections=True, new_after_n_chars=NEW_AFTER_N_CHARS, combine_text_under_n_chars=COMBINE_UNDER_N_CHARS)\r\n statusLog.upsert_document(blob_name, f'{function_name} - chunking complete. {len(chunks)} chunks created', StatusClassification.DEBUG)\r\n \r\n subtitle_name = ''\r\n section_name = ''\r\n # Complete and write chunks\r\n for i, chunk in enumerate(chunks): \r\n if chunk.metadata.page_number == None:\r\n page_list = [1]\r\n else:\r\n page_list = [chunk.metadata.page_number] \r\n # substitute html if text is a table \r\n if chunk.category == 'Table':\r\n chunk_text = chunk.metadata.text_as_html\r\n else:\r\n chunk_text = chunk.text\r\n # add filetype specific metadata as chunk text header\r\n chunk_text = metdata_text + chunk_text \r\n utilities.write_chunk(blob_name, blob_uri,\r\n f\"{i}\",\r\n utilities.token_count(chunk.text),\r\n chunk_text, page_list,\r\n section_name, title, subtitle_name,\r\n MediaType.TEXT\r\n )\r\n \r\n statusLog.upsert_document(blob_name, f'{function_name} - chunking stored.', StatusClassification.DEBUG) \r\n \r\n # submit message to the text enrichment queue to continue processing \r\n queue_client = QueueClient.from_connection_string(azure_blob_connection_string, queue_name=text_enrichment_queue, message_encode_policy=TextBase64EncodePolicy())\r\n message_json[\"text_enrichment_queued_count\"] = 1\r\n message_string = json.dumps(message_json)\r\n queue_client.send_message(message_string)\r\n statusLog.upsert_document(blob_name, f\"{function_name} - message sent to enrichment queue\", StatusClassification.DEBUG, State.QUEUED) \r\n \r\n except Exception as e:\r\n statusLog.upsert_document(blob_name, f\"{function_name} - An error occurred - {str(e)}\", StatusClassification.ERROR, State.ERROR)\r\n\r\n statusLog.save_document(blob_name)", "path": "functions/FileLayoutParsingOther/__init__.py"}]}
| 3,002 | 256 |
gh_patches_debug_8335
|
rasdani/github-patches
|
git_diff
|
cupy__cupy-522
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
No validation for kernel name
Although `ElementwiseKernel`'s `name` argument is used directly as the function name in the generated CUDA code, there is no validation step that raises an exception when invalid characters are used in `name`.
That causes a CUDA compile error, which is a bit difficult to debug.
--- END ISSUE ---
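
For illustration, a minimal sketch of the kind of check the issue is asking for: rejecting names that are not valid C identifiers before the generated source ever reaches the CUDA compiler. The regular expression and the `check_kernel_name` helper are assumptions made for this sketch, not CuPy's actual API.

```python
import re

# Assumption for this sketch: a kernel name must be a plain C identifier
# (letters, digits and underscores, not starting with a digit).
_IDENTIFIER_RE = re.compile(r'^[a-zA-Z_][a-zA-Z_0-9]*$')


def is_valid_kernel_name(name):
    return _IDENTIFIER_RE.match(name) is not None


def check_kernel_name(name):
    # Fail fast with a readable message instead of letting the CUDA
    # compiler report an obscure error about the generated source.
    if not is_valid_kernel_name(name):
        raise ValueError(
            'invalid kernel name %r: only [a-zA-Z_][a-zA-Z_0-9]* is allowed'
            % (name,))
```

With such a check, a kernel created with, say, `name='my-kernel'` would fail at construction time with a clear message instead of during NVRTC compilation.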
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cupy/cuda/compiler.py`
Content:
```
1 import hashlib
2 import math
3 import os
4 import shutil
5 import sys
6 import tempfile
7
8 import six
9
10 from cupy.cuda import device
11 from cupy.cuda import function
12 from cupy.cuda import nvrtc
13
14 _nvrtc_version = None
15
16
17 def _get_nvrtc_version():
18 global _nvrtc_version
19 if _nvrtc_version is None:
20 _nvrtc_version = nvrtc.getVersion()
21
22 return _nvrtc_version
23
24
25 def _get_arch():
26 cc = device.Device().compute_capability
27 return 'compute_%s' % cc
28
29
30 class TemporaryDirectory(object):
31 def __enter__(self):
32 self.path = tempfile.mkdtemp()
33 return self.path
34
35 def __exit__(self, exc_type, exc_value, traceback):
36 if exc_value is not None:
37 return
38
39 for name in os.listdir(self.path):
40 os.unlink(os.path.join(self.path, name))
41 os.rmdir(self.path)
42
43
44 def _get_bool_env_variable(name, default):
45 val = os.environ.get(name)
46 if val is None or len(val) == 0:
47 return default
48 try:
49 return int(val) == 1
50 except ValueError:
51 return False
52
53
54 def compile_using_nvrtc(source, options=(), arch=None):
55 if not arch:
56 arch = _get_arch()
57
58 options += ('-arch={}'.format(arch),)
59
60 with TemporaryDirectory() as root_dir:
61 path = os.path.join(root_dir, 'kern')
62 cu_path = '%s.cu' % path
63
64 with open(cu_path, 'w') as cu_file:
65 cu_file.write(source)
66
67 prog = _NVRTCProgram(source, cu_path)
68 try:
69 ptx = prog.compile(options)
70 except CompileException as e:
71 dump = _get_bool_env_variable(
72 'CUPY_DUMP_CUDA_SOURCE_ON_ERROR', False)
73 if dump:
74 e.dump(sys.stderr)
75 raise
76
77 return ptx
78
79
80 def _preprocess(source, options=()):
81 prog = _NVRTCProgram(source, '')
82 try:
83 result = prog.compile(options)
84 except CompileException as e:
85 dump = _get_bool_env_variable(
86 'CUPY_DUMP_CUDA_SOURCE_ON_ERROR', False)
87 if dump:
88 e.dump(sys.stderr)
89 raise
90
91 assert isinstance(result, six.text_type)
92 return result
93
94
95 _default_cache_dir = os.path.expanduser('~/.cupy/kernel_cache')
96
97
98 def get_cache_dir():
99 return os.environ.get('CUPY_CACHE_DIR', _default_cache_dir)
100
101
102 _empty_file_preprocess_cache = {}
103
104
105 def compile_with_cache(source, options=(), arch=None, cache_dir=None,
106 extra_source=None):
107 # NVRTC does not use extra_source. extra_source is used for cache key.
108 global _empty_file_preprocess_cache
109 if cache_dir is None:
110 cache_dir = get_cache_dir()
111 if arch is None:
112 arch = _get_arch()
113
114 options += ('-ftz=true',)
115
116 env = (arch, options, _get_nvrtc_version())
117 base = _empty_file_preprocess_cache.get(env, None)
118 if base is None:
119 # This is checking of NVRTC compiler internal version
120 base = _empty_file_preprocess_cache[env] = _preprocess('', options)
121 key_src = '%s %s %s %s' % (env, base, source, extra_source)
122
123 key_src = key_src.encode('utf-8')
124 name = '%s_2.cubin' % hashlib.md5(key_src).hexdigest()
125
126 if not os.path.isdir(cache_dir):
127 try:
128 os.makedirs(cache_dir)
129 except OSError:
130 if not os.path.isdir(cache_dir):
131 raise
132
133 mod = function.Module()
134 # To handle conflicts in concurrent situation, we adopt lock-free method
135 # to avoid performance degradation.
136 path = os.path.join(cache_dir, name)
137 if os.path.exists(path):
138 with open(path, 'rb') as file:
139 data = file.read()
140 if len(data) >= 32:
141 hash = data[:32]
142 cubin = data[32:]
143 cubin_hash = six.b(hashlib.md5(cubin).hexdigest())
144 if hash == cubin_hash:
145 mod.load(cubin)
146 return mod
147
148 ptx = compile_using_nvrtc(source, options, arch)
149 ls = function.LinkState()
150 ls.add_ptr_data(ptx, six.u('cupy.ptx'))
151 cubin = ls.complete()
152 cubin_hash = six.b(hashlib.md5(cubin).hexdigest())
153
154 # shutil.move is not atomic operation, so it could result in a corrupted
155 # file. We detect it by appending md5 hash at the beginning of each cache
156 # file. If the file is corrupted, it will be ignored next time it is read.
157 with tempfile.NamedTemporaryFile(dir=cache_dir, delete=False) as tf:
158 tf.write(cubin_hash)
159 tf.write(cubin)
160 temp_path = tf.name
161 shutil.move(temp_path, path)
162
163 # Save .cu source file along with .cubin
164 if _get_bool_env_variable('CUPY_CACHE_SAVE_CUDA_SOURCE', False):
165 with open(path + '.cu', 'w') as f:
166 f.write(source)
167
168 mod.load(cubin)
169 return mod
170
171
172 class CompileException(Exception):
173
174 def __init__(self, msg, source, name, options):
175 self._msg = msg
176 self.source = source
177 self.name = name
178 self.options = options
179
180 def __repr__(self):
181 return str(self)
182
183 def __str__(self):
184 return self.get_message()
185
186 def get_message(self):
187 return self._msg
188
189 def dump(self, f):
190 lines = self.source.split('\n')
191 digits = int(math.floor(math.log10(len(lines)))) + 1
192 linum_fmt = '{{:0{}d}} '.format(digits)
193 f.write('NVRTC compilation error: {}\n'.format(self))
194 f.write('-----\n')
195 f.write('Name: {}\n'.format(' '.join(self.name)))
196 f.write('Options: {}\n'.format(' '.join(self.options)))
197 f.write('CUDA source:\n')
198 for i, line in enumerate(lines):
199 f.write(linum_fmt.format(i + 1) + line.rstrip() + '\n')
200 f.write('-----\n')
201 f.flush()
202
203
204 class _NVRTCProgram(object):
205
206 def __init__(self, src, name="default_program", headers=(),
207 include_names=()):
208 self.ptr = None
209
210 if isinstance(src, six.binary_type):
211 src = src.decode('UTF-8')
212 if isinstance(name, six.binary_type):
213 name = name.decode('UTF-8')
214
215 self.src = src
216 self.name = name
217 self.ptr = nvrtc.createProgram(src, name, headers, include_names)
218
219 def __del__(self):
220 if self.ptr:
221 nvrtc.destroyProgram(self.ptr)
222
223 def compile(self, options=()):
224 try:
225 nvrtc.compileProgram(self.ptr, options)
226 return nvrtc.getPTX(self.ptr)
227 except nvrtc.NVRTCError:
228 log = nvrtc.getProgramLog(self.ptr)
229 raise CompileException(log, self.src, self.name, options)
230
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/cupy/cuda/compiler.py b/cupy/cuda/compiler.py
--- a/cupy/cuda/compiler.py
+++ b/cupy/cuda/compiler.py
@@ -1,6 +1,7 @@
import hashlib
import math
import os
+import re
import shutil
import sys
import tempfile
@@ -227,3 +228,7 @@
except nvrtc.NVRTCError:
log = nvrtc.getProgramLog(self.ptr)
raise CompileException(log, self.src, self.name, options)
+
+
+def is_valid_kernel_name(name):
+ return re.match('^[a-zA-Z_][a-zA-Z_0-9]*$', name) is not None
|
{"golden_diff": "diff --git a/cupy/cuda/compiler.py b/cupy/cuda/compiler.py\n--- a/cupy/cuda/compiler.py\n+++ b/cupy/cuda/compiler.py\n@@ -1,6 +1,7 @@\n import hashlib\n import math\n import os\n+import re\n import shutil\n import sys\n import tempfile\n@@ -227,3 +228,7 @@\n except nvrtc.NVRTCError:\n log = nvrtc.getProgramLog(self.ptr)\n raise CompileException(log, self.src, self.name, options)\n+\n+\n+def is_valid_kernel_name(name):\n+ return re.match('^[a-zA-Z_][a-zA-Z_0-9]*$', name) is not None\n", "issue": "No validation for kernel name\n Although `ElementwiseKernel`'s `name` argument is directly used as a function name in generated CUDA code, there are no validation process to raise an exception when invalid characters are used in `name`.\r\nThat causes CUDA compile error, which is a bit difficult to debug.\n", "before_files": [{"content": "import hashlib\nimport math\nimport os\nimport shutil\nimport sys\nimport tempfile\n\nimport six\n\nfrom cupy.cuda import device\nfrom cupy.cuda import function\nfrom cupy.cuda import nvrtc\n\n_nvrtc_version = None\n\n\ndef _get_nvrtc_version():\n global _nvrtc_version\n if _nvrtc_version is None:\n _nvrtc_version = nvrtc.getVersion()\n\n return _nvrtc_version\n\n\ndef _get_arch():\n cc = device.Device().compute_capability\n return 'compute_%s' % cc\n\n\nclass TemporaryDirectory(object):\n def __enter__(self):\n self.path = tempfile.mkdtemp()\n return self.path\n\n def __exit__(self, exc_type, exc_value, traceback):\n if exc_value is not None:\n return\n\n for name in os.listdir(self.path):\n os.unlink(os.path.join(self.path, name))\n os.rmdir(self.path)\n\n\ndef _get_bool_env_variable(name, default):\n val = os.environ.get(name)\n if val is None or len(val) == 0:\n return default\n try:\n return int(val) == 1\n except ValueError:\n return False\n\n\ndef compile_using_nvrtc(source, options=(), arch=None):\n if not arch:\n arch = _get_arch()\n\n options += ('-arch={}'.format(arch),)\n\n with TemporaryDirectory() as root_dir:\n path = os.path.join(root_dir, 'kern')\n cu_path = '%s.cu' % path\n\n with open(cu_path, 'w') as cu_file:\n cu_file.write(source)\n\n prog = _NVRTCProgram(source, cu_path)\n try:\n ptx = prog.compile(options)\n except CompileException as e:\n dump = _get_bool_env_variable(\n 'CUPY_DUMP_CUDA_SOURCE_ON_ERROR', False)\n if dump:\n e.dump(sys.stderr)\n raise\n\n return ptx\n\n\ndef _preprocess(source, options=()):\n prog = _NVRTCProgram(source, '')\n try:\n result = prog.compile(options)\n except CompileException as e:\n dump = _get_bool_env_variable(\n 'CUPY_DUMP_CUDA_SOURCE_ON_ERROR', False)\n if dump:\n e.dump(sys.stderr)\n raise\n\n assert isinstance(result, six.text_type)\n return result\n\n\n_default_cache_dir = os.path.expanduser('~/.cupy/kernel_cache')\n\n\ndef get_cache_dir():\n return os.environ.get('CUPY_CACHE_DIR', _default_cache_dir)\n\n\n_empty_file_preprocess_cache = {}\n\n\ndef compile_with_cache(source, options=(), arch=None, cache_dir=None,\n extra_source=None):\n # NVRTC does not use extra_source. 
extra_source is used for cache key.\n global _empty_file_preprocess_cache\n if cache_dir is None:\n cache_dir = get_cache_dir()\n if arch is None:\n arch = _get_arch()\n\n options += ('-ftz=true',)\n\n env = (arch, options, _get_nvrtc_version())\n base = _empty_file_preprocess_cache.get(env, None)\n if base is None:\n # This is checking of NVRTC compiler internal version\n base = _empty_file_preprocess_cache[env] = _preprocess('', options)\n key_src = '%s %s %s %s' % (env, base, source, extra_source)\n\n key_src = key_src.encode('utf-8')\n name = '%s_2.cubin' % hashlib.md5(key_src).hexdigest()\n\n if not os.path.isdir(cache_dir):\n try:\n os.makedirs(cache_dir)\n except OSError:\n if not os.path.isdir(cache_dir):\n raise\n\n mod = function.Module()\n # To handle conflicts in concurrent situation, we adopt lock-free method\n # to avoid performance degradation.\n path = os.path.join(cache_dir, name)\n if os.path.exists(path):\n with open(path, 'rb') as file:\n data = file.read()\n if len(data) >= 32:\n hash = data[:32]\n cubin = data[32:]\n cubin_hash = six.b(hashlib.md5(cubin).hexdigest())\n if hash == cubin_hash:\n mod.load(cubin)\n return mod\n\n ptx = compile_using_nvrtc(source, options, arch)\n ls = function.LinkState()\n ls.add_ptr_data(ptx, six.u('cupy.ptx'))\n cubin = ls.complete()\n cubin_hash = six.b(hashlib.md5(cubin).hexdigest())\n\n # shutil.move is not atomic operation, so it could result in a corrupted\n # file. We detect it by appending md5 hash at the beginning of each cache\n # file. If the file is corrupted, it will be ignored next time it is read.\n with tempfile.NamedTemporaryFile(dir=cache_dir, delete=False) as tf:\n tf.write(cubin_hash)\n tf.write(cubin)\n temp_path = tf.name\n shutil.move(temp_path, path)\n\n # Save .cu source file along with .cubin\n if _get_bool_env_variable('CUPY_CACHE_SAVE_CUDA_SOURCE', False):\n with open(path + '.cu', 'w') as f:\n f.write(source)\n\n mod.load(cubin)\n return mod\n\n\nclass CompileException(Exception):\n\n def __init__(self, msg, source, name, options):\n self._msg = msg\n self.source = source\n self.name = name\n self.options = options\n\n def __repr__(self):\n return str(self)\n\n def __str__(self):\n return self.get_message()\n\n def get_message(self):\n return self._msg\n\n def dump(self, f):\n lines = self.source.split('\\n')\n digits = int(math.floor(math.log10(len(lines)))) + 1\n linum_fmt = '{{:0{}d}} '.format(digits)\n f.write('NVRTC compilation error: {}\\n'.format(self))\n f.write('-----\\n')\n f.write('Name: {}\\n'.format(' '.join(self.name)))\n f.write('Options: {}\\n'.format(' '.join(self.options)))\n f.write('CUDA source:\\n')\n for i, line in enumerate(lines):\n f.write(linum_fmt.format(i + 1) + line.rstrip() + '\\n')\n f.write('-----\\n')\n f.flush()\n\n\nclass _NVRTCProgram(object):\n\n def __init__(self, src, name=\"default_program\", headers=(),\n include_names=()):\n self.ptr = None\n\n if isinstance(src, six.binary_type):\n src = src.decode('UTF-8')\n if isinstance(name, six.binary_type):\n name = name.decode('UTF-8')\n\n self.src = src\n self.name = name\n self.ptr = nvrtc.createProgram(src, name, headers, include_names)\n\n def __del__(self):\n if self.ptr:\n nvrtc.destroyProgram(self.ptr)\n\n def compile(self, options=()):\n try:\n nvrtc.compileProgram(self.ptr, options)\n return nvrtc.getPTX(self.ptr)\n except nvrtc.NVRTCError:\n log = nvrtc.getProgramLog(self.ptr)\n raise CompileException(log, self.src, self.name, options)\n", "path": "cupy/cuda/compiler.py"}], "after_files": [{"content": "import 
hashlib\nimport math\nimport os\nimport re\nimport shutil\nimport sys\nimport tempfile\n\nimport six\n\nfrom cupy.cuda import device\nfrom cupy.cuda import function\nfrom cupy.cuda import nvrtc\n\n_nvrtc_version = None\n\n\ndef _get_nvrtc_version():\n global _nvrtc_version\n if _nvrtc_version is None:\n _nvrtc_version = nvrtc.getVersion()\n\n return _nvrtc_version\n\n\ndef _get_arch():\n cc = device.Device().compute_capability\n return 'compute_%s' % cc\n\n\nclass TemporaryDirectory(object):\n def __enter__(self):\n self.path = tempfile.mkdtemp()\n return self.path\n\n def __exit__(self, exc_type, exc_value, traceback):\n if exc_value is not None:\n return\n\n for name in os.listdir(self.path):\n os.unlink(os.path.join(self.path, name))\n os.rmdir(self.path)\n\n\ndef _get_bool_env_variable(name, default):\n val = os.environ.get(name)\n if val is None or len(val) == 0:\n return default\n try:\n return int(val) == 1\n except ValueError:\n return False\n\n\ndef compile_using_nvrtc(source, options=(), arch=None):\n if not arch:\n arch = _get_arch()\n\n options += ('-arch={}'.format(arch),)\n\n with TemporaryDirectory() as root_dir:\n path = os.path.join(root_dir, 'kern')\n cu_path = '%s.cu' % path\n\n with open(cu_path, 'w') as cu_file:\n cu_file.write(source)\n\n prog = _NVRTCProgram(source, cu_path)\n try:\n ptx = prog.compile(options)\n except CompileException as e:\n dump = _get_bool_env_variable(\n 'CUPY_DUMP_CUDA_SOURCE_ON_ERROR', False)\n if dump:\n e.dump(sys.stderr)\n raise\n\n return ptx\n\n\ndef _preprocess(source, options=()):\n prog = _NVRTCProgram(source, '')\n try:\n result = prog.compile(options)\n except CompileException as e:\n dump = _get_bool_env_variable(\n 'CUPY_DUMP_CUDA_SOURCE_ON_ERROR', False)\n if dump:\n e.dump(sys.stderr)\n raise\n\n assert isinstance(result, six.text_type)\n return result\n\n\n_default_cache_dir = os.path.expanduser('~/.cupy/kernel_cache')\n\n\ndef get_cache_dir():\n return os.environ.get('CUPY_CACHE_DIR', _default_cache_dir)\n\n\n_empty_file_preprocess_cache = {}\n\n\ndef compile_with_cache(source, options=(), arch=None, cache_dir=None,\n extra_source=None):\n # NVRTC does not use extra_source. 
extra_source is used for cache key.\n global _empty_file_preprocess_cache\n if cache_dir is None:\n cache_dir = get_cache_dir()\n if arch is None:\n arch = _get_arch()\n\n options += ('-ftz=true',)\n\n env = (arch, options, _get_nvrtc_version())\n base = _empty_file_preprocess_cache.get(env, None)\n if base is None:\n # This is checking of NVRTC compiler internal version\n base = _empty_file_preprocess_cache[env] = _preprocess('', options)\n key_src = '%s %s %s %s' % (env, base, source, extra_source)\n\n key_src = key_src.encode('utf-8')\n name = '%s_2.cubin' % hashlib.md5(key_src).hexdigest()\n\n if not os.path.isdir(cache_dir):\n try:\n os.makedirs(cache_dir)\n except OSError:\n if not os.path.isdir(cache_dir):\n raise\n\n mod = function.Module()\n # To handle conflicts in concurrent situation, we adopt lock-free method\n # to avoid performance degradation.\n path = os.path.join(cache_dir, name)\n if os.path.exists(path):\n with open(path, 'rb') as file:\n data = file.read()\n if len(data) >= 32:\n hash = data[:32]\n cubin = data[32:]\n cubin_hash = six.b(hashlib.md5(cubin).hexdigest())\n if hash == cubin_hash:\n mod.load(cubin)\n return mod\n\n ptx = compile_using_nvrtc(source, options, arch)\n ls = function.LinkState()\n ls.add_ptr_data(ptx, six.u('cupy.ptx'))\n cubin = ls.complete()\n cubin_hash = six.b(hashlib.md5(cubin).hexdigest())\n\n # shutil.move is not atomic operation, so it could result in a corrupted\n # file. We detect it by appending md5 hash at the beginning of each cache\n # file. If the file is corrupted, it will be ignored next time it is read.\n with tempfile.NamedTemporaryFile(dir=cache_dir, delete=False) as tf:\n tf.write(cubin_hash)\n tf.write(cubin)\n temp_path = tf.name\n shutil.move(temp_path, path)\n\n # Save .cu source file along with .cubin\n if _get_bool_env_variable('CUPY_CACHE_SAVE_CUDA_SOURCE', False):\n with open(path + '.cu', 'w') as f:\n f.write(source)\n\n mod.load(cubin)\n return mod\n\n\nclass CompileException(Exception):\n\n def __init__(self, msg, source, name, options):\n self._msg = msg\n self.source = source\n self.name = name\n self.options = options\n\n def __repr__(self):\n return str(self)\n\n def __str__(self):\n return self.get_message()\n\n def get_message(self):\n return self._msg\n\n def dump(self, f):\n lines = self.source.split('\\n')\n digits = int(math.floor(math.log10(len(lines)))) + 1\n linum_fmt = '{{:0{}d}} '.format(digits)\n f.write('NVRTC compilation error: {}\\n'.format(self))\n f.write('-----\\n')\n f.write('Name: {}\\n'.format(' '.join(self.name)))\n f.write('Options: {}\\n'.format(' '.join(self.options)))\n f.write('CUDA source:\\n')\n for i, line in enumerate(lines):\n f.write(linum_fmt.format(i + 1) + line.rstrip() + '\\n')\n f.write('-----\\n')\n f.flush()\n\n\nclass _NVRTCProgram(object):\n\n def __init__(self, src, name=\"default_program\", headers=(),\n include_names=()):\n self.ptr = None\n\n if isinstance(src, six.binary_type):\n src = src.decode('UTF-8')\n if isinstance(name, six.binary_type):\n name = name.decode('UTF-8')\n\n self.src = src\n self.name = name\n self.ptr = nvrtc.createProgram(src, name, headers, include_names)\n\n def __del__(self):\n if self.ptr:\n nvrtc.destroyProgram(self.ptr)\n\n def compile(self, options=()):\n try:\n nvrtc.compileProgram(self.ptr, options)\n return nvrtc.getPTX(self.ptr)\n except nvrtc.NVRTCError:\n log = nvrtc.getProgramLog(self.ptr)\n raise CompileException(log, self.src, self.name, options)\n\n\ndef is_valid_kernel_name(name):\n return 
re.match('^[a-zA-Z_][a-zA-Z_0-9]*$', name) is not None\n", "path": "cupy/cuda/compiler.py"}]}
| 2,521 | 153 |
gh_patches_debug_27167
|
rasdani/github-patches
|
git_diff
|
conan-io__conan-3254
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Enable python 3.7 in CI
At least for Linux
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conans/pylint_plugin.py`
Content:
```
1 """Pylint plugin for ConanFile"""
2
3 import astroid
4 from astroid import MANAGER, scoped_nodes
5
6
7 def register(linter):
8 """Declare package as plugin
9
10 This function needs to be declared so astroid treats
11 current file as a plugin.
12 """
13 pass
14
15
16 def transform_conanfile(node):
17 """Transform definition of ConanFile class so dynamic fields are visible to pylint"""
18
19 str_class = scoped_nodes.builtin_lookup("str")
20 info_class = MANAGER.ast_from_module_name("conans.model.info").lookup(
21 "ConanInfo")
22 build_requires_class = MANAGER.ast_from_module_name(
23 "conans.client.graph.graph_manager").lookup("_RecipeBuildRequires")
24 file_copier_class = MANAGER.ast_from_module_name(
25 "conans.client.file_copier").lookup("FileCopier")
26 file_importer_class = MANAGER.ast_from_module_name(
27 "conans.client.importer").lookup("_FileImporter")
28
29 dynamic_fields = {
30 "source_folder": str_class,
31 "build_folder": str_class,
32 "package_folder": str_class,
33 "build_requires": build_requires_class,
34 "info_build": info_class,
35 "info": info_class,
36 "copy": file_copier_class,
37 "copy_deps": file_importer_class,
38 }
39
40 for f, t in dynamic_fields.items():
41 node.locals[f] = [t]
42
43
44 MANAGER.register_transform(
45 scoped_nodes.Class, transform_conanfile,
46 lambda node: node.qname() == "conans.model.conan_file.ConanFile")
47
```
Path: `.ci/jenkins/conf.py`
Content:
```
1 import argparse
2 import os
3 import platform
4 from contextlib import contextmanager
5
6 winpylocation = {"py27": "C:\\Python27\\python.exe",
7 "py34": "C:\\Python34\\python.exe",
8 "py36": "C:\\Python36\\python.exe"}
9
10 macpylocation = {"py27": "/usr/bin/python", # /Users/jenkins_ci/.pyenv/versions/2.7.11/bin/python",
11 "py34": "/Users/jenkins_ci/.pyenv/versions/3.4.7/bin/python",
12 "py36": "/Users/jenkins_ci/.pyenv/versions/3.6.5/bin/python"}
13
14 linuxpylocation = {"py27": "/usr/bin/python2.7",
15 "py34": "/usr/bin/python3.4",
16 "py36": "/usr/bin/python3.6"}
17
18
19 def get_environ(tmp_path):
20 if platform.system() == "Windows":
21 return {"CONAN_BASH_PATH": "c:/tools/msys64/usr/bin/bash",
22 "CONAN_USER_HOME_SHORT": os.path.join(tmp_path, ".conan")}
23 return {}
24
25
26 class Extender(argparse.Action):
27 """Allows to use the same flag several times in a command and creates a list with the values.
28 For example:
29 conan install MyPackage/1.2@user/channel -o qt:value -o mode:2 -s cucumber:true
30 It creates:
31 options = ['qt:value', 'mode:2']
32 settings = ['cucumber:true']
33 """
34 def __call__(self, parser, namespace, values, option_strings=None): # @UnusedVariable
35 # Need None here incase `argparse.SUPPRESS` was supplied for `dest`
36 dest = getattr(namespace, self.dest, None)
37 if not hasattr(dest, 'extend') or dest == self.default:
38 dest = []
39 setattr(namespace, self.dest, dest)
40 # if default isn't set to None, this method might be called
41 # with the default as `values` for other arguments which
42 # share this destination.
43 parser.set_defaults(**{self.dest: None})
44
45 try:
46 dest.extend(values)
47 except ValueError:
48 dest.append(values)
49
50
51 @contextmanager
52 def environment_append(env_vars):
53 old_env = dict(os.environ)
54 for name, value in env_vars.items():
55 if isinstance(value, list):
56 env_vars[name] = os.pathsep.join(value)
57 if name in old_env:
58 env_vars[name] += os.pathsep + old_env[name]
59 os.environ.update(env_vars)
60 try:
61 yield
62 finally:
63 os.environ.clear()
64 os.environ.update(old_env)
65
66
67 @contextmanager
68 def chdir(newdir):
69 old_path = os.getcwd()
70 os.chdir(newdir)
71 try:
72 yield
73 finally:
74 os.chdir(old_path)
75
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/.ci/jenkins/conf.py b/.ci/jenkins/conf.py
--- a/.ci/jenkins/conf.py
+++ b/.ci/jenkins/conf.py
@@ -13,7 +13,8 @@
linuxpylocation = {"py27": "/usr/bin/python2.7",
"py34": "/usr/bin/python3.4",
- "py36": "/usr/bin/python3.6"}
+ "py36": "/usr/bin/python3.6",
+ "py37": "/usr/bin/python3.7"}
def get_environ(tmp_path):
diff --git a/conans/pylint_plugin.py b/conans/pylint_plugin.py
--- a/conans/pylint_plugin.py
+++ b/conans/pylint_plugin.py
@@ -1,7 +1,6 @@
"""Pylint plugin for ConanFile"""
-
import astroid
-from astroid import MANAGER, scoped_nodes
+from astroid import MANAGER
def register(linter):
@@ -16,7 +15,7 @@
def transform_conanfile(node):
"""Transform definition of ConanFile class so dynamic fields are visible to pylint"""
- str_class = scoped_nodes.builtin_lookup("str")
+ str_class = astroid.builtin_lookup("str")
info_class = MANAGER.ast_from_module_name("conans.model.info").lookup(
"ConanInfo")
build_requires_class = MANAGER.ast_from_module_name(
@@ -42,5 +41,5 @@
MANAGER.register_transform(
- scoped_nodes.Class, transform_conanfile,
+ astroid.ClassDef, transform_conanfile,
lambda node: node.qname() == "conans.model.conan_file.ConanFile")
|
{"golden_diff": "diff --git a/.ci/jenkins/conf.py b/.ci/jenkins/conf.py\n--- a/.ci/jenkins/conf.py\n+++ b/.ci/jenkins/conf.py\n@@ -13,7 +13,8 @@\n \n linuxpylocation = {\"py27\": \"/usr/bin/python2.7\",\n \"py34\": \"/usr/bin/python3.4\",\n- \"py36\": \"/usr/bin/python3.6\"}\n+ \"py36\": \"/usr/bin/python3.6\",\n+ \"py37\": \"/usr/bin/python3.7\"}\n \n \n def get_environ(tmp_path):\ndiff --git a/conans/pylint_plugin.py b/conans/pylint_plugin.py\n--- a/conans/pylint_plugin.py\n+++ b/conans/pylint_plugin.py\n@@ -1,7 +1,6 @@\n \"\"\"Pylint plugin for ConanFile\"\"\"\n-\n import astroid\n-from astroid import MANAGER, scoped_nodes\n+from astroid import MANAGER\n \n \n def register(linter):\n@@ -16,7 +15,7 @@\n def transform_conanfile(node):\n \"\"\"Transform definition of ConanFile class so dynamic fields are visible to pylint\"\"\"\n \n- str_class = scoped_nodes.builtin_lookup(\"str\")\n+ str_class = astroid.builtin_lookup(\"str\")\n info_class = MANAGER.ast_from_module_name(\"conans.model.info\").lookup(\n \"ConanInfo\")\n build_requires_class = MANAGER.ast_from_module_name(\n@@ -42,5 +41,5 @@\n \n \n MANAGER.register_transform(\n- scoped_nodes.Class, transform_conanfile,\n+ astroid.ClassDef, transform_conanfile,\n lambda node: node.qname() == \"conans.model.conan_file.ConanFile\")\n", "issue": "Enable python 3.7 in CI \nAt least for Linux\r\n\r\n\n", "before_files": [{"content": "\"\"\"Pylint plugin for ConanFile\"\"\"\n\nimport astroid\nfrom astroid import MANAGER, scoped_nodes\n\n\ndef register(linter):\n \"\"\"Declare package as plugin\n \n This function needs to be declared so astroid treats\n current file as a plugin.\n \"\"\"\n pass\n\n\ndef transform_conanfile(node):\n \"\"\"Transform definition of ConanFile class so dynamic fields are visible to pylint\"\"\"\n\n str_class = scoped_nodes.builtin_lookup(\"str\")\n info_class = MANAGER.ast_from_module_name(\"conans.model.info\").lookup(\n \"ConanInfo\")\n build_requires_class = MANAGER.ast_from_module_name(\n \"conans.client.graph.graph_manager\").lookup(\"_RecipeBuildRequires\")\n file_copier_class = MANAGER.ast_from_module_name(\n \"conans.client.file_copier\").lookup(\"FileCopier\")\n file_importer_class = MANAGER.ast_from_module_name(\n \"conans.client.importer\").lookup(\"_FileImporter\")\n\n dynamic_fields = {\n \"source_folder\": str_class,\n \"build_folder\": str_class,\n \"package_folder\": str_class,\n \"build_requires\": build_requires_class,\n \"info_build\": info_class,\n \"info\": info_class,\n \"copy\": file_copier_class,\n \"copy_deps\": file_importer_class,\n }\n\n for f, t in dynamic_fields.items():\n node.locals[f] = [t]\n\n\nMANAGER.register_transform(\n scoped_nodes.Class, transform_conanfile,\n lambda node: node.qname() == \"conans.model.conan_file.ConanFile\")\n", "path": "conans/pylint_plugin.py"}, {"content": "import argparse\nimport os\nimport platform\nfrom contextlib import contextmanager\n\nwinpylocation = {\"py27\": \"C:\\\\Python27\\\\python.exe\",\n \"py34\": \"C:\\\\Python34\\\\python.exe\",\n \"py36\": \"C:\\\\Python36\\\\python.exe\"}\n\nmacpylocation = {\"py27\": \"/usr/bin/python\", # /Users/jenkins_ci/.pyenv/versions/2.7.11/bin/python\",\n \"py34\": \"/Users/jenkins_ci/.pyenv/versions/3.4.7/bin/python\",\n \"py36\": \"/Users/jenkins_ci/.pyenv/versions/3.6.5/bin/python\"}\n\nlinuxpylocation = {\"py27\": \"/usr/bin/python2.7\",\n \"py34\": \"/usr/bin/python3.4\",\n \"py36\": \"/usr/bin/python3.6\"}\n\n\ndef get_environ(tmp_path):\n if platform.system() == \"Windows\":\n return {\"CONAN_BASH_PATH\": 
\"c:/tools/msys64/usr/bin/bash\",\n \"CONAN_USER_HOME_SHORT\": os.path.join(tmp_path, \".conan\")}\n return {}\n\n\nclass Extender(argparse.Action):\n \"\"\"Allows to use the same flag several times in a command and creates a list with the values.\n For example:\n conan install MyPackage/1.2@user/channel -o qt:value -o mode:2 -s cucumber:true\n It creates:\n options = ['qt:value', 'mode:2']\n settings = ['cucumber:true']\n \"\"\"\n def __call__(self, parser, namespace, values, option_strings=None): # @UnusedVariable\n # Need None here incase `argparse.SUPPRESS` was supplied for `dest`\n dest = getattr(namespace, self.dest, None)\n if not hasattr(dest, 'extend') or dest == self.default:\n dest = []\n setattr(namespace, self.dest, dest)\n # if default isn't set to None, this method might be called\n # with the default as `values` for other arguments which\n # share this destination.\n parser.set_defaults(**{self.dest: None})\n\n try:\n dest.extend(values)\n except ValueError:\n dest.append(values)\n\n\n@contextmanager\ndef environment_append(env_vars):\n old_env = dict(os.environ)\n for name, value in env_vars.items():\n if isinstance(value, list):\n env_vars[name] = os.pathsep.join(value)\n if name in old_env:\n env_vars[name] += os.pathsep + old_env[name]\n os.environ.update(env_vars)\n try:\n yield\n finally:\n os.environ.clear()\n os.environ.update(old_env)\n\n\n@contextmanager\ndef chdir(newdir):\n old_path = os.getcwd()\n os.chdir(newdir)\n try:\n yield\n finally:\n os.chdir(old_path)\n", "path": ".ci/jenkins/conf.py"}], "after_files": [{"content": "\"\"\"Pylint plugin for ConanFile\"\"\"\nimport astroid\nfrom astroid import MANAGER\n\n\ndef register(linter):\n \"\"\"Declare package as plugin\n \n This function needs to be declared so astroid treats\n current file as a plugin.\n \"\"\"\n pass\n\n\ndef transform_conanfile(node):\n \"\"\"Transform definition of ConanFile class so dynamic fields are visible to pylint\"\"\"\n\n str_class = astroid.builtin_lookup(\"str\")\n info_class = MANAGER.ast_from_module_name(\"conans.model.info\").lookup(\n \"ConanInfo\")\n build_requires_class = MANAGER.ast_from_module_name(\n \"conans.client.graph.graph_manager\").lookup(\"_RecipeBuildRequires\")\n file_copier_class = MANAGER.ast_from_module_name(\n \"conans.client.file_copier\").lookup(\"FileCopier\")\n file_importer_class = MANAGER.ast_from_module_name(\n \"conans.client.importer\").lookup(\"_FileImporter\")\n\n dynamic_fields = {\n \"source_folder\": str_class,\n \"build_folder\": str_class,\n \"package_folder\": str_class,\n \"build_requires\": build_requires_class,\n \"info_build\": info_class,\n \"info\": info_class,\n \"copy\": file_copier_class,\n \"copy_deps\": file_importer_class,\n }\n\n for f, t in dynamic_fields.items():\n node.locals[f] = [t]\n\n\nMANAGER.register_transform(\n astroid.ClassDef, transform_conanfile,\n lambda node: node.qname() == \"conans.model.conan_file.ConanFile\")\n", "path": "conans/pylint_plugin.py"}, {"content": "import argparse\nimport os\nimport platform\nfrom contextlib import contextmanager\n\nwinpylocation = {\"py27\": \"C:\\\\Python27\\\\python.exe\",\n \"py34\": \"C:\\\\Python34\\\\python.exe\",\n \"py36\": \"C:\\\\Python36\\\\python.exe\"}\n\nmacpylocation = {\"py27\": \"/usr/bin/python\", # /Users/jenkins_ci/.pyenv/versions/2.7.11/bin/python\",\n \"py34\": \"/Users/jenkins_ci/.pyenv/versions/3.4.7/bin/python\",\n \"py36\": \"/Users/jenkins_ci/.pyenv/versions/3.6.5/bin/python\"}\n\nlinuxpylocation = {\"py27\": \"/usr/bin/python2.7\",\n \"py34\": 
\"/usr/bin/python3.4\",\n \"py36\": \"/usr/bin/python3.6\",\n \"py37\": \"/usr/bin/python3.7\"}\n\n\ndef get_environ(tmp_path):\n if platform.system() == \"Windows\":\n return {\"CONAN_BASH_PATH\": \"c:/tools/msys64/usr/bin/bash\",\n \"CONAN_USER_HOME_SHORT\": os.path.join(tmp_path, \".conan\")}\n return {}\n\n\nclass Extender(argparse.Action):\n \"\"\"Allows to use the same flag several times in a command and creates a list with the values.\n For example:\n conan install MyPackage/1.2@user/channel -o qt:value -o mode:2 -s cucumber:true\n It creates:\n options = ['qt:value', 'mode:2']\n settings = ['cucumber:true']\n \"\"\"\n def __call__(self, parser, namespace, values, option_strings=None): # @UnusedVariable\n # Need None here incase `argparse.SUPPRESS` was supplied for `dest`\n dest = getattr(namespace, self.dest, None)\n if not hasattr(dest, 'extend') or dest == self.default:\n dest = []\n setattr(namespace, self.dest, dest)\n # if default isn't set to None, this method might be called\n # with the default as `values` for other arguments which\n # share this destination.\n parser.set_defaults(**{self.dest: None})\n\n try:\n dest.extend(values)\n except ValueError:\n dest.append(values)\n\n\n@contextmanager\ndef environment_append(env_vars):\n old_env = dict(os.environ)\n for name, value in env_vars.items():\n if isinstance(value, list):\n env_vars[name] = os.pathsep.join(value)\n if name in old_env:\n env_vars[name] += os.pathsep + old_env[name]\n os.environ.update(env_vars)\n try:\n yield\n finally:\n os.environ.clear()\n os.environ.update(old_env)\n\n\n@contextmanager\ndef chdir(newdir):\n old_path = os.getcwd()\n os.chdir(newdir)\n try:\n yield\n finally:\n os.chdir(old_path)\n", "path": ".ci/jenkins/conf.py"}]}
| 1,492 | 376 |
gh_patches_debug_29463
|
rasdani/github-patches
|
git_diff
|
tensorflow__addons-1103
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Incomplete and wrong typing
It seems there are many errors in the typed code.
For example:
1. https://github.com/tensorflow/addons/blob/master/tensorflow_addons/activations/rrelu.py#L28 (Optional[int])
2. https://github.com/tensorflow/addons/blob/master/tensorflow_addons/losses/giou_loss.py#L101 (TensorLike)
The integration of a type checker such as `mypy` can help to keep it clean if the goal is to integrate type checking.
--- END ISSUE ---
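
For illustration, a sketch of the kind of correction the issue points at for the second example (the `_calculate_giou` signature quoted below), together with how a checker could be run. The use of `TensorLike` here follows the hint in the issue and is an assumption about the intended annotation, not the maintainers' final choice.

```python
import tensorflow as tf

from tensorflow_addons.utils.types import TensorLike


# Sketch only: the boxes passed to _calculate_giou are tensors, not lists
# of ints, so TensorLike (per the issue's hint) is the assumed annotation.
def _calculate_giou(b1: TensorLike,
                    b2: TensorLike,
                    mode: str = 'giou') -> tf.Tensor:
    ...
```

Running a checker such as `mypy tensorflow_addons/` in CI could then flag annotations like `List[int]` that disagree with how the function is actually called.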
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tensorflow_addons/losses/giou_loss.py`
Content:
```
1 # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """Implements GIoU loss."""
16
17 import tensorflow as tf
18
19 from tensorflow_addons.utils.types import TensorLike
20 from typing import List
21 from typeguard import typechecked
22
23
24 @tf.keras.utils.register_keras_serializable(package='Addons')
25 class GIoULoss(tf.keras.losses.Loss):
26 """Implements the GIoU loss function.
27
28 GIoU loss was first introduced in the
29 [Generalized Intersection over Union:
30 A Metric and A Loss for Bounding Box Regression]
31 (https://giou.stanford.edu/GIoU.pdf).
32 GIoU is an enhancement for models which use IoU in object detection.
33
34 Usage:
35
36 ```python
37 gl = tfa.losses.GIoULoss()
38 boxes1 = tf.constant([[4.0, 3.0, 7.0, 5.0], [5.0, 6.0, 10.0, 7.0]])
39 boxes2 = tf.constant([[3.0, 4.0, 6.0, 8.0], [14.0, 14.0, 15.0, 15.0]])
40 loss = gl(boxes1, boxes2)
41 print('Loss: ', loss.numpy()) # Loss: [1.07500000298023224, 1.9333333373069763]
42 ```
43 Usage with tf.keras API:
44
45 ```python
46 model = tf.keras.Model(inputs, outputs)
47 model.compile('sgd', loss=tfa.losses.GIoULoss())
48 ```
49
50 Args:
51 mode: one of ['giou', 'iou'], decided to calculate GIoU or IoU loss.
52 """
53 @typechecked
54 def __init__(self,
55 mode: str = 'giou',
56 reduction: str = tf.keras.losses.Reduction.AUTO,
57 name: str = 'giou_loss'):
58 if mode not in ['giou', 'iou']:
59 raise ValueError("Value of mode should be 'iou' or 'giou'")
60 super().__init__(name=name, reduction=reduction)
61 self.mode = mode
62
63 def get_config(self):
64 base_config = super().get_config()
65 base_config['mode'] = self.mode
66 return base_config
67
68 def call(self,
69 y_true,
70 y_pred):
71 return giou_loss(y_true, y_pred, mode=self.mode)
72
73
74 @tf.keras.utils.register_keras_serializable(package='Addons')
75 @tf.function
76 def giou_loss(y_true: TensorLike,
77 y_pred: TensorLike,
78 mode: str = 'giou') -> tf.Tensor:
79 """
80 Args:
81 y_true: true targets tensor. The coordinates of the each bounding
82 box in boxes are encoded as [y_min, x_min, y_max, x_max].
83 y_pred: predictions tensor. The coordinates of the each bounding
84 box in boxes are encoded as [y_min, x_min, y_max, x_max].
85 mode: one of ['giou', 'iou'], decided to calculate GIoU or IoU loss.
86
87 Returns:
88 GIoU loss float `Tensor`.
89 """
90 if mode not in ['giou', 'iou']:
91 raise ValueError("Value of mode should be 'iou' or 'giou'")
92 y_pred = tf.convert_to_tensor(y_pred)
93 if not y_pred.dtype.is_floating:
94 y_pred = tf.cast(y_pred, tf.float32)
95 y_true = tf.cast(y_true, y_pred.dtype)
96 giou = _calculate_giou(y_pred, y_true, mode)
97
98 return 1 - giou
99
100
101 def _calculate_giou(b1: List[int],
102 b2: List[int],
103 mode: str = 'giou') -> tf.Tensor:
104 """
105 Args:
106 b1: bounding box. The coordinates of the each bounding box in boxes are
107 encoded as [y_min, x_min, y_max, x_max].
108 b2: the other bounding box. The coordinates of the each bounding box
109 in boxes are encoded as [y_min, x_min, y_max, x_max].
110 mode: one of ['giou', 'iou'], decided to calculate GIoU or IoU loss.
111
112 Returns:
113 GIoU loss float `Tensor`.
114 """
115 zero = tf.convert_to_tensor(0., b1.dtype)
116 b1_ymin, b1_xmin, b1_ymax, b1_xmax = tf.unstack(b1, 4, axis=-1)
117 b2_ymin, b2_xmin, b2_ymax, b2_xmax = tf.unstack(b2, 4, axis=-1)
118 b1_width = tf.maximum(zero, b1_xmax - b1_xmin)
119 b1_height = tf.maximum(zero, b1_ymax - b1_ymin)
120 b2_width = tf.maximum(zero, b2_xmax - b2_xmin)
121 b2_height = tf.maximum(zero, b2_ymax - b2_ymin)
122 b1_area = b1_width * b1_height
123 b2_area = b2_width * b2_height
124
125 intersect_ymin = tf.maximum(b1_ymin, b2_ymin)
126 intersect_xmin = tf.maximum(b1_xmin, b2_xmin)
127 intersect_ymax = tf.minimum(b1_ymax, b2_ymax)
128 intersect_xmax = tf.minimum(b1_xmax, b2_xmax)
129 intersect_width = tf.maximum(zero, intersect_xmax - intersect_xmin)
130 intersect_height = tf.maximum(zero, intersect_ymax - intersect_ymin)
131 intersect_area = intersect_width * intersect_height
132
133 union_area = b1_area + b2_area - intersect_area
134 iou = tf.math.divide_no_nan(intersect_area, union_area)
135 if mode == 'iou':
136 return iou
137
138 enclose_ymin = tf.minimum(b1_ymin, b2_ymin)
139 enclose_xmin = tf.minimum(b1_xmin, b2_xmin)
140 enclose_ymax = tf.maximum(b1_ymax, b2_ymax)
141 enclose_xmax = tf.maximum(b1_xmax, b2_xmax)
142 enclose_width = tf.maximum(zero, enclose_xmax - enclose_xmin)
143 enclose_height = tf.maximum(zero, enclose_ymax - enclose_ymin)
144 enclose_area = enclose_width * enclose_height
145 giou = iou - tf.math.divide_no_nan(
146 (enclose_area - union_area), enclose_area)
147 return giou
148
```
Path: `tensorflow_addons/activations/rrelu.py`
Content:
```
1 # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15
16 import tensorflow as tf
17 from tensorflow_addons.utils.types import Number
18 from tensorflow_addons.utils import types
19 from typing import Optional
20
21
22 @tf.keras.utils.register_keras_serializable(package="Addons")
23 def rrelu(
24 x: types.TensorLike,
25 lower: Number = 0.125,
26 upper: Number = 0.3333333333333333,
27 training: Optional[str] = None,
28 seed: Optional[str] = None,
29 ) -> tf.Tensor:
30 """rrelu function.
31
32 Computes rrelu function:
33 `x if x > 0 else random(lower, upper) * x` or
34 `x if x > 0 else x * (lower + upper) / 2`
35 depending on whether training is enabled.
36
37 See [Empirical Evaluation of Rectified Activations in Convolutional Network](https://arxiv.org/abs/1505.00853).
38
39 Args:
40 x: A `Tensor`. Must be one of the following types:
41 `float16`, `float32`, `float64`.
42 lower: `float`, lower bound for random alpha.
43 upper: `float`, upper bound for random alpha.
44 training: `bool`, indicating whether the `call`
45 is meant for training or inference.
46 seed: `int`, this sets the operation-level seed.
47 Returns:
48 result: A `Tensor`. Has the same type as `x`.
49 """
50 x = tf.convert_to_tensor(x)
51 if training is None:
52 training = tf.keras.backend.learning_phase()
53 training = bool(tf.keras.backend.get_value(training))
54
55 if training:
56 alpha = tf.random.uniform(
57 tf.shape(x), minval=lower, maxval=upper, dtype=x.dtype, seed=seed
58 )
59 else:
60 alpha = tf.cast((lower + upper) / 2, x.dtype)
61
62 return tf.where(x >= 0, x, alpha * x)
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/tensorflow_addons/activations/rrelu.py b/tensorflow_addons/activations/rrelu.py
--- a/tensorflow_addons/activations/rrelu.py
+++ b/tensorflow_addons/activations/rrelu.py
@@ -24,8 +24,8 @@
x: types.TensorLike,
lower: Number = 0.125,
upper: Number = 0.3333333333333333,
- training: Optional[str] = None,
- seed: Optional[str] = None,
+ training: Optional[bool] = None,
+ seed: Optional[int] = None,
) -> tf.Tensor:
"""rrelu function.
diff --git a/tensorflow_addons/losses/giou_loss.py b/tensorflow_addons/losses/giou_loss.py
--- a/tensorflow_addons/losses/giou_loss.py
+++ b/tensorflow_addons/losses/giou_loss.py
@@ -17,7 +17,7 @@
import tensorflow as tf
from tensorflow_addons.utils.types import TensorLike
-from typing import List
+from typing import Optional
from typeguard import typechecked
@@ -54,7 +54,7 @@
def __init__(self,
mode: str = 'giou',
reduction: str = tf.keras.losses.Reduction.AUTO,
- name: str = 'giou_loss'):
+ name: Optional[str] = 'giou_loss'):
if mode not in ['giou', 'iou']:
raise ValueError("Value of mode should be 'iou' or 'giou'")
super().__init__(name=name, reduction=reduction)
@@ -98,8 +98,8 @@
return 1 - giou
-def _calculate_giou(b1: List[int],
- b2: List[int],
+def _calculate_giou(b1: TensorLike,
+ b2: TensorLike,
mode: str = 'giou') -> tf.Tensor:
"""
Args:
|
{"golden_diff": "diff --git a/tensorflow_addons/activations/rrelu.py b/tensorflow_addons/activations/rrelu.py\n--- a/tensorflow_addons/activations/rrelu.py\n+++ b/tensorflow_addons/activations/rrelu.py\n@@ -24,8 +24,8 @@\n x: types.TensorLike,\n lower: Number = 0.125,\n upper: Number = 0.3333333333333333,\n- training: Optional[str] = None,\n- seed: Optional[str] = None,\n+ training: Optional[bool] = None,\n+ seed: Optional[int] = None,\n ) -> tf.Tensor:\n \"\"\"rrelu function.\n \ndiff --git a/tensorflow_addons/losses/giou_loss.py b/tensorflow_addons/losses/giou_loss.py\n--- a/tensorflow_addons/losses/giou_loss.py\n+++ b/tensorflow_addons/losses/giou_loss.py\n@@ -17,7 +17,7 @@\n import tensorflow as tf\n \n from tensorflow_addons.utils.types import TensorLike\n-from typing import List\n+from typing import Optional\n from typeguard import typechecked\n \n \n@@ -54,7 +54,7 @@\n def __init__(self,\n mode: str = 'giou',\n reduction: str = tf.keras.losses.Reduction.AUTO,\n- name: str = 'giou_loss'):\n+ name: Optional[str] = 'giou_loss'):\n if mode not in ['giou', 'iou']:\n raise ValueError(\"Value of mode should be 'iou' or 'giou'\")\n super().__init__(name=name, reduction=reduction)\n@@ -98,8 +98,8 @@\n return 1 - giou\n \n \n-def _calculate_giou(b1: List[int],\n- b2: List[int],\n+def _calculate_giou(b1: TensorLike,\n+ b2: TensorLike,\n mode: str = 'giou') -> tf.Tensor:\n \"\"\"\n Args:\n", "issue": "Incomplete and wrong typing\nIt seems, there are many errors in the typed code. \r\n\r\nFor examples \r\n1. https://github.com/tensorflow/addons/blob/master/tensorflow_addons/activations/rrelu.py#L28 (Optional[int])\r\n2. https://github.com/tensorflow/addons/blob/master/tensorflow_addons/losses/giou_loss.py#L101 (TensorLike)\r\n\r\nThe integration of a type checker such as `mypy` can help to keep it clean if the goal is to integrate type checking. \r\n\n", "before_files": [{"content": "# Copyright 2019 The TensorFlow Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Implements GIoU loss.\"\"\"\n\nimport tensorflow as tf\n\nfrom tensorflow_addons.utils.types import TensorLike\nfrom typing import List\nfrom typeguard import typechecked\n\n\[email protected]_keras_serializable(package='Addons')\nclass GIoULoss(tf.keras.losses.Loss):\n \"\"\"Implements the GIoU loss function.\n\n GIoU loss was first introduced in the\n [Generalized Intersection over Union:\n A Metric and A Loss for Bounding Box Regression]\n (https://giou.stanford.edu/GIoU.pdf).\n GIoU is an enhancement for models which use IoU in object detection.\n\n Usage:\n\n ```python\n gl = tfa.losses.GIoULoss()\n boxes1 = tf.constant([[4.0, 3.0, 7.0, 5.0], [5.0, 6.0, 10.0, 7.0]])\n boxes2 = tf.constant([[3.0, 4.0, 6.0, 8.0], [14.0, 14.0, 15.0, 15.0]])\n loss = gl(boxes1, boxes2)\n print('Loss: ', loss.numpy()) # Loss: [1.07500000298023224, 1.9333333373069763]\n ```\n Usage with tf.keras API:\n\n ```python\n model = tf.keras.Model(inputs, outputs)\n model.compile('sgd', loss=tfa.losses.GIoULoss())\n ```\n\n Args:\n mode: one of ['giou', 'iou'], decided to calculate GIoU or IoU loss.\n \"\"\"\n @typechecked\n def __init__(self,\n mode: str = 'giou',\n reduction: str = tf.keras.losses.Reduction.AUTO,\n name: str = 'giou_loss'):\n if mode not in ['giou', 'iou']:\n raise ValueError(\"Value of mode should be 'iou' or 'giou'\")\n super().__init__(name=name, reduction=reduction)\n self.mode = mode\n\n def get_config(self):\n base_config = super().get_config()\n base_config['mode'] = self.mode\n return base_config\n\n def call(self,\n y_true,\n y_pred):\n return giou_loss(y_true, y_pred, mode=self.mode)\n\n\[email protected]_keras_serializable(package='Addons')\[email protected]\ndef giou_loss(y_true: TensorLike,\n y_pred: TensorLike,\n mode: str = 'giou') -> tf.Tensor:\n \"\"\"\n Args:\n y_true: true targets tensor. The coordinates of the each bounding\n box in boxes are encoded as [y_min, x_min, y_max, x_max].\n y_pred: predictions tensor. The coordinates of the each bounding\n box in boxes are encoded as [y_min, x_min, y_max, x_max].\n mode: one of ['giou', 'iou'], decided to calculate GIoU or IoU loss.\n\n Returns:\n GIoU loss float `Tensor`.\n \"\"\"\n if mode not in ['giou', 'iou']:\n raise ValueError(\"Value of mode should be 'iou' or 'giou'\")\n y_pred = tf.convert_to_tensor(y_pred)\n if not y_pred.dtype.is_floating:\n y_pred = tf.cast(y_pred, tf.float32)\n y_true = tf.cast(y_true, y_pred.dtype)\n giou = _calculate_giou(y_pred, y_true, mode)\n\n return 1 - giou\n\n\ndef _calculate_giou(b1: List[int],\n b2: List[int],\n mode: str = 'giou') -> tf.Tensor:\n \"\"\"\n Args:\n b1: bounding box. The coordinates of the each bounding box in boxes are\n encoded as [y_min, x_min, y_max, x_max].\n b2: the other bounding box. 
The coordinates of the each bounding box\n in boxes are encoded as [y_min, x_min, y_max, x_max].\n mode: one of ['giou', 'iou'], decided to calculate GIoU or IoU loss.\n\n Returns:\n GIoU loss float `Tensor`.\n \"\"\"\n zero = tf.convert_to_tensor(0., b1.dtype)\n b1_ymin, b1_xmin, b1_ymax, b1_xmax = tf.unstack(b1, 4, axis=-1)\n b2_ymin, b2_xmin, b2_ymax, b2_xmax = tf.unstack(b2, 4, axis=-1)\n b1_width = tf.maximum(zero, b1_xmax - b1_xmin)\n b1_height = tf.maximum(zero, b1_ymax - b1_ymin)\n b2_width = tf.maximum(zero, b2_xmax - b2_xmin)\n b2_height = tf.maximum(zero, b2_ymax - b2_ymin)\n b1_area = b1_width * b1_height\n b2_area = b2_width * b2_height\n\n intersect_ymin = tf.maximum(b1_ymin, b2_ymin)\n intersect_xmin = tf.maximum(b1_xmin, b2_xmin)\n intersect_ymax = tf.minimum(b1_ymax, b2_ymax)\n intersect_xmax = tf.minimum(b1_xmax, b2_xmax)\n intersect_width = tf.maximum(zero, intersect_xmax - intersect_xmin)\n intersect_height = tf.maximum(zero, intersect_ymax - intersect_ymin)\n intersect_area = intersect_width * intersect_height\n\n union_area = b1_area + b2_area - intersect_area\n iou = tf.math.divide_no_nan(intersect_area, union_area)\n if mode == 'iou':\n return iou\n\n enclose_ymin = tf.minimum(b1_ymin, b2_ymin)\n enclose_xmin = tf.minimum(b1_xmin, b2_xmin)\n enclose_ymax = tf.maximum(b1_ymax, b2_ymax)\n enclose_xmax = tf.maximum(b1_xmax, b2_xmax)\n enclose_width = tf.maximum(zero, enclose_xmax - enclose_xmin)\n enclose_height = tf.maximum(zero, enclose_ymax - enclose_ymin)\n enclose_area = enclose_width * enclose_height\n giou = iou - tf.math.divide_no_nan(\n (enclose_area - union_area), enclose_area)\n return giou\n", "path": "tensorflow_addons/losses/giou_loss.py"}, {"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\nimport tensorflow as tf\nfrom tensorflow_addons.utils.types import Number\nfrom tensorflow_addons.utils import types\nfrom typing import Optional\n\n\[email protected]_keras_serializable(package=\"Addons\")\ndef rrelu(\n x: types.TensorLike,\n lower: Number = 0.125,\n upper: Number = 0.3333333333333333,\n training: Optional[str] = None,\n seed: Optional[str] = None,\n) -> tf.Tensor:\n \"\"\"rrelu function.\n\n Computes rrelu function:\n `x if x > 0 else random(lower, upper) * x` or\n `x if x > 0 else x * (lower + upper) / 2`\n depending on whether training is enabled.\n\n See [Empirical Evaluation of Rectified Activations in Convolutional Network](https://arxiv.org/abs/1505.00853).\n\n Args:\n x: A `Tensor`. Must be one of the following types:\n `float16`, `float32`, `float64`.\n lower: `float`, lower bound for random alpha.\n upper: `float`, upper bound for random alpha.\n training: `bool`, indicating whether the `call`\n is meant for training or inference.\n seed: `int`, this sets the operation-level seed.\n Returns:\n result: A `Tensor`. 
Has the same type as `x`.\n \"\"\"\n x = tf.convert_to_tensor(x)\n if training is None:\n training = tf.keras.backend.learning_phase()\n training = bool(tf.keras.backend.get_value(training))\n\n if training:\n alpha = tf.random.uniform(\n tf.shape(x), minval=lower, maxval=upper, dtype=x.dtype, seed=seed\n )\n else:\n alpha = tf.cast((lower + upper) / 2, x.dtype)\n\n return tf.where(x >= 0, x, alpha * x)\n", "path": "tensorflow_addons/activations/rrelu.py"}], "after_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Implements GIoU loss.\"\"\"\n\nimport tensorflow as tf\n\nfrom tensorflow_addons.utils.types import TensorLike\nfrom typing import Optional\nfrom typeguard import typechecked\n\n\[email protected]_keras_serializable(package='Addons')\nclass GIoULoss(tf.keras.losses.Loss):\n \"\"\"Implements the GIoU loss function.\n\n GIoU loss was first introduced in the\n [Generalized Intersection over Union:\n A Metric and A Loss for Bounding Box Regression]\n (https://giou.stanford.edu/GIoU.pdf).\n GIoU is an enhancement for models which use IoU in object detection.\n\n Usage:\n\n ```python\n gl = tfa.losses.GIoULoss()\n boxes1 = tf.constant([[4.0, 3.0, 7.0, 5.0], [5.0, 6.0, 10.0, 7.0]])\n boxes2 = tf.constant([[3.0, 4.0, 6.0, 8.0], [14.0, 14.0, 15.0, 15.0]])\n loss = gl(boxes1, boxes2)\n print('Loss: ', loss.numpy()) # Loss: [1.07500000298023224, 1.9333333373069763]\n ```\n Usage with tf.keras API:\n\n ```python\n model = tf.keras.Model(inputs, outputs)\n model.compile('sgd', loss=tfa.losses.GIoULoss())\n ```\n\n Args:\n mode: one of ['giou', 'iou'], decided to calculate GIoU or IoU loss.\n \"\"\"\n @typechecked\n def __init__(self,\n mode: str = 'giou',\n reduction: str = tf.keras.losses.Reduction.AUTO,\n name: Optional[str] = 'giou_loss'):\n if mode not in ['giou', 'iou']:\n raise ValueError(\"Value of mode should be 'iou' or 'giou'\")\n super().__init__(name=name, reduction=reduction)\n self.mode = mode\n\n def get_config(self):\n base_config = super().get_config()\n base_config['mode'] = self.mode\n return base_config\n\n def call(self,\n y_true,\n y_pred):\n return giou_loss(y_true, y_pred, mode=self.mode)\n\n\[email protected]_keras_serializable(package='Addons')\[email protected]\ndef giou_loss(y_true: TensorLike,\n y_pred: TensorLike,\n mode: str = 'giou') -> tf.Tensor:\n \"\"\"\n Args:\n y_true: true targets tensor. The coordinates of the each bounding\n box in boxes are encoded as [y_min, x_min, y_max, x_max].\n y_pred: predictions tensor. 
The coordinates of the each bounding\n box in boxes are encoded as [y_min, x_min, y_max, x_max].\n mode: one of ['giou', 'iou'], decided to calculate GIoU or IoU loss.\n\n Returns:\n GIoU loss float `Tensor`.\n \"\"\"\n if mode not in ['giou', 'iou']:\n raise ValueError(\"Value of mode should be 'iou' or 'giou'\")\n y_pred = tf.convert_to_tensor(y_pred)\n if not y_pred.dtype.is_floating:\n y_pred = tf.cast(y_pred, tf.float32)\n y_true = tf.cast(y_true, y_pred.dtype)\n giou = _calculate_giou(y_pred, y_true, mode)\n\n return 1 - giou\n\n\ndef _calculate_giou(b1: TensorLike,\n b2: TensorLike,\n mode: str = 'giou') -> tf.Tensor:\n \"\"\"\n Args:\n b1: bounding box. The coordinates of the each bounding box in boxes are\n encoded as [y_min, x_min, y_max, x_max].\n b2: the other bounding box. The coordinates of the each bounding box\n in boxes are encoded as [y_min, x_min, y_max, x_max].\n mode: one of ['giou', 'iou'], decided to calculate GIoU or IoU loss.\n\n Returns:\n GIoU loss float `Tensor`.\n \"\"\"\n zero = tf.convert_to_tensor(0., b1.dtype)\n b1_ymin, b1_xmin, b1_ymax, b1_xmax = tf.unstack(b1, 4, axis=-1)\n b2_ymin, b2_xmin, b2_ymax, b2_xmax = tf.unstack(b2, 4, axis=-1)\n b1_width = tf.maximum(zero, b1_xmax - b1_xmin)\n b1_height = tf.maximum(zero, b1_ymax - b1_ymin)\n b2_width = tf.maximum(zero, b2_xmax - b2_xmin)\n b2_height = tf.maximum(zero, b2_ymax - b2_ymin)\n b1_area = b1_width * b1_height\n b2_area = b2_width * b2_height\n\n intersect_ymin = tf.maximum(b1_ymin, b2_ymin)\n intersect_xmin = tf.maximum(b1_xmin, b2_xmin)\n intersect_ymax = tf.minimum(b1_ymax, b2_ymax)\n intersect_xmax = tf.minimum(b1_xmax, b2_xmax)\n intersect_width = tf.maximum(zero, intersect_xmax - intersect_xmin)\n intersect_height = tf.maximum(zero, intersect_ymax - intersect_ymin)\n intersect_area = intersect_width * intersect_height\n\n union_area = b1_area + b2_area - intersect_area\n iou = tf.math.divide_no_nan(intersect_area, union_area)\n if mode == 'iou':\n return iou\n\n enclose_ymin = tf.minimum(b1_ymin, b2_ymin)\n enclose_xmin = tf.minimum(b1_xmin, b2_xmin)\n enclose_ymax = tf.maximum(b1_ymax, b2_ymax)\n enclose_xmax = tf.maximum(b1_xmax, b2_xmax)\n enclose_width = tf.maximum(zero, enclose_xmax - enclose_xmin)\n enclose_height = tf.maximum(zero, enclose_ymax - enclose_ymin)\n enclose_area = enclose_width * enclose_height\n giou = iou - tf.math.divide_no_nan(\n (enclose_area - union_area), enclose_area)\n return giou\n", "path": "tensorflow_addons/losses/giou_loss.py"}, {"content": "# Copyright 2019 The TensorFlow Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\nimport tensorflow as tf\nfrom tensorflow_addons.utils.types import Number\nfrom tensorflow_addons.utils import types\nfrom typing import Optional\n\n\[email protected]_keras_serializable(package=\"Addons\")\ndef rrelu(\n x: types.TensorLike,\n lower: Number = 0.125,\n upper: Number = 0.3333333333333333,\n training: Optional[bool] = None,\n seed: Optional[int] = None,\n) -> tf.Tensor:\n \"\"\"rrelu function.\n\n Computes rrelu function:\n `x if x > 0 else random(lower, upper) * x` or\n `x if x > 0 else x * (lower + upper) / 2`\n depending on whether training is enabled.\n\n See [Empirical Evaluation of Rectified Activations in Convolutional Network](https://arxiv.org/abs/1505.00853).\n\n Args:\n x: A `Tensor`. Must be one of the following types:\n `float16`, `float32`, `float64`.\n lower: `float`, lower bound for random alpha.\n upper: `float`, upper bound for random alpha.\n training: `bool`, indicating whether the `call`\n is meant for training or inference.\n seed: `int`, this sets the operation-level seed.\n Returns:\n result: A `Tensor`. Has the same type as `x`.\n \"\"\"\n x = tf.convert_to_tensor(x)\n if training is None:\n training = tf.keras.backend.learning_phase()\n training = bool(tf.keras.backend.get_value(training))\n\n if training:\n alpha = tf.random.uniform(\n tf.shape(x), minval=lower, maxval=upper, dtype=x.dtype, seed=seed\n )\n else:\n alpha = tf.cast((lower + upper) / 2, x.dtype)\n\n return tf.where(x >= 0, x, alpha * x)\n", "path": "tensorflow_addons/activations/rrelu.py"}]}
| 3,019 | 455 |
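For the tensorflow-addons record above, the practical effect of the corrected hints (`training: Optional[bool]`, `seed: Optional[int]`, and tensors instead of `List[int]` in `_calculate_giou`) is easiest to see at a call site. A small sketch, assuming TensorFlow and a tensorflow-addons build that already carries the patched signatures:

```python
import tensorflow as tf
import tensorflow_addons as tfa

x = tf.constant([-1.0, 0.5, 2.0])

# Matches the corrected hints: a bool for `training`, an int for `seed`.
y = tfa.activations.rrelu(x, training=True, seed=42)

# With the original Optional[str] hints, a checker such as mypy would have
# accepted seed="42" and flagged the call above instead, which is backwards.
```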
gh_patches_debug_1673
|
rasdani/github-patches
|
git_diff
|
instadeepai__Mava-626
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[TEST] Jax Datasets
### What do you want to test?
Jax dataset components
### Outline of test structure
* Unit tests
* Test components and hooks
### Definition of done
Passing checks, cover all hooks, edge cases considered
### Mandatory checklist before making a PR
* [ ] The success criteria laid down in “Definition of done” are met.
* [ ] Test code is documented - docstrings for methods and classes, static types for arguments.
* [ ] Documentation is updated - README, CONTRIBUTING, or other documentation.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mava/components/jax/building/datasets.py`
Content:
```
1 # python3
2 # Copyright 2021 InstaDeep Ltd. All rights reserved.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 """Commonly used dataset components for system builders"""
17 import abc
18 from dataclasses import dataclass
19 from typing import Any, Callable, Optional
20
21 import reverb
22 from acme import datasets
23
24 from mava.components.jax import Component
25 from mava.core_jax import SystemBuilder
26
27 Transform = Callable[[reverb.ReplaySample], reverb.ReplaySample]
28
29
30 class TrainerDataset(Component):
31 @abc.abstractmethod
32 def __init__(
33 self,
34 config: Any,
35 ):
36 """_summary_
37
38 Args:
39 config : _description_.
40 """
41 self.config = config
42
43 @abc.abstractmethod
44 def on_building_trainer_dataset(self, builder: SystemBuilder) -> None:
45 """_summary_
46
47 Args:
48 builder : _description_
49 """
50 pass
51
52 @staticmethod
53 def name() -> str:
54 """_summary_
55
56 Returns:
57 _description_
58 """
59 return "trainer_dataset"
60
61
62 @dataclass
63 class TransitionDatasetConfig:
64 sample_batch_size: int = 256
65 prefetch_size: Optional[int] = None
66 num_parallel_calls: int = 12
67 max_in_flight_samples_per_worker: Optional[int] = None
68 postprocess: Optional[Transform] = None
69 # dataset_name: str = "transition_dataset"
70
71
72 class TransitionDataset(TrainerDataset):
73 def __init__(
74 self,
75 config: TransitionDatasetConfig = TransitionDatasetConfig(),
76 ):
77 """_summary_
78
79 Args:
80 config : _description_.
81 """
82 self.config = config
83
84 def on_building_trainer_dataset(self, builder: SystemBuilder) -> None:
85 """_summary_
86
87 Args:
88 builder : _description_
89 """
90 max_in_flight_samples_per_worker = self.config.max_in_flight_samples_per_worker
91 dataset = datasets.make_reverb_dataset(
92 table=builder.store.trainer_id,
93 server_address=builder.store.data_server_client.server_address,
94 batch_size=self.config.sample_batch_size,
95 prefetch_size=self.config.prefetch_size,
96 num_parallel_calls=self.config.num_parallel_calls,
97 max_in_flight_samples_per_worker=max_in_flight_samples_per_worker,
98 postprocess=self.config.postprocess,
99 )
100
101 builder.store.dataset = iter(dataset)
102
103 @staticmethod
104 def config_class() -> Optional[Callable]:
105 """Config class used for component.
106
107 Returns:
108 config class/dataclass for component.
109 """
110 return TransitionDatasetConfig
111
112
113 @dataclass
114 class TrajectoryDatasetConfig:
115 sample_batch_size: int = 256
116 max_in_flight_samples_per_worker: int = 512
117 num_workers_per_iterator: int = -1
118 max_samples_per_stream: int = -1
119 rate_limiter_timeout_ms: int = -1
120 get_signature_timeout_secs: Optional[int] = None
121 # max_samples: int = -1
122 # dataset_name: str = "trajectory_dataset"
123
124
125 class TrajectoryDataset(TrainerDataset):
126 def __init__(
127 self,
128 config: TrajectoryDatasetConfig = TrajectoryDatasetConfig(),
129 ):
130 """_summary_
131
132 Args:
133 config : _description_.
134 """
135 self.config = config
136
137 def on_building_trainer_dataset(self, builder: SystemBuilder) -> None:
138 """_summary_
139
140 Args:
141 builder : _description_
142 """
143 dataset = reverb.TrajectoryDataset.from_table_signature(
144 server_address=builder.store.data_server_client.server_address,
145 table=builder.store.trainer_id,
146 max_in_flight_samples_per_worker=2 * self.config.sample_batch_size,
147 num_workers_per_iterator=self.config.num_workers_per_iterator,
148 max_samples_per_stream=self.config.max_samples_per_stream,
149 rate_limiter_timeout_ms=self.config.rate_limiter_timeout_ms,
150 get_signature_timeout_secs=self.config.get_signature_timeout_secs,
151 # max_samples=self.config.max_samples,
152 )
153
154 # Add batch dimension.
155 dataset = dataset.batch(self.config.sample_batch_size, drop_remainder=True)
156 builder.store.sample_batch_size = self.config.sample_batch_size
157
158 builder.store.dataset_iterator = dataset.as_numpy_iterator()
159
160 @staticmethod
161 def config_class() -> Optional[Callable]:
162 """Config class used for component.
163
164 Returns:
165 config class/dataclass for component.
166 """
167 return TrajectoryDatasetConfig
168
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mava/components/jax/building/datasets.py b/mava/components/jax/building/datasets.py
--- a/mava/components/jax/building/datasets.py
+++ b/mava/components/jax/building/datasets.py
@@ -98,7 +98,7 @@
postprocess=self.config.postprocess,
)
- builder.store.dataset = iter(dataset)
+ builder.store.dataset_iterator = iter(dataset)
@staticmethod
def config_class() -> Optional[Callable]:
|
{"golden_diff": "diff --git a/mava/components/jax/building/datasets.py b/mava/components/jax/building/datasets.py\n--- a/mava/components/jax/building/datasets.py\n+++ b/mava/components/jax/building/datasets.py\n@@ -98,7 +98,7 @@\n postprocess=self.config.postprocess,\n )\n \n- builder.store.dataset = iter(dataset)\n+ builder.store.dataset_iterator = iter(dataset)\n \n @staticmethod\n def config_class() -> Optional[Callable]:\n", "issue": "[TEST] Jax Datasets\n### What do you want to test?\r\nJax dataset components\r\n\r\n### Outline of test structure\r\n* Unit tests\r\n* Test components and hooks\r\n\r\n### Definition of done\r\nPassing checks, cover all hooks, edge cases considered\r\n\r\n### Mandatory checklist before making a PR\r\n* [ ] The success criteria laid down in \u201cDefinition of done\u201d are met.\r\n* [ ] Test code is documented - docstrings for methods and classes, static types for arguments.\r\n* [ ] Documentation is updated - README, CONTRIBUTING, or other documentation.\n", "before_files": [{"content": "# python3\n# Copyright 2021 InstaDeep Ltd. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Commonly used dataset components for system builders\"\"\"\nimport abc\nfrom dataclasses import dataclass\nfrom typing import Any, Callable, Optional\n\nimport reverb\nfrom acme import datasets\n\nfrom mava.components.jax import Component\nfrom mava.core_jax import SystemBuilder\n\nTransform = Callable[[reverb.ReplaySample], reverb.ReplaySample]\n\n\nclass TrainerDataset(Component):\n @abc.abstractmethod\n def __init__(\n self,\n config: Any,\n ):\n \"\"\"_summary_\n\n Args:\n config : _description_.\n \"\"\"\n self.config = config\n\n @abc.abstractmethod\n def on_building_trainer_dataset(self, builder: SystemBuilder) -> None:\n \"\"\"_summary_\n\n Args:\n builder : _description_\n \"\"\"\n pass\n\n @staticmethod\n def name() -> str:\n \"\"\"_summary_\n\n Returns:\n _description_\n \"\"\"\n return \"trainer_dataset\"\n\n\n@dataclass\nclass TransitionDatasetConfig:\n sample_batch_size: int = 256\n prefetch_size: Optional[int] = None\n num_parallel_calls: int = 12\n max_in_flight_samples_per_worker: Optional[int] = None\n postprocess: Optional[Transform] = None\n # dataset_name: str = \"transition_dataset\"\n\n\nclass TransitionDataset(TrainerDataset):\n def __init__(\n self,\n config: TransitionDatasetConfig = TransitionDatasetConfig(),\n ):\n \"\"\"_summary_\n\n Args:\n config : _description_.\n \"\"\"\n self.config = config\n\n def on_building_trainer_dataset(self, builder: SystemBuilder) -> None:\n \"\"\"_summary_\n\n Args:\n builder : _description_\n \"\"\"\n max_in_flight_samples_per_worker = self.config.max_in_flight_samples_per_worker\n dataset = datasets.make_reverb_dataset(\n table=builder.store.trainer_id,\n server_address=builder.store.data_server_client.server_address,\n batch_size=self.config.sample_batch_size,\n prefetch_size=self.config.prefetch_size,\n num_parallel_calls=self.config.num_parallel_calls,\n 
max_in_flight_samples_per_worker=max_in_flight_samples_per_worker,\n postprocess=self.config.postprocess,\n )\n\n builder.store.dataset = iter(dataset)\n\n @staticmethod\n def config_class() -> Optional[Callable]:\n \"\"\"Config class used for component.\n\n Returns:\n config class/dataclass for component.\n \"\"\"\n return TransitionDatasetConfig\n\n\n@dataclass\nclass TrajectoryDatasetConfig:\n sample_batch_size: int = 256\n max_in_flight_samples_per_worker: int = 512\n num_workers_per_iterator: int = -1\n max_samples_per_stream: int = -1\n rate_limiter_timeout_ms: int = -1\n get_signature_timeout_secs: Optional[int] = None\n # max_samples: int = -1\n # dataset_name: str = \"trajectory_dataset\"\n\n\nclass TrajectoryDataset(TrainerDataset):\n def __init__(\n self,\n config: TrajectoryDatasetConfig = TrajectoryDatasetConfig(),\n ):\n \"\"\"_summary_\n\n Args:\n config : _description_.\n \"\"\"\n self.config = config\n\n def on_building_trainer_dataset(self, builder: SystemBuilder) -> None:\n \"\"\"_summary_\n\n Args:\n builder : _description_\n \"\"\"\n dataset = reverb.TrajectoryDataset.from_table_signature(\n server_address=builder.store.data_server_client.server_address,\n table=builder.store.trainer_id,\n max_in_flight_samples_per_worker=2 * self.config.sample_batch_size,\n num_workers_per_iterator=self.config.num_workers_per_iterator,\n max_samples_per_stream=self.config.max_samples_per_stream,\n rate_limiter_timeout_ms=self.config.rate_limiter_timeout_ms,\n get_signature_timeout_secs=self.config.get_signature_timeout_secs,\n # max_samples=self.config.max_samples,\n )\n\n # Add batch dimension.\n dataset = dataset.batch(self.config.sample_batch_size, drop_remainder=True)\n builder.store.sample_batch_size = self.config.sample_batch_size\n\n builder.store.dataset_iterator = dataset.as_numpy_iterator()\n\n @staticmethod\n def config_class() -> Optional[Callable]:\n \"\"\"Config class used for component.\n\n Returns:\n config class/dataclass for component.\n \"\"\"\n return TrajectoryDatasetConfig\n", "path": "mava/components/jax/building/datasets.py"}], "after_files": [{"content": "# python3\n# Copyright 2021 InstaDeep Ltd. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Commonly used dataset components for system builders\"\"\"\nimport abc\nfrom dataclasses import dataclass\nfrom typing import Any, Callable, Optional\n\nimport reverb\nfrom acme import datasets\n\nfrom mava.components.jax import Component\nfrom mava.core_jax import SystemBuilder\n\nTransform = Callable[[reverb.ReplaySample], reverb.ReplaySample]\n\n\nclass TrainerDataset(Component):\n @abc.abstractmethod\n def __init__(\n self,\n config: Any,\n ):\n \"\"\"_summary_\n\n Args:\n config : _description_.\n \"\"\"\n self.config = config\n\n @abc.abstractmethod\n def on_building_trainer_dataset(self, builder: SystemBuilder) -> None:\n \"\"\"_summary_\n\n Args:\n builder : _description_\n \"\"\"\n pass\n\n @staticmethod\n def name() -> str:\n \"\"\"_summary_\n\n Returns:\n _description_\n \"\"\"\n return \"trainer_dataset\"\n\n\n@dataclass\nclass TransitionDatasetConfig:\n sample_batch_size: int = 256\n prefetch_size: Optional[int] = None\n num_parallel_calls: int = 12\n max_in_flight_samples_per_worker: Optional[int] = None\n postprocess: Optional[Transform] = None\n # dataset_name: str = \"transition_dataset\"\n\n\nclass TransitionDataset(TrainerDataset):\n def __init__(\n self,\n config: TransitionDatasetConfig = TransitionDatasetConfig(),\n ):\n \"\"\"_summary_\n\n Args:\n config : _description_.\n \"\"\"\n self.config = config\n\n def on_building_trainer_dataset(self, builder: SystemBuilder) -> None:\n \"\"\"_summary_\n\n Args:\n builder : _description_\n \"\"\"\n max_in_flight_samples_per_worker = self.config.max_in_flight_samples_per_worker\n dataset = datasets.make_reverb_dataset(\n table=builder.store.trainer_id,\n server_address=builder.store.data_server_client.server_address,\n batch_size=self.config.sample_batch_size,\n prefetch_size=self.config.prefetch_size,\n num_parallel_calls=self.config.num_parallel_calls,\n max_in_flight_samples_per_worker=max_in_flight_samples_per_worker,\n postprocess=self.config.postprocess,\n )\n\n builder.store.dataset_iterator = iter(dataset)\n\n @staticmethod\n def config_class() -> Optional[Callable]:\n \"\"\"Config class used for component.\n\n Returns:\n config class/dataclass for component.\n \"\"\"\n return TransitionDatasetConfig\n\n\n@dataclass\nclass TrajectoryDatasetConfig:\n sample_batch_size: int = 256\n max_in_flight_samples_per_worker: int = 512\n num_workers_per_iterator: int = -1\n max_samples_per_stream: int = -1\n rate_limiter_timeout_ms: int = -1\n get_signature_timeout_secs: Optional[int] = None\n # max_samples: int = -1\n # dataset_name: str = \"trajectory_dataset\"\n\n\nclass TrajectoryDataset(TrainerDataset):\n def __init__(\n self,\n config: TrajectoryDatasetConfig = TrajectoryDatasetConfig(),\n ):\n \"\"\"_summary_\n\n Args:\n config : _description_.\n \"\"\"\n self.config = config\n\n def on_building_trainer_dataset(self, builder: SystemBuilder) -> None:\n \"\"\"_summary_\n\n Args:\n builder : _description_\n \"\"\"\n dataset = 
reverb.TrajectoryDataset.from_table_signature(\n server_address=builder.store.data_server_client.server_address,\n table=builder.store.trainer_id,\n max_in_flight_samples_per_worker=2 * self.config.sample_batch_size,\n num_workers_per_iterator=self.config.num_workers_per_iterator,\n max_samples_per_stream=self.config.max_samples_per_stream,\n rate_limiter_timeout_ms=self.config.rate_limiter_timeout_ms,\n get_signature_timeout_secs=self.config.get_signature_timeout_secs,\n # max_samples=self.config.max_samples,\n )\n\n # Add batch dimension.\n dataset = dataset.batch(self.config.sample_batch_size, drop_remainder=True)\n builder.store.sample_batch_size = self.config.sample_batch_size\n\n builder.store.dataset_iterator = dataset.as_numpy_iterator()\n\n @staticmethod\n def config_class() -> Optional[Callable]:\n \"\"\"Config class used for component.\n\n Returns:\n config class/dataclass for component.\n \"\"\"\n return TrajectoryDatasetConfig\n", "path": "mava/components/jax/building/datasets.py"}]}
| 1,845 | 109 |
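The Mava record above both renames `builder.store.dataset` to `builder.store.dataset_iterator` and asks for unit tests around the dataset components. A test sketch in that spirit is shown below; it stubs out the reverb-backed factory so no server is needed, is meant to be run with pytest, and is only an illustration, not the test suite that was actually added to the repository.

```python
from types import SimpleNamespace

from mava.components.jax.building import datasets as datasets_module


def test_transition_dataset_sets_dataset_iterator(monkeypatch):
    # Stub acme's reverb dataset factory so no reverb server is required.
    monkeypatch.setattr(
        datasets_module.datasets, "make_reverb_dataset", lambda **kwargs: iter([])
    )
    builder = SimpleNamespace(
        store=SimpleNamespace(
            trainer_id="trainer_0",
            data_server_client=SimpleNamespace(server_address="localhost:1234"),
        )
    )

    datasets_module.TransitionDataset().on_building_trainer_dataset(builder)

    # After the patch the iterator is exposed under a single, consistent name.
    assert hasattr(builder.store, "dataset_iterator")
```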
gh_patches_debug_31533
|
rasdani/github-patches
|
git_diff
|
Qiskit__qiskit-5943
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Edit user config file programatically
It would be great to be able to modify the user config file from Qiskit. Something like `user_config['default']['option'] = True`.
--- END ISSUE ---
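The request above amounts to rewriting the INI-style settings file from Python. A minimal sketch of the underlying mechanics using the standard `configparser` module; the path and the option written here are illustrative only:

```python
import configparser

path = "/tmp/qiskit_settings.conf"      # stand-in for ~/.qiskit/settings.conf

config = configparser.ConfigParser()
config.read(path)                        # keeps whatever the file already holds
if not config.has_section("default"):
    config.add_section("default")
config.set("default", "circuit_drawer", "mpl")

with open(path, "w") as cfgfile:
    config.write(cfgfile)
```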
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qiskit/user_config.py`
Content:
```
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2017, 2019.
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12
13 """Utils for reading a user preference config files."""
14
15 import configparser
16 import os
17 from warnings import warn
18
19 from qiskit import exceptions
20
21 DEFAULT_FILENAME = os.path.join(os.path.expanduser("~"), ".qiskit", "settings.conf")
22
23
24 class UserConfig:
25 """Class representing a user config file
26
27 The config file format should look like:
28
29 [default]
30 circuit_drawer = mpl
31 circuit_mpl_style = default
32 circuit_mpl_style_path = ~/.qiskit:<default location>
33 transpile_optimization_level = 1
34 parallel = False
35 num_processes = 4
36
37 """
38
39 def __init__(self, filename=None):
40 """Create a UserConfig
41
42 Args:
43 filename (str): The path to the user config file. If one isn't
44 specified, ~/.qiskit/settings.conf is used.
45 """
46 if filename is None:
47 self.filename = DEFAULT_FILENAME
48 else:
49 self.filename = filename
50 self.settings = {}
51 self.config_parser = configparser.ConfigParser()
52
53 def read_config_file(self):
54 """Read config file and parse the contents into the settings attr."""
55 if not os.path.isfile(self.filename):
56 return
57 self.config_parser.read(self.filename)
58 if "default" in self.config_parser.sections():
59 # Parse circuit_drawer
60 circuit_drawer = self.config_parser.get("default", "circuit_drawer", fallback=None)
61 if circuit_drawer:
62 if circuit_drawer not in ["text", "mpl", "latex", "latex_source", "auto"]:
63 raise exceptions.QiskitUserConfigError(
64 "%s is not a valid circuit drawer backend. Must be "
65 "either 'text', 'mpl', 'latex', 'latex_source', or "
66 "'auto'." % circuit_drawer
67 )
68 self.settings["circuit_drawer"] = circuit_drawer
69
70 # Parse state_drawer
71 state_drawer = self.config_parser.get("default", "state_drawer", fallback=None)
72 if state_drawer:
73 valid_state_drawers = [
74 "repr",
75 "text",
76 "latex",
77 "latex_source",
78 "qsphere",
79 "hinton",
80 "bloch",
81 ]
82 if state_drawer not in valid_state_drawers:
83 valid_choices_string = "', '".join(c for c in valid_state_drawers)
84 raise exceptions.QiskitUserConfigError(
85 f"'{state_drawer}' is not a valid state drawer backend. "
86 f"Choose from: '{valid_choices_string}'"
87 )
88 self.settings["state_drawer"] = state_drawer
89
90 # Parse circuit_mpl_style
91 circuit_mpl_style = self.config_parser.get(
92 "default", "circuit_mpl_style", fallback=None
93 )
94 if circuit_mpl_style:
95 if not isinstance(circuit_mpl_style, str):
96 warn(
97 "%s is not a valid mpl circuit style. Must be "
98 "a text string. Will not load style." % circuit_mpl_style,
99 UserWarning,
100 2,
101 )
102 self.settings["circuit_mpl_style"] = circuit_mpl_style
103
104 # Parse circuit_mpl_style_path
105 circuit_mpl_style_path = self.config_parser.get(
106 "default", "circuit_mpl_style_path", fallback=None
107 )
108 if circuit_mpl_style_path:
109 cpath_list = circuit_mpl_style_path.split(":")
110 for path in cpath_list:
111 if not os.path.exists(os.path.expanduser(path)):
112 warn(
113 "%s is not a valid circuit mpl style path."
114 " Correct the path in ~/.qiskit/settings.conf." % path,
115 UserWarning,
116 2,
117 )
118 self.settings["circuit_mpl_style_path"] = cpath_list
119
120 # Parse transpile_optimization_level
121 transpile_optimization_level = self.config_parser.getint(
122 "default", "transpile_optimization_level", fallback=-1
123 )
124 if transpile_optimization_level != -1:
125 if transpile_optimization_level < 0 or transpile_optimization_level > 3:
126 raise exceptions.QiskitUserConfigError(
127 "%s is not a valid optimization level. Must be " "0, 1, 2, or 3."
128 )
129 self.settings["transpile_optimization_level"] = transpile_optimization_level
130
131 # Parse parallel
132 parallel_enabled = self.config_parser.getboolean("default", "parallel", fallback=None)
133 if parallel_enabled is not None:
134 self.settings["parallel_enabled"] = parallel_enabled
135
136 # Parse num_processes
137 num_processes = self.config_parser.getint("default", "num_processes", fallback=-1)
138 if num_processes != -1:
139 if num_processes <= 0:
140 raise exceptions.QiskitUserConfigError(
141 "%s is not a valid number of processes. Must be " "greater than 0"
142 )
143 self.settings["num_processes"] = num_processes
144
145
146 def get_config():
147 """Read the config file from the default location or env var
148
149 It will read a config file at either the default location
150 ~/.qiskit/settings.conf or if set the value of the QISKIT_SETTINGS env var.
151
152 It will return the parsed settings dict from the parsed config file.
153 Returns:
154 dict: The settings dict from the parsed config file.
155 """
156 filename = os.getenv("QISKIT_SETTINGS", DEFAULT_FILENAME)
157 if not os.path.isfile(filename):
158 return {}
159 user_config = UserConfig(filename)
160 user_config.read_config_file()
161 return user_config.settings
162
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/qiskit/user_config.py b/qiskit/user_config.py
--- a/qiskit/user_config.py
+++ b/qiskit/user_config.py
@@ -143,6 +143,70 @@
self.settings["num_processes"] = num_processes
+def set_config(key, value, section=None, file_path=None):
+ """Adds or modifies a user configuration
+
+ It will add configuration to the currently configured location
+ or the value of file argument.
+
+ Only valid user config can be set in 'default' section. Custom
+ user config can be added in any other sections.
+
+ Changes to the existing config file will not be reflected in
+ the current session since the config file is parsed at import time.
+
+ Args:
+ key (str): name of the config
+ value (obj): value of the config
+ section (str, optional): if not specified, adds it to the
+ `default` section of the config file.
+ file_path (str, optional): the file to which config is added.
+ If not specified, adds it to the default config file or
+ if set, the value of `QISKIT_SETTINGS` env variable.
+
+ Raises:
+ QiskitUserConfigError: if the config is invalid
+ """
+ filename = file_path or os.getenv("QISKIT_SETTINGS", DEFAULT_FILENAME)
+ section = "default" if section is None else section
+
+ if not isinstance(key, str):
+ raise exceptions.QiskitUserConfigError("Key must be string type")
+
+ valid_config = {
+ "circuit_drawer",
+ "circuit_mpl_style",
+ "circuit_mpl_style_path",
+ "transpile_optimization_level",
+ "parallel",
+ "num_processes",
+ }
+
+ if section in [None, "default"]:
+ if key not in valid_config:
+ raise exceptions.QiskitUserConfigError("{} is not a valid user config.".format(key))
+
+ config = configparser.ConfigParser()
+ config.read(filename)
+
+ if section not in config.sections():
+ config.add_section(section)
+
+ config.set(section, key, str(value))
+
+ try:
+ with open(filename, "w") as cfgfile:
+ config.write(cfgfile)
+ except OSError as ex:
+ raise exceptions.QiskitUserConfigError(
+ "Unable to load the config file {}. Error: '{}'".format(filename, str(ex))
+ )
+
+ # validates config
+ user_config = UserConfig(filename)
+ user_config.read_config_file()
+
+
def get_config():
"""Read the config file from the default location or env var
|
{"golden_diff": "diff --git a/qiskit/user_config.py b/qiskit/user_config.py\n--- a/qiskit/user_config.py\n+++ b/qiskit/user_config.py\n@@ -143,6 +143,70 @@\n self.settings[\"num_processes\"] = num_processes\n \n \n+def set_config(key, value, section=None, file_path=None):\n+ \"\"\"Adds or modifies a user configuration\n+\n+ It will add configuration to the currently configured location\n+ or the value of file argument.\n+\n+ Only valid user config can be set in 'default' section. Custom\n+ user config can be added in any other sections.\n+\n+ Changes to the existing config file will not be reflected in\n+ the current session since the config file is parsed at import time.\n+\n+ Args:\n+ key (str): name of the config\n+ value (obj): value of the config\n+ section (str, optional): if not specified, adds it to the\n+ `default` section of the config file.\n+ file_path (str, optional): the file to which config is added.\n+ If not specified, adds it to the default config file or\n+ if set, the value of `QISKIT_SETTINGS` env variable.\n+\n+ Raises:\n+ QiskitUserConfigError: if the config is invalid\n+ \"\"\"\n+ filename = file_path or os.getenv(\"QISKIT_SETTINGS\", DEFAULT_FILENAME)\n+ section = \"default\" if section is None else section\n+\n+ if not isinstance(key, str):\n+ raise exceptions.QiskitUserConfigError(\"Key must be string type\")\n+\n+ valid_config = {\n+ \"circuit_drawer\",\n+ \"circuit_mpl_style\",\n+ \"circuit_mpl_style_path\",\n+ \"transpile_optimization_level\",\n+ \"parallel\",\n+ \"num_processes\",\n+ }\n+\n+ if section in [None, \"default\"]:\n+ if key not in valid_config:\n+ raise exceptions.QiskitUserConfigError(\"{} is not a valid user config.\".format(key))\n+\n+ config = configparser.ConfigParser()\n+ config.read(filename)\n+\n+ if section not in config.sections():\n+ config.add_section(section)\n+\n+ config.set(section, key, str(value))\n+\n+ try:\n+ with open(filename, \"w\") as cfgfile:\n+ config.write(cfgfile)\n+ except OSError as ex:\n+ raise exceptions.QiskitUserConfigError(\n+ \"Unable to load the config file {}. Error: '{}'\".format(filename, str(ex))\n+ )\n+\n+ # validates config\n+ user_config = UserConfig(filename)\n+ user_config.read_config_file()\n+\n+\n def get_config():\n \"\"\"Read the config file from the default location or env var\n", "issue": "Edit user config file programatically\nIt would be great to be able to modify the user config file from Qiskit. Something like `user_config['default']['option'] = True`.\n", "before_files": [{"content": "# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2017, 2019.\n#\n# This code is licensed under the Apache License, Version 2.0. 
You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"Utils for reading a user preference config files.\"\"\"\n\nimport configparser\nimport os\nfrom warnings import warn\n\nfrom qiskit import exceptions\n\nDEFAULT_FILENAME = os.path.join(os.path.expanduser(\"~\"), \".qiskit\", \"settings.conf\")\n\n\nclass UserConfig:\n \"\"\"Class representing a user config file\n\n The config file format should look like:\n\n [default]\n circuit_drawer = mpl\n circuit_mpl_style = default\n circuit_mpl_style_path = ~/.qiskit:<default location>\n transpile_optimization_level = 1\n parallel = False\n num_processes = 4\n\n \"\"\"\n\n def __init__(self, filename=None):\n \"\"\"Create a UserConfig\n\n Args:\n filename (str): The path to the user config file. If one isn't\n specified, ~/.qiskit/settings.conf is used.\n \"\"\"\n if filename is None:\n self.filename = DEFAULT_FILENAME\n else:\n self.filename = filename\n self.settings = {}\n self.config_parser = configparser.ConfigParser()\n\n def read_config_file(self):\n \"\"\"Read config file and parse the contents into the settings attr.\"\"\"\n if not os.path.isfile(self.filename):\n return\n self.config_parser.read(self.filename)\n if \"default\" in self.config_parser.sections():\n # Parse circuit_drawer\n circuit_drawer = self.config_parser.get(\"default\", \"circuit_drawer\", fallback=None)\n if circuit_drawer:\n if circuit_drawer not in [\"text\", \"mpl\", \"latex\", \"latex_source\", \"auto\"]:\n raise exceptions.QiskitUserConfigError(\n \"%s is not a valid circuit drawer backend. Must be \"\n \"either 'text', 'mpl', 'latex', 'latex_source', or \"\n \"'auto'.\" % circuit_drawer\n )\n self.settings[\"circuit_drawer\"] = circuit_drawer\n\n # Parse state_drawer\n state_drawer = self.config_parser.get(\"default\", \"state_drawer\", fallback=None)\n if state_drawer:\n valid_state_drawers = [\n \"repr\",\n \"text\",\n \"latex\",\n \"latex_source\",\n \"qsphere\",\n \"hinton\",\n \"bloch\",\n ]\n if state_drawer not in valid_state_drawers:\n valid_choices_string = \"', '\".join(c for c in valid_state_drawers)\n raise exceptions.QiskitUserConfigError(\n f\"'{state_drawer}' is not a valid state drawer backend. \"\n f\"Choose from: '{valid_choices_string}'\"\n )\n self.settings[\"state_drawer\"] = state_drawer\n\n # Parse circuit_mpl_style\n circuit_mpl_style = self.config_parser.get(\n \"default\", \"circuit_mpl_style\", fallback=None\n )\n if circuit_mpl_style:\n if not isinstance(circuit_mpl_style, str):\n warn(\n \"%s is not a valid mpl circuit style. Must be \"\n \"a text string. 
Will not load style.\" % circuit_mpl_style,\n UserWarning,\n 2,\n )\n self.settings[\"circuit_mpl_style\"] = circuit_mpl_style\n\n # Parse circuit_mpl_style_path\n circuit_mpl_style_path = self.config_parser.get(\n \"default\", \"circuit_mpl_style_path\", fallback=None\n )\n if circuit_mpl_style_path:\n cpath_list = circuit_mpl_style_path.split(\":\")\n for path in cpath_list:\n if not os.path.exists(os.path.expanduser(path)):\n warn(\n \"%s is not a valid circuit mpl style path.\"\n \" Correct the path in ~/.qiskit/settings.conf.\" % path,\n UserWarning,\n 2,\n )\n self.settings[\"circuit_mpl_style_path\"] = cpath_list\n\n # Parse transpile_optimization_level\n transpile_optimization_level = self.config_parser.getint(\n \"default\", \"transpile_optimization_level\", fallback=-1\n )\n if transpile_optimization_level != -1:\n if transpile_optimization_level < 0 or transpile_optimization_level > 3:\n raise exceptions.QiskitUserConfigError(\n \"%s is not a valid optimization level. Must be \" \"0, 1, 2, or 3.\"\n )\n self.settings[\"transpile_optimization_level\"] = transpile_optimization_level\n\n # Parse parallel\n parallel_enabled = self.config_parser.getboolean(\"default\", \"parallel\", fallback=None)\n if parallel_enabled is not None:\n self.settings[\"parallel_enabled\"] = parallel_enabled\n\n # Parse num_processes\n num_processes = self.config_parser.getint(\"default\", \"num_processes\", fallback=-1)\n if num_processes != -1:\n if num_processes <= 0:\n raise exceptions.QiskitUserConfigError(\n \"%s is not a valid number of processes. Must be \" \"greater than 0\"\n )\n self.settings[\"num_processes\"] = num_processes\n\n\ndef get_config():\n \"\"\"Read the config file from the default location or env var\n\n It will read a config file at either the default location\n ~/.qiskit/settings.conf or if set the value of the QISKIT_SETTINGS env var.\n\n It will return the parsed settings dict from the parsed config file.\n Returns:\n dict: The settings dict from the parsed config file.\n \"\"\"\n filename = os.getenv(\"QISKIT_SETTINGS\", DEFAULT_FILENAME)\n if not os.path.isfile(filename):\n return {}\n user_config = UserConfig(filename)\n user_config.read_config_file()\n return user_config.settings\n", "path": "qiskit/user_config.py"}], "after_files": [{"content": "# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2017, 2019.\n#\n# This code is licensed under the Apache License, Version 2.0. You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"Utils for reading a user preference config files.\"\"\"\n\nimport configparser\nimport os\nfrom warnings import warn\n\nfrom qiskit import exceptions\n\nDEFAULT_FILENAME = os.path.join(os.path.expanduser(\"~\"), \".qiskit\", \"settings.conf\")\n\n\nclass UserConfig:\n \"\"\"Class representing a user config file\n\n The config file format should look like:\n\n [default]\n circuit_drawer = mpl\n circuit_mpl_style = default\n circuit_mpl_style_path = ~/.qiskit:<default location>\n transpile_optimization_level = 1\n parallel = False\n num_processes = 4\n\n \"\"\"\n\n def __init__(self, filename=None):\n \"\"\"Create a UserConfig\n\n Args:\n filename (str): The path to the user config file. 
If one isn't\n specified, ~/.qiskit/settings.conf is used.\n \"\"\"\n if filename is None:\n self.filename = DEFAULT_FILENAME\n else:\n self.filename = filename\n self.settings = {}\n self.config_parser = configparser.ConfigParser()\n\n def read_config_file(self):\n \"\"\"Read config file and parse the contents into the settings attr.\"\"\"\n if not os.path.isfile(self.filename):\n return\n self.config_parser.read(self.filename)\n if \"default\" in self.config_parser.sections():\n # Parse circuit_drawer\n circuit_drawer = self.config_parser.get(\"default\", \"circuit_drawer\", fallback=None)\n if circuit_drawer:\n if circuit_drawer not in [\"text\", \"mpl\", \"latex\", \"latex_source\", \"auto\"]:\n raise exceptions.QiskitUserConfigError(\n \"%s is not a valid circuit drawer backend. Must be \"\n \"either 'text', 'mpl', 'latex', 'latex_source', or \"\n \"'auto'.\" % circuit_drawer\n )\n self.settings[\"circuit_drawer\"] = circuit_drawer\n\n # Parse state_drawer\n state_drawer = self.config_parser.get(\"default\", \"state_drawer\", fallback=None)\n if state_drawer:\n valid_state_drawers = [\n \"repr\",\n \"text\",\n \"latex\",\n \"latex_source\",\n \"qsphere\",\n \"hinton\",\n \"bloch\",\n ]\n if state_drawer not in valid_state_drawers:\n valid_choices_string = \"', '\".join(c for c in valid_state_drawers)\n raise exceptions.QiskitUserConfigError(\n f\"'{state_drawer}' is not a valid state drawer backend. \"\n f\"Choose from: '{valid_choices_string}'\"\n )\n self.settings[\"state_drawer\"] = state_drawer\n\n # Parse circuit_mpl_style\n circuit_mpl_style = self.config_parser.get(\n \"default\", \"circuit_mpl_style\", fallback=None\n )\n if circuit_mpl_style:\n if not isinstance(circuit_mpl_style, str):\n warn(\n \"%s is not a valid mpl circuit style. Must be \"\n \"a text string. Will not load style.\" % circuit_mpl_style,\n UserWarning,\n 2,\n )\n self.settings[\"circuit_mpl_style\"] = circuit_mpl_style\n\n # Parse circuit_mpl_style_path\n circuit_mpl_style_path = self.config_parser.get(\n \"default\", \"circuit_mpl_style_path\", fallback=None\n )\n if circuit_mpl_style_path:\n cpath_list = circuit_mpl_style_path.split(\":\")\n for path in cpath_list:\n if not os.path.exists(os.path.expanduser(path)):\n warn(\n \"%s is not a valid circuit mpl style path.\"\n \" Correct the path in ~/.qiskit/settings.conf.\" % path,\n UserWarning,\n 2,\n )\n self.settings[\"circuit_mpl_style_path\"] = cpath_list\n\n # Parse transpile_optimization_level\n transpile_optimization_level = self.config_parser.getint(\n \"default\", \"transpile_optimization_level\", fallback=-1\n )\n if transpile_optimization_level != -1:\n if transpile_optimization_level < 0 or transpile_optimization_level > 3:\n raise exceptions.QiskitUserConfigError(\n \"%s is not a valid optimization level. Must be \" \"0, 1, 2, or 3.\"\n )\n self.settings[\"transpile_optimization_level\"] = transpile_optimization_level\n\n # Parse parallel\n parallel_enabled = self.config_parser.getboolean(\"default\", \"parallel\", fallback=None)\n if parallel_enabled is not None:\n self.settings[\"parallel_enabled\"] = parallel_enabled\n\n # Parse num_processes\n num_processes = self.config_parser.getint(\"default\", \"num_processes\", fallback=-1)\n if num_processes != -1:\n if num_processes <= 0:\n raise exceptions.QiskitUserConfigError(\n \"%s is not a valid number of processes. 
Must be \" \"greater than 0\"\n )\n self.settings[\"num_processes\"] = num_processes\n\n\ndef set_config(key, value, section=None, file_path=None):\n \"\"\"Adds or modifies a user configuration\n\n It will add configuration to the currently configured location\n or the value of file argument.\n\n Only valid user config can be set in 'default' section. Custom\n user config can be added in any other sections.\n\n Changes to the existing config file will not be reflected in\n the current session since the config file is parsed at import time.\n\n Args:\n key (str): name of the config\n value (obj): value of the config\n section (str, optional): if not specified, adds it to the\n `default` section of the config file.\n file_path (str, optional): the file to which config is added.\n If not specified, adds it to the default config file or\n if set, the value of `QISKIT_SETTINGS` env variable.\n\n Raises:\n QiskitUserConfigError: if the config is invalid\n \"\"\"\n filename = file_path or os.getenv(\"QISKIT_SETTINGS\", DEFAULT_FILENAME)\n section = \"default\" if section is None else section\n\n if not isinstance(key, str):\n raise exceptions.QiskitUserConfigError(\"Key must be string type\")\n\n valid_config = {\n \"circuit_drawer\",\n \"circuit_mpl_style\",\n \"circuit_mpl_style_path\",\n \"transpile_optimization_level\",\n \"parallel\",\n \"num_processes\",\n }\n\n if section in [None, \"default\"]:\n if key not in valid_config:\n raise exceptions.QiskitUserConfigError(\"{} is not a valid user config.\".format(key))\n\n config = configparser.ConfigParser()\n config.read(filename)\n\n if section not in config.sections():\n config.add_section(section)\n\n config.set(section, key, str(value))\n\n try:\n with open(filename, \"w\") as cfgfile:\n config.write(cfgfile)\n except OSError as ex:\n raise exceptions.QiskitUserConfigError(\n \"Unable to load the config file {}. Error: '{}'\".format(filename, str(ex))\n )\n\n # validates config\n user_config = UserConfig(filename)\n user_config.read_config_file()\n\n\ndef get_config():\n \"\"\"Read the config file from the default location or env var\n\n It will read a config file at either the default location\n ~/.qiskit/settings.conf or if set the value of the QISKIT_SETTINGS env var.\n\n It will return the parsed settings dict from the parsed config file.\n Returns:\n dict: The settings dict from the parsed config file.\n \"\"\"\n filename = os.getenv(\"QISKIT_SETTINGS\", DEFAULT_FILENAME)\n if not os.path.isfile(filename):\n return {}\n user_config = UserConfig(filename)\n user_config.read_config_file()\n return user_config.settings\n", "path": "qiskit/user_config.py"}]}
| 1,986 | 608 |
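The qiskit patch in the row above introduces a `set_config` helper alongside the existing `get_config`. A minimal usage sketch, assuming the patched `qiskit.user_config` module is importable and `~/.qiskit/settings.conf` is writable; the key/section values below are illustrative, not part of the diff:

```python
# Sketch only: relies on the set_config/get_config helpers shown in the diff above.
from qiskit import user_config

# Valid keys go into the [default] section of ~/.qiskit/settings.conf.
user_config.set_config("circuit_drawer", "mpl")

# Keys outside the default section are not validated, so custom entries are allowed.
user_config.set_config("shots", "1024", section="experiments")

# get_config() re-parses the file, so the new default-section value is visible here.
print(user_config.get_config())
```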
gh_patches_debug_5778
|
rasdani/github-patches
|
git_diff
|
saleor__saleor-2099
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
'lines__product' is an invalid parameter to prefetch_related()
### What I'm trying to achieve
See order details page in the storefront.
### Steps to reproduce the problem
1. Create a digital product in the dashboard (no shipping required).
2. Log in as a customer and buy this product.
3. Once the order is created, go to order details page in the storefront. I've accessed it from my order confirmation email.
### What I expected to happen
See order details page in the storefront.
### What happened instead/how it failed
The following error happened:
```
Cannot find 'product' on OrderLine object, 'lines__product' is an invalid parameter to prefetch_related()
```
I don't know why it works for other orders that I have in the database (created by `populatedb`).

It seems that the problem is this line: https://github.com/mirumee/saleor/blob/master/saleor/order/views.py#L29
--- END ISSUE ---
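Before looking at the files, one quick way to confirm which relations `prefetch_related()` can actually traverse is to inspect the model's metadata. A sketch assuming a Django shell inside the Saleor project; the `OrderLine` import path is assumed to sit next to `Order` in `saleor.order.models`, everything else is illustrative:

```python
# Run inside the project, e.g. via `python manage.py shell`.
from saleor.order.models import OrderLine

# Relation names that prefetch_related()/select_related() can follow from OrderLine.
relations = [f.name for f in OrderLine._meta.get_fields() if f.is_relation]
print(relations)  # should list 'variant'; 'product' is absent, matching the error
```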
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/order/views.py`
Content:
```
1 import logging
2
3 from django.conf import settings
4 from django.contrib import auth, messages
5 from django.contrib.auth.decorators import login_required
6 from django.db import transaction
7 from django.http import Http404, HttpResponseForbidden
8 from django.shortcuts import get_object_or_404, redirect
9 from django.template.response import TemplateResponse
10 from django.urls import reverse
11 from django.utils.translation import pgettext_lazy
12 from django.views.decorators.csrf import csrf_exempt
13 from payments import PaymentStatus, RedirectNeeded
14
15 from . import FulfillmentStatus, OrderStatus
16 from ..account.forms import LoginForm
17 from ..account.models import User
18 from ..core.utils import get_client_ip
19 from .forms import (
20 OrderNoteForm, PasswordForm, PaymentDeleteForm, PaymentMethodsForm)
21 from .models import Order, OrderNote, Payment
22 from .utils import attach_order_to_user, check_order_status
23
24 logger = logging.getLogger(__name__)
25
26
27 def details(request, token):
28 orders = Order.objects.confirmed().prefetch_related(
29 'lines__product', 'fulfillments', 'fulfillments__lines',
30 'fulfillments__lines__order_line')
31 orders = orders.select_related(
32 'billing_address', 'shipping_address', 'user')
33 order = get_object_or_404(orders, token=token)
34 notes = order.notes.filter(is_public=True)
35 ctx = {'order': order, 'notes': notes}
36 if order.status == OrderStatus.UNFULFILLED:
37 user = request.user if request.user.is_authenticated else None
38 note = OrderNote(order=order, user=user)
39 note_form = OrderNoteForm(request.POST or None, instance=note)
40 ctx.update({'note_form': note_form})
41 if request.method == 'POST':
42 if note_form.is_valid():
43 note_form.save()
44 return redirect('order:details', token=order.token)
45 fulfillments = order.fulfillments.filter(
46 status=FulfillmentStatus.FULFILLED)
47 ctx.update({'fulfillments': fulfillments})
48 return TemplateResponse(request, 'order/details.html', ctx)
49
50
51 def payment(request, token):
52 orders = Order.objects.confirmed().filter(billing_address__isnull=False)
53 orders = Order.objects.prefetch_related('lines__variant')
54 orders = orders.select_related(
55 'billing_address', 'shipping_address', 'user')
56 order = get_object_or_404(orders, token=token)
57 payments = order.payments.all()
58 form_data = request.POST or None
59 try:
60 waiting_payment = order.payments.get(status=PaymentStatus.WAITING)
61 except Payment.DoesNotExist:
62 waiting_payment = None
63 waiting_payment_form = None
64 else:
65 form_data = None
66 waiting_payment_form = PaymentDeleteForm(
67 None, order=order, initial={'payment_id': waiting_payment.id})
68 if order.is_fully_paid() or not order.billing_address:
69 form_data = None
70 payment_form = None
71 if not order.is_pre_authorized():
72 payment_form = PaymentMethodsForm(form_data)
73 # FIXME: redirect if there is only one payment method
74 if payment_form.is_valid():
75 payment_method = payment_form.cleaned_data['method']
76 return redirect(
77 'order:payment', token=order.token, variant=payment_method)
78 ctx = {
79 'order': order, 'payment_form': payment_form, 'payments': payments,
80 'waiting_payment': waiting_payment,
81 'waiting_payment_form': waiting_payment_form}
82 return TemplateResponse(request, 'order/payment.html', ctx)
83
84
85 @check_order_status
86 def start_payment(request, order, variant):
87 waiting_payments = order.payments.filter(
88 status=PaymentStatus.WAITING).exists()
89 if waiting_payments:
90 return redirect('order:payment', token=order.token)
91 billing = order.billing_address
92 total = order.total
93 defaults = {
94 'total': total.gross.amount,
95 'tax': total.tax.amount,
96 'currency': total.currency,
97 'delivery': order.shipping_price.gross.amount,
98 'billing_first_name': billing.first_name,
99 'billing_last_name': billing.last_name,
100 'billing_address_1': billing.street_address_1,
101 'billing_address_2': billing.street_address_2,
102 'billing_city': billing.city,
103 'billing_postcode': billing.postal_code,
104 'billing_country_code': billing.country.code,
105 'billing_email': order.user_email,
106 'description': pgettext_lazy(
107 'Payment description', 'Order %(order_number)s') % {
108 'order_number': order},
109 'billing_country_area': billing.country_area,
110 'customer_ip_address': get_client_ip(request)}
111 variant_choices = settings.CHECKOUT_PAYMENT_CHOICES
112 if variant not in [code for code, dummy_name in variant_choices]:
113 raise Http404('%r is not a valid payment variant' % (variant,))
114 with transaction.atomic():
115 payment, dummy_created = Payment.objects.get_or_create(
116 variant=variant, status=PaymentStatus.WAITING, order=order,
117 defaults=defaults)
118 try:
119 form = payment.get_form(data=request.POST or None)
120 except RedirectNeeded as redirect_to:
121 return redirect(str(redirect_to))
122 except Exception:
123 logger.exception('Error communicating with the payment gateway')
124 msg = pgettext_lazy(
125 'Payment gateway error',
126 'Oops, it looks like we were unable to contact the selected '
127 'payment service')
128 messages.error(request, msg)
129 payment.change_status(PaymentStatus.ERROR)
130 return redirect('order:payment', token=order.token)
131 template = 'order/payment/%s.html' % variant
132 ctx = {'form': form, 'payment': payment}
133 return TemplateResponse(
134 request, [template, 'order/payment/default.html'], ctx)
135
136
137 @check_order_status
138 def cancel_payment(request, order):
139 form = PaymentDeleteForm(request.POST or None, order=order)
140 if form.is_valid():
141 with transaction.atomic():
142 form.save()
143 return redirect('order:payment', token=order.token)
144 return HttpResponseForbidden()
145
146
147 @csrf_exempt
148 def payment_success(request, token):
149 """Receive request from payment gateway after paying for an order.
150
151 Redirects user to payment success.
152 All post data and query strings are dropped.
153 """
154 url = reverse('order:checkout-success', kwargs={'token': token})
155 return redirect(url)
156
157
158 def checkout_success(request, token):
159 """Redirect user after placing an order.
160
161 Anonymous users are redirected to the checkout success page.
162 Registered users are redirected to order details page and the order
163 is attached to their account.
164 """
165 order = get_object_or_404(Order, token=token)
166 email = order.user_email
167 ctx = {'email': email, 'order': order}
168 if request.user.is_authenticated:
169 return TemplateResponse(request, 'order/checkout_success.html', ctx)
170 form_data = request.POST.copy()
171 if form_data:
172 form_data.update({'email': email})
173 register_form = PasswordForm(form_data or None)
174 if register_form.is_valid():
175 register_form.save()
176 password = register_form.cleaned_data.get('password')
177 user = auth.authenticate(
178 request=request, email=email, password=password)
179 auth.login(request, user)
180 attach_order_to_user(order, user)
181 return redirect('order:details', token=token)
182 user_exists = User.objects.filter(email=email).exists()
183 login_form = LoginForm(
184 initial={'username': email}) if user_exists else None
185 ctx.update({'form': register_form, 'login_form': login_form})
186 return TemplateResponse(
187 request, 'order/checkout_success_anonymous.html', ctx)
188
189
190 @login_required
191 def connect_order_with_user(request, token):
192 """Connect newly created order to an authenticated user."""
193 try:
194 order = Order.objects.get(user_email=request.user.email, token=token)
195 except Order.DoesNotExist:
196 order = None
197 if not order:
198 msg = pgettext_lazy(
199 'Connect order with user warning message',
200 "We couldn't assign the order to your account as the email"
201 " addresses don't match")
202 messages.warning(request, msg)
203 return redirect('account:details')
204 attach_order_to_user(order, request.user)
205 msg = pgettext_lazy(
206 'storefront message',
207 'The order is now assigned to your account')
208 messages.success(request, msg)
209 return redirect('order:details', token=order.token)
210
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/saleor/order/views.py b/saleor/order/views.py
--- a/saleor/order/views.py
+++ b/saleor/order/views.py
@@ -26,7 +26,7 @@
def details(request, token):
orders = Order.objects.confirmed().prefetch_related(
- 'lines__product', 'fulfillments', 'fulfillments__lines',
+ 'lines__variant', 'fulfillments', 'fulfillments__lines',
'fulfillments__lines__order_line')
orders = orders.select_related(
'billing_address', 'shipping_address', 'user')
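With this patch the details view prefetches `lines__variant`, the same lookup the payment view in this file already uses. A small sketch of exercising the corrected queryset; the token value and the per-line attribute access are illustrative:

```python
from saleor.order.models import Order

orders = Order.objects.confirmed().prefetch_related(
    'lines__variant', 'fulfillments', 'fulfillments__lines',
    'fulfillments__lines__order_line')
order = orders.get(token='replace-with-a-real-token')  # hypothetical token
for line in order.lines.all():
    print(line.variant)  # already prefetched, no extra query per line
```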
|
{"golden_diff": "diff --git a/saleor/order/views.py b/saleor/order/views.py\n--- a/saleor/order/views.py\n+++ b/saleor/order/views.py\n@@ -26,7 +26,7 @@\n \n def details(request, token):\n orders = Order.objects.confirmed().prefetch_related(\n- 'lines__product', 'fulfillments', 'fulfillments__lines',\n+ 'lines__variant', 'fulfillments', 'fulfillments__lines',\n 'fulfillments__lines__order_line')\n orders = orders.select_related(\n 'billing_address', 'shipping_address', 'user')\n", "issue": "'lines__product' is an invalid parameter to prefetch_related()\n### What I'm trying to achieve\r\nSee order details page in the storefront.\r\n\r\n### Steps to reproduce the problem\r\n1. Create a digital product in the dashboard (no shipping required).\r\n2. Log in as a customer and buy this product.\r\n3. Once the order is created, go to order details page in the storefront. I've accessed it from my order confirmation email.\r\n\r\n### What I expected to happen\r\nSee order details page in the storefront.\r\n\r\n### What happened instead/how it failed\r\nThe following error happened:\r\n\r\n```\r\nCannot find 'product' on OrderLine object, 'lines__product' is an invalid parameter to prefetch_related()\r\n```\r\n\r\nI don't know why why it works for other orders that I have in database (created by `populatedb`).\r\n\r\nSeems that the problem is this line : https://github.com/mirumee/saleor/blob/master/saleor/order/views.py#L29\n", "before_files": [{"content": "import logging\n\nfrom django.conf import settings\nfrom django.contrib import auth, messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.db import transaction\nfrom django.http import Http404, HttpResponseForbidden\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.urls import reverse\nfrom django.utils.translation import pgettext_lazy\nfrom django.views.decorators.csrf import csrf_exempt\nfrom payments import PaymentStatus, RedirectNeeded\n\nfrom . 
import FulfillmentStatus, OrderStatus\nfrom ..account.forms import LoginForm\nfrom ..account.models import User\nfrom ..core.utils import get_client_ip\nfrom .forms import (\n OrderNoteForm, PasswordForm, PaymentDeleteForm, PaymentMethodsForm)\nfrom .models import Order, OrderNote, Payment\nfrom .utils import attach_order_to_user, check_order_status\n\nlogger = logging.getLogger(__name__)\n\n\ndef details(request, token):\n orders = Order.objects.confirmed().prefetch_related(\n 'lines__product', 'fulfillments', 'fulfillments__lines',\n 'fulfillments__lines__order_line')\n orders = orders.select_related(\n 'billing_address', 'shipping_address', 'user')\n order = get_object_or_404(orders, token=token)\n notes = order.notes.filter(is_public=True)\n ctx = {'order': order, 'notes': notes}\n if order.status == OrderStatus.UNFULFILLED:\n user = request.user if request.user.is_authenticated else None\n note = OrderNote(order=order, user=user)\n note_form = OrderNoteForm(request.POST or None, instance=note)\n ctx.update({'note_form': note_form})\n if request.method == 'POST':\n if note_form.is_valid():\n note_form.save()\n return redirect('order:details', token=order.token)\n fulfillments = order.fulfillments.filter(\n status=FulfillmentStatus.FULFILLED)\n ctx.update({'fulfillments': fulfillments})\n return TemplateResponse(request, 'order/details.html', ctx)\n\n\ndef payment(request, token):\n orders = Order.objects.confirmed().filter(billing_address__isnull=False)\n orders = Order.objects.prefetch_related('lines__variant')\n orders = orders.select_related(\n 'billing_address', 'shipping_address', 'user')\n order = get_object_or_404(orders, token=token)\n payments = order.payments.all()\n form_data = request.POST or None\n try:\n waiting_payment = order.payments.get(status=PaymentStatus.WAITING)\n except Payment.DoesNotExist:\n waiting_payment = None\n waiting_payment_form = None\n else:\n form_data = None\n waiting_payment_form = PaymentDeleteForm(\n None, order=order, initial={'payment_id': waiting_payment.id})\n if order.is_fully_paid() or not order.billing_address:\n form_data = None\n payment_form = None\n if not order.is_pre_authorized():\n payment_form = PaymentMethodsForm(form_data)\n # FIXME: redirect if there is only one payment method\n if payment_form.is_valid():\n payment_method = payment_form.cleaned_data['method']\n return redirect(\n 'order:payment', token=order.token, variant=payment_method)\n ctx = {\n 'order': order, 'payment_form': payment_form, 'payments': payments,\n 'waiting_payment': waiting_payment,\n 'waiting_payment_form': waiting_payment_form}\n return TemplateResponse(request, 'order/payment.html', ctx)\n\n\n@check_order_status\ndef start_payment(request, order, variant):\n waiting_payments = order.payments.filter(\n status=PaymentStatus.WAITING).exists()\n if waiting_payments:\n return redirect('order:payment', token=order.token)\n billing = order.billing_address\n total = order.total\n defaults = {\n 'total': total.gross.amount,\n 'tax': total.tax.amount,\n 'currency': total.currency,\n 'delivery': order.shipping_price.gross.amount,\n 'billing_first_name': billing.first_name,\n 'billing_last_name': billing.last_name,\n 'billing_address_1': billing.street_address_1,\n 'billing_address_2': billing.street_address_2,\n 'billing_city': billing.city,\n 'billing_postcode': billing.postal_code,\n 'billing_country_code': billing.country.code,\n 'billing_email': order.user_email,\n 'description': pgettext_lazy(\n 'Payment description', 'Order %(order_number)s') % {\n 
'order_number': order},\n 'billing_country_area': billing.country_area,\n 'customer_ip_address': get_client_ip(request)}\n variant_choices = settings.CHECKOUT_PAYMENT_CHOICES\n if variant not in [code for code, dummy_name in variant_choices]:\n raise Http404('%r is not a valid payment variant' % (variant,))\n with transaction.atomic():\n payment, dummy_created = Payment.objects.get_or_create(\n variant=variant, status=PaymentStatus.WAITING, order=order,\n defaults=defaults)\n try:\n form = payment.get_form(data=request.POST or None)\n except RedirectNeeded as redirect_to:\n return redirect(str(redirect_to))\n except Exception:\n logger.exception('Error communicating with the payment gateway')\n msg = pgettext_lazy(\n 'Payment gateway error',\n 'Oops, it looks like we were unable to contact the selected '\n 'payment service')\n messages.error(request, msg)\n payment.change_status(PaymentStatus.ERROR)\n return redirect('order:payment', token=order.token)\n template = 'order/payment/%s.html' % variant\n ctx = {'form': form, 'payment': payment}\n return TemplateResponse(\n request, [template, 'order/payment/default.html'], ctx)\n\n\n@check_order_status\ndef cancel_payment(request, order):\n form = PaymentDeleteForm(request.POST or None, order=order)\n if form.is_valid():\n with transaction.atomic():\n form.save()\n return redirect('order:payment', token=order.token)\n return HttpResponseForbidden()\n\n\n@csrf_exempt\ndef payment_success(request, token):\n \"\"\"Receive request from payment gateway after paying for an order.\n\n Redirects user to payment success.\n All post data and query strings are dropped.\n \"\"\"\n url = reverse('order:checkout-success', kwargs={'token': token})\n return redirect(url)\n\n\ndef checkout_success(request, token):\n \"\"\"Redirect user after placing an order.\n\n Anonymous users are redirected to the checkout success page.\n Registered users are redirected to order details page and the order\n is attached to their account.\n \"\"\"\n order = get_object_or_404(Order, token=token)\n email = order.user_email\n ctx = {'email': email, 'order': order}\n if request.user.is_authenticated:\n return TemplateResponse(request, 'order/checkout_success.html', ctx)\n form_data = request.POST.copy()\n if form_data:\n form_data.update({'email': email})\n register_form = PasswordForm(form_data or None)\n if register_form.is_valid():\n register_form.save()\n password = register_form.cleaned_data.get('password')\n user = auth.authenticate(\n request=request, email=email, password=password)\n auth.login(request, user)\n attach_order_to_user(order, user)\n return redirect('order:details', token=token)\n user_exists = User.objects.filter(email=email).exists()\n login_form = LoginForm(\n initial={'username': email}) if user_exists else None\n ctx.update({'form': register_form, 'login_form': login_form})\n return TemplateResponse(\n request, 'order/checkout_success_anonymous.html', ctx)\n\n\n@login_required\ndef connect_order_with_user(request, token):\n \"\"\"Connect newly created order to an authenticated user.\"\"\"\n try:\n order = Order.objects.get(user_email=request.user.email, token=token)\n except Order.DoesNotExist:\n order = None\n if not order:\n msg = pgettext_lazy(\n 'Connect order with user warning message',\n \"We couldn't assign the order to your account as the email\"\n \" addresses don't match\")\n messages.warning(request, msg)\n return redirect('account:details')\n attach_order_to_user(order, request.user)\n msg = pgettext_lazy(\n 'storefront message',\n 'The order 
is now assigned to your account')\n messages.success(request, msg)\n return redirect('order:details', token=order.token)\n", "path": "saleor/order/views.py"}], "after_files": [{"content": "import logging\n\nfrom django.conf import settings\nfrom django.contrib import auth, messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.db import transaction\nfrom django.http import Http404, HttpResponseForbidden\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.urls import reverse\nfrom django.utils.translation import pgettext_lazy\nfrom django.views.decorators.csrf import csrf_exempt\nfrom payments import PaymentStatus, RedirectNeeded\n\nfrom . import FulfillmentStatus, OrderStatus\nfrom ..account.forms import LoginForm\nfrom ..account.models import User\nfrom ..core.utils import get_client_ip\nfrom .forms import (\n OrderNoteForm, PasswordForm, PaymentDeleteForm, PaymentMethodsForm)\nfrom .models import Order, OrderNote, Payment\nfrom .utils import attach_order_to_user, check_order_status\n\nlogger = logging.getLogger(__name__)\n\n\ndef details(request, token):\n orders = Order.objects.confirmed().prefetch_related(\n 'lines__variant', 'fulfillments', 'fulfillments__lines',\n 'fulfillments__lines__order_line')\n orders = orders.select_related(\n 'billing_address', 'shipping_address', 'user')\n order = get_object_or_404(orders, token=token)\n notes = order.notes.filter(is_public=True)\n ctx = {'order': order, 'notes': notes}\n if order.status == OrderStatus.UNFULFILLED:\n user = request.user if request.user.is_authenticated else None\n note = OrderNote(order=order, user=user)\n note_form = OrderNoteForm(request.POST or None, instance=note)\n ctx.update({'note_form': note_form})\n if request.method == 'POST':\n if note_form.is_valid():\n note_form.save()\n return redirect('order:details', token=order.token)\n fulfillments = order.fulfillments.filter(\n status=FulfillmentStatus.FULFILLED)\n ctx.update({'fulfillments': fulfillments})\n return TemplateResponse(request, 'order/details.html', ctx)\n\n\ndef payment(request, token):\n orders = Order.objects.confirmed().filter(billing_address__isnull=False)\n orders = Order.objects.prefetch_related('lines__variant')\n orders = orders.select_related(\n 'billing_address', 'shipping_address', 'user')\n order = get_object_or_404(orders, token=token)\n payments = order.payments.all()\n form_data = request.POST or None\n try:\n waiting_payment = order.payments.get(status=PaymentStatus.WAITING)\n except Payment.DoesNotExist:\n waiting_payment = None\n waiting_payment_form = None\n else:\n form_data = None\n waiting_payment_form = PaymentDeleteForm(\n None, order=order, initial={'payment_id': waiting_payment.id})\n if order.is_fully_paid() or not order.billing_address:\n form_data = None\n payment_form = None\n if not order.is_pre_authorized():\n payment_form = PaymentMethodsForm(form_data)\n # FIXME: redirect if there is only one payment method\n if payment_form.is_valid():\n payment_method = payment_form.cleaned_data['method']\n return redirect(\n 'order:payment', token=order.token, variant=payment_method)\n ctx = {\n 'order': order, 'payment_form': payment_form, 'payments': payments,\n 'waiting_payment': waiting_payment,\n 'waiting_payment_form': waiting_payment_form}\n return TemplateResponse(request, 'order/payment.html', ctx)\n\n\n@check_order_status\ndef start_payment(request, order, variant):\n waiting_payments = order.payments.filter(\n 
status=PaymentStatus.WAITING).exists()\n if waiting_payments:\n return redirect('order:payment', token=order.token)\n billing = order.billing_address\n total = order.total\n defaults = {\n 'total': total.gross.amount,\n 'tax': total.tax.amount,\n 'currency': total.currency,\n 'delivery': order.shipping_price.gross.amount,\n 'billing_first_name': billing.first_name,\n 'billing_last_name': billing.last_name,\n 'billing_address_1': billing.street_address_1,\n 'billing_address_2': billing.street_address_2,\n 'billing_city': billing.city,\n 'billing_postcode': billing.postal_code,\n 'billing_country_code': billing.country.code,\n 'billing_email': order.user_email,\n 'description': pgettext_lazy(\n 'Payment description', 'Order %(order_number)s') % {\n 'order_number': order},\n 'billing_country_area': billing.country_area,\n 'customer_ip_address': get_client_ip(request)}\n variant_choices = settings.CHECKOUT_PAYMENT_CHOICES\n if variant not in [code for code, dummy_name in variant_choices]:\n raise Http404('%r is not a valid payment variant' % (variant,))\n with transaction.atomic():\n payment, dummy_created = Payment.objects.get_or_create(\n variant=variant, status=PaymentStatus.WAITING, order=order,\n defaults=defaults)\n try:\n form = payment.get_form(data=request.POST or None)\n except RedirectNeeded as redirect_to:\n return redirect(str(redirect_to))\n except Exception:\n logger.exception('Error communicating with the payment gateway')\n msg = pgettext_lazy(\n 'Payment gateway error',\n 'Oops, it looks like we were unable to contact the selected '\n 'payment service')\n messages.error(request, msg)\n payment.change_status(PaymentStatus.ERROR)\n return redirect('order:payment', token=order.token)\n template = 'order/payment/%s.html' % variant\n ctx = {'form': form, 'payment': payment}\n return TemplateResponse(\n request, [template, 'order/payment/default.html'], ctx)\n\n\n@check_order_status\ndef cancel_payment(request, order):\n form = PaymentDeleteForm(request.POST or None, order=order)\n if form.is_valid():\n with transaction.atomic():\n form.save()\n return redirect('order:payment', token=order.token)\n return HttpResponseForbidden()\n\n\n@csrf_exempt\ndef payment_success(request, token):\n \"\"\"Receive request from payment gateway after paying for an order.\n\n Redirects user to payment success.\n All post data and query strings are dropped.\n \"\"\"\n url = reverse('order:checkout-success', kwargs={'token': token})\n return redirect(url)\n\n\ndef checkout_success(request, token):\n \"\"\"Redirect user after placing an order.\n\n Anonymous users are redirected to the checkout success page.\n Registered users are redirected to order details page and the order\n is attached to their account.\n \"\"\"\n order = get_object_or_404(Order, token=token)\n email = order.user_email\n ctx = {'email': email, 'order': order}\n if request.user.is_authenticated:\n return TemplateResponse(request, 'order/checkout_success.html', ctx)\n form_data = request.POST.copy()\n if form_data:\n form_data.update({'email': email})\n register_form = PasswordForm(form_data or None)\n if register_form.is_valid():\n register_form.save()\n password = register_form.cleaned_data.get('password')\n user = auth.authenticate(\n request=request, email=email, password=password)\n auth.login(request, user)\n attach_order_to_user(order, user)\n return redirect('order:details', token=token)\n user_exists = User.objects.filter(email=email).exists()\n login_form = LoginForm(\n initial={'username': email}) if user_exists else 
None\n ctx.update({'form': register_form, 'login_form': login_form})\n return TemplateResponse(\n request, 'order/checkout_success_anonymous.html', ctx)\n\n\n@login_required\ndef connect_order_with_user(request, token):\n \"\"\"Connect newly created order to an authenticated user.\"\"\"\n try:\n order = Order.objects.get(user_email=request.user.email, token=token)\n except Order.DoesNotExist:\n order = None\n if not order:\n msg = pgettext_lazy(\n 'Connect order with user warning message',\n \"We couldn't assign the order to your account as the email\"\n \" addresses don't match\")\n messages.warning(request, msg)\n return redirect('account:details')\n attach_order_to_user(order, request.user)\n msg = pgettext_lazy(\n 'storefront message',\n 'The order is now assigned to your account')\n messages.success(request, msg)\n return redirect('order:details', token=order.token)\n", "path": "saleor/order/views.py"}]}
| 2,785 | 135 |
gh_patches_debug_65358
|
rasdani/github-patches
|
git_diff
|
urllib3__urllib3-696
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PyOpenSSL support doesn't work with TLSv1.1 and TLSv1.2
`docker-py` tries to use the latest protocol version available in the ssl module, so it uses ssl.PROTOCOL_TLSv1_2. Unfortunately urllib3.contrib.pyopenssl only maps SSLv23 and TLSv1, so it breaks:
https://github.com/shazow/urllib3/blob/master/urllib3/contrib/pyopenssl.py#L68
--- END ISSUE ---
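The breakage is a plain dictionary lookup: `ssl_wrap_socket` maps the requested `ssl_version` through `_openssl_versions`, which before the patch only knows SSLv23 and TLSv1. A minimal reproduction sketch, assuming Python 2.7.9+ (for `ssl.PROTOCOL_TLSv1_2`) with pyOpenSSL, ndg-httpsclient and pyasn1 installed:

```python
import ssl
from urllib3.contrib import pyopenssl

# Before the patch this lookup raises KeyError; after it, the TLSv1.2 method is returned.
print(pyopenssl._openssl_versions[ssl.PROTOCOL_TLSv1_2])
```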
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `urllib3/contrib/pyopenssl.py`
Content:
```
1 '''SSL with SNI_-support for Python 2. Follow these instructions if you would
2 like to verify SSL certificates in Python 2. Note, the default libraries do
3 *not* do certificate checking; you need to do additional work to validate
4 certificates yourself.
5
6 This needs the following packages installed:
7
8 * pyOpenSSL (tested with 0.13)
9 * ndg-httpsclient (tested with 0.3.2)
10 * pyasn1 (tested with 0.1.6)
11
12 You can install them with the following command:
13
14 pip install pyopenssl ndg-httpsclient pyasn1
15
16 To activate certificate checking, call
17 :func:`~urllib3.contrib.pyopenssl.inject_into_urllib3` from your Python code
18 before you begin making HTTP requests. This can be done in a ``sitecustomize``
19 module, or at any other time before your application begins using ``urllib3``,
20 like this::
21
22 try:
23 import urllib3.contrib.pyopenssl
24 urllib3.contrib.pyopenssl.inject_into_urllib3()
25 except ImportError:
26 pass
27
28 Now you can use :mod:`urllib3` as you normally would, and it will support SNI
29 when the required modules are installed.
30
31 Activating this module also has the positive side effect of disabling SSL/TLS
32 compression in Python 2 (see `CRIME attack`_).
33
34 If you want to configure the default list of supported cipher suites, you can
35 set the ``urllib3.contrib.pyopenssl.DEFAULT_SSL_CIPHER_LIST`` variable.
36
37 Module Variables
38 ----------------
39
40 :var DEFAULT_SSL_CIPHER_LIST: The list of supported SSL/TLS cipher suites.
41
42 .. _sni: https://en.wikipedia.org/wiki/Server_Name_Indication
43 .. _crime attack: https://en.wikipedia.org/wiki/CRIME_(security_exploit)
44
45 '''
46
47 try:
48 from ndg.httpsclient.ssl_peer_verification import SUBJ_ALT_NAME_SUPPORT
49 from ndg.httpsclient.subj_alt_name import SubjectAltName as BaseSubjectAltName
50 except SyntaxError as e:
51 raise ImportError(e)
52
53 import OpenSSL.SSL
54 from pyasn1.codec.der import decoder as der_decoder
55 from pyasn1.type import univ, constraint
56 from socket import _fileobject, timeout
57 import ssl
58 import select
59
60 from .. import connection
61 from .. import util
62
63 __all__ = ['inject_into_urllib3', 'extract_from_urllib3']
64
65 # SNI only *really* works if we can read the subjectAltName of certificates.
66 HAS_SNI = SUBJ_ALT_NAME_SUPPORT
67
68 # Map from urllib3 to PyOpenSSL compatible parameter-values.
69 _openssl_versions = {
70 ssl.PROTOCOL_SSLv23: OpenSSL.SSL.SSLv23_METHOD,
71 ssl.PROTOCOL_TLSv1: OpenSSL.SSL.TLSv1_METHOD,
72 }
73
74 try:
75 _openssl_versions.update({ssl.PROTOCOL_SSLv3: OpenSSL.SSL.SSLv3_METHOD})
76 except AttributeError:
77 pass
78
79 _openssl_verify = {
80 ssl.CERT_NONE: OpenSSL.SSL.VERIFY_NONE,
81 ssl.CERT_OPTIONAL: OpenSSL.SSL.VERIFY_PEER,
82 ssl.CERT_REQUIRED: OpenSSL.SSL.VERIFY_PEER
83 + OpenSSL.SSL.VERIFY_FAIL_IF_NO_PEER_CERT,
84 }
85
86 DEFAULT_SSL_CIPHER_LIST = util.ssl_.DEFAULT_CIPHERS
87
88 # OpenSSL will only write 16K at a time
89 SSL_WRITE_BLOCKSIZE = 16384
90
91 try:
92 _ = memoryview
93 has_memoryview = True
94 except NameError:
95 has_memoryview = False
96
97 orig_util_HAS_SNI = util.HAS_SNI
98 orig_connection_ssl_wrap_socket = connection.ssl_wrap_socket
99
100
101 def inject_into_urllib3():
102 'Monkey-patch urllib3 with PyOpenSSL-backed SSL-support.'
103
104 connection.ssl_wrap_socket = ssl_wrap_socket
105 util.HAS_SNI = HAS_SNI
106
107
108 def extract_from_urllib3():
109 'Undo monkey-patching by :func:`inject_into_urllib3`.'
110
111 connection.ssl_wrap_socket = orig_connection_ssl_wrap_socket
112 util.HAS_SNI = orig_util_HAS_SNI
113
114
115 ### Note: This is a slightly bug-fixed version of same from ndg-httpsclient.
116 class SubjectAltName(BaseSubjectAltName):
117 '''ASN.1 implementation for subjectAltNames support'''
118
119 # There is no limit to how many SAN certificates a certificate may have,
120 # however this needs to have some limit so we'll set an arbitrarily high
121 # limit.
122 sizeSpec = univ.SequenceOf.sizeSpec + \
123 constraint.ValueSizeConstraint(1, 1024)
124
125
126 ### Note: This is a slightly bug-fixed version of same from ndg-httpsclient.
127 def get_subj_alt_name(peer_cert):
128 # Search through extensions
129 dns_name = []
130 if not SUBJ_ALT_NAME_SUPPORT:
131 return dns_name
132
133 general_names = SubjectAltName()
134 for i in range(peer_cert.get_extension_count()):
135 ext = peer_cert.get_extension(i)
136 ext_name = ext.get_short_name()
137 if ext_name != 'subjectAltName':
138 continue
139
140 # PyOpenSSL returns extension data in ASN.1 encoded form
141 ext_dat = ext.get_data()
142 decoded_dat = der_decoder.decode(ext_dat,
143 asn1Spec=general_names)
144
145 for name in decoded_dat:
146 if not isinstance(name, SubjectAltName):
147 continue
148 for entry in range(len(name)):
149 component = name.getComponentByPosition(entry)
150 if component.getName() != 'dNSName':
151 continue
152 dns_name.append(str(component.getComponent()))
153
154 return dns_name
155
156
157 class WrappedSocket(object):
158 '''API-compatibility wrapper for Python OpenSSL's Connection-class.
159
160 Note: _makefile_refs, _drop() and _reuse() are needed for the garbage
161 collector of pypy.
162 '''
163
164 def __init__(self, connection, socket, suppress_ragged_eofs=True):
165 self.connection = connection
166 self.socket = socket
167 self.suppress_ragged_eofs = suppress_ragged_eofs
168 self._makefile_refs = 0
169
170 def fileno(self):
171 return self.socket.fileno()
172
173 def makefile(self, mode, bufsize=-1):
174 self._makefile_refs += 1
175 return _fileobject(self, mode, bufsize, close=True)
176
177 def recv(self, *args, **kwargs):
178 try:
179 data = self.connection.recv(*args, **kwargs)
180 except OpenSSL.SSL.SysCallError as e:
181 if self.suppress_ragged_eofs and e.args == (-1, 'Unexpected EOF'):
182 return b''
183 else:
184 raise
185 except OpenSSL.SSL.ZeroReturnError as e:
186 if self.connection.get_shutdown() == OpenSSL.SSL.RECEIVED_SHUTDOWN:
187 return b''
188 else:
189 raise
190 except OpenSSL.SSL.WantReadError:
191 rd, wd, ed = select.select(
192 [self.socket], [], [], self.socket.gettimeout())
193 if not rd:
194 raise timeout('The read operation timed out')
195 else:
196 return self.recv(*args, **kwargs)
197 else:
198 return data
199
200 def settimeout(self, timeout):
201 return self.socket.settimeout(timeout)
202
203 def _send_until_done(self, data):
204 while True:
205 try:
206 return self.connection.send(data)
207 except OpenSSL.SSL.WantWriteError:
208 _, wlist, _ = select.select([], [self.socket], [],
209 self.socket.gettimeout())
210 if not wlist:
211 raise timeout()
212 continue
213
214 def sendall(self, data):
215 if has_memoryview and not isinstance(data, memoryview):
216 data = memoryview(data)
217
218 total_sent = 0
219 while total_sent < len(data):
220 sent = self._send_until_done(data[total_sent:total_sent+SSL_WRITE_BLOCKSIZE])
221 total_sent += sent
222
223 def shutdown(self):
224 # FIXME rethrow compatible exceptions should we ever use this
225 self.connection.shutdown()
226
227 def close(self):
228 if self._makefile_refs < 1:
229 return self.connection.close()
230 else:
231 self._makefile_refs -= 1
232
233 def getpeercert(self, binary_form=False):
234 x509 = self.connection.get_peer_certificate()
235
236 if not x509:
237 return x509
238
239 if binary_form:
240 return OpenSSL.crypto.dump_certificate(
241 OpenSSL.crypto.FILETYPE_ASN1,
242 x509)
243
244 return {
245 'subject': (
246 (('commonName', x509.get_subject().CN),),
247 ),
248 'subjectAltName': [
249 ('DNS', value)
250 for value in get_subj_alt_name(x509)
251 ]
252 }
253
254 def _reuse(self):
255 self._makefile_refs += 1
256
257 def _drop(self):
258 if self._makefile_refs < 1:
259 self.close()
260 else:
261 self._makefile_refs -= 1
262
263
264 def _verify_callback(cnx, x509, err_no, err_depth, return_code):
265 return err_no == 0
266
267
268 def ssl_wrap_socket(sock, keyfile=None, certfile=None, cert_reqs=None,
269 ca_certs=None, server_hostname=None,
270 ssl_version=None):
271 ctx = OpenSSL.SSL.Context(_openssl_versions[ssl_version])
272 if certfile:
273 keyfile = keyfile or certfile # Match behaviour of the normal python ssl library
274 ctx.use_certificate_file(certfile)
275 if keyfile:
276 ctx.use_privatekey_file(keyfile)
277 if cert_reqs != ssl.CERT_NONE:
278 ctx.set_verify(_openssl_verify[cert_reqs], _verify_callback)
279 if ca_certs:
280 try:
281 ctx.load_verify_locations(ca_certs, None)
282 except OpenSSL.SSL.Error as e:
283 raise ssl.SSLError('bad ca_certs: %r' % ca_certs, e)
284 else:
285 ctx.set_default_verify_paths()
286
287 # Disable TLS compression to migitate CRIME attack (issue #309)
288 OP_NO_COMPRESSION = 0x20000
289 ctx.set_options(OP_NO_COMPRESSION)
290
291 # Set list of supported ciphersuites.
292 ctx.set_cipher_list(DEFAULT_SSL_CIPHER_LIST)
293
294 cnx = OpenSSL.SSL.Connection(ctx, sock)
295 cnx.set_tlsext_host_name(server_hostname)
296 cnx.set_connect_state()
297 while True:
298 try:
299 cnx.do_handshake()
300 except OpenSSL.SSL.WantReadError:
301 rd, _, _ = select.select([sock], [], [], sock.gettimeout())
302 if not rd:
303 raise timeout('select timed out')
304 continue
305 except OpenSSL.SSL.Error as e:
306 raise ssl.SSLError('bad handshake: %r' % e)
307 break
308
309 return WrappedSocket(cnx, sock)
310
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/urllib3/contrib/pyopenssl.py b/urllib3/contrib/pyopenssl.py
--- a/urllib3/contrib/pyopenssl.py
+++ b/urllib3/contrib/pyopenssl.py
@@ -71,6 +71,12 @@
ssl.PROTOCOL_TLSv1: OpenSSL.SSL.TLSv1_METHOD,
}
+if hasattr(ssl, 'PROTOCOL_TLSv1_1') and hasattr(OpenSSL.SSL, 'TLSv1_1_METHOD'):
+ _openssl_versions[ssl.PROTOCOL_TLSv1_1] = OpenSSL.SSL.TLSv1_1_METHOD
+
+if hasattr(ssl, 'PROTOCOL_TLSv1_2') and hasattr(OpenSSL.SSL, 'TLSv1_2_METHOD'):
+ _openssl_versions[ssl.PROTOCOL_TLSv1_2] = OpenSSL.SSL.TLSv1_2_METHOD
+
try:
_openssl_versions.update({ssl.PROTOCOL_SSLv3: OpenSSL.SSL.SSLv3_METHOD})
except AttributeError:
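The patch guards each new entry with `hasattr` on both sides, because either the standard-library `ssl` module (older Python 2.7 releases) or the installed pyOpenSSL build may lack the newer constants. The same pattern in isolation, with illustrative variable names:

```python
import ssl
import OpenSSL.SSL

# Only map protocol constants that both libraries actually expose.
versions = {ssl.PROTOCOL_SSLv23: OpenSSL.SSL.SSLv23_METHOD}
for ssl_name, openssl_name in [("PROTOCOL_TLSv1_1", "TLSv1_1_METHOD"),
                               ("PROTOCOL_TLSv1_2", "TLSv1_2_METHOD")]:
    if hasattr(ssl, ssl_name) and hasattr(OpenSSL.SSL, openssl_name):
        versions[getattr(ssl, ssl_name)] = getattr(OpenSSL.SSL, openssl_name)
print(versions)
```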
|
{"golden_diff": "diff --git a/urllib3/contrib/pyopenssl.py b/urllib3/contrib/pyopenssl.py\n--- a/urllib3/contrib/pyopenssl.py\n+++ b/urllib3/contrib/pyopenssl.py\n@@ -71,6 +71,12 @@\n ssl.PROTOCOL_TLSv1: OpenSSL.SSL.TLSv1_METHOD,\n }\n \n+if hasattr(ssl, 'PROTOCOL_TLSv1_1') and hasattr(OpenSSL.SSL, 'TLSv1_1_METHOD'):\n+ _openssl_versions[ssl.PROTOCOL_TLSv1_1] = OpenSSL.SSL.TLSv1_1_METHOD\n+\n+if hasattr(ssl, 'PROTOCOL_TLSv1_2') and hasattr(OpenSSL.SSL, 'TLSv1_2_METHOD'):\n+ _openssl_versions[ssl.PROTOCOL_TLSv1_2] = OpenSSL.SSL.TLSv1_2_METHOD\n+\n try:\n _openssl_versions.update({ssl.PROTOCOL_SSLv3: OpenSSL.SSL.SSLv3_METHOD})\n except AttributeError:\n", "issue": "PyOpenSSL support doesn't work with TLSv1.1 and TLSv1.2\n`docker-py` tries to use latest protocol version in the ssl module, so use ssl.PROTOCOL_TLSv1_2. Unfortunately urllib3.contrib.pyopenssl only has SSLv2_3 and TLSv1 so it breaks:\n\nhttps://github.com/shazow/urllib3/blob/master/urllib3/contrib/pyopenssl.py#L68\n\n", "before_files": [{"content": "'''SSL with SNI_-support for Python 2. Follow these instructions if you would\nlike to verify SSL certificates in Python 2. Note, the default libraries do\n*not* do certificate checking; you need to do additional work to validate\ncertificates yourself.\n\nThis needs the following packages installed:\n\n* pyOpenSSL (tested with 0.13)\n* ndg-httpsclient (tested with 0.3.2)\n* pyasn1 (tested with 0.1.6)\n\nYou can install them with the following command:\n\n pip install pyopenssl ndg-httpsclient pyasn1\n\nTo activate certificate checking, call\n:func:`~urllib3.contrib.pyopenssl.inject_into_urllib3` from your Python code\nbefore you begin making HTTP requests. This can be done in a ``sitecustomize``\nmodule, or at any other time before your application begins using ``urllib3``,\nlike this::\n\n try:\n import urllib3.contrib.pyopenssl\n urllib3.contrib.pyopenssl.inject_into_urllib3()\n except ImportError:\n pass\n\nNow you can use :mod:`urllib3` as you normally would, and it will support SNI\nwhen the required modules are installed.\n\nActivating this module also has the positive side effect of disabling SSL/TLS\ncompression in Python 2 (see `CRIME attack`_).\n\nIf you want to configure the default list of supported cipher suites, you can\nset the ``urllib3.contrib.pyopenssl.DEFAULT_SSL_CIPHER_LIST`` variable.\n\nModule Variables\n----------------\n\n:var DEFAULT_SSL_CIPHER_LIST: The list of supported SSL/TLS cipher suites.\n\n.. _sni: https://en.wikipedia.org/wiki/Server_Name_Indication\n.. _crime attack: https://en.wikipedia.org/wiki/CRIME_(security_exploit)\n\n'''\n\ntry:\n from ndg.httpsclient.ssl_peer_verification import SUBJ_ALT_NAME_SUPPORT\n from ndg.httpsclient.subj_alt_name import SubjectAltName as BaseSubjectAltName\nexcept SyntaxError as e:\n raise ImportError(e)\n\nimport OpenSSL.SSL\nfrom pyasn1.codec.der import decoder as der_decoder\nfrom pyasn1.type import univ, constraint\nfrom socket import _fileobject, timeout\nimport ssl\nimport select\n\nfrom .. import connection\nfrom .. 
import util\n\n__all__ = ['inject_into_urllib3', 'extract_from_urllib3']\n\n# SNI only *really* works if we can read the subjectAltName of certificates.\nHAS_SNI = SUBJ_ALT_NAME_SUPPORT\n\n# Map from urllib3 to PyOpenSSL compatible parameter-values.\n_openssl_versions = {\n ssl.PROTOCOL_SSLv23: OpenSSL.SSL.SSLv23_METHOD,\n ssl.PROTOCOL_TLSv1: OpenSSL.SSL.TLSv1_METHOD,\n}\n\ntry:\n _openssl_versions.update({ssl.PROTOCOL_SSLv3: OpenSSL.SSL.SSLv3_METHOD})\nexcept AttributeError:\n pass\n\n_openssl_verify = {\n ssl.CERT_NONE: OpenSSL.SSL.VERIFY_NONE,\n ssl.CERT_OPTIONAL: OpenSSL.SSL.VERIFY_PEER,\n ssl.CERT_REQUIRED: OpenSSL.SSL.VERIFY_PEER\n + OpenSSL.SSL.VERIFY_FAIL_IF_NO_PEER_CERT,\n}\n\nDEFAULT_SSL_CIPHER_LIST = util.ssl_.DEFAULT_CIPHERS\n\n# OpenSSL will only write 16K at a time\nSSL_WRITE_BLOCKSIZE = 16384\n\ntry:\n _ = memoryview\n has_memoryview = True\nexcept NameError:\n has_memoryview = False\n\norig_util_HAS_SNI = util.HAS_SNI\norig_connection_ssl_wrap_socket = connection.ssl_wrap_socket\n\n\ndef inject_into_urllib3():\n 'Monkey-patch urllib3 with PyOpenSSL-backed SSL-support.'\n\n connection.ssl_wrap_socket = ssl_wrap_socket\n util.HAS_SNI = HAS_SNI\n\n\ndef extract_from_urllib3():\n 'Undo monkey-patching by :func:`inject_into_urllib3`.'\n\n connection.ssl_wrap_socket = orig_connection_ssl_wrap_socket\n util.HAS_SNI = orig_util_HAS_SNI\n\n\n### Note: This is a slightly bug-fixed version of same from ndg-httpsclient.\nclass SubjectAltName(BaseSubjectAltName):\n '''ASN.1 implementation for subjectAltNames support'''\n\n # There is no limit to how many SAN certificates a certificate may have,\n # however this needs to have some limit so we'll set an arbitrarily high\n # limit.\n sizeSpec = univ.SequenceOf.sizeSpec + \\\n constraint.ValueSizeConstraint(1, 1024)\n\n\n### Note: This is a slightly bug-fixed version of same from ndg-httpsclient.\ndef get_subj_alt_name(peer_cert):\n # Search through extensions\n dns_name = []\n if not SUBJ_ALT_NAME_SUPPORT:\n return dns_name\n\n general_names = SubjectAltName()\n for i in range(peer_cert.get_extension_count()):\n ext = peer_cert.get_extension(i)\n ext_name = ext.get_short_name()\n if ext_name != 'subjectAltName':\n continue\n\n # PyOpenSSL returns extension data in ASN.1 encoded form\n ext_dat = ext.get_data()\n decoded_dat = der_decoder.decode(ext_dat,\n asn1Spec=general_names)\n\n for name in decoded_dat:\n if not isinstance(name, SubjectAltName):\n continue\n for entry in range(len(name)):\n component = name.getComponentByPosition(entry)\n if component.getName() != 'dNSName':\n continue\n dns_name.append(str(component.getComponent()))\n\n return dns_name\n\n\nclass WrappedSocket(object):\n '''API-compatibility wrapper for Python OpenSSL's Connection-class.\n\n Note: _makefile_refs, _drop() and _reuse() are needed for the garbage\n collector of pypy.\n '''\n\n def __init__(self, connection, socket, suppress_ragged_eofs=True):\n self.connection = connection\n self.socket = socket\n self.suppress_ragged_eofs = suppress_ragged_eofs\n self._makefile_refs = 0\n\n def fileno(self):\n return self.socket.fileno()\n\n def makefile(self, mode, bufsize=-1):\n self._makefile_refs += 1\n return _fileobject(self, mode, bufsize, close=True)\n\n def recv(self, *args, **kwargs):\n try:\n data = self.connection.recv(*args, **kwargs)\n except OpenSSL.SSL.SysCallError as e:\n if self.suppress_ragged_eofs and e.args == (-1, 'Unexpected EOF'):\n return b''\n else:\n raise\n except OpenSSL.SSL.ZeroReturnError as e:\n if self.connection.get_shutdown() == 
OpenSSL.SSL.RECEIVED_SHUTDOWN:\n return b''\n else:\n raise\n except OpenSSL.SSL.WantReadError:\n rd, wd, ed = select.select(\n [self.socket], [], [], self.socket.gettimeout())\n if not rd:\n raise timeout('The read operation timed out')\n else:\n return self.recv(*args, **kwargs)\n else:\n return data\n\n def settimeout(self, timeout):\n return self.socket.settimeout(timeout)\n\n def _send_until_done(self, data):\n while True:\n try:\n return self.connection.send(data)\n except OpenSSL.SSL.WantWriteError:\n _, wlist, _ = select.select([], [self.socket], [],\n self.socket.gettimeout())\n if not wlist:\n raise timeout()\n continue\n\n def sendall(self, data):\n if has_memoryview and not isinstance(data, memoryview):\n data = memoryview(data)\n\n total_sent = 0\n while total_sent < len(data):\n sent = self._send_until_done(data[total_sent:total_sent+SSL_WRITE_BLOCKSIZE])\n total_sent += sent\n\n def shutdown(self):\n # FIXME rethrow compatible exceptions should we ever use this\n self.connection.shutdown()\n\n def close(self):\n if self._makefile_refs < 1:\n return self.connection.close()\n else:\n self._makefile_refs -= 1\n\n def getpeercert(self, binary_form=False):\n x509 = self.connection.get_peer_certificate()\n\n if not x509:\n return x509\n\n if binary_form:\n return OpenSSL.crypto.dump_certificate(\n OpenSSL.crypto.FILETYPE_ASN1,\n x509)\n\n return {\n 'subject': (\n (('commonName', x509.get_subject().CN),),\n ),\n 'subjectAltName': [\n ('DNS', value)\n for value in get_subj_alt_name(x509)\n ]\n }\n\n def _reuse(self):\n self._makefile_refs += 1\n\n def _drop(self):\n if self._makefile_refs < 1:\n self.close()\n else:\n self._makefile_refs -= 1\n\n\ndef _verify_callback(cnx, x509, err_no, err_depth, return_code):\n return err_no == 0\n\n\ndef ssl_wrap_socket(sock, keyfile=None, certfile=None, cert_reqs=None,\n ca_certs=None, server_hostname=None,\n ssl_version=None):\n ctx = OpenSSL.SSL.Context(_openssl_versions[ssl_version])\n if certfile:\n keyfile = keyfile or certfile # Match behaviour of the normal python ssl library\n ctx.use_certificate_file(certfile)\n if keyfile:\n ctx.use_privatekey_file(keyfile)\n if cert_reqs != ssl.CERT_NONE:\n ctx.set_verify(_openssl_verify[cert_reqs], _verify_callback)\n if ca_certs:\n try:\n ctx.load_verify_locations(ca_certs, None)\n except OpenSSL.SSL.Error as e:\n raise ssl.SSLError('bad ca_certs: %r' % ca_certs, e)\n else:\n ctx.set_default_verify_paths()\n\n # Disable TLS compression to migitate CRIME attack (issue #309)\n OP_NO_COMPRESSION = 0x20000\n ctx.set_options(OP_NO_COMPRESSION)\n\n # Set list of supported ciphersuites.\n ctx.set_cipher_list(DEFAULT_SSL_CIPHER_LIST)\n\n cnx = OpenSSL.SSL.Connection(ctx, sock)\n cnx.set_tlsext_host_name(server_hostname)\n cnx.set_connect_state()\n while True:\n try:\n cnx.do_handshake()\n except OpenSSL.SSL.WantReadError:\n rd, _, _ = select.select([sock], [], [], sock.gettimeout())\n if not rd:\n raise timeout('select timed out')\n continue\n except OpenSSL.SSL.Error as e:\n raise ssl.SSLError('bad handshake: %r' % e)\n break\n\n return WrappedSocket(cnx, sock)\n", "path": "urllib3/contrib/pyopenssl.py"}], "after_files": [{"content": "'''SSL with SNI_-support for Python 2. Follow these instructions if you would\nlike to verify SSL certificates in Python 2. 
Note, the default libraries do\n*not* do certificate checking; you need to do additional work to validate\ncertificates yourself.\n\nThis needs the following packages installed:\n\n* pyOpenSSL (tested with 0.13)\n* ndg-httpsclient (tested with 0.3.2)\n* pyasn1 (tested with 0.1.6)\n\nYou can install them with the following command:\n\n pip install pyopenssl ndg-httpsclient pyasn1\n\nTo activate certificate checking, call\n:func:`~urllib3.contrib.pyopenssl.inject_into_urllib3` from your Python code\nbefore you begin making HTTP requests. This can be done in a ``sitecustomize``\nmodule, or at any other time before your application begins using ``urllib3``,\nlike this::\n\n try:\n import urllib3.contrib.pyopenssl\n urllib3.contrib.pyopenssl.inject_into_urllib3()\n except ImportError:\n pass\n\nNow you can use :mod:`urllib3` as you normally would, and it will support SNI\nwhen the required modules are installed.\n\nActivating this module also has the positive side effect of disabling SSL/TLS\ncompression in Python 2 (see `CRIME attack`_).\n\nIf you want to configure the default list of supported cipher suites, you can\nset the ``urllib3.contrib.pyopenssl.DEFAULT_SSL_CIPHER_LIST`` variable.\n\nModule Variables\n----------------\n\n:var DEFAULT_SSL_CIPHER_LIST: The list of supported SSL/TLS cipher suites.\n\n.. _sni: https://en.wikipedia.org/wiki/Server_Name_Indication\n.. _crime attack: https://en.wikipedia.org/wiki/CRIME_(security_exploit)\n\n'''\n\ntry:\n from ndg.httpsclient.ssl_peer_verification import SUBJ_ALT_NAME_SUPPORT\n from ndg.httpsclient.subj_alt_name import SubjectAltName as BaseSubjectAltName\nexcept SyntaxError as e:\n raise ImportError(e)\n\nimport OpenSSL.SSL\nfrom pyasn1.codec.der import decoder as der_decoder\nfrom pyasn1.type import univ, constraint\nfrom socket import _fileobject, timeout\nimport ssl\nimport select\n\nfrom .. import connection\nfrom .. 
import util\n\n__all__ = ['inject_into_urllib3', 'extract_from_urllib3']\n\n# SNI only *really* works if we can read the subjectAltName of certificates.\nHAS_SNI = SUBJ_ALT_NAME_SUPPORT\n\n# Map from urllib3 to PyOpenSSL compatible parameter-values.\n_openssl_versions = {\n ssl.PROTOCOL_SSLv23: OpenSSL.SSL.SSLv23_METHOD,\n ssl.PROTOCOL_TLSv1: OpenSSL.SSL.TLSv1_METHOD,\n}\n\nif hasattr(ssl, 'PROTOCOL_TLSv1_1') and hasattr(OpenSSL.SSL, 'TLSv1_1_METHOD'):\n _openssl_versions[ssl.PROTOCOL_TLSv1_1] = OpenSSL.SSL.TLSv1_1_METHOD\n\nif hasattr(ssl, 'PROTOCOL_TLSv1_2') and hasattr(OpenSSL.SSL, 'TLSv1_2_METHOD'):\n _openssl_versions[ssl.PROTOCOL_TLSv1_2] = OpenSSL.SSL.TLSv1_2_METHOD\n\ntry:\n _openssl_versions.update({ssl.PROTOCOL_SSLv3: OpenSSL.SSL.SSLv3_METHOD})\nexcept AttributeError:\n pass\n\n_openssl_verify = {\n ssl.CERT_NONE: OpenSSL.SSL.VERIFY_NONE,\n ssl.CERT_OPTIONAL: OpenSSL.SSL.VERIFY_PEER,\n ssl.CERT_REQUIRED: OpenSSL.SSL.VERIFY_PEER\n + OpenSSL.SSL.VERIFY_FAIL_IF_NO_PEER_CERT,\n}\n\nDEFAULT_SSL_CIPHER_LIST = util.ssl_.DEFAULT_CIPHERS\n\n# OpenSSL will only write 16K at a time\nSSL_WRITE_BLOCKSIZE = 16384\n\ntry:\n _ = memoryview\n has_memoryview = True\nexcept NameError:\n has_memoryview = False\n\norig_util_HAS_SNI = util.HAS_SNI\norig_connection_ssl_wrap_socket = connection.ssl_wrap_socket\n\n\ndef inject_into_urllib3():\n 'Monkey-patch urllib3 with PyOpenSSL-backed SSL-support.'\n\n connection.ssl_wrap_socket = ssl_wrap_socket\n util.HAS_SNI = HAS_SNI\n\n\ndef extract_from_urllib3():\n 'Undo monkey-patching by :func:`inject_into_urllib3`.'\n\n connection.ssl_wrap_socket = orig_connection_ssl_wrap_socket\n util.HAS_SNI = orig_util_HAS_SNI\n\n\n### Note: This is a slightly bug-fixed version of same from ndg-httpsclient.\nclass SubjectAltName(BaseSubjectAltName):\n '''ASN.1 implementation for subjectAltNames support'''\n\n # There is no limit to how many SAN certificates a certificate may have,\n # however this needs to have some limit so we'll set an arbitrarily high\n # limit.\n sizeSpec = univ.SequenceOf.sizeSpec + \\\n constraint.ValueSizeConstraint(1, 1024)\n\n\n### Note: This is a slightly bug-fixed version of same from ndg-httpsclient.\ndef get_subj_alt_name(peer_cert):\n # Search through extensions\n dns_name = []\n if not SUBJ_ALT_NAME_SUPPORT:\n return dns_name\n\n general_names = SubjectAltName()\n for i in range(peer_cert.get_extension_count()):\n ext = peer_cert.get_extension(i)\n ext_name = ext.get_short_name()\n if ext_name != 'subjectAltName':\n continue\n\n # PyOpenSSL returns extension data in ASN.1 encoded form\n ext_dat = ext.get_data()\n decoded_dat = der_decoder.decode(ext_dat,\n asn1Spec=general_names)\n\n for name in decoded_dat:\n if not isinstance(name, SubjectAltName):\n continue\n for entry in range(len(name)):\n component = name.getComponentByPosition(entry)\n if component.getName() != 'dNSName':\n continue\n dns_name.append(str(component.getComponent()))\n\n return dns_name\n\n\nclass WrappedSocket(object):\n '''API-compatibility wrapper for Python OpenSSL's Connection-class.\n\n Note: _makefile_refs, _drop() and _reuse() are needed for the garbage\n collector of pypy.\n '''\n\n def __init__(self, connection, socket, suppress_ragged_eofs=True):\n self.connection = connection\n self.socket = socket\n self.suppress_ragged_eofs = suppress_ragged_eofs\n self._makefile_refs = 0\n\n def fileno(self):\n return self.socket.fileno()\n\n def makefile(self, mode, bufsize=-1):\n self._makefile_refs += 1\n return _fileobject(self, mode, bufsize, close=True)\n\n def 
recv(self, *args, **kwargs):\n try:\n data = self.connection.recv(*args, **kwargs)\n except OpenSSL.SSL.SysCallError as e:\n if self.suppress_ragged_eofs and e.args == (-1, 'Unexpected EOF'):\n return b''\n else:\n raise\n except OpenSSL.SSL.ZeroReturnError as e:\n if self.connection.get_shutdown() == OpenSSL.SSL.RECEIVED_SHUTDOWN:\n return b''\n else:\n raise\n except OpenSSL.SSL.WantReadError:\n rd, wd, ed = select.select(\n [self.socket], [], [], self.socket.gettimeout())\n if not rd:\n raise timeout('The read operation timed out')\n else:\n return self.recv(*args, **kwargs)\n else:\n return data\n\n def settimeout(self, timeout):\n return self.socket.settimeout(timeout)\n\n def _send_until_done(self, data):\n while True:\n try:\n return self.connection.send(data)\n except OpenSSL.SSL.WantWriteError:\n _, wlist, _ = select.select([], [self.socket], [],\n self.socket.gettimeout())\n if not wlist:\n raise timeout()\n continue\n\n def sendall(self, data):\n if has_memoryview and not isinstance(data, memoryview):\n data = memoryview(data)\n\n total_sent = 0\n while total_sent < len(data):\n sent = self._send_until_done(data[total_sent:total_sent+SSL_WRITE_BLOCKSIZE])\n total_sent += sent\n\n def shutdown(self):\n # FIXME rethrow compatible exceptions should we ever use this\n self.connection.shutdown()\n\n def close(self):\n if self._makefile_refs < 1:\n return self.connection.close()\n else:\n self._makefile_refs -= 1\n\n def getpeercert(self, binary_form=False):\n x509 = self.connection.get_peer_certificate()\n\n if not x509:\n return x509\n\n if binary_form:\n return OpenSSL.crypto.dump_certificate(\n OpenSSL.crypto.FILETYPE_ASN1,\n x509)\n\n return {\n 'subject': (\n (('commonName', x509.get_subject().CN),),\n ),\n 'subjectAltName': [\n ('DNS', value)\n for value in get_subj_alt_name(x509)\n ]\n }\n\n def _reuse(self):\n self._makefile_refs += 1\n\n def _drop(self):\n if self._makefile_refs < 1:\n self.close()\n else:\n self._makefile_refs -= 1\n\n\ndef _verify_callback(cnx, x509, err_no, err_depth, return_code):\n return err_no == 0\n\n\ndef ssl_wrap_socket(sock, keyfile=None, certfile=None, cert_reqs=None,\n ca_certs=None, server_hostname=None,\n ssl_version=None):\n ctx = OpenSSL.SSL.Context(_openssl_versions[ssl_version])\n if certfile:\n keyfile = keyfile or certfile # Match behaviour of the normal python ssl library\n ctx.use_certificate_file(certfile)\n if keyfile:\n ctx.use_privatekey_file(keyfile)\n if cert_reqs != ssl.CERT_NONE:\n ctx.set_verify(_openssl_verify[cert_reqs], _verify_callback)\n if ca_certs:\n try:\n ctx.load_verify_locations(ca_certs, None)\n except OpenSSL.SSL.Error as e:\n raise ssl.SSLError('bad ca_certs: %r' % ca_certs, e)\n else:\n ctx.set_default_verify_paths()\n\n # Disable TLS compression to migitate CRIME attack (issue #309)\n OP_NO_COMPRESSION = 0x20000\n ctx.set_options(OP_NO_COMPRESSION)\n\n # Set list of supported ciphersuites.\n ctx.set_cipher_list(DEFAULT_SSL_CIPHER_LIST)\n\n cnx = OpenSSL.SSL.Connection(ctx, sock)\n cnx.set_tlsext_host_name(server_hostname)\n cnx.set_connect_state()\n while True:\n try:\n cnx.do_handshake()\n except OpenSSL.SSL.WantReadError:\n rd, _, _ = select.select([sock], [], [], sock.gettimeout())\n if not rd:\n raise timeout('select timed out')\n continue\n except OpenSSL.SSL.Error as e:\n raise ssl.SSLError('bad handshake: %r' % e)\n break\n\n return WrappedSocket(cnx, sock)\n", "path": "urllib3/contrib/pyopenssl.py"}]}
| 3,515 | 225 |
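The patched `after_files` version of `urllib3/contrib/pyopenssl.py` above registers TLS 1.1 and TLS 1.2 only behind `hasattr` guards, so the mapping grows only when both the stdlib `ssl` module and pyOpenSSL actually expose the constants. Below is a minimal, self-contained sketch of that feature-detection pattern, not the full contrib module; it assumes pyOpenSSL is installed and still ships the per-version `*_METHOD` constants.

```python
import ssl

import OpenSSL.SSL

# Baseline mapping that every supported ssl/pyOpenSSL pairing provides.
_openssl_versions = {
    ssl.PROTOCOL_SSLv23: OpenSSL.SSL.SSLv23_METHOD,
    ssl.PROTOCOL_TLSv1: OpenSSL.SSL.TLSv1_METHOD,
}

# Register TLS 1.1 / 1.2 only when both the stdlib constant and the
# pyOpenSSL method constant exist, so older builds keep importing cleanly.
if hasattr(ssl, "PROTOCOL_TLSv1_1") and hasattr(OpenSSL.SSL, "TLSv1_1_METHOD"):
    _openssl_versions[ssl.PROTOCOL_TLSv1_1] = OpenSSL.SSL.TLSv1_1_METHOD

if hasattr(ssl, "PROTOCOL_TLSv1_2") and hasattr(OpenSSL.SSL, "TLSv1_2_METHOD"):
    _openssl_versions[ssl.PROTOCOL_TLSv1_2] = OpenSSL.SSL.TLSv1_2_METHOD

print("Registered protocol mappings:", len(_openssl_versions))
```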
gh_patches_debug_24500
|
rasdani/github-patches
|
git_diff
|
mirumee__ariadne-23
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Create `resolve_to` utility
Python variables are usually named using `snake_case`, whereas the majority of JS written today uses `camelCase`. Ariadne will need to provide a way to resolve one to the other.
The idea is to create a `resolve_to` function that takes a single `name` argument and returns a resolver for the specified attr/key name, enabling **explicit** mapping like so:
```
resolvers_map = {
    "User": {
        "lastVisitedOn": resolve_to("last_visited_on")
    }
}
```
Another advantage of having such a utility would be support for arbitrary field names:
```
resolvers_map = {
    "User": {
        "lastVisitedOn": resolve_to("last_visit")
    }
}
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ariadne/resolvers.py`
Content:
```
1 from graphql import GraphQLObjectType, GraphQLScalarType, GraphQLSchema
2 from graphql.execution.base import ResolveInfo
3
4
5 def default_resolver(parent, info: ResolveInfo):
6 if isinstance(parent, dict):
7 return parent.get(info.field_name)
8 return getattr(parent, info.field_name, None)
9
10
11 def add_resolve_functions_to_schema(schema: GraphQLSchema, resolvers: dict):
12 for type_name, type_object in schema.get_type_map().items():
13 if isinstance(type_object, GraphQLObjectType):
14 add_resolve_functions_to_object(type_name, type_object, resolvers)
15 if isinstance(type_object, GraphQLScalarType):
16 add_resolve_function_to_scalar(type_name, type_object, resolvers)
17
18
19 def add_resolve_functions_to_object(name: str, obj: GraphQLObjectType, resolvers: dict):
20 type_resolver = resolvers.get(name, {})
21 for field_name, field_object in obj.fields.items():
22 field_resolver = type_resolver.get(field_name, default_resolver)
23 field_object.resolver = field_resolver
24
25
26 def add_resolve_function_to_scalar(name: str, obj: GraphQLObjectType, resolvers: dict):
27 serializer = resolvers.get(name, obj.serialize)
28 obj.serialize = serializer
29
```
Path: `ariadne/__init__.py`
Content:
```
1 from .build_schema import build_schema_from_type_definitions
2 from .executable_schema import make_executable_schema
3 from .resolvers import add_resolve_functions_to_schema, default_resolver
4
5 __all__ = [
6 "add_resolve_functions_to_schema",
7 "build_schema_from_type_definitions",
8 "default_resolver",
9 "make_executable_schema",
10 ]
11
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ariadne/__init__.py b/ariadne/__init__.py
--- a/ariadne/__init__.py
+++ b/ariadne/__init__.py
@@ -1,10 +1,11 @@
from .build_schema import build_schema_from_type_definitions
from .executable_schema import make_executable_schema
-from .resolvers import add_resolve_functions_to_schema, default_resolver
+from .resolvers import add_resolve_functions_to_schema, default_resolver, resolve_to
__all__ = [
"add_resolve_functions_to_schema",
"build_schema_from_type_definitions",
"default_resolver",
"make_executable_schema",
+ "resolve_to",
]
diff --git a/ariadne/resolvers.py b/ariadne/resolvers.py
--- a/ariadne/resolvers.py
+++ b/ariadne/resolvers.py
@@ -2,10 +2,21 @@
from graphql.execution.base import ResolveInfo
-def default_resolver(parent, info: ResolveInfo):
+def resolve_parent_field(parent, name: str):
if isinstance(parent, dict):
- return parent.get(info.field_name)
- return getattr(parent, info.field_name, None)
+ return parent.get(name)
+ return getattr(parent, name, None)
+
+
+def default_resolver(parent, info: ResolveInfo):
+ return resolve_parent_field(parent, info.field_name)
+
+
+def resolve_to(name: str):
+ def resolver(parent, *_):
+ return resolve_parent_field(parent, name)
+
+ return resolver
def add_resolve_functions_to_schema(schema: GraphQLSchema, resolvers: dict):
|
{"golden_diff": "diff --git a/ariadne/__init__.py b/ariadne/__init__.py\n--- a/ariadne/__init__.py\n+++ b/ariadne/__init__.py\n@@ -1,10 +1,11 @@\n from .build_schema import build_schema_from_type_definitions\n from .executable_schema import make_executable_schema\n-from .resolvers import add_resolve_functions_to_schema, default_resolver\n+from .resolvers import add_resolve_functions_to_schema, default_resolver, resolve_to\n \n __all__ = [\n \"add_resolve_functions_to_schema\",\n \"build_schema_from_type_definitions\",\n \"default_resolver\",\n \"make_executable_schema\",\n+ \"resolve_to\",\n ]\ndiff --git a/ariadne/resolvers.py b/ariadne/resolvers.py\n--- a/ariadne/resolvers.py\n+++ b/ariadne/resolvers.py\n@@ -2,10 +2,21 @@\n from graphql.execution.base import ResolveInfo\n \n \n-def default_resolver(parent, info: ResolveInfo):\n+def resolve_parent_field(parent, name: str):\n if isinstance(parent, dict):\n- return parent.get(info.field_name)\n- return getattr(parent, info.field_name, None)\n+ return parent.get(name)\n+ return getattr(parent, name, None)\n+\n+\n+def default_resolver(parent, info: ResolveInfo):\n+ return resolve_parent_field(parent, info.field_name)\n+\n+\n+def resolve_to(name: str):\n+ def resolver(parent, *_):\n+ return resolve_parent_field(parent, name)\n+\n+ return resolver\n \n \n def add_resolve_functions_to_schema(schema: GraphQLSchema, resolvers: dict):\n", "issue": "Create `resolve_to` utility\nPython variables are usually named using the `snake_case`, whereas majority of JS written today uses the `camelCase`. Ariadne will need to provide a way to resolve one to another.\r\n\r\nThe idea is to create `resolver_for` function taking single `name` argument and returning resolver for specified attr/key name, enabling **explicit** mapping as such:\r\n\r\n```\r\nresolvers_map = {\r\n \"User\": {\r\n \"lastVisitedOn\": resolve_to(\"last_visited_on\")\r\n }\r\n}\r\n```\r\n\r\nOther advantage of having such utility would be arbitrary field names:\r\n\r\n```\r\nresolvers_map = {\r\n \"User\": {\r\n \"lastVisitedOn\": resolve_to(\"last_visit\")\r\n }\r\n}\r\n```\n", "before_files": [{"content": "from graphql import GraphQLObjectType, GraphQLScalarType, GraphQLSchema\nfrom graphql.execution.base import ResolveInfo\n\n\ndef default_resolver(parent, info: ResolveInfo):\n if isinstance(parent, dict):\n return parent.get(info.field_name)\n return getattr(parent, info.field_name, None)\n\n\ndef add_resolve_functions_to_schema(schema: GraphQLSchema, resolvers: dict):\n for type_name, type_object in schema.get_type_map().items():\n if isinstance(type_object, GraphQLObjectType):\n add_resolve_functions_to_object(type_name, type_object, resolvers)\n if isinstance(type_object, GraphQLScalarType):\n add_resolve_function_to_scalar(type_name, type_object, resolvers)\n\n\ndef add_resolve_functions_to_object(name: str, obj: GraphQLObjectType, resolvers: dict):\n type_resolver = resolvers.get(name, {})\n for field_name, field_object in obj.fields.items():\n field_resolver = type_resolver.get(field_name, default_resolver)\n field_object.resolver = field_resolver\n\n\ndef add_resolve_function_to_scalar(name: str, obj: GraphQLObjectType, resolvers: dict):\n serializer = resolvers.get(name, obj.serialize)\n obj.serialize = serializer\n", "path": "ariadne/resolvers.py"}, {"content": "from .build_schema import build_schema_from_type_definitions\nfrom .executable_schema import make_executable_schema\nfrom .resolvers import add_resolve_functions_to_schema, default_resolver\n\n__all__ = [\n 
\"add_resolve_functions_to_schema\",\n \"build_schema_from_type_definitions\",\n \"default_resolver\",\n \"make_executable_schema\",\n]\n", "path": "ariadne/__init__.py"}], "after_files": [{"content": "from graphql import GraphQLObjectType, GraphQLScalarType, GraphQLSchema\nfrom graphql.execution.base import ResolveInfo\n\n\ndef resolve_parent_field(parent, name: str):\n if isinstance(parent, dict):\n return parent.get(name)\n return getattr(parent, name, None)\n\n\ndef default_resolver(parent, info: ResolveInfo):\n return resolve_parent_field(parent, info.field_name)\n\n\ndef resolve_to(name: str):\n def resolver(parent, *_):\n return resolve_parent_field(parent, name)\n\n return resolver\n\n\ndef add_resolve_functions_to_schema(schema: GraphQLSchema, resolvers: dict):\n for type_name, type_object in schema.get_type_map().items():\n if isinstance(type_object, GraphQLObjectType):\n add_resolve_functions_to_object(type_name, type_object, resolvers)\n if isinstance(type_object, GraphQLScalarType):\n add_resolve_function_to_scalar(type_name, type_object, resolvers)\n\n\ndef add_resolve_functions_to_object(name: str, obj: GraphQLObjectType, resolvers: dict):\n type_resolver = resolvers.get(name, {})\n for field_name, field_object in obj.fields.items():\n field_resolver = type_resolver.get(field_name, default_resolver)\n field_object.resolver = field_resolver\n\n\ndef add_resolve_function_to_scalar(name: str, obj: GraphQLObjectType, resolvers: dict):\n serializer = resolvers.get(name, obj.serialize)\n obj.serialize = serializer\n", "path": "ariadne/resolvers.py"}, {"content": "from .build_schema import build_schema_from_type_definitions\nfrom .executable_schema import make_executable_schema\nfrom .resolvers import add_resolve_functions_to_schema, default_resolver, resolve_to\n\n__all__ = [\n \"add_resolve_functions_to_schema\",\n \"build_schema_from_type_definitions\",\n \"default_resolver\",\n \"make_executable_schema\",\n \"resolve_to\",\n]\n", "path": "ariadne/__init__.py"}]}
| 805 | 352 |
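To make the intent of the `resolve_to` patch above concrete, here is a small usage sketch. It assumes the patched `ariadne` package, which exports `resolve_to` from `ariadne/__init__.py`; the `User` type and its field names are illustrative only.

```python
from ariadne import resolve_to

# resolve_to(name) builds a resolver that reads `name` from the parent,
# whether the parent is a dict (key lookup) or an object (attribute lookup).
resolvers_map = {
    "User": {
        # camelCase GraphQL field mapped explicitly to a snake_case attribute
        "lastVisitedOn": resolve_to("last_visited_on"),
        # ...or to an arbitrarily named attribute
        "displayName": resolve_to("username"),
    }
}

# The returned resolver can also be exercised directly:
resolver = resolve_to("last_visited_on")
print(resolver({"last_visited_on": "2018-07-01"}))  # dict parent -> 2018-07-01


class User:
    last_visited_on = "2018-07-02"


print(resolver(User()))  # object parent -> 2018-07-02
```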
gh_patches_debug_5210
|
rasdani/github-patches
|
git_diff
|
fedora-infra__bodhi-3464
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Greenwave consumer should not process the compose results
Currently the greenwave consumer consumes all the messages from `topic=org.fedoraproject.prod.greenwave.decision.update`, but some messages on this topic are related to compose results rather than to koji builds.
The consumer should filter the messages based on the `subject_type`, and only process the `koji_build` type.
See a koji_build example https://apps.fedoraproject.org/datagrepper/id?id=2019-efb310ed-6acf-4742-809f-90a40c2a5855&is_raw=true&size=extra-large
And a compose example https://apps.fedoraproject.org/datagrepper/id?id=2019-33002e05-241c-4180-8953-5a29cea4443d&is_raw=true&size=extra-large
This is currently raising the following exception in production:
```
2019-07-26 15:16:55,501 ERROR [bodhi.server][PoolThread-twisted.internet.reactor-0] Bodhi failed to send POST request to Greenwave at the following URL "https://greenwave-web-greenwave.app.os.fedoraproject.org/api/v1.0/decision". The status code was "502". The error was "{'message': 'Failed to retrieve SCM URL from Koji build "Fedora-Workstation-Rawhide-20190726.n.0" at "https://koji.fedoraproject.org/kojihub" (expected SCM URL in "source" attribute)'}".
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bodhi/server/consumers/greenwave.py`
Content:
```
1 # Copyright © 2019 Red Hat, Inc.
2 #
3 # This file is part of Bodhi.
4 #
5 # This program is free software; you can redistribute it and/or
6 # modify it under the terms of the GNU General Public License
7 # as published by the Free Software Foundation; either version 2
8 # of the License, or (at your option) any later version.
9 #
10 # This program is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License along with
16 # this program; if not, write to the Free Software Foundation, Inc., 51
17 # Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
18 """
19 The "greenwave handler".
20
21 This module is responsible for listening for messages from greenwave.
22 It then updates the policies of the build that greenwave checked.
23 """
24
25 import logging
26
27 import fedora_messaging
28
29 from bodhi.server.models import Build
30 from bodhi.server.util import transactional_session_maker
31
32 log = logging.getLogger(__name__)
33
34
35 class GreenwaveHandler:
36 """
37 The Bodhi Greenwave Handler.
38
39 A fedora-messaging listener waiting for messages from greenwave about enforced policies.
40 """
41
42 def __init__(self):
43 """Initialize the GreenwaveHandler."""
44 self.db_factory = transactional_session_maker()
45
46 def __call__(self, message: fedora_messaging.api.Message):
47 """Handle messages arriving with the configured topic."""
48 msg = message.body
49 if not msg:
50 log.debug("Ignoring message without body.")
51 return
52
53 subject_identifier = msg.get("subject_identifier")
54
55 if subject_identifier is None:
56 log.debug("Couldn't find subject_identifier in Greenwave message")
57 return
58
59 with self.db_factory():
60
61 build = Build.get(subject_identifier)
62 if build is None:
63 log.debug(f"Couldn't find build {subject_identifier} in DB")
64 return
65
66 log.info(f"Updating the test_gating_status for: {build.update.alias}")
67 build.update.update_test_gating_status()
68
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bodhi/server/consumers/greenwave.py b/bodhi/server/consumers/greenwave.py
--- a/bodhi/server/consumers/greenwave.py
+++ b/bodhi/server/consumers/greenwave.py
@@ -56,6 +56,11 @@
log.debug("Couldn't find subject_identifier in Greenwave message")
return
+ subject_type = msg.get("subject_type")
+ if subject_type == "compose":
+ log.debug("Not requesting a decision for a compose")
+ return
+
with self.db_factory():
build = Build.get(subject_identifier)
|
{"golden_diff": "diff --git a/bodhi/server/consumers/greenwave.py b/bodhi/server/consumers/greenwave.py\n--- a/bodhi/server/consumers/greenwave.py\n+++ b/bodhi/server/consumers/greenwave.py\n@@ -56,6 +56,11 @@\n log.debug(\"Couldn't find subject_identifier in Greenwave message\")\n return\n \n+ subject_type = msg.get(\"subject_type\")\n+ if subject_type == \"compose\":\n+ log.debug(\"Not requesting a decision for a compose\")\n+ return\n+\n with self.db_factory():\n \n build = Build.get(subject_identifier)\n", "issue": "Greenwave consumer should not process the compose results\nCurrently the greenwave consumer consumes all the messages from `topic=org.fedoraproject.prod.greenwave.decision.update` but some messages on this topic are related to compose results and not koji build.\r\n\r\nThe consumer should filter the messages based on the `subject_type`, and only process `koji_build` type.\r\n\r\nSee a koji_build example https://apps.fedoraproject.org/datagrepper/id?id=2019-efb310ed-6acf-4742-809f-90a40c2a5855&is_raw=true&size=extra-large\r\n\r\nAnd a compose example https://apps.fedoraproject.org/datagrepper/id?id=2019-33002e05-241c-4180-8953-5a29cea4443d&is_raw=true&size=extra-large\r\n\r\nThis is currently raising the following exception in production : \r\n``` 2019-07-26 15:16:55,501 ERROR [bodhi.server][PoolThread-twisted.internet.reactor-0] Bodhi failed to send POST request to Greenwave at the following URL \"https://greenwave-web-greenwave.app.os.fedoraproject.org/api/v1.0/decision\". The status code was \"502\". The error was \"{'message': 'Failed to retrieve SCM URL from Koji build \"Fedora-Workstation-Rawhide-20190726.n.0\" at \"https://koji.fedoraproject.org/kojihub\" (expected SCM URL in \"source\" attribute)'}\". ```\n", "before_files": [{"content": "# Copyright \u00a9 2019 Red Hat, Inc.\n#\n# This file is part of Bodhi.\n#\n# This program is free software; you can redistribute it and/or\n# modify it under the terms of the GNU General Public License\n# as published by the Free Software Foundation; either version 2\n# of the License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\"\"\"\nThe \"greenwave handler\".\n\nThis module is responsible for listening for messages from greenwave.\nIt then updates the policies of the build that greenwave checked.\n\"\"\"\n\nimport logging\n\nimport fedora_messaging\n\nfrom bodhi.server.models import Build\nfrom bodhi.server.util import transactional_session_maker\n\nlog = logging.getLogger(__name__)\n\n\nclass GreenwaveHandler:\n \"\"\"\n The Bodhi Greenwave Handler.\n\n A fedora-messaging listener waiting for messages from greenwave about enforced policies.\n \"\"\"\n\n def __init__(self):\n \"\"\"Initialize the GreenwaveHandler.\"\"\"\n self.db_factory = transactional_session_maker()\n\n def __call__(self, message: fedora_messaging.api.Message):\n \"\"\"Handle messages arriving with the configured topic.\"\"\"\n msg = message.body\n if not msg:\n log.debug(\"Ignoring message without body.\")\n return\n\n subject_identifier = msg.get(\"subject_identifier\")\n\n if subject_identifier is None:\n log.debug(\"Couldn't find subject_identifier in Greenwave message\")\n return\n\n with self.db_factory():\n\n build = Build.get(subject_identifier)\n if build is None:\n log.debug(f\"Couldn't find build {subject_identifier} in DB\")\n return\n\n log.info(f\"Updating the test_gating_status for: {build.update.alias}\")\n build.update.update_test_gating_status()\n", "path": "bodhi/server/consumers/greenwave.py"}], "after_files": [{"content": "# Copyright \u00a9 2019 Red Hat, Inc.\n#\n# This file is part of Bodhi.\n#\n# This program is free software; you can redistribute it and/or\n# modify it under the terms of the GNU General Public License\n# as published by the Free Software Foundation; either version 2\n# of the License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\"\"\"\nThe \"greenwave handler\".\n\nThis module is responsible for listening for messages from greenwave.\nIt then updates the policies of the build that greenwave checked.\n\"\"\"\n\nimport logging\n\nimport fedora_messaging\n\nfrom bodhi.server.models import Build\nfrom bodhi.server.util import transactional_session_maker\n\nlog = logging.getLogger(__name__)\n\n\nclass GreenwaveHandler:\n \"\"\"\n The Bodhi Greenwave Handler.\n\n A fedora-messaging listener waiting for messages from greenwave about enforced policies.\n \"\"\"\n\n def __init__(self):\n \"\"\"Initialize the GreenwaveHandler.\"\"\"\n self.db_factory = transactional_session_maker()\n\n def __call__(self, message: fedora_messaging.api.Message):\n \"\"\"Handle messages arriving with the configured topic.\"\"\"\n msg = message.body\n if not msg:\n log.debug(\"Ignoring message without body.\")\n return\n\n subject_identifier = msg.get(\"subject_identifier\")\n\n if subject_identifier is None:\n log.debug(\"Couldn't find subject_identifier in Greenwave message\")\n return\n\n subject_type = msg.get(\"subject_type\")\n if subject_type == \"compose\":\n log.debug(\"Not requesting a decision for a compose\")\n return\n\n with self.db_factory():\n\n build = Build.get(subject_identifier)\n if build is None:\n log.debug(f\"Couldn't find build {subject_identifier} in DB\")\n return\n\n log.info(f\"Updating the test_gating_status for: {build.update.alias}\")\n build.update.update_test_gating_status()\n", "path": "bodhi/server/consumers/greenwave.py"}]}
| 1,253 | 139 |
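The essence of the fix above is an early return keyed on `subject_type`. The sketch below isolates that filtering logic from the Bodhi handler class so it can run on its own; the message bodies are trimmed, hypothetical shapes modelled on the datagrepper examples linked in the issue.

```python
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("greenwave-filter-sketch")


def should_request_decision(msg: dict) -> bool:
    """Apply the same early-exit checks as the patched GreenwaveHandler.__call__."""
    if not msg:
        log.debug("Ignoring message without body.")
        return False
    if msg.get("subject_identifier") is None:
        log.debug("Couldn't find subject_identifier in Greenwave message")
        return False
    if msg.get("subject_type") == "compose":
        log.debug("Not requesting a decision for a compose")
        return False
    return True


koji_build_msg = {
    "subject_type": "koji_build",
    "subject_identifier": "example-package-1.0-1.fc31",  # hypothetical build NVR
}
compose_msg = {
    "subject_type": "compose",
    "subject_identifier": "Fedora-Workstation-Rawhide-20190726.n.0",
}

assert should_request_decision(koji_build_msg)
assert not should_request_decision(compose_msg)
```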
gh_patches_debug_36513
|
rasdani/github-patches
|
git_diff
|
python-discord__bot-692
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Mods+ should be permitted to use any command in #checkpoint
Currently the bot will ignore all commands inside #checkpoint. However, there are cases where this is not ideal, for example if we want to clean the channel up with the !clean command.
Write a fix that allows moderators, admins and owners to use any command in this channel, but still ignores everything but `!accept` from anyone else.
Make the bot delete any message it posts in #checkpoint after a while
Currently, messages posted in #checkpoint by the bot will usually be posted with a `delete_after` of 10 seconds or so, so they will automatically be removed. This helps the channel stay clean so that the important messages at the top of it will stay there. The bot also automatically deletes any messages posted in this channel by other users.
However, when the bot posts messages in this channel that are not related to the Verification cog, they will never get deleted. For example, if a user in this channel triggers the spam filter, the bot will post a message about muting the user which will never be deleted.
We want this channel to stay clean, so let's implement some logic that ensures that **any message posted by the bot** in this channel will be deleted after a short time, like 10 seconds.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bot/cogs/verification.py`
Content:
```
1 import logging
2 from datetime import datetime
3
4 from discord import Colour, Message, NotFound, Object
5 from discord.ext import tasks
6 from discord.ext.commands import Cog, Context, command
7
8 from bot.bot import Bot
9 from bot.cogs.moderation import ModLog
10 from bot.constants import (
11 Bot as BotConfig,
12 Channels, Colours, Event,
13 Filter, Icons, Roles
14 )
15 from bot.decorators import InChannelCheckFailure, in_channel, without_role
16
17 log = logging.getLogger(__name__)
18
19 WELCOME_MESSAGE = f"""
20 Hello! Welcome to the server, and thanks for verifying yourself!
21
22 For your records, these are the documents you accepted:
23
24 `1)` Our rules, here: <https://pythondiscord.com/pages/rules>
25 `2)` Our privacy policy, here: <https://pythondiscord.com/pages/privacy> - you can find information on how to have \
26 your information removed here as well.
27
28 Feel free to review them at any point!
29
30 Additionally, if you'd like to receive notifications for the announcements we post in <#{Channels.announcements}> \
31 from time to time, you can send `!subscribe` to <#{Channels.bot}> at any time to assign yourself the \
32 **Announcements** role. We'll mention this role every time we make an announcement.
33
34 If you'd like to unsubscribe from the announcement notifications, simply send `!unsubscribe` to <#{Channels.bot}>.
35 """
36
37 PERIODIC_PING = (
38 f"@everyone To verify that you have read our rules, please type `{BotConfig.prefix}accept`."
39 f" If you encounter any problems during the verification process, ping the <@&{Roles.admin}> role in this channel."
40 )
41
42
43 class Verification(Cog):
44 """User verification and role self-management."""
45
46 def __init__(self, bot: Bot):
47 self.bot = bot
48 self.periodic_ping.start()
49
50 @property
51 def mod_log(self) -> ModLog:
52 """Get currently loaded ModLog cog instance."""
53 return self.bot.get_cog("ModLog")
54
55 @Cog.listener()
56 async def on_message(self, message: Message) -> None:
57 """Check new message event for messages to the checkpoint channel & process."""
58 if message.author.bot:
59 return # They're a bot, ignore
60
61 if message.channel.id != Channels.verification:
62 return # Only listen for #checkpoint messages
63
64 # if a user mentions a role or guild member
65 # alert the mods in mod-alerts channel
66 if message.mentions or message.role_mentions:
67 log.debug(
68 f"{message.author} mentioned one or more users "
69 f"and/or roles in {message.channel.name}"
70 )
71
72 embed_text = (
73 f"{message.author.mention} sent a message in "
74 f"{message.channel.mention} that contained user and/or role mentions."
75 f"\n\n**Original message:**\n>>> {message.content}"
76 )
77
78 # Send pretty mod log embed to mod-alerts
79 await self.mod_log.send_log_message(
80 icon_url=Icons.filtering,
81 colour=Colour(Colours.soft_red),
82 title=f"User/Role mentioned in {message.channel.name}",
83 text=embed_text,
84 thumbnail=message.author.avatar_url_as(static_format="png"),
85 channel_id=Channels.mod_alerts,
86 ping_everyone=Filter.ping_everyone,
87 )
88
89 ctx = await self.bot.get_context(message) # type: Context
90
91 if ctx.command is not None and ctx.command.name == "accept":
92 return # They used the accept command
93
94 for role in ctx.author.roles:
95 if role.id == Roles.verified:
96 log.warning(f"{ctx.author} posted '{ctx.message.content}' "
97 "in the verification channel, but is already verified.")
98 return # They're already verified
99
100 log.debug(f"{ctx.author} posted '{ctx.message.content}' in the verification "
101 "channel. We are providing instructions how to verify.")
102 await ctx.send(
103 f"{ctx.author.mention} Please type `!accept` to verify that you accept our rules, "
104 f"and gain access to the rest of the server.",
105 delete_after=20
106 )
107
108 log.trace(f"Deleting the message posted by {ctx.author}")
109
110 try:
111 await ctx.message.delete()
112 except NotFound:
113 log.trace("No message found, it must have been deleted by another bot.")
114
115 @command(name='accept', aliases=('verify', 'verified', 'accepted'), hidden=True)
116 @without_role(Roles.verified)
117 @in_channel(Channels.verification)
118 async def accept_command(self, ctx: Context, *_) -> None: # We don't actually care about the args
119 """Accept our rules and gain access to the rest of the server."""
120 log.debug(f"{ctx.author} called !accept. Assigning the 'Developer' role.")
121 await ctx.author.add_roles(Object(Roles.verified), reason="Accepted the rules")
122 try:
123 await ctx.author.send(WELCOME_MESSAGE)
124 except Exception:
125 # Catch the exception, in case they have DMs off or something
126 log.exception(f"Unable to send welcome message to user {ctx.author}.")
127
128 log.trace(f"Deleting the message posted by {ctx.author}.")
129
130 try:
131 self.mod_log.ignore(Event.message_delete, ctx.message.id)
132 await ctx.message.delete()
133 except NotFound:
134 log.trace("No message found, it must have been deleted by another bot.")
135
136 @command(name='subscribe')
137 @in_channel(Channels.bot)
138 async def subscribe_command(self, ctx: Context, *_) -> None: # We don't actually care about the args
139 """Subscribe to announcement notifications by assigning yourself the role."""
140 has_role = False
141
142 for role in ctx.author.roles:
143 if role.id == Roles.announcements:
144 has_role = True
145 break
146
147 if has_role:
148 await ctx.send(f"{ctx.author.mention} You're already subscribed!")
149 return
150
151 log.debug(f"{ctx.author} called !subscribe. Assigning the 'Announcements' role.")
152 await ctx.author.add_roles(Object(Roles.announcements), reason="Subscribed to announcements")
153
154 log.trace(f"Deleting the message posted by {ctx.author}.")
155
156 await ctx.send(
157 f"{ctx.author.mention} Subscribed to <#{Channels.announcements}> notifications.",
158 )
159
160 @command(name='unsubscribe')
161 @in_channel(Channels.bot)
162 async def unsubscribe_command(self, ctx: Context, *_) -> None: # We don't actually care about the args
163 """Unsubscribe from announcement notifications by removing the role from yourself."""
164 has_role = False
165
166 for role in ctx.author.roles:
167 if role.id == Roles.announcements:
168 has_role = True
169 break
170
171 if not has_role:
172 await ctx.send(f"{ctx.author.mention} You're already unsubscribed!")
173 return
174
175 log.debug(f"{ctx.author} called !unsubscribe. Removing the 'Announcements' role.")
176 await ctx.author.remove_roles(Object(Roles.announcements), reason="Unsubscribed from announcements")
177
178 log.trace(f"Deleting the message posted by {ctx.author}.")
179
180 await ctx.send(
181 f"{ctx.author.mention} Unsubscribed from <#{Channels.announcements}> notifications."
182 )
183
184 # This cannot be static (must have a __func__ attribute).
185 async def cog_command_error(self, ctx: Context, error: Exception) -> None:
186 """Check for & ignore any InChannelCheckFailure."""
187 if isinstance(error, InChannelCheckFailure):
188 error.handled = True
189
190 @staticmethod
191 def bot_check(ctx: Context) -> bool:
192 """Block any command within the verification channel that is not !accept."""
193 if ctx.channel.id == Channels.verification:
194 return ctx.command.name == "accept"
195 else:
196 return True
197
198 @tasks.loop(hours=12)
199 async def periodic_ping(self) -> None:
200 """Every week, mention @everyone to remind them to verify."""
201 messages = self.bot.get_channel(Channels.verification).history(limit=10)
202 need_to_post = True # True if a new message needs to be sent.
203
204 async for message in messages:
205 if message.author == self.bot.user and message.content == PERIODIC_PING:
206 delta = datetime.utcnow() - message.created_at # Time since last message.
207 if delta.days >= 7: # Message is older than a week.
208 await message.delete()
209 else:
210 need_to_post = False
211
212 break
213
214 if need_to_post:
215 await self.bot.get_channel(Channels.verification).send(PERIODIC_PING)
216
217 @periodic_ping.before_loop
218 async def before_ping(self) -> None:
219 """Only start the loop when the bot is ready."""
220 await self.bot.wait_until_ready()
221
222 def cog_unload(self) -> None:
223 """Cancel the periodic ping task when the cog is unloaded."""
224 self.periodic_ping.cancel()
225
226
227 def setup(bot: Bot) -> None:
228 """Load the Verification cog."""
229 bot.add_cog(Verification(bot))
230
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bot/cogs/verification.py b/bot/cogs/verification.py
--- a/bot/cogs/verification.py
+++ b/bot/cogs/verification.py
@@ -10,9 +10,10 @@
from bot.constants import (
Bot as BotConfig,
Channels, Colours, Event,
- Filter, Icons, Roles
+ Filter, Icons, MODERATION_ROLES, Roles
)
from bot.decorators import InChannelCheckFailure, in_channel, without_role
+from bot.utils.checks import without_role_check
log = logging.getLogger(__name__)
@@ -38,6 +39,7 @@
f"@everyone To verify that you have read our rules, please type `{BotConfig.prefix}accept`."
f" If you encounter any problems during the verification process, ping the <@&{Roles.admin}> role in this channel."
)
+BOT_MESSAGE_DELETE_DELAY = 10
class Verification(Cog):
@@ -55,12 +57,16 @@
@Cog.listener()
async def on_message(self, message: Message) -> None:
"""Check new message event for messages to the checkpoint channel & process."""
- if message.author.bot:
- return # They're a bot, ignore
-
if message.channel.id != Channels.verification:
return # Only listen for #checkpoint messages
+ if message.author.bot:
+ # They're a bot, delete their message after the delay.
+ # But not the periodic ping; we like that one.
+ if message.content != PERIODIC_PING:
+ await message.delete(delay=BOT_MESSAGE_DELETE_DELAY)
+ return
+
# if a user mentions a role or guild member
# alert the mods in mod-alerts channel
if message.mentions or message.role_mentions:
@@ -190,7 +196,7 @@
@staticmethod
def bot_check(ctx: Context) -> bool:
"""Block any command within the verification channel that is not !accept."""
- if ctx.channel.id == Channels.verification:
+ if ctx.channel.id == Channels.verification and without_role_check(ctx, *MODERATION_ROLES):
return ctx.command.name == "accept"
else:
return True
|
{"golden_diff": "diff --git a/bot/cogs/verification.py b/bot/cogs/verification.py\n--- a/bot/cogs/verification.py\n+++ b/bot/cogs/verification.py\n@@ -10,9 +10,10 @@\n from bot.constants import (\n Bot as BotConfig,\n Channels, Colours, Event,\n- Filter, Icons, Roles\n+ Filter, Icons, MODERATION_ROLES, Roles\n )\n from bot.decorators import InChannelCheckFailure, in_channel, without_role\n+from bot.utils.checks import without_role_check\n \n log = logging.getLogger(__name__)\n \n@@ -38,6 +39,7 @@\n f\"@everyone To verify that you have read our rules, please type `{BotConfig.prefix}accept`.\"\n f\" If you encounter any problems during the verification process, ping the <@&{Roles.admin}> role in this channel.\"\n )\n+BOT_MESSAGE_DELETE_DELAY = 10\n \n \n class Verification(Cog):\n@@ -55,12 +57,16 @@\n @Cog.listener()\n async def on_message(self, message: Message) -> None:\n \"\"\"Check new message event for messages to the checkpoint channel & process.\"\"\"\n- if message.author.bot:\n- return # They're a bot, ignore\n-\n if message.channel.id != Channels.verification:\n return # Only listen for #checkpoint messages\n \n+ if message.author.bot:\n+ # They're a bot, delete their message after the delay.\n+ # But not the periodic ping; we like that one.\n+ if message.content != PERIODIC_PING:\n+ await message.delete(delay=BOT_MESSAGE_DELETE_DELAY)\n+ return\n+\n # if a user mentions a role or guild member\n # alert the mods in mod-alerts channel\n if message.mentions or message.role_mentions:\n@@ -190,7 +196,7 @@\n @staticmethod\n def bot_check(ctx: Context) -> bool:\n \"\"\"Block any command within the verification channel that is not !accept.\"\"\"\n- if ctx.channel.id == Channels.verification:\n+ if ctx.channel.id == Channels.verification and without_role_check(ctx, *MODERATION_ROLES):\n return ctx.command.name == \"accept\"\n else:\n return True\n", "issue": "Mods+ should be permitted to use any command in #checkpoint\nCurrently the bot will ignore all commands inside #checkpoint. However, there are cases where this is not ideal, for example if we want to clean the channel up with the !clean command.\r\n\r\nWrite a fix that allows moderators, admins and owners to use any command in this channel, but still ignores everything but `!accept` from anyone else.\nMake the bot delete any message it posts in #checkpoint after a while\nCurrently, messages posted in #checkpoint by the bot will usually be posted with a `delete_after` of 10 seconds or so, so they will automatically be removed. This helps the channel stay clean so that the important messages at the top of it will stay there. The bot also automatically deletes any messages posted in this channel by other users.\r\n\r\nHowever, when the bot posts messages in this channel that are not related to the Verification cog, they will never get deleted. 
For example, if a user in this channel triggers the spam filter, the bot will post a message about muting the user which will never be deleted.\r\n\r\nWe want this channel to stay clean, so let's implement some logics that ensure that **any message posted by the bot** in this channel will be deleted after a short time, like 10 seconds.\n", "before_files": [{"content": "import logging\nfrom datetime import datetime\n\nfrom discord import Colour, Message, NotFound, Object\nfrom discord.ext import tasks\nfrom discord.ext.commands import Cog, Context, command\n\nfrom bot.bot import Bot\nfrom bot.cogs.moderation import ModLog\nfrom bot.constants import (\n Bot as BotConfig,\n Channels, Colours, Event,\n Filter, Icons, Roles\n)\nfrom bot.decorators import InChannelCheckFailure, in_channel, without_role\n\nlog = logging.getLogger(__name__)\n\nWELCOME_MESSAGE = f\"\"\"\nHello! Welcome to the server, and thanks for verifying yourself!\n\nFor your records, these are the documents you accepted:\n\n`1)` Our rules, here: <https://pythondiscord.com/pages/rules>\n`2)` Our privacy policy, here: <https://pythondiscord.com/pages/privacy> - you can find information on how to have \\\nyour information removed here as well.\n\nFeel free to review them at any point!\n\nAdditionally, if you'd like to receive notifications for the announcements we post in <#{Channels.announcements}> \\\nfrom time to time, you can send `!subscribe` to <#{Channels.bot}> at any time to assign yourself the \\\n**Announcements** role. We'll mention this role every time we make an announcement.\n\nIf you'd like to unsubscribe from the announcement notifications, simply send `!unsubscribe` to <#{Channels.bot}>.\n\"\"\"\n\nPERIODIC_PING = (\n f\"@everyone To verify that you have read our rules, please type `{BotConfig.prefix}accept`.\"\n f\" If you encounter any problems during the verification process, ping the <@&{Roles.admin}> role in this channel.\"\n)\n\n\nclass Verification(Cog):\n \"\"\"User verification and role self-management.\"\"\"\n\n def __init__(self, bot: Bot):\n self.bot = bot\n self.periodic_ping.start()\n\n @property\n def mod_log(self) -> ModLog:\n \"\"\"Get currently loaded ModLog cog instance.\"\"\"\n return self.bot.get_cog(\"ModLog\")\n\n @Cog.listener()\n async def on_message(self, message: Message) -> None:\n \"\"\"Check new message event for messages to the checkpoint channel & process.\"\"\"\n if message.author.bot:\n return # They're a bot, ignore\n\n if message.channel.id != Channels.verification:\n return # Only listen for #checkpoint messages\n\n # if a user mentions a role or guild member\n # alert the mods in mod-alerts channel\n if message.mentions or message.role_mentions:\n log.debug(\n f\"{message.author} mentioned one or more users \"\n f\"and/or roles in {message.channel.name}\"\n )\n\n embed_text = (\n f\"{message.author.mention} sent a message in \"\n f\"{message.channel.mention} that contained user and/or role mentions.\"\n f\"\\n\\n**Original message:**\\n>>> {message.content}\"\n )\n\n # Send pretty mod log embed to mod-alerts\n await self.mod_log.send_log_message(\n icon_url=Icons.filtering,\n colour=Colour(Colours.soft_red),\n title=f\"User/Role mentioned in {message.channel.name}\",\n text=embed_text,\n thumbnail=message.author.avatar_url_as(static_format=\"png\"),\n channel_id=Channels.mod_alerts,\n ping_everyone=Filter.ping_everyone,\n )\n\n ctx = await self.bot.get_context(message) # type: Context\n\n if ctx.command is not None and ctx.command.name == \"accept\":\n return # They used the 
accept command\n\n for role in ctx.author.roles:\n if role.id == Roles.verified:\n log.warning(f\"{ctx.author} posted '{ctx.message.content}' \"\n \"in the verification channel, but is already verified.\")\n return # They're already verified\n\n log.debug(f\"{ctx.author} posted '{ctx.message.content}' in the verification \"\n \"channel. We are providing instructions how to verify.\")\n await ctx.send(\n f\"{ctx.author.mention} Please type `!accept` to verify that you accept our rules, \"\n f\"and gain access to the rest of the server.\",\n delete_after=20\n )\n\n log.trace(f\"Deleting the message posted by {ctx.author}\")\n\n try:\n await ctx.message.delete()\n except NotFound:\n log.trace(\"No message found, it must have been deleted by another bot.\")\n\n @command(name='accept', aliases=('verify', 'verified', 'accepted'), hidden=True)\n @without_role(Roles.verified)\n @in_channel(Channels.verification)\n async def accept_command(self, ctx: Context, *_) -> None: # We don't actually care about the args\n \"\"\"Accept our rules and gain access to the rest of the server.\"\"\"\n log.debug(f\"{ctx.author} called !accept. Assigning the 'Developer' role.\")\n await ctx.author.add_roles(Object(Roles.verified), reason=\"Accepted the rules\")\n try:\n await ctx.author.send(WELCOME_MESSAGE)\n except Exception:\n # Catch the exception, in case they have DMs off or something\n log.exception(f\"Unable to send welcome message to user {ctx.author}.\")\n\n log.trace(f\"Deleting the message posted by {ctx.author}.\")\n\n try:\n self.mod_log.ignore(Event.message_delete, ctx.message.id)\n await ctx.message.delete()\n except NotFound:\n log.trace(\"No message found, it must have been deleted by another bot.\")\n\n @command(name='subscribe')\n @in_channel(Channels.bot)\n async def subscribe_command(self, ctx: Context, *_) -> None: # We don't actually care about the args\n \"\"\"Subscribe to announcement notifications by assigning yourself the role.\"\"\"\n has_role = False\n\n for role in ctx.author.roles:\n if role.id == Roles.announcements:\n has_role = True\n break\n\n if has_role:\n await ctx.send(f\"{ctx.author.mention} You're already subscribed!\")\n return\n\n log.debug(f\"{ctx.author} called !subscribe. Assigning the 'Announcements' role.\")\n await ctx.author.add_roles(Object(Roles.announcements), reason=\"Subscribed to announcements\")\n\n log.trace(f\"Deleting the message posted by {ctx.author}.\")\n\n await ctx.send(\n f\"{ctx.author.mention} Subscribed to <#{Channels.announcements}> notifications.\",\n )\n\n @command(name='unsubscribe')\n @in_channel(Channels.bot)\n async def unsubscribe_command(self, ctx: Context, *_) -> None: # We don't actually care about the args\n \"\"\"Unsubscribe from announcement notifications by removing the role from yourself.\"\"\"\n has_role = False\n\n for role in ctx.author.roles:\n if role.id == Roles.announcements:\n has_role = True\n break\n\n if not has_role:\n await ctx.send(f\"{ctx.author.mention} You're already unsubscribed!\")\n return\n\n log.debug(f\"{ctx.author} called !unsubscribe. 
Removing the 'Announcements' role.\")\n await ctx.author.remove_roles(Object(Roles.announcements), reason=\"Unsubscribed from announcements\")\n\n log.trace(f\"Deleting the message posted by {ctx.author}.\")\n\n await ctx.send(\n f\"{ctx.author.mention} Unsubscribed from <#{Channels.announcements}> notifications.\"\n )\n\n # This cannot be static (must have a __func__ attribute).\n async def cog_command_error(self, ctx: Context, error: Exception) -> None:\n \"\"\"Check for & ignore any InChannelCheckFailure.\"\"\"\n if isinstance(error, InChannelCheckFailure):\n error.handled = True\n\n @staticmethod\n def bot_check(ctx: Context) -> bool:\n \"\"\"Block any command within the verification channel that is not !accept.\"\"\"\n if ctx.channel.id == Channels.verification:\n return ctx.command.name == \"accept\"\n else:\n return True\n\n @tasks.loop(hours=12)\n async def periodic_ping(self) -> None:\n \"\"\"Every week, mention @everyone to remind them to verify.\"\"\"\n messages = self.bot.get_channel(Channels.verification).history(limit=10)\n need_to_post = True # True if a new message needs to be sent.\n\n async for message in messages:\n if message.author == self.bot.user and message.content == PERIODIC_PING:\n delta = datetime.utcnow() - message.created_at # Time since last message.\n if delta.days >= 7: # Message is older than a week.\n await message.delete()\n else:\n need_to_post = False\n\n break\n\n if need_to_post:\n await self.bot.get_channel(Channels.verification).send(PERIODIC_PING)\n\n @periodic_ping.before_loop\n async def before_ping(self) -> None:\n \"\"\"Only start the loop when the bot is ready.\"\"\"\n await self.bot.wait_until_ready()\n\n def cog_unload(self) -> None:\n \"\"\"Cancel the periodic ping task when the cog is unloaded.\"\"\"\n self.periodic_ping.cancel()\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Load the Verification cog.\"\"\"\n bot.add_cog(Verification(bot))\n", "path": "bot/cogs/verification.py"}], "after_files": [{"content": "import logging\nfrom datetime import datetime\n\nfrom discord import Colour, Message, NotFound, Object\nfrom discord.ext import tasks\nfrom discord.ext.commands import Cog, Context, command\n\nfrom bot.bot import Bot\nfrom bot.cogs.moderation import ModLog\nfrom bot.constants import (\n Bot as BotConfig,\n Channels, Colours, Event,\n Filter, Icons, MODERATION_ROLES, Roles\n)\nfrom bot.decorators import InChannelCheckFailure, in_channel, without_role\nfrom bot.utils.checks import without_role_check\n\nlog = logging.getLogger(__name__)\n\nWELCOME_MESSAGE = f\"\"\"\nHello! Welcome to the server, and thanks for verifying yourself!\n\nFor your records, these are the documents you accepted:\n\n`1)` Our rules, here: <https://pythondiscord.com/pages/rules>\n`2)` Our privacy policy, here: <https://pythondiscord.com/pages/privacy> - you can find information on how to have \\\nyour information removed here as well.\n\nFeel free to review them at any point!\n\nAdditionally, if you'd like to receive notifications for the announcements we post in <#{Channels.announcements}> \\\nfrom time to time, you can send `!subscribe` to <#{Channels.bot}> at any time to assign yourself the \\\n**Announcements** role. 
We'll mention this role every time we make an announcement.\n\nIf you'd like to unsubscribe from the announcement notifications, simply send `!unsubscribe` to <#{Channels.bot}>.\n\"\"\"\n\nPERIODIC_PING = (\n f\"@everyone To verify that you have read our rules, please type `{BotConfig.prefix}accept`.\"\n f\" If you encounter any problems during the verification process, ping the <@&{Roles.admin}> role in this channel.\"\n)\nBOT_MESSAGE_DELETE_DELAY = 10\n\n\nclass Verification(Cog):\n \"\"\"User verification and role self-management.\"\"\"\n\n def __init__(self, bot: Bot):\n self.bot = bot\n self.periodic_ping.start()\n\n @property\n def mod_log(self) -> ModLog:\n \"\"\"Get currently loaded ModLog cog instance.\"\"\"\n return self.bot.get_cog(\"ModLog\")\n\n @Cog.listener()\n async def on_message(self, message: Message) -> None:\n \"\"\"Check new message event for messages to the checkpoint channel & process.\"\"\"\n if message.channel.id != Channels.verification:\n return # Only listen for #checkpoint messages\n\n if message.author.bot:\n # They're a bot, delete their message after the delay.\n # But not the periodic ping; we like that one.\n if message.content != PERIODIC_PING:\n await message.delete(delay=BOT_MESSAGE_DELETE_DELAY)\n return\n\n # if a user mentions a role or guild member\n # alert the mods in mod-alerts channel\n if message.mentions or message.role_mentions:\n log.debug(\n f\"{message.author} mentioned one or more users \"\n f\"and/or roles in {message.channel.name}\"\n )\n\n embed_text = (\n f\"{message.author.mention} sent a message in \"\n f\"{message.channel.mention} that contained user and/or role mentions.\"\n f\"\\n\\n**Original message:**\\n>>> {message.content}\"\n )\n\n # Send pretty mod log embed to mod-alerts\n await self.mod_log.send_log_message(\n icon_url=Icons.filtering,\n colour=Colour(Colours.soft_red),\n title=f\"User/Role mentioned in {message.channel.name}\",\n text=embed_text,\n thumbnail=message.author.avatar_url_as(static_format=\"png\"),\n channel_id=Channels.mod_alerts,\n ping_everyone=Filter.ping_everyone,\n )\n\n ctx = await self.bot.get_context(message) # type: Context\n\n if ctx.command is not None and ctx.command.name == \"accept\":\n return # They used the accept command\n\n for role in ctx.author.roles:\n if role.id == Roles.verified:\n log.warning(f\"{ctx.author} posted '{ctx.message.content}' \"\n \"in the verification channel, but is already verified.\")\n return # They're already verified\n\n log.debug(f\"{ctx.author} posted '{ctx.message.content}' in the verification \"\n \"channel. We are providing instructions how to verify.\")\n await ctx.send(\n f\"{ctx.author.mention} Please type `!accept` to verify that you accept our rules, \"\n f\"and gain access to the rest of the server.\",\n delete_after=20\n )\n\n log.trace(f\"Deleting the message posted by {ctx.author}\")\n\n try:\n await ctx.message.delete()\n except NotFound:\n log.trace(\"No message found, it must have been deleted by another bot.\")\n\n @command(name='accept', aliases=('verify', 'verified', 'accepted'), hidden=True)\n @without_role(Roles.verified)\n @in_channel(Channels.verification)\n async def accept_command(self, ctx: Context, *_) -> None: # We don't actually care about the args\n \"\"\"Accept our rules and gain access to the rest of the server.\"\"\"\n log.debug(f\"{ctx.author} called !accept. 
Assigning the 'Developer' role.\")\n await ctx.author.add_roles(Object(Roles.verified), reason=\"Accepted the rules\")\n try:\n await ctx.author.send(WELCOME_MESSAGE)\n except Exception:\n # Catch the exception, in case they have DMs off or something\n log.exception(f\"Unable to send welcome message to user {ctx.author}.\")\n\n log.trace(f\"Deleting the message posted by {ctx.author}.\")\n\n try:\n self.mod_log.ignore(Event.message_delete, ctx.message.id)\n await ctx.message.delete()\n except NotFound:\n log.trace(\"No message found, it must have been deleted by another bot.\")\n\n @command(name='subscribe')\n @in_channel(Channels.bot)\n async def subscribe_command(self, ctx: Context, *_) -> None: # We don't actually care about the args\n \"\"\"Subscribe to announcement notifications by assigning yourself the role.\"\"\"\n has_role = False\n\n for role in ctx.author.roles:\n if role.id == Roles.announcements:\n has_role = True\n break\n\n if has_role:\n await ctx.send(f\"{ctx.author.mention} You're already subscribed!\")\n return\n\n log.debug(f\"{ctx.author} called !subscribe. Assigning the 'Announcements' role.\")\n await ctx.author.add_roles(Object(Roles.announcements), reason=\"Subscribed to announcements\")\n\n log.trace(f\"Deleting the message posted by {ctx.author}.\")\n\n await ctx.send(\n f\"{ctx.author.mention} Subscribed to <#{Channels.announcements}> notifications.\",\n )\n\n @command(name='unsubscribe')\n @in_channel(Channels.bot)\n async def unsubscribe_command(self, ctx: Context, *_) -> None: # We don't actually care about the args\n \"\"\"Unsubscribe from announcement notifications by removing the role from yourself.\"\"\"\n has_role = False\n\n for role in ctx.author.roles:\n if role.id == Roles.announcements:\n has_role = True\n break\n\n if not has_role:\n await ctx.send(f\"{ctx.author.mention} You're already unsubscribed!\")\n return\n\n log.debug(f\"{ctx.author} called !unsubscribe. 
Removing the 'Announcements' role.\")\n await ctx.author.remove_roles(Object(Roles.announcements), reason=\"Unsubscribed from announcements\")\n\n log.trace(f\"Deleting the message posted by {ctx.author}.\")\n\n await ctx.send(\n f\"{ctx.author.mention} Unsubscribed from <#{Channels.announcements}> notifications.\"\n )\n\n # This cannot be static (must have a __func__ attribute).\n async def cog_command_error(self, ctx: Context, error: Exception) -> None:\n \"\"\"Check for & ignore any InChannelCheckFailure.\"\"\"\n if isinstance(error, InChannelCheckFailure):\n error.handled = True\n\n @staticmethod\n def bot_check(ctx: Context) -> bool:\n \"\"\"Block any command within the verification channel that is not !accept.\"\"\"\n if ctx.channel.id == Channels.verification and without_role_check(ctx, *MODERATION_ROLES):\n return ctx.command.name == \"accept\"\n else:\n return True\n\n @tasks.loop(hours=12)\n async def periodic_ping(self) -> None:\n \"\"\"Every week, mention @everyone to remind them to verify.\"\"\"\n messages = self.bot.get_channel(Channels.verification).history(limit=10)\n need_to_post = True # True if a new message needs to be sent.\n\n async for message in messages:\n if message.author == self.bot.user and message.content == PERIODIC_PING:\n delta = datetime.utcnow() - message.created_at # Time since last message.\n if delta.days >= 7: # Message is older than a week.\n await message.delete()\n else:\n need_to_post = False\n\n break\n\n if need_to_post:\n await self.bot.get_channel(Channels.verification).send(PERIODIC_PING)\n\n @periodic_ping.before_loop\n async def before_ping(self) -> None:\n \"\"\"Only start the loop when the bot is ready.\"\"\"\n await self.bot.wait_until_ready()\n\n def cog_unload(self) -> None:\n \"\"\"Cancel the periodic ping task when the cog is unloaded.\"\"\"\n self.periodic_ping.cancel()\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Load the Verification cog.\"\"\"\n bot.add_cog(Verification(bot))\n", "path": "bot/cogs/verification.py"}]}
| 3,076 | 489 |
gh_patches_debug_25905
|
rasdani/github-patches
|
git_diff
|
yt-dlp__yt-dlp-3857
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bloomberg] Extractor does not work.
### Checklist
- [X] I'm reporting a broken site
- [X] I've verified that I'm running yt-dlp version **2022.05.18** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
USA
### Description
Bloomberg extractor complains about being unable to extract ID, same for any video.
I used "https://www.bloomberg.com/news/videos/2022-05-19/bloomberg-studio-1-0-roblox-ceo-david-baszucki-video" as an example.
### Verbose log
```shell
[debug] Command-line config: ['-vU', 'https://www.bloomberg.com/news/videos/2022-05-19/bloomberg-studio-1-0-roblox-ceo-david-baszucki-video']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] Lazy loading extractors is disabled
[debug] Python version 3.10.4 (CPython 64bit) - Windows-10-10.0.19044-SP0
[debug] Checking exe version: ffprobe -bsfs
[debug] Checking exe version: ffmpeg -bsfs
[debug] exe versions: ffmpeg N-106452-g7fbf1dd47e-20220329 (setts), ffprobe N-106452-g7fbf1dd47e-20220329
[debug] Optional libraries: Cryptodome-3.14.1, brotli-1.0.9, certifi-2022.05.18, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
[debug] Proxy map: {}
Latest version: 2022.05.18, Current version: 2022.05.18
yt-dlp is up to date (2022.05.18)
[debug] [Bloomberg] Extracting URL: https://www.bloomberg.com/news/videos/2022-05-19/bloomberg-studio-1-0-roblox-ceo-david-baszucki-video
[Bloomberg] bloomberg-studio-1-0-roblox-ceo-david-baszucki-video: Downloading webpage
ERROR: [Bloomberg] bloomberg-studio-1-0-roblox-ceo-david-baszucki-video: Unable to extract id; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "C:\Users\mystu\AppData\Local\Programs\Python\Python310\lib\site-packages\yt_dlp\extractor\common.py", line 642, in extract
ie_result = self._real_extract(url)
File "C:\Users\mystu\AppData\Local\Programs\Python\Python310\lib\site-packages\yt_dlp\extractor\bloomberg.py", line 54, in _real_extract
bplayer_data = self._parse_json(self._search_regex(
File "C:\Users\mystu\AppData\Local\Programs\Python\Python310\lib\site-packages\yt_dlp\extractor\common.py", line 1226, in _search_regex
raise RegexNotFoundError('Unable to extract %s' % _name)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `yt_dlp/extractor/bloomberg.py`
Content:
```
1 import re
2
3 from .common import InfoExtractor
4
5
6 class BloombergIE(InfoExtractor):
7 _VALID_URL = r'https?://(?:www\.)?bloomberg\.com/(?:[^/]+/)*(?P<id>[^/?#]+)'
8
9 _TESTS = [{
10 'url': 'http://www.bloomberg.com/news/videos/b/aaeae121-5949-481e-a1ce-4562db6f5df2',
11 # The md5 checksum changes
12 'info_dict': {
13 'id': 'qurhIVlJSB6hzkVi229d8g',
14 'ext': 'flv',
15 'title': 'Shah\'s Presentation on Foreign-Exchange Strategies',
16 'description': 'md5:a8ba0302912d03d246979735c17d2761',
17 },
18 'params': {
19 'format': 'best[format_id^=hds]',
20 },
21 }, {
22 # video ID in BPlayer(...)
23 'url': 'http://www.bloomberg.com/features/2016-hello-world-new-zealand/',
24 'info_dict': {
25 'id': '938c7e72-3f25-4ddb-8b85-a9be731baa74',
26 'ext': 'flv',
27 'title': 'Meet the Real-Life Tech Wizards of Middle Earth',
28 'description': 'Hello World, Episode 1: New Zealand’s freaky AI babies, robot exoskeletons, and a virtual you.',
29 },
30 'params': {
31 'format': 'best[format_id^=hds]',
32 },
33 }, {
34 # data-bmmrid=
35 'url': 'https://www.bloomberg.com/politics/articles/2017-02-08/le-pen-aide-briefed-french-central-banker-on-plan-to-print-money',
36 'only_matching': True,
37 }, {
38 'url': 'http://www.bloomberg.com/news/articles/2015-11-12/five-strange-things-that-have-been-happening-in-financial-markets',
39 'only_matching': True,
40 }, {
41 'url': 'http://www.bloomberg.com/politics/videos/2015-11-25/karl-rove-on-jeb-bush-s-struggles-stopping-trump',
42 'only_matching': True,
43 }]
44
45 def _real_extract(self, url):
46 name = self._match_id(url)
47 webpage = self._download_webpage(url, name)
48 video_id = self._search_regex(
49 (r'["\']bmmrId["\']\s*:\s*(["\'])(?P<id>(?:(?!\1).)+)\1',
50 r'videoId\s*:\s*(["\'])(?P<id>(?:(?!\1).)+)\1',
51 r'data-bmmrid=(["\'])(?P<id>(?:(?!\1).)+)\1'),
52 webpage, 'id', group='id', default=None)
53 if not video_id:
54 bplayer_data = self._parse_json(self._search_regex(
55 r'BPlayer\(null,\s*({[^;]+})\);', webpage, 'id'), name)
56 video_id = bplayer_data['id']
57 title = re.sub(': Video$', '', self._og_search_title(webpage))
58
59 embed_info = self._download_json(
60 'http://www.bloomberg.com/api/embed?id=%s' % video_id, video_id)
61 formats = []
62 for stream in embed_info['streams']:
63 stream_url = stream.get('url')
64 if not stream_url:
65 continue
66 if stream['muxing_format'] == 'TS':
67 formats.extend(self._extract_m3u8_formats(
68 stream_url, video_id, 'mp4', m3u8_id='hls', fatal=False))
69 else:
70 formats.extend(self._extract_f4m_formats(
71 stream_url, video_id, f4m_id='hds', fatal=False))
72 self._sort_formats(formats)
73
74 return {
75 'id': video_id,
76 'title': title,
77 'formats': formats,
78 'description': self._og_search_description(webpage),
79 'thumbnail': self._og_search_thumbnail(webpage),
80 }
81
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/yt_dlp/extractor/bloomberg.py b/yt_dlp/extractor/bloomberg.py
--- a/yt_dlp/extractor/bloomberg.py
+++ b/yt_dlp/extractor/bloomberg.py
@@ -7,13 +7,11 @@
_VALID_URL = r'https?://(?:www\.)?bloomberg\.com/(?:[^/]+/)*(?P<id>[^/?#]+)'
_TESTS = [{
- 'url': 'http://www.bloomberg.com/news/videos/b/aaeae121-5949-481e-a1ce-4562db6f5df2',
- # The md5 checksum changes
+ 'url': 'https://www.bloomberg.com/news/videos/2021-09-14/apple-unveils-the-new-iphone-13-stock-doesn-t-move-much-video',
'info_dict': {
- 'id': 'qurhIVlJSB6hzkVi229d8g',
+ 'id': 'V8cFcYMxTHaMcEiiYVr39A',
'ext': 'flv',
- 'title': 'Shah\'s Presentation on Foreign-Exchange Strategies',
- 'description': 'md5:a8ba0302912d03d246979735c17d2761',
+ 'title': 'Apple Unveils the New IPhone 13, Stock Doesn\'t Move Much',
},
'params': {
'format': 'best[format_id^=hds]',
@@ -57,7 +55,7 @@
title = re.sub(': Video$', '', self._og_search_title(webpage))
embed_info = self._download_json(
- 'http://www.bloomberg.com/api/embed?id=%s' % video_id, video_id)
+ 'http://www.bloomberg.com/multimedia/api/embed?id=%s' % video_id, video_id)
formats = []
for stream in embed_info['streams']:
stream_url = stream.get('url')
|
{"golden_diff": "diff --git a/yt_dlp/extractor/bloomberg.py b/yt_dlp/extractor/bloomberg.py\n--- a/yt_dlp/extractor/bloomberg.py\n+++ b/yt_dlp/extractor/bloomberg.py\n@@ -7,13 +7,11 @@\n _VALID_URL = r'https?://(?:www\\.)?bloomberg\\.com/(?:[^/]+/)*(?P<id>[^/?#]+)'\n \n _TESTS = [{\n- 'url': 'http://www.bloomberg.com/news/videos/b/aaeae121-5949-481e-a1ce-4562db6f5df2',\n- # The md5 checksum changes\n+ 'url': 'https://www.bloomberg.com/news/videos/2021-09-14/apple-unveils-the-new-iphone-13-stock-doesn-t-move-much-video',\n 'info_dict': {\n- 'id': 'qurhIVlJSB6hzkVi229d8g',\n+ 'id': 'V8cFcYMxTHaMcEiiYVr39A',\n 'ext': 'flv',\n- 'title': 'Shah\\'s Presentation on Foreign-Exchange Strategies',\n- 'description': 'md5:a8ba0302912d03d246979735c17d2761',\n+ 'title': 'Apple Unveils the New IPhone 13, Stock Doesn\\'t Move Much',\n },\n 'params': {\n 'format': 'best[format_id^=hds]',\n@@ -57,7 +55,7 @@\n title = re.sub(': Video$', '', self._og_search_title(webpage))\n \n embed_info = self._download_json(\n- 'http://www.bloomberg.com/api/embed?id=%s' % video_id, video_id)\n+ 'http://www.bloomberg.com/multimedia/api/embed?id=%s' % video_id, video_id)\n formats = []\n for stream in embed_info['streams']:\n stream_url = stream.get('url')\n", "issue": "[Bloomberg] Extracter does not work.\n### Checklist\n\n- [X] I'm reporting a broken site\n- [X] I've verified that I'm running yt-dlp version **2022.05.18** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. 
DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\n\n### Region\n\nUSA\n\n### Description\n\nBloomberg extractor complains about being unable to extract ID, same for any video.\r\nI used \"https://www.bloomberg.com/news/videos/2022-05-19/bloomberg-studio-1-0-roblox-ceo-david-baszucki-video\" as an example.\n\n### Verbose log\n\n```shell\n[debug] Command-line config: ['-vU', 'https://www.bloomberg.com/news/videos/2022-05-19/bloomberg-studio-1-0-roblox-ceo-david-baszucki-video']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8\r\n[debug] Lazy loading extractors is disabled\r\n[debug] Python version 3.10.4 (CPython 64bit) - Windows-10-10.0.19044-SP0\r\n[debug] Checking exe version: ffprobe -bsfs\r\n[debug] Checking exe version: ffmpeg -bsfs\r\n[debug] exe versions: ffmpeg N-106452-g7fbf1dd47e-20220329 (setts), ffprobe N-106452-g7fbf1dd47e-20220329\r\n[debug] Optional libraries: Cryptodome-3.14.1, brotli-1.0.9, certifi-2022.05.18, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3\r\n[debug] Proxy map: {}\r\nLatest version: 2022.05.18, Current version: 2022.05.18\r\nyt-dlp is up to date (2022.05.18)\r\n[debug] [Bloomberg] Extracting URL: https://www.bloomberg.com/news/videos/2022-05-19/bloomberg-studio-1-0-roblox-ceo-david-baszucki-video\r\n[Bloomberg] bloomberg-studio-1-0-roblox-ceo-david-baszucki-video: Downloading webpage\r\nERROR: [Bloomberg] bloomberg-studio-1-0-roblox-ceo-david-baszucki-video: Unable to extract id; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. 
Confirm you are on the latest version using yt-dlp -U\r\n File \"C:\\Users\\mystu\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\yt_dlp\\extractor\\common.py\", line 642, in extract\r\n ie_result = self._real_extract(url)\r\n File \"C:\\Users\\mystu\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\yt_dlp\\extractor\\bloomberg.py\", line 54, in _real_extract\r\n bplayer_data = self._parse_json(self._search_regex(\r\n File \"C:\\Users\\mystu\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\yt_dlp\\extractor\\common.py\", line 1226, in _search_regex\r\n raise RegexNotFoundError('Unable to extract %s' % _name)\n```\n\n", "before_files": [{"content": "import re\n\nfrom .common import InfoExtractor\n\n\nclass BloombergIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?bloomberg\\.com/(?:[^/]+/)*(?P<id>[^/?#]+)'\n\n _TESTS = [{\n 'url': 'http://www.bloomberg.com/news/videos/b/aaeae121-5949-481e-a1ce-4562db6f5df2',\n # The md5 checksum changes\n 'info_dict': {\n 'id': 'qurhIVlJSB6hzkVi229d8g',\n 'ext': 'flv',\n 'title': 'Shah\\'s Presentation on Foreign-Exchange Strategies',\n 'description': 'md5:a8ba0302912d03d246979735c17d2761',\n },\n 'params': {\n 'format': 'best[format_id^=hds]',\n },\n }, {\n # video ID in BPlayer(...)\n 'url': 'http://www.bloomberg.com/features/2016-hello-world-new-zealand/',\n 'info_dict': {\n 'id': '938c7e72-3f25-4ddb-8b85-a9be731baa74',\n 'ext': 'flv',\n 'title': 'Meet the Real-Life Tech Wizards of Middle Earth',\n 'description': 'Hello World, Episode 1: New Zealand\u2019s freaky AI babies, robot exoskeletons, and a virtual you.',\n },\n 'params': {\n 'format': 'best[format_id^=hds]',\n },\n }, {\n # data-bmmrid=\n 'url': 'https://www.bloomberg.com/politics/articles/2017-02-08/le-pen-aide-briefed-french-central-banker-on-plan-to-print-money',\n 'only_matching': True,\n }, {\n 'url': 'http://www.bloomberg.com/news/articles/2015-11-12/five-strange-things-that-have-been-happening-in-financial-markets',\n 'only_matching': True,\n }, {\n 'url': 'http://www.bloomberg.com/politics/videos/2015-11-25/karl-rove-on-jeb-bush-s-struggles-stopping-trump',\n 'only_matching': True,\n }]\n\n def _real_extract(self, url):\n name = self._match_id(url)\n webpage = self._download_webpage(url, name)\n video_id = self._search_regex(\n (r'[\"\\']bmmrId[\"\\']\\s*:\\s*([\"\\'])(?P<id>(?:(?!\\1).)+)\\1',\n r'videoId\\s*:\\s*([\"\\'])(?P<id>(?:(?!\\1).)+)\\1',\n r'data-bmmrid=([\"\\'])(?P<id>(?:(?!\\1).)+)\\1'),\n webpage, 'id', group='id', default=None)\n if not video_id:\n bplayer_data = self._parse_json(self._search_regex(\n r'BPlayer\\(null,\\s*({[^;]+})\\);', webpage, 'id'), name)\n video_id = bplayer_data['id']\n title = re.sub(': Video$', '', self._og_search_title(webpage))\n\n embed_info = self._download_json(\n 'http://www.bloomberg.com/api/embed?id=%s' % video_id, video_id)\n formats = []\n for stream in embed_info['streams']:\n stream_url = stream.get('url')\n if not stream_url:\n continue\n if stream['muxing_format'] == 'TS':\n formats.extend(self._extract_m3u8_formats(\n stream_url, video_id, 'mp4', m3u8_id='hls', fatal=False))\n else:\n formats.extend(self._extract_f4m_formats(\n stream_url, video_id, f4m_id='hds', fatal=False))\n self._sort_formats(formats)\n\n return {\n 'id': video_id,\n 'title': title,\n 'formats': formats,\n 'description': self._og_search_description(webpage),\n 'thumbnail': self._og_search_thumbnail(webpage),\n }\n", "path": "yt_dlp/extractor/bloomberg.py"}], "after_files": [{"content": "import 
re\n\nfrom .common import InfoExtractor\n\n\nclass BloombergIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?bloomberg\\.com/(?:[^/]+/)*(?P<id>[^/?#]+)'\n\n _TESTS = [{\n 'url': 'https://www.bloomberg.com/news/videos/2021-09-14/apple-unveils-the-new-iphone-13-stock-doesn-t-move-much-video',\n 'info_dict': {\n 'id': 'V8cFcYMxTHaMcEiiYVr39A',\n 'ext': 'flv',\n 'title': 'Apple Unveils the New IPhone 13, Stock Doesn\\'t Move Much',\n },\n 'params': {\n 'format': 'best[format_id^=hds]',\n },\n }, {\n # video ID in BPlayer(...)\n 'url': 'http://www.bloomberg.com/features/2016-hello-world-new-zealand/',\n 'info_dict': {\n 'id': '938c7e72-3f25-4ddb-8b85-a9be731baa74',\n 'ext': 'flv',\n 'title': 'Meet the Real-Life Tech Wizards of Middle Earth',\n 'description': 'Hello World, Episode 1: New Zealand\u2019s freaky AI babies, robot exoskeletons, and a virtual you.',\n },\n 'params': {\n 'format': 'best[format_id^=hds]',\n },\n }, {\n # data-bmmrid=\n 'url': 'https://www.bloomberg.com/politics/articles/2017-02-08/le-pen-aide-briefed-french-central-banker-on-plan-to-print-money',\n 'only_matching': True,\n }, {\n 'url': 'http://www.bloomberg.com/news/articles/2015-11-12/five-strange-things-that-have-been-happening-in-financial-markets',\n 'only_matching': True,\n }, {\n 'url': 'http://www.bloomberg.com/politics/videos/2015-11-25/karl-rove-on-jeb-bush-s-struggles-stopping-trump',\n 'only_matching': True,\n }]\n\n def _real_extract(self, url):\n name = self._match_id(url)\n webpage = self._download_webpage(url, name)\n video_id = self._search_regex(\n (r'[\"\\']bmmrId[\"\\']\\s*:\\s*([\"\\'])(?P<id>(?:(?!\\1).)+)\\1',\n r'videoId\\s*:\\s*([\"\\'])(?P<id>(?:(?!\\1).)+)\\1',\n r'data-bmmrid=([\"\\'])(?P<id>(?:(?!\\1).)+)\\1'),\n webpage, 'id', group='id', default=None)\n if not video_id:\n bplayer_data = self._parse_json(self._search_regex(\n r'BPlayer\\(null,\\s*({[^;]+})\\);', webpage, 'id'), name)\n video_id = bplayer_data['id']\n title = re.sub(': Video$', '', self._og_search_title(webpage))\n\n embed_info = self._download_json(\n 'http://www.bloomberg.com/multimedia/api/embed?id=%s' % video_id, video_id)\n formats = []\n for stream in embed_info['streams']:\n stream_url = stream.get('url')\n if not stream_url:\n continue\n if stream['muxing_format'] == 'TS':\n formats.extend(self._extract_m3u8_formats(\n stream_url, video_id, 'mp4', m3u8_id='hls', fatal=False))\n else:\n formats.extend(self._extract_f4m_formats(\n stream_url, video_id, f4m_id='hds', fatal=False))\n self._sort_formats(formats)\n\n return {\n 'id': video_id,\n 'title': title,\n 'formats': formats,\n 'description': self._og_search_description(webpage),\n 'thumbnail': self._og_search_thumbnail(webpage),\n }\n", "path": "yt_dlp/extractor/bloomberg.py"}]}
| 2,481 | 481 |
gh_patches_debug_36476
|
rasdani/github-patches
|
git_diff
|
buildbot__buildbot-3578
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
DockerLatentWorker: pull specified image if it is not yet on the system
This is merely an enhancement, but it would be nice if the specified image were pulled from Docker Hub when it is not yet on the host system.
This should be pretty easy to accomplish using docker-py, but I don't know if this is a desired feature.
--- END ISSUE ---
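For context, docker-py already exposes what such a check-then-pull step needs: below is a hedged sketch using the low-level `images()` and `pull()` calls. The helper name and the example values in the comment are illustrative assumptions, not part of Buildbot's code.

```python
import docker


def ensure_image(client, image):
    # Return early if any local tag matches the requested image
    # (either an exact "name:tag" or any tag of a bare image name).
    for local in client.images():
        for tag in local.get("RepoTags") or []:
            if tag == image or tag.startswith(image + ":"):
                return
    # Image is not on the host yet; pull it from the registry.
    client.pull(image)


# e.g. ensure_image(docker.APIClient(base_url="unix://var/run/docker.sock"),
#                   "buildbot/buildbot-worker:master")
```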
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `master/buildbot/worker/docker.py`
Content:
```
1 # This file is part of Buildbot. Buildbot is free software: you can
2 # redistribute it and/or modify it under the terms of the GNU General Public
3 # License as published by the Free Software Foundation, version 2.
4 #
5 # This program is distributed in the hope that it will be useful, but WITHOUT
6 # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
7 # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
8 # details.
9 #
10 # You should have received a copy of the GNU General Public License along with
11 # this program; if not, write to the Free Software Foundation, Inc., 51
12 # Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
13 #
14 # Copyright Buildbot Team Members
15
16 from __future__ import absolute_import
17 from __future__ import division
18 from __future__ import print_function
19
20 import hashlib
21 import json
22 import socket
23 from io import BytesIO
24
25 from twisted.internet import defer
26 from twisted.internet import threads
27 from twisted.python import log
28
29 from buildbot import config
30 from buildbot.interfaces import LatentWorkerFailedToSubstantiate
31 from buildbot.util import unicode2bytes
32 from buildbot.worker import AbstractLatentWorker
33
34 try:
35 import docker
36 from docker import client
37 from docker.errors import NotFound
38 _hush_pyflakes = [docker, client]
39 except ImportError:
40 docker = None
41 client = None
42
43
44 def _handle_stream_line(line):
45 """\
46 Input is the json representation of: {'stream': "Content\ncontent"}
47 Output is a generator yield "Content", and then "content"
48 """
49 # XXX This necessary processing is probably a bug from docker-py,
50 # hence, might break if the bug is fixed, i.e. we should get decoded JSON
51 # directly from the API.
52 line = json.loads(line)
53 if 'error' in line:
54 content = "ERROR: " + line['error']
55 else:
56 content = line.get('stream', '')
57 for streamline in content.split('\n'):
58 if streamline:
59 yield streamline
60
61
62 class DockerBaseWorker(AbstractLatentWorker):
63
64 def checkConfig(self, name, password=None, image=None,
65 masterFQDN=None, **kwargs):
66
67 # Set build_wait_timeout to 0 if not explicitly set: Starting a
68 # container is almost immediate, we can afford doing so for each build.
69 if 'build_wait_timeout' not in kwargs:
70 kwargs['build_wait_timeout'] = 0
71 if image is not None and not isinstance(image, str):
72 if not hasattr(image, 'getRenderingFor'):
73 config.error("image must be a string")
74
75 AbstractLatentWorker.checkConfig(self, name, password, **kwargs)
76
77 def reconfigService(self, name, password=None, image=None,
78 masterFQDN=None, **kwargs):
79 # Set build_wait_timeout to 0 if not explicitly set: Starting a
80 # container is almost immediate, we can afford doing so for each build.
81 if 'build_wait_timeout' not in kwargs:
82 kwargs['build_wait_timeout'] = 0
83 if password is None:
84 password = self.getRandomPass()
85 if masterFQDN is None:
86 masterFQDN = socket.getfqdn()
87 self.masterFQDN = masterFQDN
88 self.image = image
89 masterName = unicode2bytes(self.master.name)
90 self.masterhash = hashlib.sha1(masterName).hexdigest()[:6]
91 return AbstractLatentWorker.reconfigService(self, name, password, **kwargs)
92
93 def getContainerName(self):
94 return ('%s-%s' % ('buildbot' + self.masterhash, self.workername)).replace("_", "-")
95
96 @property
97 def shortid(self):
98 if self.instance is None:
99 return None
100 return self.instance['Id'][:6]
101
102 def createEnvironment(self):
103 result = {
104 "BUILDMASTER": self.masterFQDN,
105 "WORKERNAME": self.name,
106 "WORKERPASS": self.password
107 }
108 if self.registration is not None:
109 result["BUILDMASTER_PORT"] = str(self.registration.getPBPort())
110 if ":" in self.masterFQDN:
111 result["BUILDMASTER"], result["BUILDMASTER_PORT"] = self.masterFQDN.split(":")
112 return result
113
114
115 class DockerLatentWorker(DockerBaseWorker):
116 instance = None
117
118 def checkConfig(self, name, password, docker_host, image=None, command=None,
119 volumes=None, dockerfile=None, version=None, tls=None, followStartupLogs=False,
120 masterFQDN=None, hostconfig=None, **kwargs):
121
122 DockerBaseWorker.checkConfig(self, name, password, image, masterFQDN, **kwargs)
123
124 if not client:
125 config.error("The python module 'docker-py>=1.4' is needed to use a"
126 " DockerLatentWorker")
127 if not image and not dockerfile:
128 config.error("DockerLatentWorker: You need to specify at least"
129 " an image name, or a dockerfile")
130
131 # Following block is only for checking config errors,
132 # actual parsing happens in self.parse_volumes()
133 # Renderables can be direct volumes definition or list member
134 if isinstance(volumes, list):
135 for volume_string in (volumes or []):
136 if not isinstance(volume_string, str):
137 continue
138 try:
139 bind, volume = volume_string.split(":", 1)
140 except ValueError:
141 config.error("Invalid volume definition for docker "
142 "%s. Skipping..." % volume_string)
143 continue
144
145 @defer.inlineCallbacks
146 def reconfigService(self, name, password, docker_host, image=None, command=None,
147 volumes=None, dockerfile=None, version=None, tls=None, followStartupLogs=False,
148 masterFQDN=None, hostconfig=None, **kwargs):
149
150 yield DockerBaseWorker.reconfigService(self, name, password, image, masterFQDN, **kwargs)
151 self.volumes = volumes or []
152 self.followStartupLogs = followStartupLogs
153
154 self.command = command or []
155 self.dockerfile = dockerfile
156 self.hostconfig = hostconfig or {}
157 # Prepare the parameters for the Docker Client object.
158 self.client_args = {'base_url': docker_host}
159 if version is not None:
160 self.client_args['version'] = version
161 if tls is not None:
162 self.client_args['tls'] = tls
163
164 def _thd_parse_volumes(self, volumes):
165 volume_list = []
166 binds = {}
167 for volume_string in (volumes or []):
168 try:
169 bind, volume = volume_string.split(":", 1)
170 except ValueError:
171 config.error("Invalid volume definition for docker "
172 "%s. Skipping..." % volume_string)
173 continue
174
175 ro = False
176 if volume.endswith(':ro') or volume.endswith(':rw'):
177 ro = volume[-2:] == 'ro'
178 volume = volume[:-3]
179
180 volume_list.append(volume)
181 binds[bind] = {'bind': volume, 'ro': ro}
182 return volume_list, binds
183
184 def _getDockerClient(self):
185 if docker.version[0] == '1':
186 docker_client = client.Client(**self.client_args)
187 else:
188 docker_client = client.APIClient(**self.client_args)
189 return docker_client
190
191 @defer.inlineCallbacks
192 def start_instance(self, build):
193 if self.instance is not None:
194 raise ValueError('instance active')
195 image = yield build.render(self.image)
196 dockerfile = yield build.render(self.dockerfile)
197 volumes = yield build.render(self.volumes)
198 res = yield threads.deferToThread(self._thd_start_instance, image, dockerfile, volumes)
199 defer.returnValue(res)
200
201 def _image_exists(self, client, name):
202 # Make sure the image exists
203 for image in client.images():
204 for tag in image['RepoTags'] or []:
205 if ':' in name and tag == name:
206 return True
207 if tag.startswith(name + ':'):
208 return True
209 return False
210
211 def _thd_start_instance(self, image, dockerfile, volumes):
212 docker_client = self._getDockerClient()
213 # cleanup the old instances
214 instances = docker_client.containers(
215 all=1,
216 filters=dict(name=self.getContainerName()))
217 for instance in instances:
218 try:
219 docker_client.remove_container(instance['Id'], v=True, force=True)
220 except NotFound:
221 pass # that's a race condition
222
223 found = False
224 if image is not None:
225 found = self._image_exists(docker_client, image)
226 else:
227 image = '%s_%s_image' % (self.workername, id(self))
228 if (not found) and (dockerfile is not None):
229 log.msg("Image '%s' not found, building it from scratch" %
230 image)
231 for line in docker_client.build(fileobj=BytesIO(dockerfile.encode('utf-8')),
232 tag=image):
233 for streamline in _handle_stream_line(line):
234 log.msg(streamline)
235
236 if (not self._image_exists(docker_client, image)):
237 log.msg("Image '%s' not found" % image)
238 raise LatentWorkerFailedToSubstantiate(
239 'Image "%s" not found on docker host.' % image
240 )
241
242 volumes, binds = self._thd_parse_volumes(volumes)
243 host_conf = self.hostconfig.copy()
244 host_conf['binds'] = binds
245 host_conf = docker_client.create_host_config(**host_conf)
246
247 instance = docker_client.create_container(
248 image,
249 self.command,
250 name=self.getContainerName(),
251 volumes=volumes,
252 environment=self.createEnvironment(),
253 host_config=host_conf
254 )
255
256 if instance.get('Id') is None:
257 log.msg('Failed to create the container')
258 raise LatentWorkerFailedToSubstantiate(
259 'Failed to start container'
260 )
261 shortid = instance['Id'][:6]
262 log.msg('Container created, Id: %s...' % (shortid,))
263 instance['image'] = image
264 self.instance = instance
265 docker_client.start(instance)
266 log.msg('Container started')
267 if self.followStartupLogs:
268 logs = docker_client.attach(
269 container=instance, stdout=True, stderr=True, stream=True)
270 for line in logs:
271 log.msg("docker VM %s: %s" % (shortid, line.strip()))
272 if self.conn:
273 break
274 del logs
275 return [instance['Id'], image]
276
277 def stop_instance(self, fast=False):
278 if self.instance is None:
279 # be gentle. Something may just be trying to alert us that an
280 # instance never attached, and it's because, somehow, we never
281 # started.
282 return defer.succeed(None)
283 instance = self.instance
284 self.instance = None
285 return threads.deferToThread(self._thd_stop_instance, instance, fast)
286
287 def _thd_stop_instance(self, instance, fast):
288 docker_client = self._getDockerClient()
289 log.msg('Stopping container %s...' % instance['Id'][:6])
290 docker_client.stop(instance['Id'])
291 if not fast:
292 docker_client.wait(instance['Id'])
293 docker_client.remove_container(instance['Id'], v=True, force=True)
294 if self.image is None:
295 try:
296 docker_client.remove_image(image=instance['image'])
297 except docker.errors.APIError as e:
298 log.msg('Error while removing the image: %s', e)
299
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/master/buildbot/worker/docker.py b/master/buildbot/worker/docker.py
--- a/master/buildbot/worker/docker.py
+++ b/master/buildbot/worker/docker.py
@@ -117,7 +117,7 @@
def checkConfig(self, name, password, docker_host, image=None, command=None,
volumes=None, dockerfile=None, version=None, tls=None, followStartupLogs=False,
- masterFQDN=None, hostconfig=None, **kwargs):
+ masterFQDN=None, hostconfig=None, autopull=False, **kwargs):
DockerBaseWorker.checkConfig(self, name, password, image, masterFQDN, **kwargs)
@@ -145,7 +145,7 @@
@defer.inlineCallbacks
def reconfigService(self, name, password, docker_host, image=None, command=None,
volumes=None, dockerfile=None, version=None, tls=None, followStartupLogs=False,
- masterFQDN=None, hostconfig=None, **kwargs):
+ masterFQDN=None, hostconfig=None, autopull=False, **kwargs):
yield DockerBaseWorker.reconfigService(self, name, password, image, masterFQDN, **kwargs)
self.volumes = volumes or []
@@ -154,6 +154,7 @@
self.command = command or []
self.dockerfile = dockerfile
self.hostconfig = hostconfig or {}
+ self.autopull = autopull
# Prepare the parameters for the Docker Client object.
self.client_args = {'base_url': docker_host}
if version is not None:
@@ -233,6 +234,11 @@
for streamline in _handle_stream_line(line):
log.msg(streamline)
+ if ((not self._image_exists(docker_client, image))) and self.autopull:
+ log.msg("Image '%s' not found, pulling from registry" %
+ image)
+ docker_client.pull(image)
+
if (not self._image_exists(docker_client, image)):
log.msg("Image '%s' not found" % image)
raise LatentWorkerFailedToSubstantiate(
|
{"golden_diff": "diff --git a/master/buildbot/worker/docker.py b/master/buildbot/worker/docker.py\n--- a/master/buildbot/worker/docker.py\n+++ b/master/buildbot/worker/docker.py\n@@ -117,7 +117,7 @@\n \n def checkConfig(self, name, password, docker_host, image=None, command=None,\n volumes=None, dockerfile=None, version=None, tls=None, followStartupLogs=False,\n- masterFQDN=None, hostconfig=None, **kwargs):\n+ masterFQDN=None, hostconfig=None, autopull=False, **kwargs):\n \n DockerBaseWorker.checkConfig(self, name, password, image, masterFQDN, **kwargs)\n \n@@ -145,7 +145,7 @@\n @defer.inlineCallbacks\n def reconfigService(self, name, password, docker_host, image=None, command=None,\n volumes=None, dockerfile=None, version=None, tls=None, followStartupLogs=False,\n- masterFQDN=None, hostconfig=None, **kwargs):\n+ masterFQDN=None, hostconfig=None, autopull=False, **kwargs):\n \n yield DockerBaseWorker.reconfigService(self, name, password, image, masterFQDN, **kwargs)\n self.volumes = volumes or []\n@@ -154,6 +154,7 @@\n self.command = command or []\n self.dockerfile = dockerfile\n self.hostconfig = hostconfig or {}\n+ self.autopull = autopull\n # Prepare the parameters for the Docker Client object.\n self.client_args = {'base_url': docker_host}\n if version is not None:\n@@ -233,6 +234,11 @@\n for streamline in _handle_stream_line(line):\n log.msg(streamline)\n \n+ if ((not self._image_exists(docker_client, image))) and self.autopull:\n+ log.msg(\"Image '%s' not found, pulling from registry\" %\n+ image)\n+ docker_client.pull(image)\n+\n if (not self._image_exists(docker_client, image)):\n log.msg(\"Image '%s' not found\" % image)\n raise LatentWorkerFailedToSubstantiate(\n", "issue": "DockerLatentWorker: pull specified image if it is not yet on the system\nThis is merely an enhancement, but it would be nice if the specified image is pulled from dockerhub if it is not on yet on the host system. \r\nThis should be pretty easy to accomplish using docker-py, but I don't know if this a desired feature.\n", "before_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport hashlib\nimport json\nimport socket\nfrom io import BytesIO\n\nfrom twisted.internet import defer\nfrom twisted.internet import threads\nfrom twisted.python import log\n\nfrom buildbot import config\nfrom buildbot.interfaces import LatentWorkerFailedToSubstantiate\nfrom buildbot.util import unicode2bytes\nfrom buildbot.worker import AbstractLatentWorker\n\ntry:\n import docker\n from docker import client\n from docker.errors import NotFound\n _hush_pyflakes = [docker, client]\nexcept ImportError:\n docker = None\n client = None\n\n\ndef _handle_stream_line(line):\n \"\"\"\\\n Input is the json representation of: {'stream': \"Content\\ncontent\"}\n Output is a generator yield \"Content\", and then \"content\"\n \"\"\"\n # XXX This necessary processing is probably a bug from docker-py,\n # hence, might break if the bug is fixed, i.e. we should get decoded JSON\n # directly from the API.\n line = json.loads(line)\n if 'error' in line:\n content = \"ERROR: \" + line['error']\n else:\n content = line.get('stream', '')\n for streamline in content.split('\\n'):\n if streamline:\n yield streamline\n\n\nclass DockerBaseWorker(AbstractLatentWorker):\n\n def checkConfig(self, name, password=None, image=None,\n masterFQDN=None, **kwargs):\n\n # Set build_wait_timeout to 0 if not explicitly set: Starting a\n # container is almost immediate, we can afford doing so for each build.\n if 'build_wait_timeout' not in kwargs:\n kwargs['build_wait_timeout'] = 0\n if image is not None and not isinstance(image, str):\n if not hasattr(image, 'getRenderingFor'):\n config.error(\"image must be a string\")\n\n AbstractLatentWorker.checkConfig(self, name, password, **kwargs)\n\n def reconfigService(self, name, password=None, image=None,\n masterFQDN=None, **kwargs):\n # Set build_wait_timeout to 0 if not explicitly set: Starting a\n # container is almost immediate, we can afford doing so for each build.\n if 'build_wait_timeout' not in kwargs:\n kwargs['build_wait_timeout'] = 0\n if password is None:\n password = self.getRandomPass()\n if masterFQDN is None:\n masterFQDN = socket.getfqdn()\n self.masterFQDN = masterFQDN\n self.image = image\n masterName = unicode2bytes(self.master.name)\n self.masterhash = hashlib.sha1(masterName).hexdigest()[:6]\n return AbstractLatentWorker.reconfigService(self, name, password, **kwargs)\n\n def getContainerName(self):\n return ('%s-%s' % ('buildbot' + self.masterhash, self.workername)).replace(\"_\", \"-\")\n\n @property\n def shortid(self):\n if self.instance is None:\n return None\n return self.instance['Id'][:6]\n\n def createEnvironment(self):\n result = {\n \"BUILDMASTER\": self.masterFQDN,\n \"WORKERNAME\": self.name,\n \"WORKERPASS\": self.password\n }\n if self.registration is not None:\n result[\"BUILDMASTER_PORT\"] = str(self.registration.getPBPort())\n if \":\" in self.masterFQDN:\n result[\"BUILDMASTER\"], result[\"BUILDMASTER_PORT\"] = self.masterFQDN.split(\":\")\n return result\n\n\nclass DockerLatentWorker(DockerBaseWorker):\n instance = None\n\n def checkConfig(self, name, password, docker_host, image=None, command=None,\n volumes=None, 
dockerfile=None, version=None, tls=None, followStartupLogs=False,\n masterFQDN=None, hostconfig=None, **kwargs):\n\n DockerBaseWorker.checkConfig(self, name, password, image, masterFQDN, **kwargs)\n\n if not client:\n config.error(\"The python module 'docker-py>=1.4' is needed to use a\"\n \" DockerLatentWorker\")\n if not image and not dockerfile:\n config.error(\"DockerLatentWorker: You need to specify at least\"\n \" an image name, or a dockerfile\")\n\n # Following block is only for checking config errors,\n # actual parsing happens in self.parse_volumes()\n # Renderables can be direct volumes definition or list member\n if isinstance(volumes, list):\n for volume_string in (volumes or []):\n if not isinstance(volume_string, str):\n continue\n try:\n bind, volume = volume_string.split(\":\", 1)\n except ValueError:\n config.error(\"Invalid volume definition for docker \"\n \"%s. Skipping...\" % volume_string)\n continue\n\n @defer.inlineCallbacks\n def reconfigService(self, name, password, docker_host, image=None, command=None,\n volumes=None, dockerfile=None, version=None, tls=None, followStartupLogs=False,\n masterFQDN=None, hostconfig=None, **kwargs):\n\n yield DockerBaseWorker.reconfigService(self, name, password, image, masterFQDN, **kwargs)\n self.volumes = volumes or []\n self.followStartupLogs = followStartupLogs\n\n self.command = command or []\n self.dockerfile = dockerfile\n self.hostconfig = hostconfig or {}\n # Prepare the parameters for the Docker Client object.\n self.client_args = {'base_url': docker_host}\n if version is not None:\n self.client_args['version'] = version\n if tls is not None:\n self.client_args['tls'] = tls\n\n def _thd_parse_volumes(self, volumes):\n volume_list = []\n binds = {}\n for volume_string in (volumes or []):\n try:\n bind, volume = volume_string.split(\":\", 1)\n except ValueError:\n config.error(\"Invalid volume definition for docker \"\n \"%s. 
Skipping...\" % volume_string)\n continue\n\n ro = False\n if volume.endswith(':ro') or volume.endswith(':rw'):\n ro = volume[-2:] == 'ro'\n volume = volume[:-3]\n\n volume_list.append(volume)\n binds[bind] = {'bind': volume, 'ro': ro}\n return volume_list, binds\n\n def _getDockerClient(self):\n if docker.version[0] == '1':\n docker_client = client.Client(**self.client_args)\n else:\n docker_client = client.APIClient(**self.client_args)\n return docker_client\n\n @defer.inlineCallbacks\n def start_instance(self, build):\n if self.instance is not None:\n raise ValueError('instance active')\n image = yield build.render(self.image)\n dockerfile = yield build.render(self.dockerfile)\n volumes = yield build.render(self.volumes)\n res = yield threads.deferToThread(self._thd_start_instance, image, dockerfile, volumes)\n defer.returnValue(res)\n\n def _image_exists(self, client, name):\n # Make sure the image exists\n for image in client.images():\n for tag in image['RepoTags'] or []:\n if ':' in name and tag == name:\n return True\n if tag.startswith(name + ':'):\n return True\n return False\n\n def _thd_start_instance(self, image, dockerfile, volumes):\n docker_client = self._getDockerClient()\n # cleanup the old instances\n instances = docker_client.containers(\n all=1,\n filters=dict(name=self.getContainerName()))\n for instance in instances:\n try:\n docker_client.remove_container(instance['Id'], v=True, force=True)\n except NotFound:\n pass # that's a race condition\n\n found = False\n if image is not None:\n found = self._image_exists(docker_client, image)\n else:\n image = '%s_%s_image' % (self.workername, id(self))\n if (not found) and (dockerfile is not None):\n log.msg(\"Image '%s' not found, building it from scratch\" %\n image)\n for line in docker_client.build(fileobj=BytesIO(dockerfile.encode('utf-8')),\n tag=image):\n for streamline in _handle_stream_line(line):\n log.msg(streamline)\n\n if (not self._image_exists(docker_client, image)):\n log.msg(\"Image '%s' not found\" % image)\n raise LatentWorkerFailedToSubstantiate(\n 'Image \"%s\" not found on docker host.' % image\n )\n\n volumes, binds = self._thd_parse_volumes(volumes)\n host_conf = self.hostconfig.copy()\n host_conf['binds'] = binds\n host_conf = docker_client.create_host_config(**host_conf)\n\n instance = docker_client.create_container(\n image,\n self.command,\n name=self.getContainerName(),\n volumes=volumes,\n environment=self.createEnvironment(),\n host_config=host_conf\n )\n\n if instance.get('Id') is None:\n log.msg('Failed to create the container')\n raise LatentWorkerFailedToSubstantiate(\n 'Failed to start container'\n )\n shortid = instance['Id'][:6]\n log.msg('Container created, Id: %s...' % (shortid,))\n instance['image'] = image\n self.instance = instance\n docker_client.start(instance)\n log.msg('Container started')\n if self.followStartupLogs:\n logs = docker_client.attach(\n container=instance, stdout=True, stderr=True, stream=True)\n for line in logs:\n log.msg(\"docker VM %s: %s\" % (shortid, line.strip()))\n if self.conn:\n break\n del logs\n return [instance['Id'], image]\n\n def stop_instance(self, fast=False):\n if self.instance is None:\n # be gentle. 
Something may just be trying to alert us that an\n # instance never attached, and it's because, somehow, we never\n # started.\n return defer.succeed(None)\n instance = self.instance\n self.instance = None\n return threads.deferToThread(self._thd_stop_instance, instance, fast)\n\n def _thd_stop_instance(self, instance, fast):\n docker_client = self._getDockerClient()\n log.msg('Stopping container %s...' % instance['Id'][:6])\n docker_client.stop(instance['Id'])\n if not fast:\n docker_client.wait(instance['Id'])\n docker_client.remove_container(instance['Id'], v=True, force=True)\n if self.image is None:\n try:\n docker_client.remove_image(image=instance['image'])\n except docker.errors.APIError as e:\n log.msg('Error while removing the image: %s', e)\n", "path": "master/buildbot/worker/docker.py"}], "after_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport hashlib\nimport json\nimport socket\nfrom io import BytesIO\n\nfrom twisted.internet import defer\nfrom twisted.internet import threads\nfrom twisted.python import log\n\nfrom buildbot import config\nfrom buildbot.interfaces import LatentWorkerFailedToSubstantiate\nfrom buildbot.util import unicode2bytes\nfrom buildbot.worker import AbstractLatentWorker\n\ntry:\n import docker\n from docker import client\n from docker.errors import NotFound\n _hush_pyflakes = [docker, client]\nexcept ImportError:\n docker = None\n client = None\n\n\ndef _handle_stream_line(line):\n \"\"\"\\\n Input is the json representation of: {'stream': \"Content\\ncontent\"}\n Output is a generator yield \"Content\", and then \"content\"\n \"\"\"\n # XXX This necessary processing is probably a bug from docker-py,\n # hence, might break if the bug is fixed, i.e. 
we should get decoded JSON\n # directly from the API.\n line = json.loads(line)\n if 'error' in line:\n content = \"ERROR: \" + line['error']\n else:\n content = line.get('stream', '')\n for streamline in content.split('\\n'):\n if streamline:\n yield streamline\n\n\nclass DockerBaseWorker(AbstractLatentWorker):\n\n def checkConfig(self, name, password=None, image=None,\n masterFQDN=None, **kwargs):\n\n # Set build_wait_timeout to 0 if not explicitly set: Starting a\n # container is almost immediate, we can afford doing so for each build.\n if 'build_wait_timeout' not in kwargs:\n kwargs['build_wait_timeout'] = 0\n if image is not None and not isinstance(image, str):\n if not hasattr(image, 'getRenderingFor'):\n config.error(\"image must be a string\")\n\n AbstractLatentWorker.checkConfig(self, name, password, **kwargs)\n\n def reconfigService(self, name, password=None, image=None,\n masterFQDN=None, **kwargs):\n # Set build_wait_timeout to 0 if not explicitly set: Starting a\n # container is almost immediate, we can afford doing so for each build.\n if 'build_wait_timeout' not in kwargs:\n kwargs['build_wait_timeout'] = 0\n if password is None:\n password = self.getRandomPass()\n if masterFQDN is None:\n masterFQDN = socket.getfqdn()\n self.masterFQDN = masterFQDN\n self.image = image\n masterName = unicode2bytes(self.master.name)\n self.masterhash = hashlib.sha1(masterName).hexdigest()[:6]\n return AbstractLatentWorker.reconfigService(self, name, password, **kwargs)\n\n def getContainerName(self):\n return ('%s-%s' % ('buildbot' + self.masterhash, self.workername)).replace(\"_\", \"-\")\n\n @property\n def shortid(self):\n if self.instance is None:\n return None\n return self.instance['Id'][:6]\n\n def createEnvironment(self):\n result = {\n \"BUILDMASTER\": self.masterFQDN,\n \"WORKERNAME\": self.name,\n \"WORKERPASS\": self.password\n }\n if self.registration is not None:\n result[\"BUILDMASTER_PORT\"] = str(self.registration.getPBPort())\n if \":\" in self.masterFQDN:\n result[\"BUILDMASTER\"], result[\"BUILDMASTER_PORT\"] = self.masterFQDN.split(\":\")\n return result\n\n\nclass DockerLatentWorker(DockerBaseWorker):\n instance = None\n\n def checkConfig(self, name, password, docker_host, image=None, command=None,\n volumes=None, dockerfile=None, version=None, tls=None, followStartupLogs=False,\n masterFQDN=None, hostconfig=None, autopull=False, **kwargs):\n\n DockerBaseWorker.checkConfig(self, name, password, image, masterFQDN, **kwargs)\n\n if not client:\n config.error(\"The python module 'docker-py>=1.4' is needed to use a\"\n \" DockerLatentWorker\")\n if not image and not dockerfile:\n config.error(\"DockerLatentWorker: You need to specify at least\"\n \" an image name, or a dockerfile\")\n\n # Following block is only for checking config errors,\n # actual parsing happens in self.parse_volumes()\n # Renderables can be direct volumes definition or list member\n if isinstance(volumes, list):\n for volume_string in (volumes or []):\n if not isinstance(volume_string, str):\n continue\n try:\n bind, volume = volume_string.split(\":\", 1)\n except ValueError:\n config.error(\"Invalid volume definition for docker \"\n \"%s. 
Skipping...\" % volume_string)\n continue\n\n @defer.inlineCallbacks\n def reconfigService(self, name, password, docker_host, image=None, command=None,\n volumes=None, dockerfile=None, version=None, tls=None, followStartupLogs=False,\n masterFQDN=None, hostconfig=None, autopull=False, **kwargs):\n\n yield DockerBaseWorker.reconfigService(self, name, password, image, masterFQDN, **kwargs)\n self.volumes = volumes or []\n self.followStartupLogs = followStartupLogs\n\n self.command = command or []\n self.dockerfile = dockerfile\n self.hostconfig = hostconfig or {}\n self.autopull = autopull\n # Prepare the parameters for the Docker Client object.\n self.client_args = {'base_url': docker_host}\n if version is not None:\n self.client_args['version'] = version\n if tls is not None:\n self.client_args['tls'] = tls\n\n def _thd_parse_volumes(self, volumes):\n volume_list = []\n binds = {}\n for volume_string in (volumes or []):\n try:\n bind, volume = volume_string.split(\":\", 1)\n except ValueError:\n config.error(\"Invalid volume definition for docker \"\n \"%s. Skipping...\" % volume_string)\n continue\n\n ro = False\n if volume.endswith(':ro') or volume.endswith(':rw'):\n ro = volume[-2:] == 'ro'\n volume = volume[:-3]\n\n volume_list.append(volume)\n binds[bind] = {'bind': volume, 'ro': ro}\n return volume_list, binds\n\n def _getDockerClient(self):\n if docker.version[0] == '1':\n docker_client = client.Client(**self.client_args)\n else:\n docker_client = client.APIClient(**self.client_args)\n return docker_client\n\n @defer.inlineCallbacks\n def start_instance(self, build):\n if self.instance is not None:\n raise ValueError('instance active')\n image = yield build.render(self.image)\n dockerfile = yield build.render(self.dockerfile)\n volumes = yield build.render(self.volumes)\n res = yield threads.deferToThread(self._thd_start_instance, image, dockerfile, volumes)\n defer.returnValue(res)\n\n def _image_exists(self, client, name):\n # Make sure the image exists\n for image in client.images():\n for tag in image['RepoTags'] or []:\n if ':' in name and tag == name:\n return True\n if tag.startswith(name + ':'):\n return True\n return False\n\n def _thd_start_instance(self, image, dockerfile, volumes):\n docker_client = self._getDockerClient()\n # cleanup the old instances\n instances = docker_client.containers(\n all=1,\n filters=dict(name=self.getContainerName()))\n for instance in instances:\n try:\n docker_client.remove_container(instance['Id'], v=True, force=True)\n except NotFound:\n pass # that's a race condition\n\n found = False\n if image is not None:\n found = self._image_exists(docker_client, image)\n else:\n image = '%s_%s_image' % (self.workername, id(self))\n if (not found) and (dockerfile is not None):\n log.msg(\"Image '%s' not found, building it from scratch\" %\n image)\n for line in docker_client.build(fileobj=BytesIO(dockerfile.encode('utf-8')),\n tag=image):\n for streamline in _handle_stream_line(line):\n log.msg(streamline)\n\n if ((not self._image_exists(docker_client, image))) and self.autopull:\n log.msg(\"Image '%s' not found, pulling from registry\" %\n image)\n docker_client.pull(image)\n\n if (not self._image_exists(docker_client, image)):\n log.msg(\"Image '%s' not found\" % image)\n raise LatentWorkerFailedToSubstantiate(\n 'Image \"%s\" not found on docker host.' 
% image\n )\n\n volumes, binds = self._thd_parse_volumes(volumes)\n host_conf = self.hostconfig.copy()\n host_conf['binds'] = binds\n host_conf = docker_client.create_host_config(**host_conf)\n\n instance = docker_client.create_container(\n image,\n self.command,\n name=self.getContainerName(),\n volumes=volumes,\n environment=self.createEnvironment(),\n host_config=host_conf\n )\n\n if instance.get('Id') is None:\n log.msg('Failed to create the container')\n raise LatentWorkerFailedToSubstantiate(\n 'Failed to start container'\n )\n shortid = instance['Id'][:6]\n log.msg('Container created, Id: %s...' % (shortid,))\n instance['image'] = image\n self.instance = instance\n docker_client.start(instance)\n log.msg('Container started')\n if self.followStartupLogs:\n logs = docker_client.attach(\n container=instance, stdout=True, stderr=True, stream=True)\n for line in logs:\n log.msg(\"docker VM %s: %s\" % (shortid, line.strip()))\n if self.conn:\n break\n del logs\n return [instance['Id'], image]\n\n def stop_instance(self, fast=False):\n if self.instance is None:\n # be gentle. Something may just be trying to alert us that an\n # instance never attached, and it's because, somehow, we never\n # started.\n return defer.succeed(None)\n instance = self.instance\n self.instance = None\n return threads.deferToThread(self._thd_stop_instance, instance, fast)\n\n def _thd_stop_instance(self, instance, fast):\n docker_client = self._getDockerClient()\n log.msg('Stopping container %s...' % instance['Id'][:6])\n docker_client.stop(instance['Id'])\n if not fast:\n docker_client.wait(instance['Id'])\n docker_client.remove_container(instance['Id'], v=True, force=True)\n if self.image is None:\n try:\n docker_client.remove_image(image=instance['image'])\n except docker.errors.APIError as e:\n log.msg('Error while removing the image: %s', e)\n", "path": "master/buildbot/worker/docker.py"}]}
| 3,619 | 475 |
gh_patches_debug_24389 | rasdani/github-patches | git_diff | pulp__pulpcore-4189 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Reclaim space for repository fails with Cannot delete some instances of model 'Artifact' because they are referenced through protected foreign keys: 'ContentArtifact.artifact'."
**Version**
3.16, but probably all versions
**Describe the bug**
- Reclaim space for repository fails with the following error.
~~~
Task paused with error: "("Cannot delete some instances of model 'Artifact' because they are referenced through protected foreign keys: 'ContentArtifact.artifact'.", {<ContentArtifact: pk=452959ad-c045-4e85-bf9f-6651ba37f57d>})"
~~~
**To Reproduce**
See BZ
**Additional context**
https://bugzilla.redhat.com/show_bug.cgi?id=2169322
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pulpcore/app/tasks/reclaim_space.py`
Content:
```
1 from pulpcore.app.models import (
2 Artifact,
3 Content,
4 ContentArtifact,
5 ProgressReport,
6 PublishedMetadata,
7 Repository,
8 RepositoryVersion,
9 )
10 from pulpcore.app.util import get_domain
11
12
13 def reclaim_space(repo_pks, keeplist_rv_pks=None, force=False):
14 """
15 This task frees-up disk space by removing Artifact files from the filesystem for Content
16 exclusive to the list of provided repos.
17
18 Note: content marked as `proctected` will be excluded from the reclaim disk space.
19
20 Kwargs:
21 repo_pks (list): A list of repo pks the disk reclaim space is performed on.
22 keeplist_rv_pks (list): A list of repo version pks that will be excluded from the reclaim
23 disk space.
24 force (bool): If True, uploaded content will be taken into account.
25
26 """
27 reclaimed_repos = Repository.objects.filter(pk__in=repo_pks)
28 for repo in reclaimed_repos:
29 repo.invalidate_cache(everything=True)
30
31 domain = get_domain()
32 rest_of_repos = Repository.objects.filter(pulp_domain=domain).exclude(pk__in=repo_pks)
33 c_keep_qs = Content.objects.filter(repositories__in=rest_of_repos)
34 c_reclaim_qs = Content.objects.filter(repositories__in=repo_pks)
35 c_reclaim_qs = c_reclaim_qs.exclude(
36 pk__in=c_keep_qs, pulp_type=PublishedMetadata.get_pulp_type()
37 )
38
39 if keeplist_rv_pks:
40 rv_qs = RepositoryVersion.objects.filter(pk__in=keeplist_rv_pks)
41 rv_content = Content.objects.none()
42 for rv in rv_qs.iterator():
43 rv_content |= rv.content
44 c_reclaim_qs = c_reclaim_qs.exclude(pk__in=rv_content)
45
46 content_distinct = c_reclaim_qs.distinct("pulp_type")
47 unprotected = []
48 for content in content_distinct:
49 if not content.cast().PROTECTED_FROM_RECLAIM:
50 unprotected.append(content.pulp_type)
51
52 ca_qs = ContentArtifact.objects.select_related("content", "artifact").filter(
53 content__in=c_reclaim_qs.values("pk"), artifact__isnull=False
54 )
55 if not force:
56 ca_qs = ca_qs.filter(remoteartifact__isnull=False)
57 artifact_pks = set()
58 ca_to_update = []
59 for ca in ca_qs.iterator():
60 if ca.content.pulp_type in unprotected:
61 artifact_pks.add(ca.artifact.pk)
62 ca.artifact = None
63 ca_to_update.append(ca)
64
65 ContentArtifact.objects.bulk_update(objs=ca_to_update, fields=["artifact"], batch_size=1000)
66 artifacts_to_delete = Artifact.objects.filter(pk__in=artifact_pks)
67 progress_bar = ProgressReport(
68 message="Reclaim disk space",
69 total=artifacts_to_delete.count(),
70 code="reclaim-space.artifact",
71 done=0,
72 state="running",
73 )
74 progress_bar.save()
75
76 counter = 0
77 interval = 100
78 for artifact in artifacts_to_delete.iterator():
79 # we need to manually call delete() because it cleans up the file on the filesystem
80 artifact.delete()
81 progress_bar.done += 1
82 counter += 1
83
84 if counter >= interval:
85 progress_bar.save()
86 counter = 0
87
88 progress_bar.state = "completed"
89 progress_bar.save()
90
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pulpcore/app/tasks/reclaim_space.py b/pulpcore/app/tasks/reclaim_space.py
--- a/pulpcore/app/tasks/reclaim_space.py
+++ b/pulpcore/app/tasks/reclaim_space.py
@@ -1,3 +1,7 @@
+from logging import getLogger
+
+from django.db.models.deletion import ProtectedError
+
from pulpcore.app.models import (
Artifact,
Content,
@@ -9,6 +13,8 @@
)
from pulpcore.app.util import get_domain
+log = getLogger(__name__)
+
def reclaim_space(repo_pks, keeplist_rv_pks=None, force=False):
"""
@@ -76,10 +82,16 @@
counter = 0
interval = 100
for artifact in artifacts_to_delete.iterator():
- # we need to manually call delete() because it cleans up the file on the filesystem
- artifact.delete()
- progress_bar.done += 1
- counter += 1
+ try:
+ # we need to manually call delete() because it cleans up the file on the filesystem
+ artifact.delete()
+ except ProtectedError as e:
+ # Rarely artifact could be shared between to different content units.
+ # Just log and skip the artifact deletion in this case
+ log.info(e)
+ else:
+ progress_bar.done += 1
+ counter += 1
if counter >= interval:
progress_bar.save()
|
{"golden_diff": "diff --git a/pulpcore/app/tasks/reclaim_space.py b/pulpcore/app/tasks/reclaim_space.py\n--- a/pulpcore/app/tasks/reclaim_space.py\n+++ b/pulpcore/app/tasks/reclaim_space.py\n@@ -1,3 +1,7 @@\n+from logging import getLogger\n+\n+from django.db.models.deletion import ProtectedError\n+\n from pulpcore.app.models import (\n Artifact,\n Content,\n@@ -9,6 +13,8 @@\n )\n from pulpcore.app.util import get_domain\n \n+log = getLogger(__name__)\n+\n \n def reclaim_space(repo_pks, keeplist_rv_pks=None, force=False):\n \"\"\"\n@@ -76,10 +82,16 @@\n counter = 0\n interval = 100\n for artifact in artifacts_to_delete.iterator():\n- # we need to manually call delete() because it cleans up the file on the filesystem\n- artifact.delete()\n- progress_bar.done += 1\n- counter += 1\n+ try:\n+ # we need to manually call delete() because it cleans up the file on the filesystem\n+ artifact.delete()\n+ except ProtectedError as e:\n+ # Rarely artifact could be shared between to different content units.\n+ # Just log and skip the artifact deletion in this case\n+ log.info(e)\n+ else:\n+ progress_bar.done += 1\n+ counter += 1\n \n if counter >= interval:\n progress_bar.save()\n", "issue": "Reclaim space for repository fails with Cannot delete some instances of model 'Artifact' because they are referenced through protected foreign keys: 'ContentArtifact.artifact'.\"\n**Version**\r\n3.16, but probably all versions\r\n\r\n**Describe the bug**\r\n\r\n- Reclaim space for repository fails with the following error.\r\n\r\n ~~~\r\n Task paused with error: \"(\"Cannot delete some instances of model 'Artifact' because they are referenced through protected foreign keys: 'ContentArtifact.artifact'.\", {<ContentArtifact: pk=452959ad-c045-4e85-bf9f-6651ba37f57d>})\"\r\n ~~~\r\n\r\n**To Reproduce**\r\nSee BZ\r\n\r\n**Additional context**\r\nhttps://bugzilla.redhat.com/show_bug.cgi?id=2169322\r\n\n", "before_files": [{"content": "from pulpcore.app.models import (\n Artifact,\n Content,\n ContentArtifact,\n ProgressReport,\n PublishedMetadata,\n Repository,\n RepositoryVersion,\n)\nfrom pulpcore.app.util import get_domain\n\n\ndef reclaim_space(repo_pks, keeplist_rv_pks=None, force=False):\n \"\"\"\n This task frees-up disk space by removing Artifact files from the filesystem for Content\n exclusive to the list of provided repos.\n\n Note: content marked as `proctected` will be excluded from the reclaim disk space.\n\n Kwargs:\n repo_pks (list): A list of repo pks the disk reclaim space is performed on.\n keeplist_rv_pks (list): A list of repo version pks that will be excluded from the reclaim\n disk space.\n force (bool): If True, uploaded content will be taken into account.\n\n \"\"\"\n reclaimed_repos = Repository.objects.filter(pk__in=repo_pks)\n for repo in reclaimed_repos:\n repo.invalidate_cache(everything=True)\n\n domain = get_domain()\n rest_of_repos = Repository.objects.filter(pulp_domain=domain).exclude(pk__in=repo_pks)\n c_keep_qs = Content.objects.filter(repositories__in=rest_of_repos)\n c_reclaim_qs = Content.objects.filter(repositories__in=repo_pks)\n c_reclaim_qs = c_reclaim_qs.exclude(\n pk__in=c_keep_qs, pulp_type=PublishedMetadata.get_pulp_type()\n )\n\n if keeplist_rv_pks:\n rv_qs = RepositoryVersion.objects.filter(pk__in=keeplist_rv_pks)\n rv_content = Content.objects.none()\n for rv in rv_qs.iterator():\n rv_content |= rv.content\n c_reclaim_qs = c_reclaim_qs.exclude(pk__in=rv_content)\n\n content_distinct = c_reclaim_qs.distinct(\"pulp_type\")\n unprotected = []\n for content in 
content_distinct:\n if not content.cast().PROTECTED_FROM_RECLAIM:\n unprotected.append(content.pulp_type)\n\n ca_qs = ContentArtifact.objects.select_related(\"content\", \"artifact\").filter(\n content__in=c_reclaim_qs.values(\"pk\"), artifact__isnull=False\n )\n if not force:\n ca_qs = ca_qs.filter(remoteartifact__isnull=False)\n artifact_pks = set()\n ca_to_update = []\n for ca in ca_qs.iterator():\n if ca.content.pulp_type in unprotected:\n artifact_pks.add(ca.artifact.pk)\n ca.artifact = None\n ca_to_update.append(ca)\n\n ContentArtifact.objects.bulk_update(objs=ca_to_update, fields=[\"artifact\"], batch_size=1000)\n artifacts_to_delete = Artifact.objects.filter(pk__in=artifact_pks)\n progress_bar = ProgressReport(\n message=\"Reclaim disk space\",\n total=artifacts_to_delete.count(),\n code=\"reclaim-space.artifact\",\n done=0,\n state=\"running\",\n )\n progress_bar.save()\n\n counter = 0\n interval = 100\n for artifact in artifacts_to_delete.iterator():\n # we need to manually call delete() because it cleans up the file on the filesystem\n artifact.delete()\n progress_bar.done += 1\n counter += 1\n\n if counter >= interval:\n progress_bar.save()\n counter = 0\n\n progress_bar.state = \"completed\"\n progress_bar.save()\n", "path": "pulpcore/app/tasks/reclaim_space.py"}], "after_files": [{"content": "from logging import getLogger\n\nfrom django.db.models.deletion import ProtectedError\n\nfrom pulpcore.app.models import (\n Artifact,\n Content,\n ContentArtifact,\n ProgressReport,\n PublishedMetadata,\n Repository,\n RepositoryVersion,\n)\nfrom pulpcore.app.util import get_domain\n\nlog = getLogger(__name__)\n\n\ndef reclaim_space(repo_pks, keeplist_rv_pks=None, force=False):\n \"\"\"\n This task frees-up disk space by removing Artifact files from the filesystem for Content\n exclusive to the list of provided repos.\n\n Note: content marked as `proctected` will be excluded from the reclaim disk space.\n\n Kwargs:\n repo_pks (list): A list of repo pks the disk reclaim space is performed on.\n keeplist_rv_pks (list): A list of repo version pks that will be excluded from the reclaim\n disk space.\n force (bool): If True, uploaded content will be taken into account.\n\n \"\"\"\n reclaimed_repos = Repository.objects.filter(pk__in=repo_pks)\n for repo in reclaimed_repos:\n repo.invalidate_cache(everything=True)\n\n domain = get_domain()\n rest_of_repos = Repository.objects.filter(pulp_domain=domain).exclude(pk__in=repo_pks)\n c_keep_qs = Content.objects.filter(repositories__in=rest_of_repos)\n c_reclaim_qs = Content.objects.filter(repositories__in=repo_pks)\n c_reclaim_qs = c_reclaim_qs.exclude(\n pk__in=c_keep_qs, pulp_type=PublishedMetadata.get_pulp_type()\n )\n\n if keeplist_rv_pks:\n rv_qs = RepositoryVersion.objects.filter(pk__in=keeplist_rv_pks)\n rv_content = Content.objects.none()\n for rv in rv_qs.iterator():\n rv_content |= rv.content\n c_reclaim_qs = c_reclaim_qs.exclude(pk__in=rv_content)\n\n content_distinct = c_reclaim_qs.distinct(\"pulp_type\")\n unprotected = []\n for content in content_distinct:\n if not content.cast().PROTECTED_FROM_RECLAIM:\n unprotected.append(content.pulp_type)\n\n ca_qs = ContentArtifact.objects.select_related(\"content\", \"artifact\").filter(\n content__in=c_reclaim_qs.values(\"pk\"), artifact__isnull=False\n )\n if not force:\n ca_qs = ca_qs.filter(remoteartifact__isnull=False)\n artifact_pks = set()\n ca_to_update = []\n for ca in ca_qs.iterator():\n if ca.content.pulp_type in unprotected:\n artifact_pks.add(ca.artifact.pk)\n ca.artifact = None\n 
ca_to_update.append(ca)\n\n ContentArtifact.objects.bulk_update(objs=ca_to_update, fields=[\"artifact\"], batch_size=1000)\n artifacts_to_delete = Artifact.objects.filter(pk__in=artifact_pks)\n progress_bar = ProgressReport(\n message=\"Reclaim disk space\",\n total=artifacts_to_delete.count(),\n code=\"reclaim-space.artifact\",\n done=0,\n state=\"running\",\n )\n progress_bar.save()\n\n counter = 0\n interval = 100\n for artifact in artifacts_to_delete.iterator():\n try:\n # we need to manually call delete() because it cleans up the file on the filesystem\n artifact.delete()\n except ProtectedError as e:\n # Rarely artifact could be shared between to different content units.\n # Just log and skip the artifact deletion in this case\n log.info(e)\n else:\n progress_bar.done += 1\n counter += 1\n\n if counter >= interval:\n progress_bar.save()\n counter = 0\n\n progress_bar.state = \"completed\"\n progress_bar.save()\n", "path": "pulpcore/app/tasks/reclaim_space.py"}]}
| 1,349 | 322 |
gh_patches_debug_42390 | rasdani/github-patches | git_diff | e-valuation__EvaP-433 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Aggregate reward points redemptions in export per user
When exporting the redemptions of an event, all of them are listed in the Excel export. The redemptions of one user should be aggregated into one line.
Also the number of redemptions on the staff page should be replaced by the number of users who redeemed points for an event.
So when a user redeems 1 point and then 2 points for the same event, the export should only state that 3 points have been redeemed in total.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `evap/rewards/exporters.py`
Content:
```
1 from evap.rewards.models import RewardPointRedemption, RewardPointRedemptionEvent
2
3 from django.utils.translation import ugettext as _
4
5 from operator import attrgetter
6
7 from collections import OrderedDict
8 from collections import defaultdict
9 import datetime
10 import xlwt
11
12 from operator import itemgetter
13
14 from evap.results.exporters import writen, writec
15
16
17 class ExcelExporter(object):
18
19 def __init__(self, reward_point_redemptions):
20 self.reward_point_redemptions = reward_point_redemptions
21
22 styles = {
23 'default': xlwt.Style.default_style,
24 'bold': xlwt.easyxf('font: bold on'),
25 }
26
27 def export(self, response):
28 redemptions = self.reward_point_redemptions
29 redemptions = sorted(redemptions, key=attrgetter('user_profile.user.last_name', 'user_profile.user.first_name'))
30
31 self.workbook = xlwt.Workbook()
32 self.sheet = self.workbook.add_sheet(_(u"Redemptions"))
33 self.row = 0
34 self.col = 0
35
36 writec(self, _("Last name"), "bold")
37 writec(self, _("First name"), "bold")
38 writec(self, _("Email address"), "bold")
39 writec(self, _("Number of points"), "bold")
40
41 for redemption in redemptions:
42 user = redemption.user_profile.user
43 writen(self, user.last_name, "default")
44 writec(self, user.first_name, "default")
45 writec(self, user.email, "default")
46 writec(self, redemption.value, "default")
47
48 self.workbook.save(response)
49
```
Path: `evap/rewards/views.py`
Content:
```
1 from django.contrib import messages
2 from django.db import transaction
3 from django.shortcuts import get_object_or_404, redirect, render_to_response
4 from django.template import RequestContext
5 from django.utils.translation import ugettext as _
6 from django.utils.translation import get_language
7 from django.http import HttpResponse
8 from datetime import datetime
9 from operator import attrgetter
10
11 from evap.evaluation.auth import reward_user_required, fsr_required
12 from evap.evaluation.models import Semester, Course
13
14 from evap.fsr.views import semester_view
15
16 from evap.rewards.models import RewardPointGranting, RewardPointRedemption, RewardPointRedemptionEvent, SemesterActivation
17 from evap.rewards.tools import save_redemptions, reward_points_of_user, can_user_use_reward_points
18 from evap.rewards.forms import RewardPointRedemptionEventForm
19 from evap.rewards.exporters import ExcelExporter
20
21 @reward_user_required
22 def index(request):
23 if request.method == 'POST':
24 redemptions = {}
25 for key, value in request.POST.iteritems():
26 if(key.startswith('points-')):
27 event_id = int(key.rpartition('-')[2])
28 redemptions[event_id] = int(value)
29
30 if save_redemptions(request, redemptions):
31 messages.success(request, _("You successfully redeemed your points."))
32 else:
33 messages.warning(request, _("You don't have enough reward points."))
34
35 total_points_available = reward_points_of_user(request.user.userprofile)
36 reward_point_grantings = RewardPointGranting.objects.filter(user_profile=request.user.userprofile)
37 reward_point_redemptions = RewardPointRedemption.objects.filter(user_profile=request.user.userprofile)
38 events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__gte=datetime.now())
39 events = sorted(events, key=lambda event: event.date)
40
41 reward_point_actions=[]
42 for granting in reward_point_grantings:
43 reward_point_actions.append((granting.granting_time, _('Reward for') + ' ' + granting.semester.name, granting.value, ''))
44 for redemption in reward_point_redemptions:
45 reward_point_actions.append((redemption.redemption_time, redemption.event.name, '', redemption.value))
46
47 reward_point_actions.sort(key=lambda action: action[0], reverse=True)
48
49 return render_to_response(
50 "rewards_index.html",
51 dict(
52 reward_point_actions=reward_point_actions,
53 total_points_available=total_points_available,
54 events=events,
55 point_selection=[x for x in range(0,total_points_available+1)]
56 ),
57 context_instance=RequestContext(request))
58
59
60 @fsr_required
61 def semester_reward_points(request, semester_id):
62 semester = get_object_or_404(Semester, id=semester_id)
63 courses = Course.objects.filter(semester=semester)
64 participants = set()
65 for course in courses:
66 for participant in course.participants.all():
67 if can_user_use_reward_points(participant.userprofile):
68 participants.add(participant)
69 participants = sorted(participants, key=attrgetter('last_name', 'first_name'))
70
71 data = []
72 for participant in participants:
73 number_of_courses = Course.objects.filter(semester=semester, participants=participant).count()
74 number_of_courses_voted_for = Course.objects.filter(semester=semester, voters=participant).count()
75 earned_reward_points = RewardPointGranting.objects.filter(semester=semester, user_profile=participant.userprofile).exists()
76 data.append((participant, number_of_courses_voted_for, number_of_courses, earned_reward_points))
77
78 return render_to_response("rewards_semester_reward_points_view.html", dict(semester=semester, data=data, disable_breadcrumb_semester=False), context_instance=RequestContext(request))
79
80
81 @fsr_required
82 def reward_point_redemption_events(request):
83 upcoming_events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__gte=datetime.now()).order_by('date')
84 past_events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__lt=datetime.now()).order_by('-date')
85 return render_to_response("rewards_reward_point_redemption_events.html", dict(upcoming_events=upcoming_events, past_events=past_events), context_instance=RequestContext(request))
86
87
88 @fsr_required
89 def reward_point_redemption_event_create(request):
90 event = RewardPointRedemptionEvent()
91 form = RewardPointRedemptionEventForm(request.POST or None, instance=event)
92
93 if form.is_valid():
94 form.save()
95 messages.success(request, _("Successfully created event."))
96 return redirect('evap.rewards.views.reward_point_redemption_events')
97 else:
98 return render_to_response("rewards_reward_point_redemption_event_form.html", dict(form=form), context_instance=RequestContext(request))
99
100
101 @fsr_required
102 def reward_point_redemption_event_edit(request, event_id):
103 event = get_object_or_404(RewardPointRedemptionEvent, id=event_id)
104 form = RewardPointRedemptionEventForm(request.POST or None, instance=event)
105
106 if form.is_valid():
107 event = form.save()
108
109 messages.success(request, _("Successfully updated event."))
110 return redirect('evap.rewards.views.reward_point_redemption_events')
111 else:
112 return render_to_response("rewards_reward_point_redemption_event_form.html", dict(event=event, form=form), context_instance=RequestContext(request))
113
114
115 @fsr_required
116 def reward_point_redemption_event_delete(request, event_id):
117 event = get_object_or_404(RewardPointRedemptionEvent, id=event_id)
118
119 if event.can_delete:
120 if request.method == 'POST':
121 event.delete()
122 return redirect('evap.rewards.views.reward_point_redemption_events')
123 else:
124 return render_to_response("rewards_reward_point_redemption_event_delete.html", dict(event=event), context_instance=RequestContext(request))
125 else:
126 messages.warning(request, _("This event cannot be deleted because some users already redeemed points for it."))
127 return redirect('evap.rewards.views.reward_point_redemption_events')
128
129
130 @fsr_required
131 def reward_point_redemption_event_export(request, event_id):
132 event = get_object_or_404(RewardPointRedemptionEvent, id=event_id)
133
134 filename = _("RewardPoints")+"-%s-%s-%s.xls" % (event.date, event.name, get_language())
135
136 response = HttpResponse(content_type="application/vnd.ms-excel")
137 response["Content-Disposition"] = "attachment; filename=\"%s\"" % filename
138
139 ExcelExporter(event.reward_point_redemptions.all()).export(response)
140
141 return response
142
143
144 @fsr_required
145 def semester_activation(request, semester_id, active):
146 if active == 'on':
147 active = True
148 else:
149 active = False
150
151 try:
152 activation = SemesterActivation.objects.filter(semester=Semester.objects.get(id=semester_id)).get()
153 activation.is_active = active
154 except SemesterActivation.DoesNotExist:
155 activation = SemesterActivation(semester=Semester.objects.get(id=semester_id), is_active=active)
156 activation.save()
157
158 return semester_view(request=request, semester_id=semester_id)
159
```
Path: `evap/rewards/models.py`
Content:
```
1 from django.utils.translation import ugettext_lazy as _
2 from django.db import models
3
4 class RewardPointRedemptionEvent(models.Model):
5 name = models.CharField(max_length=1024, verbose_name=_(u"event name"))
6 date = models.DateField(verbose_name=_(u"event date"))
7 redeem_end_date = models.DateField(verbose_name=_(u"redemption end date"))
8
9 @property
10 def can_delete(self):
11 if RewardPointRedemption.objects.filter(event=self).exists():
12 return False
13 return True
14
15 class RewardPointGranting(models.Model):
16 user_profile = models.ForeignKey('evaluation.UserProfile', related_name="reward_point_grantings")
17 semester = models.ForeignKey('evaluation.Semester', related_name="reward_point_grantings", blank=True, null=True)
18 granting_time = models.DateTimeField(verbose_name=_(u"granting time"), auto_now_add=True)
19 value = models.IntegerField(verbose_name=_(u"value"), default=0)
20
21 class RewardPointRedemption(models.Model):
22 user_profile = models.ForeignKey('evaluation.UserProfile', related_name="reward_point_redemptions")
23 redemption_time = models.DateTimeField(verbose_name=_(u"redemption time"), auto_now_add=True)
24 value = models.IntegerField(verbose_name=_(u"value"), default=0)
25 event = models.ForeignKey(RewardPointRedemptionEvent, related_name="reward_point_redemptions")
26
27 class SemesterActivation(models.Model):
28 semester = models.ForeignKey('evaluation.Semester', related_name='rewards_active', unique=True)
29 is_active = models.BooleanField(default=False)
30
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/evap/rewards/exporters.py b/evap/rewards/exporters.py
--- a/evap/rewards/exporters.py
+++ b/evap/rewards/exporters.py
@@ -2,8 +2,6 @@
from django.utils.translation import ugettext as _
-from operator import attrgetter
-
from collections import OrderedDict
from collections import defaultdict
import datetime
@@ -16,8 +14,8 @@
class ExcelExporter(object):
- def __init__(self, reward_point_redemptions):
- self.reward_point_redemptions = reward_point_redemptions
+ def __init__(self, redemptions_by_user):
+ self.redemptions_by_user = redemptions_by_user
styles = {
'default': xlwt.Style.default_style,
@@ -25,9 +23,8 @@
}
def export(self, response):
- redemptions = self.reward_point_redemptions
- redemptions = sorted(redemptions, key=attrgetter('user_profile.user.last_name', 'user_profile.user.first_name'))
-
+ redemptions_dict = self.redemptions_by_user
+
self.workbook = xlwt.Workbook()
self.sheet = self.workbook.add_sheet(_(u"Redemptions"))
self.row = 0
@@ -38,11 +35,10 @@
writec(self, _("Email address"), "bold")
writec(self, _("Number of points"), "bold")
- for redemption in redemptions:
- user = redemption.user_profile.user
- writen(self, user.last_name, "default")
- writec(self, user.first_name, "default")
- writec(self, user.email, "default")
- writec(self, redemption.value, "default")
+ for user_profile, value in redemptions_dict.items():
+ writen(self, user_profile.user.last_name, "default")
+ writec(self, user_profile.user.first_name, "default")
+ writec(self, user_profile.user.email, "default")
+ writec(self, value, "default")
self.workbook.save(response)
diff --git a/evap/rewards/models.py b/evap/rewards/models.py
--- a/evap/rewards/models.py
+++ b/evap/rewards/models.py
@@ -1,6 +1,10 @@
from django.utils.translation import ugettext_lazy as _
from django.db import models
+from collections import OrderedDict
+
+from operator import attrgetter
+
class RewardPointRedemptionEvent(models.Model):
name = models.CharField(max_length=1024, verbose_name=_(u"event name"))
date = models.DateField(verbose_name=_(u"event date"))
@@ -11,6 +15,15 @@
if RewardPointRedemption.objects.filter(event=self).exists():
return False
return True
+
+ def redemptions_by_user(self):
+ redemptions = self.reward_point_redemptions.order_by('user_profile__user__last_name', 'user_profile__user__first_name')
+ redemptions_dict = OrderedDict()
+ for redemption in redemptions:
+ if not redemption.user_profile in redemptions_dict:
+ redemptions_dict[redemption.user_profile] = 0
+ redemptions_dict[redemption.user_profile] += redemption.value
+ return redemptions_dict
class RewardPointGranting(models.Model):
user_profile = models.ForeignKey('evaluation.UserProfile', related_name="reward_point_grantings")
diff --git a/evap/rewards/views.py b/evap/rewards/views.py
--- a/evap/rewards/views.py
+++ b/evap/rewards/views.py
@@ -136,7 +136,7 @@
response = HttpResponse(content_type="application/vnd.ms-excel")
response["Content-Disposition"] = "attachment; filename=\"%s\"" % filename
- ExcelExporter(event.reward_point_redemptions.all()).export(response)
+ ExcelExporter(event.redemptions_by_user()).export(response)
return response
|
{"golden_diff": "diff --git a/evap/rewards/exporters.py b/evap/rewards/exporters.py\n--- a/evap/rewards/exporters.py\n+++ b/evap/rewards/exporters.py\n@@ -2,8 +2,6 @@\n \n from django.utils.translation import ugettext as _\n \n-from operator import attrgetter\n-\n from collections import OrderedDict\n from collections import defaultdict\n import datetime\n@@ -16,8 +14,8 @@\n \n class ExcelExporter(object):\n \n- def __init__(self, reward_point_redemptions):\n- self.reward_point_redemptions = reward_point_redemptions\n+ def __init__(self, redemptions_by_user):\n+ self.redemptions_by_user = redemptions_by_user\n \n styles = {\n 'default': xlwt.Style.default_style,\n@@ -25,9 +23,8 @@\n }\n \n def export(self, response):\n- redemptions = self.reward_point_redemptions\n- redemptions = sorted(redemptions, key=attrgetter('user_profile.user.last_name', 'user_profile.user.first_name'))\n-\n+ redemptions_dict = self.redemptions_by_user\n+ \n self.workbook = xlwt.Workbook()\n self.sheet = self.workbook.add_sheet(_(u\"Redemptions\"))\n self.row = 0\n@@ -38,11 +35,10 @@\n writec(self, _(\"Email address\"), \"bold\")\n writec(self, _(\"Number of points\"), \"bold\")\n \n- for redemption in redemptions:\n- user = redemption.user_profile.user\n- writen(self, user.last_name, \"default\")\n- writec(self, user.first_name, \"default\")\n- writec(self, user.email, \"default\")\n- writec(self, redemption.value, \"default\")\n+ for user_profile, value in redemptions_dict.items():\n+ writen(self, user_profile.user.last_name, \"default\")\n+ writec(self, user_profile.user.first_name, \"default\")\n+ writec(self, user_profile.user.email, \"default\")\n+ writec(self, value, \"default\")\n \n self.workbook.save(response)\ndiff --git a/evap/rewards/models.py b/evap/rewards/models.py\n--- a/evap/rewards/models.py\n+++ b/evap/rewards/models.py\n@@ -1,6 +1,10 @@\n from django.utils.translation import ugettext_lazy as _\n from django.db import models\n \n+from collections import OrderedDict\n+\n+from operator import attrgetter\n+\n class RewardPointRedemptionEvent(models.Model):\n name = models.CharField(max_length=1024, verbose_name=_(u\"event name\"))\n date = models.DateField(verbose_name=_(u\"event date\"))\n@@ -11,6 +15,15 @@\n if RewardPointRedemption.objects.filter(event=self).exists():\n return False\n return True\n+ \n+ def redemptions_by_user(self):\n+ redemptions = self.reward_point_redemptions.order_by('user_profile__user__last_name', 'user_profile__user__first_name')\n+ redemptions_dict = OrderedDict()\n+ for redemption in redemptions:\n+ if not redemption.user_profile in redemptions_dict:\n+ redemptions_dict[redemption.user_profile] = 0\n+ redemptions_dict[redemption.user_profile] += redemption.value\n+ return redemptions_dict\n \n class RewardPointGranting(models.Model):\n user_profile = models.ForeignKey('evaluation.UserProfile', related_name=\"reward_point_grantings\")\ndiff --git a/evap/rewards/views.py b/evap/rewards/views.py\n--- a/evap/rewards/views.py\n+++ b/evap/rewards/views.py\n@@ -136,7 +136,7 @@\n response = HttpResponse(content_type=\"application/vnd.ms-excel\")\n response[\"Content-Disposition\"] = \"attachment; filename=\\\"%s\\\"\" % filename\n \n- ExcelExporter(event.reward_point_redemptions.all()).export(response)\n+ ExcelExporter(event.redemptions_by_user()).export(response)\n \n return response\n", "issue": "Aggregate reward points redemptions in export per user\nWhen exporting the redemptions of an event, all of them are listed in the Excel export. 
The redemptions of one user should be aggregated into one line.\nAlso the number of redemptions on the staff page should be replaced by the number of users who redeemed points for an event.\n\nSo when a user redeems 1 point and then 2 points for the same event, the export should only state that 3 points have been redeemed in total.\n\n", "before_files": [{"content": "from evap.rewards.models import RewardPointRedemption, RewardPointRedemptionEvent\n\nfrom django.utils.translation import ugettext as _\n\nfrom operator import attrgetter\n\nfrom collections import OrderedDict\nfrom collections import defaultdict\nimport datetime\nimport xlwt\n\nfrom operator import itemgetter\n\nfrom evap.results.exporters import writen, writec\n\n\nclass ExcelExporter(object):\n\n def __init__(self, reward_point_redemptions):\n self.reward_point_redemptions = reward_point_redemptions\n\n styles = {\n 'default': xlwt.Style.default_style,\n 'bold': xlwt.easyxf('font: bold on'),\n }\n\n def export(self, response):\n redemptions = self.reward_point_redemptions\n redemptions = sorted(redemptions, key=attrgetter('user_profile.user.last_name', 'user_profile.user.first_name'))\n\n self.workbook = xlwt.Workbook()\n self.sheet = self.workbook.add_sheet(_(u\"Redemptions\"))\n self.row = 0\n self.col = 0\n\n writec(self, _(\"Last name\"), \"bold\")\n writec(self, _(\"First name\"), \"bold\")\n writec(self, _(\"Email address\"), \"bold\")\n writec(self, _(\"Number of points\"), \"bold\")\n\n for redemption in redemptions:\n user = redemption.user_profile.user\n writen(self, user.last_name, \"default\")\n writec(self, user.first_name, \"default\")\n writec(self, user.email, \"default\")\n writec(self, redemption.value, \"default\")\n\n self.workbook.save(response)\n", "path": "evap/rewards/exporters.py"}, {"content": "from django.contrib import messages\nfrom django.db import transaction\nfrom django.shortcuts import get_object_or_404, redirect, render_to_response\nfrom django.template import RequestContext\nfrom django.utils.translation import ugettext as _\nfrom django.utils.translation import get_language\nfrom django.http import HttpResponse\nfrom datetime import datetime\nfrom operator import attrgetter\n\nfrom evap.evaluation.auth import reward_user_required, fsr_required\nfrom evap.evaluation.models import Semester, Course\n\nfrom evap.fsr.views import semester_view\n\nfrom evap.rewards.models import RewardPointGranting, RewardPointRedemption, RewardPointRedemptionEvent, SemesterActivation\nfrom evap.rewards.tools import save_redemptions, reward_points_of_user, can_user_use_reward_points\nfrom evap.rewards.forms import RewardPointRedemptionEventForm\nfrom evap.rewards.exporters import ExcelExporter\n\n@reward_user_required\ndef index(request):\n if request.method == 'POST':\n redemptions = {}\n for key, value in request.POST.iteritems():\n if(key.startswith('points-')):\n event_id = int(key.rpartition('-')[2])\n redemptions[event_id] = int(value)\n \n if save_redemptions(request, redemptions):\n messages.success(request, _(\"You successfully redeemed your points.\"))\n else:\n messages.warning(request, _(\"You don't have enough reward points.\")) \n\n total_points_available = reward_points_of_user(request.user.userprofile)\n reward_point_grantings = RewardPointGranting.objects.filter(user_profile=request.user.userprofile)\n reward_point_redemptions = RewardPointRedemption.objects.filter(user_profile=request.user.userprofile)\n events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__gte=datetime.now())\n 
events = sorted(events, key=lambda event: event.date)\n\n reward_point_actions=[]\n for granting in reward_point_grantings:\n reward_point_actions.append((granting.granting_time, _('Reward for') + ' ' + granting.semester.name, granting.value, ''))\n for redemption in reward_point_redemptions:\n reward_point_actions.append((redemption.redemption_time, redemption.event.name, '', redemption.value))\n\n reward_point_actions.sort(key=lambda action: action[0], reverse=True)\n\n return render_to_response(\n \"rewards_index.html\",\n dict(\n reward_point_actions=reward_point_actions,\n total_points_available=total_points_available,\n events=events,\n point_selection=[x for x in range(0,total_points_available+1)]\n ),\n context_instance=RequestContext(request))\n\n\n@fsr_required\ndef semester_reward_points(request, semester_id):\n semester = get_object_or_404(Semester, id=semester_id)\n courses = Course.objects.filter(semester=semester)\n participants = set()\n for course in courses:\n for participant in course.participants.all():\n if can_user_use_reward_points(participant.userprofile):\n participants.add(participant)\n participants = sorted(participants, key=attrgetter('last_name', 'first_name'))\n\n data = []\n for participant in participants:\n number_of_courses = Course.objects.filter(semester=semester, participants=participant).count()\n number_of_courses_voted_for = Course.objects.filter(semester=semester, voters=participant).count()\n earned_reward_points = RewardPointGranting.objects.filter(semester=semester, user_profile=participant.userprofile).exists()\n data.append((participant, number_of_courses_voted_for, number_of_courses, earned_reward_points))\n\n return render_to_response(\"rewards_semester_reward_points_view.html\", dict(semester=semester, data=data, disable_breadcrumb_semester=False), context_instance=RequestContext(request))\n\n\n@fsr_required\ndef reward_point_redemption_events(request):\n upcoming_events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__gte=datetime.now()).order_by('date')\n past_events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__lt=datetime.now()).order_by('-date')\n return render_to_response(\"rewards_reward_point_redemption_events.html\", dict(upcoming_events=upcoming_events, past_events=past_events), context_instance=RequestContext(request))\n\n\n@fsr_required\ndef reward_point_redemption_event_create(request):\n event = RewardPointRedemptionEvent()\n form = RewardPointRedemptionEventForm(request.POST or None, instance=event)\n\n if form.is_valid():\n form.save()\n messages.success(request, _(\"Successfully created event.\"))\n return redirect('evap.rewards.views.reward_point_redemption_events')\n else:\n return render_to_response(\"rewards_reward_point_redemption_event_form.html\", dict(form=form), context_instance=RequestContext(request))\n\n\n@fsr_required\ndef reward_point_redemption_event_edit(request, event_id):\n event = get_object_or_404(RewardPointRedemptionEvent, id=event_id)\n form = RewardPointRedemptionEventForm(request.POST or None, instance=event)\n\n if form.is_valid():\n event = form.save()\n\n messages.success(request, _(\"Successfully updated event.\"))\n return redirect('evap.rewards.views.reward_point_redemption_events')\n else:\n return render_to_response(\"rewards_reward_point_redemption_event_form.html\", dict(event=event, form=form), context_instance=RequestContext(request))\n\n\n@fsr_required\ndef reward_point_redemption_event_delete(request, event_id):\n event = 
get_object_or_404(RewardPointRedemptionEvent, id=event_id)\n\n if event.can_delete:\n if request.method == 'POST':\n event.delete()\n return redirect('evap.rewards.views.reward_point_redemption_events')\n else:\n return render_to_response(\"rewards_reward_point_redemption_event_delete.html\", dict(event=event), context_instance=RequestContext(request))\n else:\n messages.warning(request, _(\"This event cannot be deleted because some users already redeemed points for it.\"))\n return redirect('evap.rewards.views.reward_point_redemption_events')\n\n\n@fsr_required\ndef reward_point_redemption_event_export(request, event_id):\n event = get_object_or_404(RewardPointRedemptionEvent, id=event_id)\n\n filename = _(\"RewardPoints\")+\"-%s-%s-%s.xls\" % (event.date, event.name, get_language())\n\n response = HttpResponse(content_type=\"application/vnd.ms-excel\")\n response[\"Content-Disposition\"] = \"attachment; filename=\\\"%s\\\"\" % filename\n\n ExcelExporter(event.reward_point_redemptions.all()).export(response)\n\n return response\n\n\n@fsr_required\ndef semester_activation(request, semester_id, active):\n if active == 'on':\n active = True\n else:\n active = False\n\n try:\n activation = SemesterActivation.objects.filter(semester=Semester.objects.get(id=semester_id)).get()\n activation.is_active = active\n except SemesterActivation.DoesNotExist:\n activation = SemesterActivation(semester=Semester.objects.get(id=semester_id), is_active=active)\n activation.save()\n\n return semester_view(request=request, semester_id=semester_id)\n", "path": "evap/rewards/views.py"}, {"content": "from django.utils.translation import ugettext_lazy as _\nfrom django.db import models\n\nclass RewardPointRedemptionEvent(models.Model):\n name = models.CharField(max_length=1024, verbose_name=_(u\"event name\"))\n date = models.DateField(verbose_name=_(u\"event date\"))\n redeem_end_date = models.DateField(verbose_name=_(u\"redemption end date\"))\n\n @property\n def can_delete(self):\n if RewardPointRedemption.objects.filter(event=self).exists():\n return False\n return True\n\nclass RewardPointGranting(models.Model):\n user_profile = models.ForeignKey('evaluation.UserProfile', related_name=\"reward_point_grantings\")\n semester = models.ForeignKey('evaluation.Semester', related_name=\"reward_point_grantings\", blank=True, null=True)\n granting_time = models.DateTimeField(verbose_name=_(u\"granting time\"), auto_now_add=True)\n value = models.IntegerField(verbose_name=_(u\"value\"), default=0)\n\nclass RewardPointRedemption(models.Model):\n user_profile = models.ForeignKey('evaluation.UserProfile', related_name=\"reward_point_redemptions\")\n redemption_time = models.DateTimeField(verbose_name=_(u\"redemption time\"), auto_now_add=True)\n value = models.IntegerField(verbose_name=_(u\"value\"), default=0)\n event = models.ForeignKey(RewardPointRedemptionEvent, related_name=\"reward_point_redemptions\")\n\nclass SemesterActivation(models.Model):\n semester = models.ForeignKey('evaluation.Semester', related_name='rewards_active', unique=True)\n is_active = models.BooleanField(default=False)\n", "path": "evap/rewards/models.py"}], "after_files": [{"content": "from evap.rewards.models import RewardPointRedemption, RewardPointRedemptionEvent\n\nfrom django.utils.translation import ugettext as _\n\nfrom collections import OrderedDict\nfrom collections import defaultdict\nimport datetime\nimport xlwt\n\nfrom operator import itemgetter\n\nfrom evap.results.exporters import writen, writec\n\n\nclass 
ExcelExporter(object):\n\n def __init__(self, redemptions_by_user):\n self.redemptions_by_user = redemptions_by_user\n\n styles = {\n 'default': xlwt.Style.default_style,\n 'bold': xlwt.easyxf('font: bold on'),\n }\n\n def export(self, response):\n redemptions_dict = self.redemptions_by_user\n \n self.workbook = xlwt.Workbook()\n self.sheet = self.workbook.add_sheet(_(u\"Redemptions\"))\n self.row = 0\n self.col = 0\n\n writec(self, _(\"Last name\"), \"bold\")\n writec(self, _(\"First name\"), \"bold\")\n writec(self, _(\"Email address\"), \"bold\")\n writec(self, _(\"Number of points\"), \"bold\")\n\n for user_profile, value in redemptions_dict.items():\n writen(self, user_profile.user.last_name, \"default\")\n writec(self, user_profile.user.first_name, \"default\")\n writec(self, user_profile.user.email, \"default\")\n writec(self, value, \"default\")\n\n self.workbook.save(response)\n", "path": "evap/rewards/exporters.py"}, {"content": "from django.contrib import messages\nfrom django.db import transaction\nfrom django.shortcuts import get_object_or_404, redirect, render_to_response\nfrom django.template import RequestContext\nfrom django.utils.translation import ugettext as _\nfrom django.utils.translation import get_language\nfrom django.http import HttpResponse\nfrom datetime import datetime\nfrom operator import attrgetter\n\nfrom evap.evaluation.auth import reward_user_required, fsr_required\nfrom evap.evaluation.models import Semester, Course\n\nfrom evap.fsr.views import semester_view\n\nfrom evap.rewards.models import RewardPointGranting, RewardPointRedemption, RewardPointRedemptionEvent, SemesterActivation\nfrom evap.rewards.tools import save_redemptions, reward_points_of_user, can_user_use_reward_points\nfrom evap.rewards.forms import RewardPointRedemptionEventForm\nfrom evap.rewards.exporters import ExcelExporter\n\n@reward_user_required\ndef index(request):\n if request.method == 'POST':\n redemptions = {}\n for key, value in request.POST.iteritems():\n if(key.startswith('points-')):\n event_id = int(key.rpartition('-')[2])\n redemptions[event_id] = int(value)\n \n if save_redemptions(request, redemptions):\n messages.success(request, _(\"You successfully redeemed your points.\"))\n else:\n messages.warning(request, _(\"You don't have enough reward points.\")) \n\n total_points_available = reward_points_of_user(request.user.userprofile)\n reward_point_grantings = RewardPointGranting.objects.filter(user_profile=request.user.userprofile)\n reward_point_redemptions = RewardPointRedemption.objects.filter(user_profile=request.user.userprofile)\n events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__gte=datetime.now())\n events = sorted(events, key=lambda event: event.date)\n\n reward_point_actions=[]\n for granting in reward_point_grantings:\n reward_point_actions.append((granting.granting_time, _('Reward for') + ' ' + granting.semester.name, granting.value, ''))\n for redemption in reward_point_redemptions:\n reward_point_actions.append((redemption.redemption_time, redemption.event.name, '', redemption.value))\n\n reward_point_actions.sort(key=lambda action: action[0], reverse=True)\n\n return render_to_response(\n \"rewards_index.html\",\n dict(\n reward_point_actions=reward_point_actions,\n total_points_available=total_points_available,\n events=events,\n point_selection=[x for x in range(0,total_points_available+1)]\n ),\n context_instance=RequestContext(request))\n\n\n@fsr_required\ndef semester_reward_points(request, semester_id):\n semester = 
get_object_or_404(Semester, id=semester_id)\n courses = Course.objects.filter(semester=semester)\n participants = set()\n for course in courses:\n for participant in course.participants.all():\n if can_user_use_reward_points(participant.userprofile):\n participants.add(participant)\n participants = sorted(participants, key=attrgetter('last_name', 'first_name'))\n\n data = []\n for participant in participants:\n number_of_courses = Course.objects.filter(semester=semester, participants=participant).count()\n number_of_courses_voted_for = Course.objects.filter(semester=semester, voters=participant).count()\n earned_reward_points = RewardPointGranting.objects.filter(semester=semester, user_profile=participant.userprofile).exists()\n data.append((participant, number_of_courses_voted_for, number_of_courses, earned_reward_points))\n\n return render_to_response(\"rewards_semester_reward_points_view.html\", dict(semester=semester, data=data, disable_breadcrumb_semester=False), context_instance=RequestContext(request))\n\n\n@fsr_required\ndef reward_point_redemption_events(request):\n upcoming_events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__gte=datetime.now()).order_by('date')\n past_events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__lt=datetime.now()).order_by('-date')\n return render_to_response(\"rewards_reward_point_redemption_events.html\", dict(upcoming_events=upcoming_events, past_events=past_events), context_instance=RequestContext(request))\n\n\n@fsr_required\ndef reward_point_redemption_event_create(request):\n event = RewardPointRedemptionEvent()\n form = RewardPointRedemptionEventForm(request.POST or None, instance=event)\n\n if form.is_valid():\n form.save()\n messages.success(request, _(\"Successfully created event.\"))\n return redirect('evap.rewards.views.reward_point_redemption_events')\n else:\n return render_to_response(\"rewards_reward_point_redemption_event_form.html\", dict(form=form), context_instance=RequestContext(request))\n\n\n@fsr_required\ndef reward_point_redemption_event_edit(request, event_id):\n event = get_object_or_404(RewardPointRedemptionEvent, id=event_id)\n form = RewardPointRedemptionEventForm(request.POST or None, instance=event)\n\n if form.is_valid():\n event = form.save()\n\n messages.success(request, _(\"Successfully updated event.\"))\n return redirect('evap.rewards.views.reward_point_redemption_events')\n else:\n return render_to_response(\"rewards_reward_point_redemption_event_form.html\", dict(event=event, form=form), context_instance=RequestContext(request))\n\n\n@fsr_required\ndef reward_point_redemption_event_delete(request, event_id):\n event = get_object_or_404(RewardPointRedemptionEvent, id=event_id)\n\n if event.can_delete:\n if request.method == 'POST':\n event.delete()\n return redirect('evap.rewards.views.reward_point_redemption_events')\n else:\n return render_to_response(\"rewards_reward_point_redemption_event_delete.html\", dict(event=event), context_instance=RequestContext(request))\n else:\n messages.warning(request, _(\"This event cannot be deleted because some users already redeemed points for it.\"))\n return redirect('evap.rewards.views.reward_point_redemption_events')\n\n\n@fsr_required\ndef reward_point_redemption_event_export(request, event_id):\n event = get_object_or_404(RewardPointRedemptionEvent, id=event_id)\n\n filename = _(\"RewardPoints\")+\"-%s-%s-%s.xls\" % (event.date, event.name, get_language())\n\n response = HttpResponse(content_type=\"application/vnd.ms-excel\")\n 
response[\"Content-Disposition\"] = \"attachment; filename=\\\"%s\\\"\" % filename\n\n ExcelExporter(event.redemptions_by_user()).export(response)\n\n return response\n\n\n@fsr_required\ndef semester_activation(request, semester_id, active):\n if active == 'on':\n active = True\n else:\n active = False\n\n try:\n activation = SemesterActivation.objects.filter(semester=Semester.objects.get(id=semester_id)).get()\n activation.is_active = active\n except SemesterActivation.DoesNotExist:\n activation = SemesterActivation(semester=Semester.objects.get(id=semester_id), is_active=active)\n activation.save()\n\n return semester_view(request=request, semester_id=semester_id)\n", "path": "evap/rewards/views.py"}, {"content": "from django.utils.translation import ugettext_lazy as _\nfrom django.db import models\n\nfrom collections import OrderedDict\n\nfrom operator import attrgetter\n\nclass RewardPointRedemptionEvent(models.Model):\n name = models.CharField(max_length=1024, verbose_name=_(u\"event name\"))\n date = models.DateField(verbose_name=_(u\"event date\"))\n redeem_end_date = models.DateField(verbose_name=_(u\"redemption end date\"))\n\n @property\n def can_delete(self):\n if RewardPointRedemption.objects.filter(event=self).exists():\n return False\n return True\n \n def redemptions_by_user(self):\n redemptions = self.reward_point_redemptions.order_by('user_profile__user__last_name', 'user_profile__user__first_name')\n redemptions_dict = OrderedDict()\n for redemption in redemptions:\n if not redemption.user_profile in redemptions_dict:\n redemptions_dict[redemption.user_profile] = 0\n redemptions_dict[redemption.user_profile] += redemption.value\n return redemptions_dict\n\nclass RewardPointGranting(models.Model):\n user_profile = models.ForeignKey('evaluation.UserProfile', related_name=\"reward_point_grantings\")\n semester = models.ForeignKey('evaluation.Semester', related_name=\"reward_point_grantings\", blank=True, null=True)\n granting_time = models.DateTimeField(verbose_name=_(u\"granting time\"), auto_now_add=True)\n value = models.IntegerField(verbose_name=_(u\"value\"), default=0)\n\nclass RewardPointRedemption(models.Model):\n user_profile = models.ForeignKey('evaluation.UserProfile', related_name=\"reward_point_redemptions\")\n redemption_time = models.DateTimeField(verbose_name=_(u\"redemption time\"), auto_now_add=True)\n value = models.IntegerField(verbose_name=_(u\"value\"), default=0)\n event = models.ForeignKey(RewardPointRedemptionEvent, related_name=\"reward_point_redemptions\")\n\nclass SemesterActivation(models.Model):\n semester = models.ForeignKey('evaluation.Semester', related_name='rewards_active', unique=True)\n is_active = models.BooleanField(default=False)\n", "path": "evap/rewards/models.py"}]}
| 3,092 | 899 |
gh_patches_debug_37435 | rasdani/github-patches | git_diff | google__turbinia-1315 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
turbinia-client status summary formatting warning
Running turbinia-client status summary might yield the following warning for some requests:
```WARNING turbinia - Error formatting the Markdown report: unsupported format string passed to NoneType.__format__```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `turbinia/api/cli/turbinia_client/helpers/formatter.py`
Content:
```
1 #!/usr/bin/env python
2 #
3 # Copyright 2017 Google Inc.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16 """Methods for formatting text."""
17
18 from __future__ import annotations
19
20 from abc import ABC, abstractmethod
21 from click import echo as click_echo
22
23 import logging
24 import json
25
26 log = logging.getLogger('turbinia')
27
28
29 def echo_json(json_data: dict) -> None:
30 """Pretty print JSON data."""
31 if isinstance(json_data, dict):
32 click_echo(json.dumps(json_data, indent=2))
33
34
35 class MarkdownReportComponent(ABC):
36 """Components for generating Turbinia request/task
37 markdown reports.
38 """
39
40 def __init__(self):
41 """Instantiates a MarkdownReportComponent object."""
42 self._components: list(MarkdownReportComponent) = []
43 self._parent: MarkdownReportComponent = None
44 self._report: str = None
45
46 @property
47 def components(self):
48 """Returns the components list."""
49 return self._components
50
51 @property
52 def report(self):
53 """Returns the markdown report text."""
54 return self._report
55
56 @report.setter
57 def report(self, report):
58 self._report = report
59
60 @property
61 def parent(self) -> MarkdownReportComponent:
62 """Returns the parent object."""
63 return self._parent
64
65 @parent.setter
66 def parent(self, parent: MarkdownReportComponent):
67 self._parent = parent
68
69 def bold(self, text):
70 """Formats text as bold in Markdown format.
71
72 Args:
73 text(string): Text to format
74
75 Return:
76 string: Formatted text.
77 """
78 return f'**{text.strip():s}**'
79
80 def heading1(self, text):
81 """Formats text as heading 1 in Markdown format.
82
83 Args:
84 text(string): Text to format
85
86 Return:
87 string: Formatted text.
88 """
89 return f'# {text.strip():s}'
90
91 def heading2(self, text):
92 """Formats text as heading 2 in Markdown format.
93
94 Args:
95 text(string): Text to format
96
97 Return:
98 string: Formatted text.
99 """
100 return f'## {text.strip():s}'
101
102 def heading3(self, text):
103 """Formats text as heading 3 in Markdown format.
104
105 Args:
106 text(string): Text to format
107
108 Return:
109 string: Formatted text.
110 """
111 return f'### {text.strip():s}'
112
113 def heading4(self, text):
114 """Formats text as heading 4 in Markdown format.
115
116 Args:
117 text(string): Text to format
118
119 Return:
120 string: Formatted text.
121 """
122 return f'#### {text.strip():s}'
123
124 def heading5(self, text):
125 """Formats text as heading 5 in Markdown format.
126 Args:
127 text(string): Text to format
128 Return:
129 string: Formatted text.
130 """
131 return f'##### {text.strip():s}'
132
133 def bullet(self, text, level=1):
134 """Formats text as a bullet in Markdown format.
135
136 Args:
137 text(string): Text to format
138 level(int): Indentation level.
139 Return:
140 string: Formatted text.
141 """
142 return f"{' ' * (level - 1):s}* {text.strip():s}"
143
144 def code(self, text):
145 """Formats text as code in Markdown format.
146
147 Args:
148 text(string): Text to format
149
150 Return:
151 string: Formatted text.
152 """
153 return f'`{text.strip():s}`'
154
155 def add(self, component: MarkdownReportComponent) -> None:
156 """Adds a MarkdownReportComponent object to the components list.
157
158 This method should additionally set the parent object.
159 """
160 pass
161
162 def add_components(self, components: list[MarkdownReportComponent]) -> None:
163 """Adds multiple MarkdownReportComponent objects to the components list."""
164 pass
165
166 def remove(self, component: MarkdownReportComponent) -> None:
167 """Removes a MarkdownReportComponent object from the components list.
168
169 This method should set the component's object to None.
170 """
171 pass
172
173 @abstractmethod
174 def generate_markdown(self) -> str:
175 pass
176
177
178 class TaskMarkdownReport(MarkdownReportComponent):
179 """Turbinia Task markdown report."""
180
181 def __init__(self, request_data: dict = None):
182 """Initialize TaskMarkdownReport"""
183 super().__init__()
184 self._request_data: dict = request_data
185
186 def generate_markdown(self) -> str:
187 """Generate a markdown report."""
188 report: list[str] = []
189 task: dict = self._request_data
190 if not task:
191 return ''
192
193 try:
194 report.append(self.heading2(task.get('name')))
195 line = f"{self.bold('Evidence:'):s} {task.get('evidence_name')!s}"
196 report.append(self.bullet(line))
197 line = f"{self.bold('Status:'):s} {task.get('status'):s}"
198 report.append(self.bullet(line))
199 report.append(self.bullet(f"Task Id: {task.get('id')!s}"))
200 report.append(
201 self.bullet(f"Executed on worker {task.get('worker_name')!s}"))
202 if task.get('report_data'):
203 report.append('')
204 report.append(self.heading3('Task Reported Data'))
205 report.extend(task.get('report_data').splitlines())
206 report.append('')
207 report.append(self.heading3('Saved Task Files:'))
208
209 saved_paths = task.get('saved_paths')
210 if saved_paths:
211 for path in saved_paths:
212 report.append(self.bullet(self.code(path)))
213 report.append('')
214 except TypeError as exception:
215 log.warning(f'Error formatting the Markdown report: {exception!s}')
216
217 self.report = '\n'.join(report)
218 return self.report
219
220
221 class RequestMarkdownReport(MarkdownReportComponent):
222 """Turbinia Request Markdown report."""
223
224 def __init__(self, request_data: dict):
225 """Initializes a RequestMarkdownReport object."""
226 super().__init__()
227 self._request_data: dict = request_data
228
229 tasks = [TaskMarkdownReport(task) for task in request_data.get('tasks')]
230 self.add_components(tasks)
231
232 def add(self, component: MarkdownReportComponent) -> None:
233 if component:
234 self.components.append(component)
235 component.parent = self
236
237 def remove(self, component: MarkdownReportComponent) -> None:
238 self.components.remove(component)
239 component.parent = None
240
241 def add_components(self, components: list[MarkdownReportComponent]) -> None:
242 if components:
243 for component in components:
244 self.components.append(component)
245 component.parent = self
246
247 def generate_markdown(self) -> str:
248 """Generates a Markdown version of Requests results."""
249 report: list[str] = []
250 request_dict: dict = self._request_data
251 if not request_dict:
252 return ''
253
254 try:
255 report.append(
256 self.heading2(f"Request ID: {request_dict.get('request_id')!s}"))
257 report.append(
258 self.bullet(
259 f"Last Update: {request_dict.get('last_task_update_time')!s}"))
260 report.append(
261 self.bullet(f"Requester: {request_dict.get('requester')!s}"))
262 report.append(self.bullet(f"Reason: {request_dict.get('reason')!s}"))
263 report.append(self.bullet(f"Status: {request_dict.get('status')!s}"))
264 report.append(
265 self.bullet(f"Failed tasks: {request_dict.get('failed_tasks'):d}"))
266 report.append(
267 self.bullet(f"Running tasks: {request_dict.get('running_tasks'):d}"))
268 report.append(
269 self.bullet(
270 f"Successful tasks: {request_dict.get('successful_tasks'):d}"))
271 report.append(
272 self.bullet(f"Task Count: {request_dict.get('task_count'):d}"))
273 report.append(
274 self.bullet(f"Queued tasks: {request_dict.get('queued_tasks'):d}"))
275 report.append(
276 self.bullet(f"Evidence Name: {request_dict.get('evidence_name'):d}"))
277 report.append('')
278 except TypeError as exception:
279 log.warning(f'Error formatting the Markdown report: {exception!s}')
280
281 for task in self.components:
282 report.append(task.generate_markdown())
283
284 self.report = '\n'.join(report)
285 return self.report
286
287
288 class SummaryMarkdownReport(MarkdownReportComponent):
289 """A markdown report summary of all Turbinia Requests."""
290
291 def __init__(self, requests_summary: list[dict]):
292 """Initialize SummaryMarkdownReport."""
293 super().__init__()
294 self._requests_summary = requests_summary
295
296 def generate_markdown(self) -> str:
297 """Generate a Markdown version of Requests summary results."""
298 report: list[str] = []
299 requests_status_list = None
300 if self._requests_summary:
301 requests_status_list = self._requests_summary.get('requests_status')
302
303 if not requests_status_list:
304 return '## No requests found.'
305
306 for request_dict in requests_status_list:
307 request_report = RequestMarkdownReport(request_dict).generate_markdown()
308 report.append(request_report)
309
310 self.report = '\n'.join(report)
311 return self.report
312
313
314 class WorkersMarkdownReport(MarkdownReportComponent):
315 """A markdown report of all tasks for a specific worker."""
316
317 def __init__(self, request_data: dict):
318 super().__init__()
319 self._request_data: dict = request_data
320
321 def generate_markdown(self) -> str:
322 """Generates a Markdown version of tasks per worker."""
323 raise NotImplementedError
324
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/turbinia/api/cli/turbinia_client/helpers/formatter.py b/turbinia/api/cli/turbinia_client/helpers/formatter.py
--- a/turbinia/api/cli/turbinia_client/helpers/formatter.py
+++ b/turbinia/api/cli/turbinia_client/helpers/formatter.py
@@ -194,7 +194,7 @@
report.append(self.heading2(task.get('name')))
line = f"{self.bold('Evidence:'):s} {task.get('evidence_name')!s}"
report.append(self.bullet(line))
- line = f"{self.bold('Status:'):s} {task.get('status'):s}"
+ line = f"{self.bold('Status:'):s} {task.get('status')!s}"
report.append(self.bullet(line))
report.append(self.bullet(f"Task Id: {task.get('id')!s}"))
report.append(
@@ -253,27 +253,26 @@
try:
report.append(
- self.heading2(f"Request ID: {request_dict.get('request_id')!s}"))
+ self.heading2(f"Request ID: {request_dict.get('request_id')}"))
report.append(
self.bullet(
- f"Last Update: {request_dict.get('last_task_update_time')!s}"))
+ f"Last Update: {request_dict.get('last_task_update_time')}"))
+ report.append(self.bullet(f"Requester: {request_dict.get('requester')}"))
+ report.append(self.bullet(f"Reason: {request_dict.get('reason')}"))
+ report.append(self.bullet(f"Status: {request_dict.get('status')}"))
report.append(
- self.bullet(f"Requester: {request_dict.get('requester')!s}"))
- report.append(self.bullet(f"Reason: {request_dict.get('reason')!s}"))
- report.append(self.bullet(f"Status: {request_dict.get('status')!s}"))
+ self.bullet(f"Failed tasks: {request_dict.get('failed_tasks')}"))
report.append(
- self.bullet(f"Failed tasks: {request_dict.get('failed_tasks'):d}"))
- report.append(
- self.bullet(f"Running tasks: {request_dict.get('running_tasks'):d}"))
+ self.bullet(f"Running tasks: {request_dict.get('running_tasks')}"))
report.append(
self.bullet(
- f"Successful tasks: {request_dict.get('successful_tasks'):d}"))
+ f"Successful tasks: {request_dict.get('successful_tasks')}"))
report.append(
- self.bullet(f"Task Count: {request_dict.get('task_count'):d}"))
+ self.bullet(f"Task Count: {request_dict.get('task_count')}"))
report.append(
- self.bullet(f"Queued tasks: {request_dict.get('queued_tasks'):d}"))
+ self.bullet(f"Queued tasks: {request_dict.get('queued_tasks')}"))
report.append(
- self.bullet(f"Evidence Name: {request_dict.get('evidence_name'):d}"))
+ self.bullet(f"Evidence Name: {request_dict.get('evidence_name')}"))
report.append('')
except TypeError as exception:
log.warning(f'Error formatting the Markdown report: {exception!s}')
|
{"golden_diff": "diff --git a/turbinia/api/cli/turbinia_client/helpers/formatter.py b/turbinia/api/cli/turbinia_client/helpers/formatter.py\n--- a/turbinia/api/cli/turbinia_client/helpers/formatter.py\n+++ b/turbinia/api/cli/turbinia_client/helpers/formatter.py\n@@ -194,7 +194,7 @@\n report.append(self.heading2(task.get('name')))\n line = f\"{self.bold('Evidence:'):s} {task.get('evidence_name')!s}\"\n report.append(self.bullet(line))\n- line = f\"{self.bold('Status:'):s} {task.get('status'):s}\"\n+ line = f\"{self.bold('Status:'):s} {task.get('status')!s}\"\n report.append(self.bullet(line))\n report.append(self.bullet(f\"Task Id: {task.get('id')!s}\"))\n report.append(\n@@ -253,27 +253,26 @@\n \n try:\n report.append(\n- self.heading2(f\"Request ID: {request_dict.get('request_id')!s}\"))\n+ self.heading2(f\"Request ID: {request_dict.get('request_id')}\"))\n report.append(\n self.bullet(\n- f\"Last Update: {request_dict.get('last_task_update_time')!s}\"))\n+ f\"Last Update: {request_dict.get('last_task_update_time')}\"))\n+ report.append(self.bullet(f\"Requester: {request_dict.get('requester')}\"))\n+ report.append(self.bullet(f\"Reason: {request_dict.get('reason')}\"))\n+ report.append(self.bullet(f\"Status: {request_dict.get('status')}\"))\n report.append(\n- self.bullet(f\"Requester: {request_dict.get('requester')!s}\"))\n- report.append(self.bullet(f\"Reason: {request_dict.get('reason')!s}\"))\n- report.append(self.bullet(f\"Status: {request_dict.get('status')!s}\"))\n+ self.bullet(f\"Failed tasks: {request_dict.get('failed_tasks')}\"))\n report.append(\n- self.bullet(f\"Failed tasks: {request_dict.get('failed_tasks'):d}\"))\n- report.append(\n- self.bullet(f\"Running tasks: {request_dict.get('running_tasks'):d}\"))\n+ self.bullet(f\"Running tasks: {request_dict.get('running_tasks')}\"))\n report.append(\n self.bullet(\n- f\"Successful tasks: {request_dict.get('successful_tasks'):d}\"))\n+ f\"Successful tasks: {request_dict.get('successful_tasks')}\"))\n report.append(\n- self.bullet(f\"Task Count: {request_dict.get('task_count'):d}\"))\n+ self.bullet(f\"Task Count: {request_dict.get('task_count')}\"))\n report.append(\n- self.bullet(f\"Queued tasks: {request_dict.get('queued_tasks'):d}\"))\n+ self.bullet(f\"Queued tasks: {request_dict.get('queued_tasks')}\"))\n report.append(\n- self.bullet(f\"Evidence Name: {request_dict.get('evidence_name'):d}\"))\n+ self.bullet(f\"Evidence Name: {request_dict.get('evidence_name')}\"))\n report.append('')\n except TypeError as exception:\n log.warning(f'Error formatting the Markdown report: {exception!s}')\n", "issue": "turbinia-client status summary formatting warning\nRunning turbini-client status summary might yield the following warning for some requests:\r\n\r\n```WARNING turbinia - Error formatting the Markdown report: unsupported format string passed to NoneType.__format__```\n", "before_files": [{"content": "#!/usr/bin/env python\n#\n# Copyright 2017 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Methods for formatting 
text.\"\"\"\n\nfrom __future__ import annotations\n\nfrom abc import ABC, abstractmethod\nfrom click import echo as click_echo\n\nimport logging\nimport json\n\nlog = logging.getLogger('turbinia')\n\n\ndef echo_json(json_data: dict) -> None:\n \"\"\"Pretty print JSON data.\"\"\"\n if isinstance(json_data, dict):\n click_echo(json.dumps(json_data, indent=2))\n\n\nclass MarkdownReportComponent(ABC):\n \"\"\"Components for generating Turbinia request/task\n markdown reports.\n \"\"\"\n\n def __init__(self):\n \"\"\"Instantiates a MarkdownReportComponent object.\"\"\"\n self._components: list(MarkdownReportComponent) = []\n self._parent: MarkdownReportComponent = None\n self._report: str = None\n\n @property\n def components(self):\n \"\"\"Returns the components list.\"\"\"\n return self._components\n\n @property\n def report(self):\n \"\"\"Returns the markdown report text.\"\"\"\n return self._report\n\n @report.setter\n def report(self, report):\n self._report = report\n\n @property\n def parent(self) -> MarkdownReportComponent:\n \"\"\"Returns the parent object.\"\"\"\n return self._parent\n\n @parent.setter\n def parent(self, parent: MarkdownReportComponent):\n self._parent = parent\n\n def bold(self, text):\n \"\"\"Formats text as bold in Markdown format.\n\n Args:\n text(string): Text to format\n\n Return:\n string: Formatted text.\n \"\"\"\n return f'**{text.strip():s}**'\n\n def heading1(self, text):\n \"\"\"Formats text as heading 1 in Markdown format.\n\n Args:\n text(string): Text to format\n\n Return:\n string: Formatted text.\n \"\"\"\n return f'# {text.strip():s}'\n\n def heading2(self, text):\n \"\"\"Formats text as heading 2 in Markdown format.\n\n Args:\n text(string): Text to format\n\n Return:\n string: Formatted text.\n \"\"\"\n return f'## {text.strip():s}'\n\n def heading3(self, text):\n \"\"\"Formats text as heading 3 in Markdown format.\n\n Args:\n text(string): Text to format\n\n Return:\n string: Formatted text.\n \"\"\"\n return f'### {text.strip():s}'\n\n def heading4(self, text):\n \"\"\"Formats text as heading 4 in Markdown format.\n\n Args:\n text(string): Text to format\n\n Return:\n string: Formatted text.\n \"\"\"\n return f'#### {text.strip():s}'\n\n def heading5(self, text):\n \"\"\"Formats text as heading 5 in Markdown format.\n Args:\n text(string): Text to format\n Return:\n string: Formatted text.\n \"\"\"\n return f'##### {text.strip():s}'\n\n def bullet(self, text, level=1):\n \"\"\"Formats text as a bullet in Markdown format.\n\n Args:\n text(string): Text to format\n level(int): Indentation level.\n Return:\n string: Formatted text.\n \"\"\"\n return f\"{' ' * (level - 1):s}* {text.strip():s}\"\n\n def code(self, text):\n \"\"\"Formats text as code in Markdown format.\n\n Args:\n text(string): Text to format\n\n Return:\n string: Formatted text.\n \"\"\"\n return f'`{text.strip():s}`'\n\n def add(self, component: MarkdownReportComponent) -> None:\n \"\"\"Adds a MarkdownReportComponent object to the components list.\n\n This method should additionally set the parent object.\n \"\"\"\n pass\n\n def add_components(self, components: list[MarkdownReportComponent]) -> None:\n \"\"\"Adds multiple MarkdownReportComponent objects to the components list.\"\"\"\n pass\n\n def remove(self, component: MarkdownReportComponent) -> None:\n \"\"\"Removes a MarkdownReportComponent object from the components list.\n\n This method should set the component's object to None.\n \"\"\"\n pass\n\n @abstractmethod\n def generate_markdown(self) -> str:\n pass\n\n\nclass 
TaskMarkdownReport(MarkdownReportComponent):\n \"\"\"Turbinia Task markdown report.\"\"\"\n\n def __init__(self, request_data: dict = None):\n \"\"\"Initialize TaskMarkdownReport\"\"\"\n super().__init__()\n self._request_data: dict = request_data\n\n def generate_markdown(self) -> str:\n \"\"\"Generate a markdown report.\"\"\"\n report: list[str] = []\n task: dict = self._request_data\n if not task:\n return ''\n\n try:\n report.append(self.heading2(task.get('name')))\n line = f\"{self.bold('Evidence:'):s} {task.get('evidence_name')!s}\"\n report.append(self.bullet(line))\n line = f\"{self.bold('Status:'):s} {task.get('status'):s}\"\n report.append(self.bullet(line))\n report.append(self.bullet(f\"Task Id: {task.get('id')!s}\"))\n report.append(\n self.bullet(f\"Executed on worker {task.get('worker_name')!s}\"))\n if task.get('report_data'):\n report.append('')\n report.append(self.heading3('Task Reported Data'))\n report.extend(task.get('report_data').splitlines())\n report.append('')\n report.append(self.heading3('Saved Task Files:'))\n\n saved_paths = task.get('saved_paths')\n if saved_paths:\n for path in saved_paths:\n report.append(self.bullet(self.code(path)))\n report.append('')\n except TypeError as exception:\n log.warning(f'Error formatting the Markdown report: {exception!s}')\n\n self.report = '\\n'.join(report)\n return self.report\n\n\nclass RequestMarkdownReport(MarkdownReportComponent):\n \"\"\"Turbinia Request Markdown report.\"\"\"\n\n def __init__(self, request_data: dict):\n \"\"\"Initializes a RequestMarkdownReport object.\"\"\"\n super().__init__()\n self._request_data: dict = request_data\n\n tasks = [TaskMarkdownReport(task) for task in request_data.get('tasks')]\n self.add_components(tasks)\n\n def add(self, component: MarkdownReportComponent) -> None:\n if component:\n self.components.append(component)\n component.parent = self\n\n def remove(self, component: MarkdownReportComponent) -> None:\n self.components.remove(component)\n component.parent = None\n\n def add_components(self, components: list[MarkdownReportComponent]) -> None:\n if components:\n for component in components:\n self.components.append(component)\n component.parent = self\n\n def generate_markdown(self) -> str:\n \"\"\"Generates a Markdown version of Requests results.\"\"\"\n report: list[str] = []\n request_dict: dict = self._request_data\n if not request_dict:\n return ''\n\n try:\n report.append(\n self.heading2(f\"Request ID: {request_dict.get('request_id')!s}\"))\n report.append(\n self.bullet(\n f\"Last Update: {request_dict.get('last_task_update_time')!s}\"))\n report.append(\n self.bullet(f\"Requester: {request_dict.get('requester')!s}\"))\n report.append(self.bullet(f\"Reason: {request_dict.get('reason')!s}\"))\n report.append(self.bullet(f\"Status: {request_dict.get('status')!s}\"))\n report.append(\n self.bullet(f\"Failed tasks: {request_dict.get('failed_tasks'):d}\"))\n report.append(\n self.bullet(f\"Running tasks: {request_dict.get('running_tasks'):d}\"))\n report.append(\n self.bullet(\n f\"Successful tasks: {request_dict.get('successful_tasks'):d}\"))\n report.append(\n self.bullet(f\"Task Count: {request_dict.get('task_count'):d}\"))\n report.append(\n self.bullet(f\"Queued tasks: {request_dict.get('queued_tasks'):d}\"))\n report.append(\n self.bullet(f\"Evidence Name: {request_dict.get('evidence_name'):d}\"))\n report.append('')\n except TypeError as exception:\n log.warning(f'Error formatting the Markdown report: {exception!s}')\n\n for task in self.components:\n 
report.append(task.generate_markdown())\n\n self.report = '\\n'.join(report)\n return self.report\n\n\nclass SummaryMarkdownReport(MarkdownReportComponent):\n \"\"\"A markdown report summary of all Turbinia Requests.\"\"\"\n\n def __init__(self, requests_summary: list[dict]):\n \"\"\"Initialize SummaryMarkdownReport.\"\"\"\n super().__init__()\n self._requests_summary = requests_summary\n\n def generate_markdown(self) -> str:\n \"\"\"Generate a Markdown version of Requests summary results.\"\"\"\n report: list[str] = []\n requests_status_list = None\n if self._requests_summary:\n requests_status_list = self._requests_summary.get('requests_status')\n\n if not requests_status_list:\n return '## No requests found.'\n\n for request_dict in requests_status_list:\n request_report = RequestMarkdownReport(request_dict).generate_markdown()\n report.append(request_report)\n\n self.report = '\\n'.join(report)\n return self.report\n\n\nclass WorkersMarkdownReport(MarkdownReportComponent):\n \"\"\"A markdown report of all tasks for a specific worker.\"\"\"\n\n def __init__(self, request_data: dict):\n super().__init__()\n self._request_data: dict = request_data\n\n def generate_markdown(self) -> str:\n \"\"\"Generates a Markdown version of tasks per worker.\"\"\"\n raise NotImplementedError\n", "path": "turbinia/api/cli/turbinia_client/helpers/formatter.py"}], "after_files": [{"content": "#!/usr/bin/env python\n#\n# Copyright 2017 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Methods for formatting text.\"\"\"\n\nfrom __future__ import annotations\n\nfrom abc import ABC, abstractmethod\nfrom click import echo as click_echo\n\nimport logging\nimport json\n\nlog = logging.getLogger('turbinia')\n\n\ndef echo_json(json_data: dict) -> None:\n \"\"\"Pretty print JSON data.\"\"\"\n if isinstance(json_data, dict):\n click_echo(json.dumps(json_data, indent=2))\n\n\nclass MarkdownReportComponent(ABC):\n \"\"\"Components for generating Turbinia request/task\n markdown reports.\n \"\"\"\n\n def __init__(self):\n \"\"\"Instantiates a MarkdownReportComponent object.\"\"\"\n self._components: list(MarkdownReportComponent) = []\n self._parent: MarkdownReportComponent = None\n self._report: str = None\n\n @property\n def components(self):\n \"\"\"Returns the components list.\"\"\"\n return self._components\n\n @property\n def report(self):\n \"\"\"Returns the markdown report text.\"\"\"\n return self._report\n\n @report.setter\n def report(self, report):\n self._report = report\n\n @property\n def parent(self) -> MarkdownReportComponent:\n \"\"\"Returns the parent object.\"\"\"\n return self._parent\n\n @parent.setter\n def parent(self, parent: MarkdownReportComponent):\n self._parent = parent\n\n def bold(self, text):\n \"\"\"Formats text as bold in Markdown format.\n\n Args:\n text(string): Text to format\n\n Return:\n string: Formatted text.\n \"\"\"\n return f'**{text.strip():s}**'\n\n def heading1(self, text):\n \"\"\"Formats text as heading 1 in Markdown format.\n\n Args:\n text(string): Text 
to format\n\n Return:\n string: Formatted text.\n \"\"\"\n return f'# {text.strip():s}'\n\n def heading2(self, text):\n \"\"\"Formats text as heading 2 in Markdown format.\n\n Args:\n text(string): Text to format\n\n Return:\n string: Formatted text.\n \"\"\"\n return f'## {text.strip():s}'\n\n def heading3(self, text):\n \"\"\"Formats text as heading 3 in Markdown format.\n\n Args:\n text(string): Text to format\n\n Return:\n string: Formatted text.\n \"\"\"\n return f'### {text.strip():s}'\n\n def heading4(self, text):\n \"\"\"Formats text as heading 4 in Markdown format.\n\n Args:\n text(string): Text to format\n\n Return:\n string: Formatted text.\n \"\"\"\n return f'#### {text.strip():s}'\n\n def heading5(self, text):\n \"\"\"Formats text as heading 5 in Markdown format.\n Args:\n text(string): Text to format\n Return:\n string: Formatted text.\n \"\"\"\n return f'##### {text.strip():s}'\n\n def bullet(self, text, level=1):\n \"\"\"Formats text as a bullet in Markdown format.\n\n Args:\n text(string): Text to format\n level(int): Indentation level.\n Return:\n string: Formatted text.\n \"\"\"\n return f\"{' ' * (level - 1):s}* {text.strip():s}\"\n\n def code(self, text):\n \"\"\"Formats text as code in Markdown format.\n\n Args:\n text(string): Text to format\n\n Return:\n string: Formatted text.\n \"\"\"\n return f'`{text.strip():s}`'\n\n def add(self, component: MarkdownReportComponent) -> None:\n \"\"\"Adds a MarkdownReportComponent object to the components list.\n\n This method should additionally set the parent object.\n \"\"\"\n pass\n\n def add_components(self, components: list[MarkdownReportComponent]) -> None:\n \"\"\"Adds multiple MarkdownReportComponent objects to the components list.\"\"\"\n pass\n\n def remove(self, component: MarkdownReportComponent) -> None:\n \"\"\"Removes a MarkdownReportComponent object from the components list.\n\n This method should set the component's object to None.\n \"\"\"\n pass\n\n @abstractmethod\n def generate_markdown(self) -> str:\n pass\n\n\nclass TaskMarkdownReport(MarkdownReportComponent):\n \"\"\"Turbinia Task markdown report.\"\"\"\n\n def __init__(self, request_data: dict = None):\n \"\"\"Initialize TaskMarkdownReport\"\"\"\n super().__init__()\n self._request_data: dict = request_data\n\n def generate_markdown(self) -> str:\n \"\"\"Generate a markdown report.\"\"\"\n report: list[str] = []\n task: dict = self._request_data\n if not task:\n return ''\n\n try:\n report.append(self.heading2(task.get('name')))\n line = f\"{self.bold('Evidence:'):s} {task.get('evidence_name')!s}\"\n report.append(self.bullet(line))\n line = f\"{self.bold('Status:'):s} {task.get('status')!s}\"\n report.append(self.bullet(line))\n report.append(self.bullet(f\"Task Id: {task.get('id')!s}\"))\n report.append(\n self.bullet(f\"Executed on worker {task.get('worker_name')!s}\"))\n if task.get('report_data'):\n report.append('')\n report.append(self.heading3('Task Reported Data'))\n report.extend(task.get('report_data').splitlines())\n report.append('')\n report.append(self.heading3('Saved Task Files:'))\n\n saved_paths = task.get('saved_paths')\n if saved_paths:\n for path in saved_paths:\n report.append(self.bullet(self.code(path)))\n report.append('')\n except TypeError as exception:\n log.warning(f'Error formatting the Markdown report: {exception!s}')\n\n self.report = '\\n'.join(report)\n return self.report\n\n\nclass RequestMarkdownReport(MarkdownReportComponent):\n \"\"\"Turbinia Request Markdown report.\"\"\"\n\n def __init__(self, request_data: 
dict):\n \"\"\"Initializes a RequestMarkdownReport object.\"\"\"\n super().__init__()\n self._request_data: dict = request_data\n\n tasks = [TaskMarkdownReport(task) for task in request_data.get('tasks')]\n self.add_components(tasks)\n\n def add(self, component: MarkdownReportComponent) -> None:\n if component:\n self.components.append(component)\n component.parent = self\n\n def remove(self, component: MarkdownReportComponent) -> None:\n self.components.remove(component)\n component.parent = None\n\n def add_components(self, components: list[MarkdownReportComponent]) -> None:\n if components:\n for component in components:\n self.components.append(component)\n component.parent = self\n\n def generate_markdown(self) -> str:\n \"\"\"Generates a Markdown version of Requests results.\"\"\"\n report: list[str] = []\n request_dict: dict = self._request_data\n if not request_dict:\n return ''\n\n try:\n report.append(\n self.heading2(f\"Request ID: {request_dict.get('request_id')}\"))\n report.append(\n self.bullet(\n f\"Last Update: {request_dict.get('last_task_update_time')}\"))\n report.append(self.bullet(f\"Requester: {request_dict.get('requester')}\"))\n report.append(self.bullet(f\"Reason: {request_dict.get('reason')}\"))\n report.append(self.bullet(f\"Status: {request_dict.get('status')}\"))\n report.append(\n self.bullet(f\"Failed tasks: {request_dict.get('failed_tasks')}\"))\n report.append(\n self.bullet(f\"Running tasks: {request_dict.get('running_tasks')}\"))\n report.append(\n self.bullet(\n f\"Successful tasks: {request_dict.get('successful_tasks')}\"))\n report.append(\n self.bullet(f\"Task Count: {request_dict.get('task_count')}\"))\n report.append(\n self.bullet(f\"Queued tasks: {request_dict.get('queued_tasks')}\"))\n report.append(\n self.bullet(f\"Evidence Name: {request_dict.get('evidence_name')}\"))\n report.append('')\n except TypeError as exception:\n log.warning(f'Error formatting the Markdown report: {exception!s}')\n\n for task in self.components:\n report.append(task.generate_markdown())\n\n self.report = '\\n'.join(report)\n return self.report\n\n\nclass SummaryMarkdownReport(MarkdownReportComponent):\n \"\"\"A markdown report summary of all Turbinia Requests.\"\"\"\n\n def __init__(self, requests_summary: list[dict]):\n \"\"\"Initialize SummaryMarkdownReport.\"\"\"\n super().__init__()\n self._requests_summary = requests_summary\n\n def generate_markdown(self) -> str:\n \"\"\"Generate a Markdown version of Requests summary results.\"\"\"\n report: list[str] = []\n requests_status_list = None\n if self._requests_summary:\n requests_status_list = self._requests_summary.get('requests_status')\n\n if not requests_status_list:\n return '## No requests found.'\n\n for request_dict in requests_status_list:\n request_report = RequestMarkdownReport(request_dict).generate_markdown()\n report.append(request_report)\n\n self.report = '\\n'.join(report)\n return self.report\n\n\nclass WorkersMarkdownReport(MarkdownReportComponent):\n \"\"\"A markdown report of all tasks for a specific worker.\"\"\"\n\n def __init__(self, request_data: dict):\n super().__init__()\n self._request_data: dict = request_data\n\n def generate_markdown(self) -> str:\n \"\"\"Generates a Markdown version of tasks per worker.\"\"\"\n raise NotImplementedError\n", "path": "turbinia/api/cli/turbinia_client/helpers/formatter.py"}]}
| 3,394 | 743 |
gh_patches_debug_19790 | rasdani/github-patches | git_diff | pre-commit__pre-commit-1570 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
special option to `pre-commit run` which works against a readonly filesystem
(mostly as a hardening measure for [pre-commit.ci](https://pre-commit.ci))
I believe the only current r/w operation on the store is to mark the configuration as used. this could be skipped and installations could raise an error
(I might be missing something)
easiest reproduction is via docker:
```console
$ docker run --rm -ti -e PRE_COMMIT_HOME=/pc --workdir /src -v ~/workspace/pre-commit:/src:ro -v $PWD/pc:/pc:rw $IMG pre-commit install-hooks
[INFO] Initializing environment for https://github.com/pre-commit/pre-commit-hooks.
[INFO] Initializing environment for https://gitlab.com/pycqa/flake8.
[INFO] Initializing environment for https://gitlab.com/pycqa/flake8:flake8-typing-imports==1.6.0.
[INFO] Initializing environment for https://github.com/pre-commit/mirrors-autopep8.
[INFO] Initializing environment for https://github.com/pre-commit/pre-commit.
[INFO] Initializing environment for https://github.com/asottile/pyupgrade.
[INFO] Initializing environment for https://github.com/asottile/reorder_python_imports.
[INFO] Initializing environment for https://github.com/asottile/add-trailing-comma.
[INFO] Initializing environment for https://github.com/asottile/setup-cfg-fmt.
[INFO] Initializing environment for https://github.com/pre-commit/mirrors-mypy.
[INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://gitlab.com/pycqa/flake8.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/pre-commit/mirrors-autopep8.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/pre-commit/pre-commit.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/asottile/pyupgrade.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/asottile/reorder_python_imports.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/asottile/add-trailing-comma.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/asottile/setup-cfg-fmt.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/pre-commit/mirrors-mypy.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
$ docker run --rm -ti -e PRE_COMMIT_HOME=/pc --workdir /src -v ~/workspace/pre-commit:/src:ro -v $PWD/pc:/pc:ro $IMG pre-commit run --all-files
An unexpected error has occurred: OperationalError: attempt to write a readonly database
Check the log at /pc/pre-commit.log
Traceback (most recent call last):
File "/venv/lib/python3.8/site-packages/pre_commit/error_handler.py", line 56, in error_handler
yield
File "/venv/lib/python3.8/site-packages/pre_commit/main.py", line 351, in main
store.mark_config_used(args.config)
File "/venv/lib/python3.8/site-packages/pre_commit/store.py", line 228, in mark_config_used
db.execute('INSERT OR IGNORE INTO configs VALUES (?)', (path,))
sqlite3.OperationalError: attempt to write a readonly database
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/venv/bin/pre-commit", line 8, in <module>
sys.exit(main())
File "/venv/lib/python3.8/site-packages/pre_commit/main.py", line 404, in main
raise AssertionError(
File "/usr/lib/python3.8/contextlib.py", line 131, in __exit__
self.gen.throw(type, value, traceback)
File "/venv/lib/python3.8/site-packages/pre_commit/error_handler.py", line 64, in error_handler
_log_and_exit(msg, e, traceback.format_exc())
File "/venv/lib/python3.8/site-packages/pre_commit/error_handler.py", line 24, in _log_and_exit
with open(log_path, 'wb') as log:
OSError: [Errno 30] Read-only file system: '/pc/pre-commit.log'
```
may also need another place to put `pre-commit.log` as well 🤔
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/store.py`
Content:
```
1 import contextlib
2 import logging
3 import os.path
4 import sqlite3
5 import tempfile
6 from typing import Callable
7 from typing import Generator
8 from typing import List
9 from typing import Optional
10 from typing import Sequence
11 from typing import Tuple
12
13 import pre_commit.constants as C
14 from pre_commit import file_lock
15 from pre_commit import git
16 from pre_commit.util import CalledProcessError
17 from pre_commit.util import clean_path_on_failure
18 from pre_commit.util import cmd_output_b
19 from pre_commit.util import resource_text
20 from pre_commit.util import rmtree
21
22
23 logger = logging.getLogger('pre_commit')
24
25
26 def _get_default_directory() -> str:
27 """Returns the default directory for the Store. This is intentionally
28 underscored to indicate that `Store.get_default_directory` is the intended
29 way to get this information. This is also done so
30 `Store.get_default_directory` can be mocked in tests and
31 `_get_default_directory` can be tested.
32 """
33 ret = os.environ.get('PRE_COMMIT_HOME') or os.path.join(
34 os.environ.get('XDG_CACHE_HOME') or os.path.expanduser('~/.cache'),
35 'pre-commit',
36 )
37 return os.path.realpath(ret)
38
39
40 class Store:
41 get_default_directory = staticmethod(_get_default_directory)
42
43 def __init__(self, directory: Optional[str] = None) -> None:
44 self.directory = directory or Store.get_default_directory()
45 self.db_path = os.path.join(self.directory, 'db.db')
46
47 if not os.path.exists(self.directory):
48 os.makedirs(self.directory, exist_ok=True)
49 with open(os.path.join(self.directory, 'README'), 'w') as f:
50 f.write(
51 'This directory is maintained by the pre-commit project.\n'
52 'Learn more: https://github.com/pre-commit/pre-commit\n',
53 )
54
55 if os.path.exists(self.db_path):
56 return
57 with self.exclusive_lock():
58 # Another process may have already completed this work
59 if os.path.exists(self.db_path): # pragma: no cover (race)
60 return
61 # To avoid a race where someone ^Cs between db creation and
62 # execution of the CREATE TABLE statement
63 fd, tmpfile = tempfile.mkstemp(dir=self.directory)
64 # We'll be managing this file ourselves
65 os.close(fd)
66 with self.connect(db_path=tmpfile) as db:
67 db.executescript(
68 'CREATE TABLE repos ('
69 ' repo TEXT NOT NULL,'
70 ' ref TEXT NOT NULL,'
71 ' path TEXT NOT NULL,'
72 ' PRIMARY KEY (repo, ref)'
73 ');',
74 )
75 self._create_config_table(db)
76
77 # Atomic file move
78 os.rename(tmpfile, self.db_path)
79
80 @contextlib.contextmanager
81 def exclusive_lock(self) -> Generator[None, None, None]:
82 def blocked_cb() -> None: # pragma: no cover (tests are in-process)
83 logger.info('Locking pre-commit directory')
84
85 with file_lock.lock(os.path.join(self.directory, '.lock'), blocked_cb):
86 yield
87
88 @contextlib.contextmanager
89 def connect(
90 self,
91 db_path: Optional[str] = None,
92 ) -> Generator[sqlite3.Connection, None, None]:
93 db_path = db_path or self.db_path
94 # sqlite doesn't close its fd with its contextmanager >.<
95 # contextlib.closing fixes this.
96 # See: https://stackoverflow.com/a/28032829/812183
97 with contextlib.closing(sqlite3.connect(db_path)) as db:
98 # this creates a transaction
99 with db:
100 yield db
101
102 @classmethod
103 def db_repo_name(cls, repo: str, deps: Sequence[str]) -> str:
104 if deps:
105 return f'{repo}:{",".join(sorted(deps))}'
106 else:
107 return repo
108
109 def _new_repo(
110 self,
111 repo: str,
112 ref: str,
113 deps: Sequence[str],
114 make_strategy: Callable[[str], None],
115 ) -> str:
116 repo = self.db_repo_name(repo, deps)
117
118 def _get_result() -> Optional[str]:
119 # Check if we already exist
120 with self.connect() as db:
121 result = db.execute(
122 'SELECT path FROM repos WHERE repo = ? AND ref = ?',
123 (repo, ref),
124 ).fetchone()
125 return result[0] if result else None
126
127 result = _get_result()
128 if result:
129 return result
130 with self.exclusive_lock():
131 # Another process may have already completed this work
132 result = _get_result()
133 if result: # pragma: no cover (race)
134 return result
135
136 logger.info(f'Initializing environment for {repo}.')
137
138 directory = tempfile.mkdtemp(prefix='repo', dir=self.directory)
139 with clean_path_on_failure(directory):
140 make_strategy(directory)
141
142 # Update our db with the created repo
143 with self.connect() as db:
144 db.execute(
145 'INSERT INTO repos (repo, ref, path) VALUES (?, ?, ?)',
146 [repo, ref, directory],
147 )
148 return directory
149
150 def _complete_clone(self, ref: str, git_cmd: Callable[..., None]) -> None:
151 """Perform a complete clone of a repository and its submodules """
152
153 git_cmd('fetch', 'origin', '--tags')
154 git_cmd('checkout', ref)
155 git_cmd('submodule', 'update', '--init', '--recursive')
156
157 def _shallow_clone(self, ref: str, git_cmd: Callable[..., None]) -> None:
158 """Perform a shallow clone of a repository and its submodules """
159
160 git_config = 'protocol.version=2'
161 git_cmd('-c', git_config, 'fetch', 'origin', ref, '--depth=1')
162 git_cmd('checkout', 'FETCH_HEAD')
163 git_cmd(
164 '-c', git_config, 'submodule', 'update', '--init', '--recursive',
165 '--depth=1',
166 )
167
168 def clone(self, repo: str, ref: str, deps: Sequence[str] = ()) -> str:
169 """Clone the given url and checkout the specific ref."""
170
171 def clone_strategy(directory: str) -> None:
172 git.init_repo(directory, repo)
173 env = git.no_git_env()
174
175 def _git_cmd(*args: str) -> None:
176 cmd_output_b('git', *args, cwd=directory, env=env)
177
178 try:
179 self._shallow_clone(ref, _git_cmd)
180 except CalledProcessError:
181 self._complete_clone(ref, _git_cmd)
182
183 return self._new_repo(repo, ref, deps, clone_strategy)
184
185 LOCAL_RESOURCES = (
186 'Cargo.toml', 'main.go', 'go.mod', 'main.rs', '.npmignore',
187 'package.json', 'pre_commit_dummy_package.gemspec', 'setup.py',
188 'environment.yml', 'Makefile.PL',
189 )
190
191 def make_local(self, deps: Sequence[str]) -> str:
192 def make_local_strategy(directory: str) -> None:
193 for resource in self.LOCAL_RESOURCES:
194 contents = resource_text(f'empty_template_{resource}')
195 with open(os.path.join(directory, resource), 'w') as f:
196 f.write(contents)
197
198 env = git.no_git_env()
199
200 # initialize the git repository so it looks more like cloned repos
201 def _git_cmd(*args: str) -> None:
202 cmd_output_b('git', *args, cwd=directory, env=env)
203
204 git.init_repo(directory, '<<unknown>>')
205 _git_cmd('add', '.')
206 git.commit(repo=directory)
207
208 return self._new_repo(
209 'local', C.LOCAL_REPO_VERSION, deps, make_local_strategy,
210 )
211
212 def _create_config_table(self, db: sqlite3.Connection) -> None:
213 db.executescript(
214 'CREATE TABLE IF NOT EXISTS configs ('
215 ' path TEXT NOT NULL,'
216 ' PRIMARY KEY (path)'
217 ');',
218 )
219
220 def mark_config_used(self, path: str) -> None:
221 path = os.path.realpath(path)
222 # don't insert config files that do not exist
223 if not os.path.exists(path):
224 return
225 with self.connect() as db:
226 # TODO: eventually remove this and only create in _create
227 self._create_config_table(db)
228 db.execute('INSERT OR IGNORE INTO configs VALUES (?)', (path,))
229
230 def select_all_configs(self) -> List[str]:
231 with self.connect() as db:
232 self._create_config_table(db)
233 rows = db.execute('SELECT path FROM configs').fetchall()
234 return [path for path, in rows]
235
236 def delete_configs(self, configs: List[str]) -> None:
237 with self.connect() as db:
238 rows = [(path,) for path in configs]
239 db.executemany('DELETE FROM configs WHERE path = ?', rows)
240
241 def select_all_repos(self) -> List[Tuple[str, str, str]]:
242 with self.connect() as db:
243 return db.execute('SELECT repo, ref, path from repos').fetchall()
244
245 def delete_repo(self, db_repo_name: str, ref: str, path: str) -> None:
246 with self.connect() as db:
247 db.execute(
248 'DELETE FROM repos WHERE repo = ? and ref = ?',
249 (db_repo_name, ref),
250 )
251 rmtree(path)
252
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pre_commit/store.py b/pre_commit/store.py
--- a/pre_commit/store.py
+++ b/pre_commit/store.py
@@ -43,6 +43,10 @@
def __init__(self, directory: Optional[str] = None) -> None:
self.directory = directory or Store.get_default_directory()
self.db_path = os.path.join(self.directory, 'db.db')
+ self.readonly = (
+ os.path.exists(self.directory) and
+ not os.access(self.directory, os.W_OK)
+ )
if not os.path.exists(self.directory):
os.makedirs(self.directory, exist_ok=True)
@@ -218,6 +222,8 @@
)
def mark_config_used(self, path: str) -> None:
+ if self.readonly: # pragma: win32 no cover
+ return
path = os.path.realpath(path)
# don't insert config files that do not exist
if not os.path.exists(path):
|
{"golden_diff": "diff --git a/pre_commit/store.py b/pre_commit/store.py\n--- a/pre_commit/store.py\n+++ b/pre_commit/store.py\n@@ -43,6 +43,10 @@\n def __init__(self, directory: Optional[str] = None) -> None:\n self.directory = directory or Store.get_default_directory()\n self.db_path = os.path.join(self.directory, 'db.db')\n+ self.readonly = (\n+ os.path.exists(self.directory) and\n+ not os.access(self.directory, os.W_OK)\n+ )\n \n if not os.path.exists(self.directory):\n os.makedirs(self.directory, exist_ok=True)\n@@ -218,6 +222,8 @@\n )\n \n def mark_config_used(self, path: str) -> None:\n+ if self.readonly: # pragma: win32 no cover\n+ return\n path = os.path.realpath(path)\n # don't insert config files that do not exist\n if not os.path.exists(path):\n", "issue": "special option to `pre-commit run` which works against a readonly filesystem\n(mostly as a hardening measure for [pre-commit.ci](https://pre-commit.ci))\r\n\r\nI believe the only current r/w operation on the store is to mark the configuration as used. this could be skipped and installations could raise an error\r\n\r\n(I might be missing something)\r\n\r\neasiest reproduction is via docker:\r\n\r\n```console\r\n$ docker run --rm -ti -e PRE_COMMIT_HOME=/pc --workdir /src -v ~/workspace/pre-commit:/src:ro -v $PWD/pc:/pc:rw $IMG pre-commit install-hooks\r\n[INFO] Initializing environment for https://github.com/pre-commit/pre-commit-hooks.\r\n[INFO] Initializing environment for https://gitlab.com/pycqa/flake8.\r\n[INFO] Initializing environment for https://gitlab.com/pycqa/flake8:flake8-typing-imports==1.6.0.\r\n[INFO] Initializing environment for https://github.com/pre-commit/mirrors-autopep8.\r\n[INFO] Initializing environment for https://github.com/pre-commit/pre-commit.\r\n[INFO] Initializing environment for https://github.com/asottile/pyupgrade.\r\n[INFO] Initializing environment for https://github.com/asottile/reorder_python_imports.\r\n[INFO] Initializing environment for https://github.com/asottile/add-trailing-comma.\r\n[INFO] Initializing environment for https://github.com/asottile/setup-cfg-fmt.\r\n[INFO] Initializing environment for https://github.com/pre-commit/mirrors-mypy.\r\n[INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks.\r\n[INFO] Once installed this environment will be reused.\r\n[INFO] This may take a few minutes...\r\n[INFO] Installing environment for https://gitlab.com/pycqa/flake8.\r\n[INFO] Once installed this environment will be reused.\r\n[INFO] This may take a few minutes...\r\n[INFO] Installing environment for https://github.com/pre-commit/mirrors-autopep8.\r\n[INFO] Once installed this environment will be reused.\r\n[INFO] This may take a few minutes...\r\n[INFO] Installing environment for https://github.com/pre-commit/pre-commit.\r\n[INFO] Once installed this environment will be reused.\r\n[INFO] This may take a few minutes...\r\n[INFO] Installing environment for https://github.com/asottile/pyupgrade.\r\n[INFO] Once installed this environment will be reused.\r\n[INFO] This may take a few minutes...\r\n[INFO] Installing environment for https://github.com/asottile/reorder_python_imports.\r\n[INFO] Once installed this environment will be reused.\r\n[INFO] This may take a few minutes...\r\n[INFO] Installing environment for https://github.com/asottile/add-trailing-comma.\r\n[INFO] Once installed this environment will be reused.\r\n[INFO] This may take a few minutes...\r\n[INFO] Installing environment for https://github.com/asottile/setup-cfg-fmt.\r\n[INFO] Once installed 
this environment will be reused.\r\n[INFO] This may take a few minutes...\r\n[INFO] Installing environment for https://github.com/pre-commit/mirrors-mypy.\r\n[INFO] Once installed this environment will be reused.\r\n[INFO] This may take a few minutes...\r\n$ docker run --rm -ti -e PRE_COMMIT_HOME=/pc --workdir /src -v ~/workspace/pre-commit:/src:ro -v $PWD/pc:/pc:ro $IMG pre-commit run --all-files\r\nAn unexpected error has occurred: OperationalError: attempt to write a readonly database\r\nCheck the log at /pc/pre-commit.log\r\nTraceback (most recent call last):\r\n File \"/venv/lib/python3.8/site-packages/pre_commit/error_handler.py\", line 56, in error_handler\r\n yield\r\n File \"/venv/lib/python3.8/site-packages/pre_commit/main.py\", line 351, in main\r\n store.mark_config_used(args.config)\r\n File \"/venv/lib/python3.8/site-packages/pre_commit/store.py\", line 228, in mark_config_used\r\n db.execute('INSERT OR IGNORE INTO configs VALUES (?)', (path,))\r\nsqlite3.OperationalError: attempt to write a readonly database\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/venv/bin/pre-commit\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/venv/lib/python3.8/site-packages/pre_commit/main.py\", line 404, in main\r\n raise AssertionError(\r\n File \"/usr/lib/python3.8/contextlib.py\", line 131, in __exit__\r\n self.gen.throw(type, value, traceback)\r\n File \"/venv/lib/python3.8/site-packages/pre_commit/error_handler.py\", line 64, in error_handler\r\n _log_and_exit(msg, e, traceback.format_exc())\r\n File \"/venv/lib/python3.8/site-packages/pre_commit/error_handler.py\", line 24, in _log_and_exit\r\n with open(log_path, 'wb') as log:\r\nOSError: [Errno 30] Read-only file system: '/pc/pre-commit.log'\r\n```\r\n\r\nmay also need another place to put `pre-commit.log` as well \ud83e\udd14 \n", "before_files": [{"content": "import contextlib\nimport logging\nimport os.path\nimport sqlite3\nimport tempfile\nfrom typing import Callable\nfrom typing import Generator\nfrom typing import List\nfrom typing import Optional\nfrom typing import Sequence\nfrom typing import Tuple\n\nimport pre_commit.constants as C\nfrom pre_commit import file_lock\nfrom pre_commit import git\nfrom pre_commit.util import CalledProcessError\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output_b\nfrom pre_commit.util import resource_text\nfrom pre_commit.util import rmtree\n\n\nlogger = logging.getLogger('pre_commit')\n\n\ndef _get_default_directory() -> str:\n \"\"\"Returns the default directory for the Store. This is intentionally\n underscored to indicate that `Store.get_default_directory` is the intended\n way to get this information. 
This is also done so\n `Store.get_default_directory` can be mocked in tests and\n `_get_default_directory` can be tested.\n \"\"\"\n ret = os.environ.get('PRE_COMMIT_HOME') or os.path.join(\n os.environ.get('XDG_CACHE_HOME') or os.path.expanduser('~/.cache'),\n 'pre-commit',\n )\n return os.path.realpath(ret)\n\n\nclass Store:\n get_default_directory = staticmethod(_get_default_directory)\n\n def __init__(self, directory: Optional[str] = None) -> None:\n self.directory = directory or Store.get_default_directory()\n self.db_path = os.path.join(self.directory, 'db.db')\n\n if not os.path.exists(self.directory):\n os.makedirs(self.directory, exist_ok=True)\n with open(os.path.join(self.directory, 'README'), 'w') as f:\n f.write(\n 'This directory is maintained by the pre-commit project.\\n'\n 'Learn more: https://github.com/pre-commit/pre-commit\\n',\n )\n\n if os.path.exists(self.db_path):\n return\n with self.exclusive_lock():\n # Another process may have already completed this work\n if os.path.exists(self.db_path): # pragma: no cover (race)\n return\n # To avoid a race where someone ^Cs between db creation and\n # execution of the CREATE TABLE statement\n fd, tmpfile = tempfile.mkstemp(dir=self.directory)\n # We'll be managing this file ourselves\n os.close(fd)\n with self.connect(db_path=tmpfile) as db:\n db.executescript(\n 'CREATE TABLE repos ('\n ' repo TEXT NOT NULL,'\n ' ref TEXT NOT NULL,'\n ' path TEXT NOT NULL,'\n ' PRIMARY KEY (repo, ref)'\n ');',\n )\n self._create_config_table(db)\n\n # Atomic file move\n os.rename(tmpfile, self.db_path)\n\n @contextlib.contextmanager\n def exclusive_lock(self) -> Generator[None, None, None]:\n def blocked_cb() -> None: # pragma: no cover (tests are in-process)\n logger.info('Locking pre-commit directory')\n\n with file_lock.lock(os.path.join(self.directory, '.lock'), blocked_cb):\n yield\n\n @contextlib.contextmanager\n def connect(\n self,\n db_path: Optional[str] = None,\n ) -> Generator[sqlite3.Connection, None, None]:\n db_path = db_path or self.db_path\n # sqlite doesn't close its fd with its contextmanager >.<\n # contextlib.closing fixes this.\n # See: https://stackoverflow.com/a/28032829/812183\n with contextlib.closing(sqlite3.connect(db_path)) as db:\n # this creates a transaction\n with db:\n yield db\n\n @classmethod\n def db_repo_name(cls, repo: str, deps: Sequence[str]) -> str:\n if deps:\n return f'{repo}:{\",\".join(sorted(deps))}'\n else:\n return repo\n\n def _new_repo(\n self,\n repo: str,\n ref: str,\n deps: Sequence[str],\n make_strategy: Callable[[str], None],\n ) -> str:\n repo = self.db_repo_name(repo, deps)\n\n def _get_result() -> Optional[str]:\n # Check if we already exist\n with self.connect() as db:\n result = db.execute(\n 'SELECT path FROM repos WHERE repo = ? 
AND ref = ?',\n (repo, ref),\n ).fetchone()\n return result[0] if result else None\n\n result = _get_result()\n if result:\n return result\n with self.exclusive_lock():\n # Another process may have already completed this work\n result = _get_result()\n if result: # pragma: no cover (race)\n return result\n\n logger.info(f'Initializing environment for {repo}.')\n\n directory = tempfile.mkdtemp(prefix='repo', dir=self.directory)\n with clean_path_on_failure(directory):\n make_strategy(directory)\n\n # Update our db with the created repo\n with self.connect() as db:\n db.execute(\n 'INSERT INTO repos (repo, ref, path) VALUES (?, ?, ?)',\n [repo, ref, directory],\n )\n return directory\n\n def _complete_clone(self, ref: str, git_cmd: Callable[..., None]) -> None:\n \"\"\"Perform a complete clone of a repository and its submodules \"\"\"\n\n git_cmd('fetch', 'origin', '--tags')\n git_cmd('checkout', ref)\n git_cmd('submodule', 'update', '--init', '--recursive')\n\n def _shallow_clone(self, ref: str, git_cmd: Callable[..., None]) -> None:\n \"\"\"Perform a shallow clone of a repository and its submodules \"\"\"\n\n git_config = 'protocol.version=2'\n git_cmd('-c', git_config, 'fetch', 'origin', ref, '--depth=1')\n git_cmd('checkout', 'FETCH_HEAD')\n git_cmd(\n '-c', git_config, 'submodule', 'update', '--init', '--recursive',\n '--depth=1',\n )\n\n def clone(self, repo: str, ref: str, deps: Sequence[str] = ()) -> str:\n \"\"\"Clone the given url and checkout the specific ref.\"\"\"\n\n def clone_strategy(directory: str) -> None:\n git.init_repo(directory, repo)\n env = git.no_git_env()\n\n def _git_cmd(*args: str) -> None:\n cmd_output_b('git', *args, cwd=directory, env=env)\n\n try:\n self._shallow_clone(ref, _git_cmd)\n except CalledProcessError:\n self._complete_clone(ref, _git_cmd)\n\n return self._new_repo(repo, ref, deps, clone_strategy)\n\n LOCAL_RESOURCES = (\n 'Cargo.toml', 'main.go', 'go.mod', 'main.rs', '.npmignore',\n 'package.json', 'pre_commit_dummy_package.gemspec', 'setup.py',\n 'environment.yml', 'Makefile.PL',\n )\n\n def make_local(self, deps: Sequence[str]) -> str:\n def make_local_strategy(directory: str) -> None:\n for resource in self.LOCAL_RESOURCES:\n contents = resource_text(f'empty_template_{resource}')\n with open(os.path.join(directory, resource), 'w') as f:\n f.write(contents)\n\n env = git.no_git_env()\n\n # initialize the git repository so it looks more like cloned repos\n def _git_cmd(*args: str) -> None:\n cmd_output_b('git', *args, cwd=directory, env=env)\n\n git.init_repo(directory, '<<unknown>>')\n _git_cmd('add', '.')\n git.commit(repo=directory)\n\n return self._new_repo(\n 'local', C.LOCAL_REPO_VERSION, deps, make_local_strategy,\n )\n\n def _create_config_table(self, db: sqlite3.Connection) -> None:\n db.executescript(\n 'CREATE TABLE IF NOT EXISTS configs ('\n ' path TEXT NOT NULL,'\n ' PRIMARY KEY (path)'\n ');',\n )\n\n def mark_config_used(self, path: str) -> None:\n path = os.path.realpath(path)\n # don't insert config files that do not exist\n if not os.path.exists(path):\n return\n with self.connect() as db:\n # TODO: eventually remove this and only create in _create\n self._create_config_table(db)\n db.execute('INSERT OR IGNORE INTO configs VALUES (?)', (path,))\n\n def select_all_configs(self) -> List[str]:\n with self.connect() as db:\n self._create_config_table(db)\n rows = db.execute('SELECT path FROM configs').fetchall()\n return [path for path, in rows]\n\n def delete_configs(self, configs: List[str]) -> None:\n with self.connect() as db:\n 
rows = [(path,) for path in configs]\n db.executemany('DELETE FROM configs WHERE path = ?', rows)\n\n def select_all_repos(self) -> List[Tuple[str, str, str]]:\n with self.connect() as db:\n return db.execute('SELECT repo, ref, path from repos').fetchall()\n\n def delete_repo(self, db_repo_name: str, ref: str, path: str) -> None:\n with self.connect() as db:\n db.execute(\n 'DELETE FROM repos WHERE repo = ? and ref = ?',\n (db_repo_name, ref),\n )\n rmtree(path)\n", "path": "pre_commit/store.py"}], "after_files": [{"content": "import contextlib\nimport logging\nimport os.path\nimport sqlite3\nimport tempfile\nfrom typing import Callable\nfrom typing import Generator\nfrom typing import List\nfrom typing import Optional\nfrom typing import Sequence\nfrom typing import Tuple\n\nimport pre_commit.constants as C\nfrom pre_commit import file_lock\nfrom pre_commit import git\nfrom pre_commit.util import CalledProcessError\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output_b\nfrom pre_commit.util import resource_text\nfrom pre_commit.util import rmtree\n\n\nlogger = logging.getLogger('pre_commit')\n\n\ndef _get_default_directory() -> str:\n \"\"\"Returns the default directory for the Store. This is intentionally\n underscored to indicate that `Store.get_default_directory` is the intended\n way to get this information. This is also done so\n `Store.get_default_directory` can be mocked in tests and\n `_get_default_directory` can be tested.\n \"\"\"\n ret = os.environ.get('PRE_COMMIT_HOME') or os.path.join(\n os.environ.get('XDG_CACHE_HOME') or os.path.expanduser('~/.cache'),\n 'pre-commit',\n )\n return os.path.realpath(ret)\n\n\nclass Store:\n get_default_directory = staticmethod(_get_default_directory)\n\n def __init__(self, directory: Optional[str] = None) -> None:\n self.directory = directory or Store.get_default_directory()\n self.db_path = os.path.join(self.directory, 'db.db')\n self.readonly = (\n os.path.exists(self.directory) and\n not os.access(self.directory, os.W_OK)\n )\n\n if not os.path.exists(self.directory):\n os.makedirs(self.directory, exist_ok=True)\n with open(os.path.join(self.directory, 'README'), 'w') as f:\n f.write(\n 'This directory is maintained by the pre-commit project.\\n'\n 'Learn more: https://github.com/pre-commit/pre-commit\\n',\n )\n\n if os.path.exists(self.db_path):\n return\n with self.exclusive_lock():\n # Another process may have already completed this work\n if os.path.exists(self.db_path): # pragma: no cover (race)\n return\n # To avoid a race where someone ^Cs between db creation and\n # execution of the CREATE TABLE statement\n fd, tmpfile = tempfile.mkstemp(dir=self.directory)\n # We'll be managing this file ourselves\n os.close(fd)\n with self.connect(db_path=tmpfile) as db:\n db.executescript(\n 'CREATE TABLE repos ('\n ' repo TEXT NOT NULL,'\n ' ref TEXT NOT NULL,'\n ' path TEXT NOT NULL,'\n ' PRIMARY KEY (repo, ref)'\n ');',\n )\n self._create_config_table(db)\n\n # Atomic file move\n os.rename(tmpfile, self.db_path)\n\n @contextlib.contextmanager\n def exclusive_lock(self) -> Generator[None, None, None]:\n def blocked_cb() -> None: # pragma: no cover (tests are in-process)\n logger.info('Locking pre-commit directory')\n\n with file_lock.lock(os.path.join(self.directory, '.lock'), blocked_cb):\n yield\n\n @contextlib.contextmanager\n def connect(\n self,\n db_path: Optional[str] = None,\n ) -> Generator[sqlite3.Connection, None, None]:\n db_path = db_path or self.db_path\n # sqlite doesn't close its fd with 
its contextmanager >.<\n # contextlib.closing fixes this.\n # See: https://stackoverflow.com/a/28032829/812183\n with contextlib.closing(sqlite3.connect(db_path)) as db:\n # this creates a transaction\n with db:\n yield db\n\n @classmethod\n def db_repo_name(cls, repo: str, deps: Sequence[str]) -> str:\n if deps:\n return f'{repo}:{\",\".join(sorted(deps))}'\n else:\n return repo\n\n def _new_repo(\n self,\n repo: str,\n ref: str,\n deps: Sequence[str],\n make_strategy: Callable[[str], None],\n ) -> str:\n repo = self.db_repo_name(repo, deps)\n\n def _get_result() -> Optional[str]:\n # Check if we already exist\n with self.connect() as db:\n result = db.execute(\n 'SELECT path FROM repos WHERE repo = ? AND ref = ?',\n (repo, ref),\n ).fetchone()\n return result[0] if result else None\n\n result = _get_result()\n if result:\n return result\n with self.exclusive_lock():\n # Another process may have already completed this work\n result = _get_result()\n if result: # pragma: no cover (race)\n return result\n\n logger.info(f'Initializing environment for {repo}.')\n\n directory = tempfile.mkdtemp(prefix='repo', dir=self.directory)\n with clean_path_on_failure(directory):\n make_strategy(directory)\n\n # Update our db with the created repo\n with self.connect() as db:\n db.execute(\n 'INSERT INTO repos (repo, ref, path) VALUES (?, ?, ?)',\n [repo, ref, directory],\n )\n return directory\n\n def _complete_clone(self, ref: str, git_cmd: Callable[..., None]) -> None:\n \"\"\"Perform a complete clone of a repository and its submodules \"\"\"\n\n git_cmd('fetch', 'origin', '--tags')\n git_cmd('checkout', ref)\n git_cmd('submodule', 'update', '--init', '--recursive')\n\n def _shallow_clone(self, ref: str, git_cmd: Callable[..., None]) -> None:\n \"\"\"Perform a shallow clone of a repository and its submodules \"\"\"\n\n git_config = 'protocol.version=2'\n git_cmd('-c', git_config, 'fetch', 'origin', ref, '--depth=1')\n git_cmd('checkout', 'FETCH_HEAD')\n git_cmd(\n '-c', git_config, 'submodule', 'update', '--init', '--recursive',\n '--depth=1',\n )\n\n def clone(self, repo: str, ref: str, deps: Sequence[str] = ()) -> str:\n \"\"\"Clone the given url and checkout the specific ref.\"\"\"\n\n def clone_strategy(directory: str) -> None:\n git.init_repo(directory, repo)\n env = git.no_git_env()\n\n def _git_cmd(*args: str) -> None:\n cmd_output_b('git', *args, cwd=directory, env=env)\n\n try:\n self._shallow_clone(ref, _git_cmd)\n except CalledProcessError:\n self._complete_clone(ref, _git_cmd)\n\n return self._new_repo(repo, ref, deps, clone_strategy)\n\n LOCAL_RESOURCES = (\n 'Cargo.toml', 'main.go', 'go.mod', 'main.rs', '.npmignore',\n 'package.json', 'pre_commit_dummy_package.gemspec', 'setup.py',\n 'environment.yml', 'Makefile.PL',\n )\n\n def make_local(self, deps: Sequence[str]) -> str:\n def make_local_strategy(directory: str) -> None:\n for resource in self.LOCAL_RESOURCES:\n contents = resource_text(f'empty_template_{resource}')\n with open(os.path.join(directory, resource), 'w') as f:\n f.write(contents)\n\n env = git.no_git_env()\n\n # initialize the git repository so it looks more like cloned repos\n def _git_cmd(*args: str) -> None:\n cmd_output_b('git', *args, cwd=directory, env=env)\n\n git.init_repo(directory, '<<unknown>>')\n _git_cmd('add', '.')\n git.commit(repo=directory)\n\n return self._new_repo(\n 'local', C.LOCAL_REPO_VERSION, deps, make_local_strategy,\n )\n\n def _create_config_table(self, db: sqlite3.Connection) -> None:\n db.executescript(\n 'CREATE TABLE IF NOT EXISTS configs 
('\n ' path TEXT NOT NULL,'\n ' PRIMARY KEY (path)'\n ');',\n )\n\n def mark_config_used(self, path: str) -> None:\n if self.readonly: # pragma: win32 no cover\n return\n path = os.path.realpath(path)\n # don't insert config files that do not exist\n if not os.path.exists(path):\n return\n with self.connect() as db:\n # TODO: eventually remove this and only create in _create\n self._create_config_table(db)\n db.execute('INSERT OR IGNORE INTO configs VALUES (?)', (path,))\n\n def select_all_configs(self) -> List[str]:\n with self.connect() as db:\n self._create_config_table(db)\n rows = db.execute('SELECT path FROM configs').fetchall()\n return [path for path, in rows]\n\n def delete_configs(self, configs: List[str]) -> None:\n with self.connect() as db:\n rows = [(path,) for path in configs]\n db.executemany('DELETE FROM configs WHERE path = ?', rows)\n\n def select_all_repos(self) -> List[Tuple[str, str, str]]:\n with self.connect() as db:\n return db.execute('SELECT repo, ref, path from repos').fetchall()\n\n def delete_repo(self, db_repo_name: str, ref: str, path: str) -> None:\n with self.connect() as db:\n db.execute(\n 'DELETE FROM repos WHERE repo = ? and ref = ?',\n (db_repo_name, ref),\n )\n rmtree(path)\n", "path": "pre_commit/store.py"}]}
| 4,045 | 217 |
gh_patches_debug_49360 | rasdani/github-patches | git_diff | zulip__zulip-5407 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pull GCI docs into main Zulip repo.
Some of our assets, like git cheatsheets, currently live in the zulip-gci repo. We should move them to the main Zulip repo and link from appropriate places.
cc @synicalsyntax @lonerz @tommyip
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/documentation_crawler/documentation_crawler/spiders/common/spiders.py`
Content:
```
1 #!/usr/bin/env python
2 from __future__ import print_function
3
4 import logging
5 import re
6 import scrapy
7
8 from scrapy import Request
9 from scrapy.linkextractors import IGNORED_EXTENSIONS
10 from scrapy.linkextractors.lxmlhtml import LxmlLinkExtractor
11 from scrapy.utils.url import url_has_any_extension
12
13 from typing import Any, Generator, List, Optional, Tuple
14
15 EXCLUDED_URLS = [
16 # Google calendar returns 404s on HEAD requests unconditionally
17 'https://calendar.google.com/calendar/[email protected]',
18 # Returns 409 errors to HEAD requests frequently
19 'https://medium.freecodecamp.com',
20 ]
21
22
23 class BaseDocumentationSpider(scrapy.Spider):
24 name = None # type: Optional[str]
25 # Exclude domain address.
26 deny_domains = [] # type: List[str]
27 start_urls = [] # type: List[str]
28 deny = [] # type: List[str]
29 file_extensions = ['.' + ext for ext in IGNORED_EXTENSIONS] # type: List[str]
30 tags = ('a', 'area', 'img')
31 attrs = ('href', 'src')
32
33 def __init__(self, *args, **kwargs):
34 # type: (*Any, **Any) -> None
35 super(BaseDocumentationSpider, self).__init__(*args, **kwargs)
36 self.has_error = False
37
38 def _set_error_state(self):
39 # type: () -> None
40 self.has_error = True
41
42 def _has_extension(self, url):
43 # type: (str) -> bool
44 return url_has_any_extension(url, self.file_extensions)
45
46 def _is_external_url(self, url):
47 # type: (str) -> bool
48 return url.startswith('http') or self._has_extension(url)
49
50 def check_existing(self, response):
51 # type: (Any) -> None
52 self.log(response)
53
54 def check_permalink(self, response):
55 # type: (Any) -> None
56 self.log(response)
57 xpath_template = "//*[@id='{permalink}' or @name='{permalink}']"
58 m = re.match(r".+\#(?P<permalink>.*)$", response.request.url) # Get anchor value.
59 if not m:
60 return
61 permalink = m.group('permalink')
62 # Check permalink existing on response page.
63 if not response.selector.xpath(xpath_template.format(permalink=permalink)):
64 self._set_error_state()
65 raise Exception(
66 "Permalink #{} is not found on page {}".format(permalink, response.request.url))
67
68 def parse(self, response):
69 # type: (Any) -> Generator[Request, None, None]
70 self.log(response)
71 for link in LxmlLinkExtractor(deny_domains=self.deny_domains, deny_extensions=['doc'],
72 tags=self.tags, attrs=self.attrs, deny=self.deny,
73 canonicalize=False).extract_links(response):
74 callback = self.parse # type: Any
75 dont_filter = False
76 method = 'GET'
77 if self._is_external_url(link.url):
78 callback = self.check_existing
79 method = 'HEAD'
80 elif '#' in link.url:
81 dont_filter = True
82 callback = self.check_permalink
83 yield Request(link.url, method=method, callback=callback, dont_filter=dont_filter,
84 errback=self.error_callback)
85
86 def retry_request_with_get(self, request):
87 # type: (Request) -> Generator[Request, None, None]
88 request.method = 'GET'
89 request.dont_filter = True
90 yield request
91
92 def exclude_error(self, url):
93 # type: (str) -> bool
94 if url in EXCLUDED_URLS:
95 return True
96 return False
97
98 def error_callback(self, failure):
99 # type: (Any) -> Optional[Generator[Any, None, None]]
100 if hasattr(failure.value, 'response') and failure.value.response:
101 response = failure.value.response
102 if self.exclude_error(response.url):
103 return None
104 if response.status == 404:
105 self._set_error_state()
106 raise Exception('Page not found: {}'.format(response))
107 if response.status == 405 and response.request.method == 'HEAD':
108 # Method 'HEAD' not allowed, repeat request with 'GET'
109 return self.retry_request_with_get(response.request)
110 self.log("Error! Please check link: {}".format(response), logging.ERROR)
111 elif isinstance(failure.type, IOError):
112 self._set_error_state()
113 else:
114 raise Exception(failure.value)
115 return None
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/tools/documentation_crawler/documentation_crawler/spiders/common/spiders.py b/tools/documentation_crawler/documentation_crawler/spiders/common/spiders.py
--- a/tools/documentation_crawler/documentation_crawler/spiders/common/spiders.py
+++ b/tools/documentation_crawler/documentation_crawler/spiders/common/spiders.py
@@ -17,6 +17,8 @@
'https://calendar.google.com/calendar/[email protected]',
# Returns 409 errors to HEAD requests frequently
'https://medium.freecodecamp.com',
+ # Returns 404 to HEAD requests unconditionally
+ 'https://www.git-tower.com/blog/command-line-cheat-sheet/',
]
|
{"golden_diff": "diff --git a/tools/documentation_crawler/documentation_crawler/spiders/common/spiders.py b/tools/documentation_crawler/documentation_crawler/spiders/common/spiders.py\n--- a/tools/documentation_crawler/documentation_crawler/spiders/common/spiders.py\n+++ b/tools/documentation_crawler/documentation_crawler/spiders/common/spiders.py\n@@ -17,6 +17,8 @@\n 'https://calendar.google.com/calendar/[email protected]',\n # Returns 409 errors to HEAD requests frequently\n 'https://medium.freecodecamp.com',\n+ # Returns 404 to HEAD requests unconditionally\n+ 'https://www.git-tower.com/blog/command-line-cheat-sheet/',\n ]\n", "issue": "Pull GCI docs into main Zulip repo.\nSome of our assets, like git cheatsheets, currently live in the zulip-gci repo. We should move them to the main Zulip repo and link from appropriate places.\r\n\r\ncc @synicalsyntax @lonerz @tommyip \n", "before_files": [{"content": "#!/usr/bin/env python\nfrom __future__ import print_function\n\nimport logging\nimport re\nimport scrapy\n\nfrom scrapy import Request\nfrom scrapy.linkextractors import IGNORED_EXTENSIONS\nfrom scrapy.linkextractors.lxmlhtml import LxmlLinkExtractor\nfrom scrapy.utils.url import url_has_any_extension\n\nfrom typing import Any, Generator, List, Optional, Tuple\n\nEXCLUDED_URLS = [\n # Google calendar returns 404s on HEAD requests unconditionally\n 'https://calendar.google.com/calendar/[email protected]',\n # Returns 409 errors to HEAD requests frequently\n 'https://medium.freecodecamp.com',\n]\n\n\nclass BaseDocumentationSpider(scrapy.Spider):\n name = None # type: Optional[str]\n # Exclude domain address.\n deny_domains = [] # type: List[str]\n start_urls = [] # type: List[str]\n deny = [] # type: List[str]\n file_extensions = ['.' + ext for ext in IGNORED_EXTENSIONS] # type: List[str]\n tags = ('a', 'area', 'img')\n attrs = ('href', 'src')\n\n def __init__(self, *args, **kwargs):\n # type: (*Any, **Any) -> None\n super(BaseDocumentationSpider, self).__init__(*args, **kwargs)\n self.has_error = False\n\n def _set_error_state(self):\n # type: () -> None\n self.has_error = True\n\n def _has_extension(self, url):\n # type: (str) -> bool\n return url_has_any_extension(url, self.file_extensions)\n\n def _is_external_url(self, url):\n # type: (str) -> bool\n return url.startswith('http') or self._has_extension(url)\n\n def check_existing(self, response):\n # type: (Any) -> None\n self.log(response)\n\n def check_permalink(self, response):\n # type: (Any) -> None\n self.log(response)\n xpath_template = \"//*[@id='{permalink}' or @name='{permalink}']\"\n m = re.match(r\".+\\#(?P<permalink>.*)$\", response.request.url) # Get anchor value.\n if not m:\n return\n permalink = m.group('permalink')\n # Check permalink existing on response page.\n if not response.selector.xpath(xpath_template.format(permalink=permalink)):\n self._set_error_state()\n raise Exception(\n \"Permalink #{} is not found on page {}\".format(permalink, response.request.url))\n\n def parse(self, response):\n # type: (Any) -> Generator[Request, None, None]\n self.log(response)\n for link in LxmlLinkExtractor(deny_domains=self.deny_domains, deny_extensions=['doc'],\n tags=self.tags, attrs=self.attrs, deny=self.deny,\n canonicalize=False).extract_links(response):\n callback = self.parse # type: Any\n dont_filter = False\n method = 'GET'\n if self._is_external_url(link.url):\n callback = self.check_existing\n method = 'HEAD'\n elif '#' in link.url:\n dont_filter = True\n callback = self.check_permalink\n yield Request(link.url, 
method=method, callback=callback, dont_filter=dont_filter,\n errback=self.error_callback)\n\n def retry_request_with_get(self, request):\n # type: (Request) -> Generator[Request, None, None]\n request.method = 'GET'\n request.dont_filter = True\n yield request\n\n def exclude_error(self, url):\n # type: (str) -> bool\n if url in EXCLUDED_URLS:\n return True\n return False\n\n def error_callback(self, failure):\n # type: (Any) -> Optional[Generator[Any, None, None]]\n if hasattr(failure.value, 'response') and failure.value.response:\n response = failure.value.response\n if self.exclude_error(response.url):\n return None\n if response.status == 404:\n self._set_error_state()\n raise Exception('Page not found: {}'.format(response))\n if response.status == 405 and response.request.method == 'HEAD':\n # Method 'HEAD' not allowed, repeat request with 'GET'\n return self.retry_request_with_get(response.request)\n self.log(\"Error! Please check link: {}\".format(response), logging.ERROR)\n elif isinstance(failure.type, IOError):\n self._set_error_state()\n else:\n raise Exception(failure.value)\n return None\n", "path": "tools/documentation_crawler/documentation_crawler/spiders/common/spiders.py"}], "after_files": [{"content": "#!/usr/bin/env python\nfrom __future__ import print_function\n\nimport logging\nimport re\nimport scrapy\n\nfrom scrapy import Request\nfrom scrapy.linkextractors import IGNORED_EXTENSIONS\nfrom scrapy.linkextractors.lxmlhtml import LxmlLinkExtractor\nfrom scrapy.utils.url import url_has_any_extension\n\nfrom typing import Any, Generator, List, Optional, Tuple\n\nEXCLUDED_URLS = [\n # Google calendar returns 404s on HEAD requests unconditionally\n 'https://calendar.google.com/calendar/[email protected]',\n # Returns 409 errors to HEAD requests frequently\n 'https://medium.freecodecamp.com',\n # Returns 404 to HEAD requests unconditionally\n 'https://www.git-tower.com/blog/command-line-cheat-sheet/',\n]\n\n\nclass BaseDocumentationSpider(scrapy.Spider):\n name = None # type: Optional[str]\n # Exclude domain address.\n deny_domains = [] # type: List[str]\n start_urls = [] # type: List[str]\n deny = [] # type: List[str]\n file_extensions = ['.' 
+ ext for ext in IGNORED_EXTENSIONS] # type: List[str]\n tags = ('a', 'area', 'img')\n attrs = ('href', 'src')\n\n def __init__(self, *args, **kwargs):\n # type: (*Any, **Any) -> None\n super(BaseDocumentationSpider, self).__init__(*args, **kwargs)\n self.has_error = False\n\n def _set_error_state(self):\n # type: () -> None\n self.has_error = True\n\n def _has_extension(self, url):\n # type: (str) -> bool\n return url_has_any_extension(url, self.file_extensions)\n\n def _is_external_url(self, url):\n # type: (str) -> bool\n return url.startswith('http') or self._has_extension(url)\n\n def check_existing(self, response):\n # type: (Any) -> None\n self.log(response)\n\n def check_permalink(self, response):\n # type: (Any) -> None\n self.log(response)\n xpath_template = \"//*[@id='{permalink}' or @name='{permalink}']\"\n m = re.match(r\".+\\#(?P<permalink>.*)$\", response.request.url) # Get anchor value.\n if not m:\n return\n permalink = m.group('permalink')\n # Check permalink existing on response page.\n if not response.selector.xpath(xpath_template.format(permalink=permalink)):\n self._set_error_state()\n raise Exception(\n \"Permalink #{} is not found on page {}\".format(permalink, response.request.url))\n\n def parse(self, response):\n # type: (Any) -> Generator[Request, None, None]\n self.log(response)\n for link in LxmlLinkExtractor(deny_domains=self.deny_domains, deny_extensions=['doc'],\n tags=self.tags, attrs=self.attrs, deny=self.deny,\n canonicalize=False).extract_links(response):\n callback = self.parse # type: Any\n dont_filter = False\n method = 'GET'\n if self._is_external_url(link.url):\n callback = self.check_existing\n method = 'HEAD'\n elif '#' in link.url:\n dont_filter = True\n callback = self.check_permalink\n yield Request(link.url, method=method, callback=callback, dont_filter=dont_filter,\n errback=self.error_callback)\n\n def retry_request_with_get(self, request):\n # type: (Request) -> Generator[Request, None, None]\n request.method = 'GET'\n request.dont_filter = True\n yield request\n\n def exclude_error(self, url):\n # type: (str) -> bool\n if url in EXCLUDED_URLS:\n return True\n return False\n\n def error_callback(self, failure):\n # type: (Any) -> Optional[Generator[Any, None, None]]\n if hasattr(failure.value, 'response') and failure.value.response:\n response = failure.value.response\n if self.exclude_error(response.url):\n return None\n if response.status == 404:\n self._set_error_state()\n raise Exception('Page not found: {}'.format(response))\n if response.status == 405 and response.request.method == 'HEAD':\n # Method 'HEAD' not allowed, repeat request with 'GET'\n return self.retry_request_with_get(response.request)\n self.log(\"Error! Please check link: {}\".format(response), logging.ERROR)\n elif isinstance(failure.type, IOError):\n self._set_error_state()\n else:\n raise Exception(failure.value)\n return None\n", "path": "tools/documentation_crawler/documentation_crawler/spiders/common/spiders.py"}]}
| 1,568 | 168 |
gh_patches_debug_28640 | rasdani/github-patches | git_diff | microsoft__torchgeo-316 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add "Launch in Hub" button to tutorials
We currently have a "Open in Colab" button for our tutorials. We should also have one for the Planetary Computer. See https://planetarycomputer.microsoft.com/dataset/landsat-8-c2-l2#Example-Notebook for an example of how this works.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 # Configuration file for the Sphinx documentation builder.
5 #
6 # This file only contains a selection of the most common options. For a full
7 # list see the documentation:
8 # https://www.sphinx-doc.org/en/master/usage/configuration.html
9
10 # -- Path setup --------------------------------------------------------------
11
12 import os
13 import sys
14
15 import pytorch_sphinx_theme
16
17 # If extensions (or modules to document with autodoc) are in another directory,
18 # add these directories to sys.path here. If the directory is relative to the
19 # documentation root, use os.path.abspath to make it absolute, like shown here.
20 sys.path.insert(0, os.path.abspath(".."))
21
22 import torchgeo # noqa: E402
23
24 # -- Project information -----------------------------------------------------
25
26 project = "torchgeo"
27 copyright = "2021, Microsoft Corporation"
28 author = torchgeo.__author__
29 version = ".".join(torchgeo.__version__.split(".")[:2])
30 release = torchgeo.__version__
31
32
33 # -- General configuration ---------------------------------------------------
34
35 # Add any Sphinx extension module names here, as strings. They can be
36 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
37 # ones.
38 extensions = [
39 "sphinx.ext.autodoc",
40 "sphinx.ext.intersphinx",
41 "sphinx.ext.napoleon",
42 "sphinx.ext.todo",
43 "sphinx.ext.viewcode",
44 "nbsphinx",
45 ]
46
47 # List of patterns, relative to source directory, that match files and
48 # directories to ignore when looking for source files.
49 # This pattern also affects html_static_path and html_extra_path.
50 exclude_patterns = ["_build"]
51
52 # Sphinx 3.0+ required for:
53 # autodoc_typehints = "description"
54 needs_sphinx = "3.0"
55
56 nitpicky = True
57 nitpick_ignore = [
58 # https://github.com/sphinx-doc/sphinx/issues/8127
59 ("py:class", ".."),
60 # TODO: can't figure out why this isn't found
61 ("py:class", "LightningDataModule"),
62 # Undocumented class
63 ("py:class", "torchvision.models.resnet.ResNet"),
64 ]
65
66
67 # -- Options for HTML output -------------------------------------------------
68
69 # The theme to use for HTML and HTML Help pages. See the documentation for
70 # a list of builtin themes.
71 html_theme = "pytorch_sphinx_theme"
72 html_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]
73
74 # Theme options are theme-specific and customize the look and feel of a theme
75 # further. For a list of options available for each theme, see the
76 # documentation.
77 html_theme_options = {
78 "collapse_navigation": False,
79 "display_version": True,
80 "logo_only": True,
81 "pytorch_project": "docs",
82 "navigation_with_keys": True,
83 "analytics_id": "UA-209075005-1",
84 }
85
86 html_favicon = os.path.join("..", "logo", "favicon.ico")
87
88 html_static_path = ["_static"]
89 html_css_files = ["workaround.css"]
90
91 # -- Extension configuration -------------------------------------------------
92
93 # sphinx.ext.autodoc
94 autodoc_default_options = {
95 "members": True,
96 "special-members": True,
97 "show-inheritance": True,
98 }
99 autodoc_member_order = "bysource"
100 autodoc_typehints = "description"
101
102 # sphinx.ext.intersphinx
103 intersphinx_mapping = {
104 "matplotlib": ("https://matplotlib.org/stable/", None),
105 "python": ("https://docs.python.org/3", None),
106 "pytorch-lightning": ("https://pytorch-lightning.readthedocs.io/en/latest/", None),
107 "rasterio": ("https://rasterio.readthedocs.io/en/latest/", None),
108 "rtree": ("https://rtree.readthedocs.io/en/latest/", None),
109 "torch": ("https://pytorch.org/docs/stable", None),
110 "torchvision": ("https://pytorch.org/vision/stable", None),
111 }
112
113 # nbsphinx
114 nbsphinx_execute = "never"
115 # TODO: branch/tag should change depending on which version of docs you look at
116 # TODO: width option of image directive is broken, see:
117 # https://github.com/pytorch/pytorch_sphinx_theme/issues/140
118 nbsphinx_prolog = """
119 {% set colab = "https://colab.research.google.com" %}
120 {% set repo = "microsoft/torchgeo" %}
121 {% if "dev" in env.config.release %}
122 {% set branch = "main" %}
123 {% else %}
124 {% set branch = "releases/v" ~ env.config.version %}
125 {% endif %}
126
127 .. image:: {{ colab }}/assets/colab-badge.svg
128 :class: colabbadge
129 :alt: Open in Colab
130 :target: {{ colab }}/github/{{ repo }}/blob/{{ branch }}/docs/{{ env.docname }}.ipynb
131 """
132
133 # Disables requirejs in nbsphinx to enable compatibility with the pytorch_sphinx_theme
134 # See more information here https://github.com/spatialaudio/nbsphinx/issues/599
135 # NOTE: This will likely break nbsphinx widgets
136 nbsphinx_requirejs_path = ""
137
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -116,18 +116,35 @@
# TODO: width option of image directive is broken, see:
# https://github.com/pytorch/pytorch_sphinx_theme/issues/140
nbsphinx_prolog = """
-{% set colab = "https://colab.research.google.com" %}
+{% set host = "https://colab.research.google.com" %}
{% set repo = "microsoft/torchgeo" %}
+{% set urlpath = "docs/" ~ env.docname ~ ".ipynb" %}
{% if "dev" in env.config.release %}
{% set branch = "main" %}
{% else %}
{% set branch = "releases/v" ~ env.config.version %}
{% endif %}
-.. image:: {{ colab }}/assets/colab-badge.svg
+.. image:: {{ host }}/assets/colab-badge.svg
:class: colabbadge
:alt: Open in Colab
- :target: {{ colab }}/github/{{ repo }}/blob/{{ branch }}/docs/{{ env.docname }}.ipynb
+ :target: {{ host }}/github/{{ repo }}/blob/{{ branch }}/{{ urlpath }}
+
+{% set host = "https://pccompute.westeurope.cloudapp.azure.com" %}
+{% set host = host ~ "/compute/hub/user-redirect/git-pull" %}
+{% set repo = "https%3A%2F%2Fgithub.com%2Fmicrosoft%2Ftorchgeo" %}
+{% set urlpath = "tree%2Ftorchgeo%2Fdocs%2F" %}
+{% set urlpath = urlpath ~ env.docname | replace("/", "%2F") ~ ".ipynb" %}
+{% if "dev" in env.config.release %}
+ {% set branch = "main" %}
+{% else %}
+ {% set branch = "releases%2Fv" ~ env.config.version %}
+{% endif %}
+
+.. image:: https://img.shields.io/badge/-Open%20on%20Planetary%20Computer-blue
+ :class: colabbadge
+ :alt: Open on Planetary Computer
+ :target: {{ host }}?repo={{ repo }}&urlpath={{ urlpath }}&branch={{ branch }}
"""
# Disables requirejs in nbsphinx to enable compatibility with the pytorch_sphinx_theme
|
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -116,18 +116,35 @@\n # TODO: width option of image directive is broken, see:\n # https://github.com/pytorch/pytorch_sphinx_theme/issues/140\n nbsphinx_prolog = \"\"\"\n-{% set colab = \"https://colab.research.google.com\" %}\n+{% set host = \"https://colab.research.google.com\" %}\n {% set repo = \"microsoft/torchgeo\" %}\n+{% set urlpath = \"docs/\" ~ env.docname ~ \".ipynb\" %}\n {% if \"dev\" in env.config.release %}\n {% set branch = \"main\" %}\n {% else %}\n {% set branch = \"releases/v\" ~ env.config.version %}\n {% endif %}\n \n-.. image:: {{ colab }}/assets/colab-badge.svg\n+.. image:: {{ host }}/assets/colab-badge.svg\n :class: colabbadge\n :alt: Open in Colab\n- :target: {{ colab }}/github/{{ repo }}/blob/{{ branch }}/docs/{{ env.docname }}.ipynb\n+ :target: {{ host }}/github/{{ repo }}/blob/{{ branch }}/{{ urlpath }}\n+\n+{% set host = \"https://pccompute.westeurope.cloudapp.azure.com\" %}\n+{% set host = host ~ \"/compute/hub/user-redirect/git-pull\" %}\n+{% set repo = \"https%3A%2F%2Fgithub.com%2Fmicrosoft%2Ftorchgeo\" %}\n+{% set urlpath = \"tree%2Ftorchgeo%2Fdocs%2F\" %}\n+{% set urlpath = urlpath ~ env.docname | replace(\"/\", \"%2F\") ~ \".ipynb\" %}\n+{% if \"dev\" in env.config.release %}\n+ {% set branch = \"main\" %}\n+{% else %}\n+ {% set branch = \"releases%2Fv\" ~ env.config.version %}\n+{% endif %}\n+\n+.. image:: https://img.shields.io/badge/-Open%20on%20Planetary%20Computer-blue\n+ :class: colabbadge\n+ :alt: Open on Planetary Computer\n+ :target: {{ host }}?repo={{ repo }}&urlpath={{ urlpath }}&branch={{ branch }}\n \"\"\"\n \n # Disables requirejs in nbsphinx to enable compatibility with the pytorch_sphinx_theme\n", "issue": "Add \"Launch in Hub\" button to tutorials\nWe currently have a \"Open in Colab\" button for our tutorials. We should also have one for the Planetary Computer. See https://planetarycomputer.microsoft.com/dataset/landsat-8-c2-l2#Example-Notebook for an example of how this works.\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\n# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\nimport os\nimport sys\n\nimport pytorch_sphinx_theme\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, os.path.abspath(\"..\"))\n\nimport torchgeo # noqa: E402\n\n# -- Project information -----------------------------------------------------\n\nproject = \"torchgeo\"\ncopyright = \"2021, Microsoft Corporation\"\nauthor = torchgeo.__author__\nversion = \".\".join(torchgeo.__version__.split(\".\")[:2])\nrelease = torchgeo.__version__\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.viewcode\",\n \"nbsphinx\",\n]\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [\"_build\"]\n\n# Sphinx 3.0+ required for:\n# autodoc_typehints = \"description\"\nneeds_sphinx = \"3.0\"\n\nnitpicky = True\nnitpick_ignore = [\n # https://github.com/sphinx-doc/sphinx/issues/8127\n (\"py:class\", \"..\"),\n # TODO: can't figure out why this isn't found\n (\"py:class\", \"LightningDataModule\"),\n # Undocumented class\n (\"py:class\", \"torchvision.models.resnet.ResNet\"),\n]\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nhtml_theme = \"pytorch_sphinx_theme\"\nhtml_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\nhtml_theme_options = {\n \"collapse_navigation\": False,\n \"display_version\": True,\n \"logo_only\": True,\n \"pytorch_project\": \"docs\",\n \"navigation_with_keys\": True,\n \"analytics_id\": \"UA-209075005-1\",\n}\n\nhtml_favicon = os.path.join(\"..\", \"logo\", \"favicon.ico\")\n\nhtml_static_path = [\"_static\"]\nhtml_css_files = [\"workaround.css\"]\n\n# -- Extension configuration -------------------------------------------------\n\n# sphinx.ext.autodoc\nautodoc_default_options = {\n \"members\": True,\n \"special-members\": True,\n \"show-inheritance\": True,\n}\nautodoc_member_order = \"bysource\"\nautodoc_typehints = \"description\"\n\n# sphinx.ext.intersphinx\nintersphinx_mapping = {\n \"matplotlib\": (\"https://matplotlib.org/stable/\", None),\n \"python\": (\"https://docs.python.org/3\", None),\n \"pytorch-lightning\": (\"https://pytorch-lightning.readthedocs.io/en/latest/\", None),\n \"rasterio\": (\"https://rasterio.readthedocs.io/en/latest/\", None),\n \"rtree\": (\"https://rtree.readthedocs.io/en/latest/\", None),\n \"torch\": (\"https://pytorch.org/docs/stable\", None),\n \"torchvision\": (\"https://pytorch.org/vision/stable\", None),\n}\n\n# nbsphinx\nnbsphinx_execute = \"never\"\n# TODO: branch/tag should change depending on which version of docs you look at\n# TODO: width option of image directive is broken, see:\n# https://github.com/pytorch/pytorch_sphinx_theme/issues/140\nnbsphinx_prolog = \"\"\"\n{% set colab = \"https://colab.research.google.com\" %}\n{% set repo = \"microsoft/torchgeo\" %}\n{% if \"dev\" in env.config.release %}\n {% set branch = \"main\" %}\n{% else %}\n {% set branch = \"releases/v\" ~ env.config.version %}\n{% endif %}\n\n.. 
image:: {{ colab }}/assets/colab-badge.svg\n :class: colabbadge\n :alt: Open in Colab\n :target: {{ colab }}/github/{{ repo }}/blob/{{ branch }}/docs/{{ env.docname }}.ipynb\n\"\"\"\n\n# Disables requirejs in nbsphinx to enable compatibility with the pytorch_sphinx_theme\n# See more information here https://github.com/spatialaudio/nbsphinx/issues/599\n# NOTE: This will likely break nbsphinx widgets\nnbsphinx_requirejs_path = \"\"\n", "path": "docs/conf.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\n# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\nimport os\nimport sys\n\nimport pytorch_sphinx_theme\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, os.path.abspath(\"..\"))\n\nimport torchgeo # noqa: E402\n\n# -- Project information -----------------------------------------------------\n\nproject = \"torchgeo\"\ncopyright = \"2021, Microsoft Corporation\"\nauthor = torchgeo.__author__\nversion = \".\".join(torchgeo.__version__.split(\".\")[:2])\nrelease = torchgeo.__version__\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.viewcode\",\n \"nbsphinx\",\n]\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [\"_build\"]\n\n# Sphinx 3.0+ required for:\n# autodoc_typehints = \"description\"\nneeds_sphinx = \"3.0\"\n\nnitpicky = True\nnitpick_ignore = [\n # https://github.com/sphinx-doc/sphinx/issues/8127\n (\"py:class\", \"..\"),\n # TODO: can't figure out why this isn't found\n (\"py:class\", \"LightningDataModule\"),\n # Undocumented class\n (\"py:class\", \"torchvision.models.resnet.ResNet\"),\n]\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nhtml_theme = \"pytorch_sphinx_theme\"\nhtml_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. 
For a list of options available for each theme, see the\n# documentation.\nhtml_theme_options = {\n \"collapse_navigation\": False,\n \"display_version\": True,\n \"logo_only\": True,\n \"pytorch_project\": \"docs\",\n \"navigation_with_keys\": True,\n \"analytics_id\": \"UA-209075005-1\",\n}\n\nhtml_favicon = os.path.join(\"..\", \"logo\", \"favicon.ico\")\n\nhtml_static_path = [\"_static\"]\nhtml_css_files = [\"workaround.css\"]\n\n# -- Extension configuration -------------------------------------------------\n\n# sphinx.ext.autodoc\nautodoc_default_options = {\n \"members\": True,\n \"special-members\": True,\n \"show-inheritance\": True,\n}\nautodoc_member_order = \"bysource\"\nautodoc_typehints = \"description\"\n\n# sphinx.ext.intersphinx\nintersphinx_mapping = {\n \"matplotlib\": (\"https://matplotlib.org/stable/\", None),\n \"python\": (\"https://docs.python.org/3\", None),\n \"pytorch-lightning\": (\"https://pytorch-lightning.readthedocs.io/en/latest/\", None),\n \"rasterio\": (\"https://rasterio.readthedocs.io/en/latest/\", None),\n \"rtree\": (\"https://rtree.readthedocs.io/en/latest/\", None),\n \"torch\": (\"https://pytorch.org/docs/stable\", None),\n \"torchvision\": (\"https://pytorch.org/vision/stable\", None),\n}\n\n# nbsphinx\nnbsphinx_execute = \"never\"\n# TODO: branch/tag should change depending on which version of docs you look at\n# TODO: width option of image directive is broken, see:\n# https://github.com/pytorch/pytorch_sphinx_theme/issues/140\nnbsphinx_prolog = \"\"\"\n{% set host = \"https://colab.research.google.com\" %}\n{% set repo = \"microsoft/torchgeo\" %}\n{% set urlpath = \"docs/\" ~ env.docname ~ \".ipynb\" %}\n{% if \"dev\" in env.config.release %}\n {% set branch = \"main\" %}\n{% else %}\n {% set branch = \"releases/v\" ~ env.config.version %}\n{% endif %}\n\n.. image:: {{ host }}/assets/colab-badge.svg\n :class: colabbadge\n :alt: Open in Colab\n :target: {{ host }}/github/{{ repo }}/blob/{{ branch }}/{{ urlpath }}\n\n{% set host = \"https://pccompute.westeurope.cloudapp.azure.com\" %}\n{% set host = host ~ \"/compute/hub/user-redirect/git-pull\" %}\n{% set repo = \"https%3A%2F%2Fgithub.com%2Fmicrosoft%2Ftorchgeo\" %}\n{% set urlpath = \"tree%2Ftorchgeo%2Fdocs%2F\" %}\n{% set urlpath = urlpath ~ env.docname | replace(\"/\", \"%2F\") ~ \".ipynb\" %}\n{% if \"dev\" in env.config.release %}\n {% set branch = \"main\" %}\n{% else %}\n {% set branch = \"releases%2Fv\" ~ env.config.version %}\n{% endif %}\n\n.. image:: https://img.shields.io/badge/-Open%20on%20Planetary%20Computer-blue\n :class: colabbadge\n :alt: Open on Planetary Computer\n :target: {{ host }}?repo={{ repo }}&urlpath={{ urlpath }}&branch={{ branch }}\n\"\"\"\n\n# Disables requirejs in nbsphinx to enable compatibility with the pytorch_sphinx_theme\n# See more information here https://github.com/spatialaudio/nbsphinx/issues/599\n# NOTE: This will likely break nbsphinx widgets\nnbsphinx_requirejs_path = \"\"\n", "path": "docs/conf.py"}]}
| 1,767 | 542 |
gh_patches_debug_21843 | rasdani/github-patches | git_diff | pymedusa__Medusa-824 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
UnicodeEncodeError: 'ascii' codec can't encode character u'\u014d' in position 31: ordinal not in range(128)
```
Traceback (most recent call last):
File "/home/orangepi/.sickrage/lib/subliminal/core.py", line 543, in refine
refiner_manager[refiner].plugin(video, **kwargs)
File "/home/orangepi/.sickrage/sickbeard/refiners/release.py", line 54, in refine
release_file = get_release_file(dirpath, fileroot, extension) or release_file
File "/home/orangepi/.sickrage/sickbeard/refiners/release.py", line 90, in get_release_file
if os.path.isfile(release_file):
File "/usr/lib/python2.7/genericpath.py", line 37, in isfile
st = os.stat(path)
UnicodeEncodeError: 'ascii' codec can't encode character u'\u014d' in position 31: ordinal not in range(128)
```
This happens because the subtitle contains a non-ascii char and my locale is set to None.
To fix this we would need to use ek() for isfile() or (better imo) use a similar solution to what we du in subtitles.py. What do you think @ratoaq2?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sickbeard/refiners/release.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """Release refiner."""
3 from __future__ import unicode_literals
4
5 import logging
6 import os
7
8 from guessit import guessit
9
10 logger = logging.getLogger(__name__)
11
12 MOVIE_ATTRIBUTES = {'title': 'title', 'year': 'year', 'format': 'format', 'release_group': 'release_group',
13 'resolution': 'screen_size', 'video_codec': 'video_codec', 'audio_codec': 'audio_codec'}
14 EPISODE_ATTRIBUTES = {'series': 'title', 'season': 'season', 'episode': 'episode', 'title': 'episode_title',
15 'year': 'year', 'format': 'format', 'release_group': 'release_group', 'resolution': 'screen_size',
16 'video_codec': 'video_codec', 'audio_codec': 'audio_codec'}
17
18
19 def refine(video, release_name=None, release_file=None, extension='release', **kwargs):
20 """Refine a video by using the original release name.
21
22 The refiner will first try:
23 - Read the file video_name.<extension> seeking for a release name
24 - If no release name, it will read the release_file seeking for a release name
25 - If no release name, it will use the release_name passed as an argument
26 - If no release name, then no change in the video object is made
27
28 When a release name is found, the video object will be enhanced using the guessit properties extracted from it.
29
30 Several :class:`~subliminal.video.Video` attributes can be found:
31
32 * :attr:`~subliminal.video.Video.title`
33 * :attr:`~subliminal.video.Video.series`
34 * :attr:`~subliminal.video.Video.season`
35 * :attr:`~subliminal.video.Video.episode`
36 * :attr:`~subliminal.video.Video.year`
37 * :attr:`~subliminal.video.Video.format`
38 * :attr:`~subliminal.video.Video.release_group`
39 * :attr:`~subliminal.video.Video.resolution`
40 * :attr:`~subliminal.video.Video.video_codec`
41 * :attr:`~subliminal.video.Video.audio_codec`
42
43 :param video: the video to refine.
44 :type video: subliminal.video.Video
45 :param str release_name: the release name to be used.
46 :param str release_file: the release file to be used
47 :param str extension: the release file extension.
48 """
49 logger.debug('Starting release refiner [extension={extension}, release_name={name}, release_file={file}]',
50 extension=extension, name=release_name, file=release_file)
51 dirpath, filename = os.path.split(video.name)
52 dirpath = dirpath or '.'
53 fileroot, fileext = os.path.splitext(filename)
54 release_file = get_release_file(dirpath, fileroot, extension) or release_file
55 release_name = get_release_name(release_file) or release_name
56
57 if not release_name:
58 logger.debug('No release name for {video}', video=video.name)
59 return
60
61 release_path = os.path.join(dirpath, release_name + fileext)
62 logger.debug('Guessing using {path}', path=release_path)
63
64 guess = guessit(release_path)
65 attributes = MOVIE_ATTRIBUTES if guess.get('type') == 'movie' else EPISODE_ATTRIBUTES
66 for key, value in attributes.items():
67 old_value = getattr(video, key)
68 new_value = guess.get(value)
69
70 if new_value and old_value != new_value:
71 setattr(video, key, new_value)
72 logger.debug('Attribute {key} changed from {old} to {new}', key=key, old=old_value, new=new_value)
73
74
75 def get_release_file(dirpath, filename, extension):
76 """Return the release file that should contain the release name for a given a `dirpath`, `filename` and `extension`.
77
78 :param dirpath: the file base folder
79 :type dirpath: str
80 :param filename: the file name without extension
81 :type filename: str
82 :param extension:
83 :type extension: the file extension
84 :return: the release file if the file exists
85 :rtype: str
86 """
87 release_file = os.path.join(dirpath, filename + '.' + extension)
88
89 # skip if info file doesn't exist
90 if os.path.isfile(release_file):
91 logger.debug('Found release file {file}', file=release_file)
92 return release_file
93
94
95 def get_release_name(release_file):
96 """Given a `release_file` it will return the release name.
97
98 :param release_file: the text file that contains the release name
99 :type release_file: str
100 :return: the release name
101 :rtype: str
102 """
103 if not release_file:
104 return
105
106 with open(release_file, 'r') as f:
107 release_name = f.read().strip()
108
109 # skip if no release name was found
110 if not release_name:
111 logger.warning('Release file {file} does not contain a release name', file=release_file)
112
113 return release_name
114
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sickbeard/refiners/release.py b/sickbeard/refiners/release.py
--- a/sickbeard/refiners/release.py
+++ b/sickbeard/refiners/release.py
@@ -1,6 +1,5 @@
# -*- coding: utf-8 -*-
"""Release refiner."""
-from __future__ import unicode_literals
import logging
import os
@@ -10,7 +9,7 @@
logger = logging.getLogger(__name__)
MOVIE_ATTRIBUTES = {'title': 'title', 'year': 'year', 'format': 'format', 'release_group': 'release_group',
- 'resolution': 'screen_size', 'video_codec': 'video_codec', 'audio_codec': 'audio_codec'}
+ 'resolution': 'screen_size', 'video_codec': 'video_codec', 'audio_codec': 'audio_codec'}
EPISODE_ATTRIBUTES = {'series': 'title', 'season': 'season', 'episode': 'episode', 'title': 'episode_title',
'year': 'year', 'format': 'format', 'release_group': 'release_group', 'resolution': 'screen_size',
'video_codec': 'video_codec', 'audio_codec': 'audio_codec'}
|
{"golden_diff": "diff --git a/sickbeard/refiners/release.py b/sickbeard/refiners/release.py\n--- a/sickbeard/refiners/release.py\n+++ b/sickbeard/refiners/release.py\n@@ -1,6 +1,5 @@\n # -*- coding: utf-8 -*-\n \"\"\"Release refiner.\"\"\"\n-from __future__ import unicode_literals\n \n import logging\n import os\n@@ -10,7 +9,7 @@\n logger = logging.getLogger(__name__)\n \n MOVIE_ATTRIBUTES = {'title': 'title', 'year': 'year', 'format': 'format', 'release_group': 'release_group',\n- 'resolution': 'screen_size', 'video_codec': 'video_codec', 'audio_codec': 'audio_codec'}\n+ 'resolution': 'screen_size', 'video_codec': 'video_codec', 'audio_codec': 'audio_codec'}\n EPISODE_ATTRIBUTES = {'series': 'title', 'season': 'season', 'episode': 'episode', 'title': 'episode_title',\n 'year': 'year', 'format': 'format', 'release_group': 'release_group', 'resolution': 'screen_size',\n 'video_codec': 'video_codec', 'audio_codec': 'audio_codec'}\n", "issue": "UnicodeEncodeError: 'ascii' codec can't encode character u'\\u014d' in position 31: ordinal not in range(128)\n```\nTraceback (most recent call last):\n File \"/home/orangepi/.sickrage/lib/subliminal/core.py\", line 543, in refine\n refiner_manager[refiner].plugin(video, **kwargs)\n File \"/home/orangepi/.sickrage/sickbeard/refiners/release.py\", line 54, in refine\n release_file = get_release_file(dirpath, fileroot, extension) or release_file\n File \"/home/orangepi/.sickrage/sickbeard/refiners/release.py\", line 90, in get_release_file\n if os.path.isfile(release_file):\n File \"/usr/lib/python2.7/genericpath.py\", line 37, in isfile\n st = os.stat(path)\nUnicodeEncodeError: 'ascii' codec can't encode character u'\\u014d' in position 31: ordinal not in range(128)\n```\n\nThis happens because the subtitle contains a non-ascii char and my locale is set to None.\nTo fix this we would need to use ek() for isfile() or (better imo) use a similar solution to what we du in subtitles.py. 
What do you think @ratoaq2?\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Release refiner.\"\"\"\nfrom __future__ import unicode_literals\n\nimport logging\nimport os\n\nfrom guessit import guessit\n\nlogger = logging.getLogger(__name__)\n\nMOVIE_ATTRIBUTES = {'title': 'title', 'year': 'year', 'format': 'format', 'release_group': 'release_group',\n 'resolution': 'screen_size', 'video_codec': 'video_codec', 'audio_codec': 'audio_codec'}\nEPISODE_ATTRIBUTES = {'series': 'title', 'season': 'season', 'episode': 'episode', 'title': 'episode_title',\n 'year': 'year', 'format': 'format', 'release_group': 'release_group', 'resolution': 'screen_size',\n 'video_codec': 'video_codec', 'audio_codec': 'audio_codec'}\n\n\ndef refine(video, release_name=None, release_file=None, extension='release', **kwargs):\n \"\"\"Refine a video by using the original release name.\n\n The refiner will first try:\n - Read the file video_name.<extension> seeking for a release name\n - If no release name, it will read the release_file seeking for a release name\n - If no release name, it will use the release_name passed as an argument\n - If no release name, then no change in the video object is made\n\n When a release name is found, the video object will be enhanced using the guessit properties extracted from it.\n\n Several :class:`~subliminal.video.Video` attributes can be found:\n\n * :attr:`~subliminal.video.Video.title`\n * :attr:`~subliminal.video.Video.series`\n * :attr:`~subliminal.video.Video.season`\n * :attr:`~subliminal.video.Video.episode`\n * :attr:`~subliminal.video.Video.year`\n * :attr:`~subliminal.video.Video.format`\n * :attr:`~subliminal.video.Video.release_group`\n * :attr:`~subliminal.video.Video.resolution`\n * :attr:`~subliminal.video.Video.video_codec`\n * :attr:`~subliminal.video.Video.audio_codec`\n\n :param video: the video to refine.\n :type video: subliminal.video.Video\n :param str release_name: the release name to be used.\n :param str release_file: the release file to be used\n :param str extension: the release file extension.\n \"\"\"\n logger.debug('Starting release refiner [extension={extension}, release_name={name}, release_file={file}]',\n extension=extension, name=release_name, file=release_file)\n dirpath, filename = os.path.split(video.name)\n dirpath = dirpath or '.'\n fileroot, fileext = os.path.splitext(filename)\n release_file = get_release_file(dirpath, fileroot, extension) or release_file\n release_name = get_release_name(release_file) or release_name\n\n if not release_name:\n logger.debug('No release name for {video}', video=video.name)\n return\n\n release_path = os.path.join(dirpath, release_name + fileext)\n logger.debug('Guessing using {path}', path=release_path)\n\n guess = guessit(release_path)\n attributes = MOVIE_ATTRIBUTES if guess.get('type') == 'movie' else EPISODE_ATTRIBUTES\n for key, value in attributes.items():\n old_value = getattr(video, key)\n new_value = guess.get(value)\n\n if new_value and old_value != new_value:\n setattr(video, key, new_value)\n logger.debug('Attribute {key} changed from {old} to {new}', key=key, old=old_value, new=new_value)\n\n\ndef get_release_file(dirpath, filename, extension):\n \"\"\"Return the release file that should contain the release name for a given a `dirpath`, `filename` and `extension`.\n\n :param dirpath: the file base folder\n :type dirpath: str\n :param filename: the file name without extension\n :type filename: str\n :param extension:\n :type extension: the file extension\n :return: the 
release file if the file exists\n :rtype: str\n \"\"\"\n release_file = os.path.join(dirpath, filename + '.' + extension)\n\n # skip if info file doesn't exist\n if os.path.isfile(release_file):\n logger.debug('Found release file {file}', file=release_file)\n return release_file\n\n\ndef get_release_name(release_file):\n \"\"\"Given a `release_file` it will return the release name.\n\n :param release_file: the text file that contains the release name\n :type release_file: str\n :return: the release name\n :rtype: str\n \"\"\"\n if not release_file:\n return\n\n with open(release_file, 'r') as f:\n release_name = f.read().strip()\n\n # skip if no release name was found\n if not release_name:\n logger.warning('Release file {file} does not contain a release name', file=release_file)\n\n return release_name\n", "path": "sickbeard/refiners/release.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Release refiner.\"\"\"\n\nimport logging\nimport os\n\nfrom guessit import guessit\n\nlogger = logging.getLogger(__name__)\n\nMOVIE_ATTRIBUTES = {'title': 'title', 'year': 'year', 'format': 'format', 'release_group': 'release_group',\n 'resolution': 'screen_size', 'video_codec': 'video_codec', 'audio_codec': 'audio_codec'}\nEPISODE_ATTRIBUTES = {'series': 'title', 'season': 'season', 'episode': 'episode', 'title': 'episode_title',\n 'year': 'year', 'format': 'format', 'release_group': 'release_group', 'resolution': 'screen_size',\n 'video_codec': 'video_codec', 'audio_codec': 'audio_codec'}\n\n\ndef refine(video, release_name=None, release_file=None, extension='release', **kwargs):\n \"\"\"Refine a video by using the original release name.\n\n The refiner will first try:\n - Read the file video_name.<extension> seeking for a release name\n - If no release name, it will read the release_file seeking for a release name\n - If no release name, it will use the release_name passed as an argument\n - If no release name, then no change in the video object is made\n\n When a release name is found, the video object will be enhanced using the guessit properties extracted from it.\n\n Several :class:`~subliminal.video.Video` attributes can be found:\n\n * :attr:`~subliminal.video.Video.title`\n * :attr:`~subliminal.video.Video.series`\n * :attr:`~subliminal.video.Video.season`\n * :attr:`~subliminal.video.Video.episode`\n * :attr:`~subliminal.video.Video.year`\n * :attr:`~subliminal.video.Video.format`\n * :attr:`~subliminal.video.Video.release_group`\n * :attr:`~subliminal.video.Video.resolution`\n * :attr:`~subliminal.video.Video.video_codec`\n * :attr:`~subliminal.video.Video.audio_codec`\n\n :param video: the video to refine.\n :type video: subliminal.video.Video\n :param str release_name: the release name to be used.\n :param str release_file: the release file to be used\n :param str extension: the release file extension.\n \"\"\"\n logger.debug('Starting release refiner [extension={extension}, release_name={name}, release_file={file}]',\n extension=extension, name=release_name, file=release_file)\n dirpath, filename = os.path.split(video.name)\n dirpath = dirpath or '.'\n fileroot, fileext = os.path.splitext(filename)\n release_file = get_release_file(dirpath, fileroot, extension) or release_file\n release_name = get_release_name(release_file) or release_name\n\n if not release_name:\n logger.debug('No release name for {video}', video=video.name)\n return\n\n release_path = os.path.join(dirpath, release_name + fileext)\n logger.debug('Guessing using {path}', path=release_path)\n\n guess = 
guessit(release_path)\n attributes = MOVIE_ATTRIBUTES if guess.get('type') == 'movie' else EPISODE_ATTRIBUTES\n for key, value in attributes.items():\n old_value = getattr(video, key)\n new_value = guess.get(value)\n\n if new_value and old_value != new_value:\n setattr(video, key, new_value)\n logger.debug('Attribute {key} changed from {old} to {new}', key=key, old=old_value, new=new_value)\n\n\ndef get_release_file(dirpath, filename, extension):\n \"\"\"Return the release file that should contain the release name for a given a `dirpath`, `filename` and `extension`.\n\n :param dirpath: the file base folder\n :type dirpath: str\n :param filename: the file name without extension\n :type filename: str\n :param extension:\n :type extension: the file extension\n :return: the release file if the file exists\n :rtype: str\n \"\"\"\n release_file = os.path.join(dirpath, filename + '.' + extension)\n\n # skip if info file doesn't exist\n if os.path.isfile(release_file):\n logger.debug('Found release file {file}', file=release_file)\n return release_file\n\n\ndef get_release_name(release_file):\n \"\"\"Given a `release_file` it will return the release name.\n\n :param release_file: the text file that contains the release name\n :type release_file: str\n :return: the release name\n :rtype: str\n \"\"\"\n if not release_file:\n return\n\n with open(release_file, 'r') as f:\n release_name = f.read().strip()\n\n # skip if no release name was found\n if not release_name:\n logger.warning('Release file {file} does not contain a release name', file=release_file)\n\n return release_name\n", "path": "sickbeard/refiners/release.py"}]}
| 1,896 | 255 |
gh_patches_debug_4872
|
rasdani/github-patches
|
git_diff
|
matrix-org__synapse-8517
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
/displayname returns non-JSON body
### Description
An attempt to call `/profile/{}/displayname` with an empty body results (rightfully) in a 400 error, with plain text "Unable to parse name" in the response. The expected error payload is something about (could also be `M_NOT_JSON` instead of `M_BAD_JSON`):
```json
{
"errcode": "M_BAD_JSON",
"error": "Unable to parse name"
}
```
### Version information
<!-- IMPORTANT: please answer the following questions, to help us narrow down the problem -->
<!-- Was this issue identified on matrix.org or another homeserver? -->
- **Homeserver**: matrix.org
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `synapse/rest/client/v1/profile.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright 2014-2016 OpenMarket Ltd
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 """ This module contains REST servlets to do with profile: /profile/<paths> """
17
18 from synapse.api.errors import Codes, SynapseError
19 from synapse.http.servlet import RestServlet, parse_json_object_from_request
20 from synapse.rest.client.v2_alpha._base import client_patterns
21 from synapse.types import UserID
22
23
24 class ProfileDisplaynameRestServlet(RestServlet):
25 PATTERNS = client_patterns("/profile/(?P<user_id>[^/]*)/displayname", v1=True)
26
27 def __init__(self, hs):
28 super().__init__()
29 self.hs = hs
30 self.profile_handler = hs.get_profile_handler()
31 self.auth = hs.get_auth()
32
33 async def on_GET(self, request, user_id):
34 requester_user = None
35
36 if self.hs.config.require_auth_for_profile_requests:
37 requester = await self.auth.get_user_by_req(request)
38 requester_user = requester.user
39
40 user = UserID.from_string(user_id)
41
42 await self.profile_handler.check_profile_query_allowed(user, requester_user)
43
44 displayname = await self.profile_handler.get_displayname(user)
45
46 ret = {}
47 if displayname is not None:
48 ret["displayname"] = displayname
49
50 return 200, ret
51
52 async def on_PUT(self, request, user_id):
53 requester = await self.auth.get_user_by_req(request, allow_guest=True)
54 user = UserID.from_string(user_id)
55 is_admin = await self.auth.is_server_admin(requester.user)
56
57 content = parse_json_object_from_request(request)
58
59 try:
60 new_name = content["displayname"]
61 except Exception:
62 return 400, "Unable to parse name"
63
64 await self.profile_handler.set_displayname(user, requester, new_name, is_admin)
65
66 return 200, {}
67
68 def on_OPTIONS(self, request, user_id):
69 return 200, {}
70
71
72 class ProfileAvatarURLRestServlet(RestServlet):
73 PATTERNS = client_patterns("/profile/(?P<user_id>[^/]*)/avatar_url", v1=True)
74
75 def __init__(self, hs):
76 super().__init__()
77 self.hs = hs
78 self.profile_handler = hs.get_profile_handler()
79 self.auth = hs.get_auth()
80
81 async def on_GET(self, request, user_id):
82 requester_user = None
83
84 if self.hs.config.require_auth_for_profile_requests:
85 requester = await self.auth.get_user_by_req(request)
86 requester_user = requester.user
87
88 user = UserID.from_string(user_id)
89
90 await self.profile_handler.check_profile_query_allowed(user, requester_user)
91
92 avatar_url = await self.profile_handler.get_avatar_url(user)
93
94 ret = {}
95 if avatar_url is not None:
96 ret["avatar_url"] = avatar_url
97
98 return 200, ret
99
100 async def on_PUT(self, request, user_id):
101 requester = await self.auth.get_user_by_req(request)
102 user = UserID.from_string(user_id)
103 is_admin = await self.auth.is_server_admin(requester.user)
104
105 content = parse_json_object_from_request(request)
106 try:
107 new_avatar_url = content["avatar_url"]
108 except KeyError:
109 raise SynapseError(
110 400, "Missing key 'avatar_url'", errcode=Codes.MISSING_PARAM
111 )
112
113 await self.profile_handler.set_avatar_url(
114 user, requester, new_avatar_url, is_admin
115 )
116
117 return 200, {}
118
119 def on_OPTIONS(self, request, user_id):
120 return 200, {}
121
122
123 class ProfileRestServlet(RestServlet):
124 PATTERNS = client_patterns("/profile/(?P<user_id>[^/]*)", v1=True)
125
126 def __init__(self, hs):
127 super().__init__()
128 self.hs = hs
129 self.profile_handler = hs.get_profile_handler()
130 self.auth = hs.get_auth()
131
132 async def on_GET(self, request, user_id):
133 requester_user = None
134
135 if self.hs.config.require_auth_for_profile_requests:
136 requester = await self.auth.get_user_by_req(request)
137 requester_user = requester.user
138
139 user = UserID.from_string(user_id)
140
141 await self.profile_handler.check_profile_query_allowed(user, requester_user)
142
143 displayname = await self.profile_handler.get_displayname(user)
144 avatar_url = await self.profile_handler.get_avatar_url(user)
145
146 ret = {}
147 if displayname is not None:
148 ret["displayname"] = displayname
149 if avatar_url is not None:
150 ret["avatar_url"] = avatar_url
151
152 return 200, ret
153
154
155 def register_servlets(hs, http_server):
156 ProfileDisplaynameRestServlet(hs).register(http_server)
157 ProfileAvatarURLRestServlet(hs).register(http_server)
158 ProfileRestServlet(hs).register(http_server)
159
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/synapse/rest/client/v1/profile.py b/synapse/rest/client/v1/profile.py
--- a/synapse/rest/client/v1/profile.py
+++ b/synapse/rest/client/v1/profile.py
@@ -59,7 +59,9 @@
try:
new_name = content["displayname"]
except Exception:
- return 400, "Unable to parse name"
+ raise SynapseError(
+ code=400, msg="Unable to parse name", errcode=Codes.BAD_JSON,
+ )
await self.profile_handler.set_displayname(user, requester, new_name, is_admin)
|
{"golden_diff": "diff --git a/synapse/rest/client/v1/profile.py b/synapse/rest/client/v1/profile.py\n--- a/synapse/rest/client/v1/profile.py\n+++ b/synapse/rest/client/v1/profile.py\n@@ -59,7 +59,9 @@\n try:\n new_name = content[\"displayname\"]\n except Exception:\n- return 400, \"Unable to parse name\"\n+ raise SynapseError(\n+ code=400, msg=\"Unable to parse name\", errcode=Codes.BAD_JSON,\n+ )\n \n await self.profile_handler.set_displayname(user, requester, new_name, is_admin)\n", "issue": "/displayname returns non-JSON body\n### Description\r\n\r\nAn attempt to call `/profile/{}/displayname` with an empty body results (rightfully) in a 400 error, with plain text \"Unable to parse name\" in the response. The expected error payload is something about (could also be `M_NOT_JSON` instead of `M_BAD_JSON`):\r\n```json\r\n{\r\n \"errcode\": \"M_BAD_JSON\",\r\n \"error\": \"Unable to parse name\" \r\n}\r\n```\r\n\r\n### Version information\r\n\r\n<!-- IMPORTANT: please answer the following questions, to help us narrow down the problem -->\r\n\r\n<!-- Was this issue identified on matrix.org or another homeserver? -->\r\n- **Homeserver**: matrix.org\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2014-2016 OpenMarket Ltd\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\" This module contains REST servlets to do with profile: /profile/<paths> \"\"\"\n\nfrom synapse.api.errors import Codes, SynapseError\nfrom synapse.http.servlet import RestServlet, parse_json_object_from_request\nfrom synapse.rest.client.v2_alpha._base import client_patterns\nfrom synapse.types import UserID\n\n\nclass ProfileDisplaynameRestServlet(RestServlet):\n PATTERNS = client_patterns(\"/profile/(?P<user_id>[^/]*)/displayname\", v1=True)\n\n def __init__(self, hs):\n super().__init__()\n self.hs = hs\n self.profile_handler = hs.get_profile_handler()\n self.auth = hs.get_auth()\n\n async def on_GET(self, request, user_id):\n requester_user = None\n\n if self.hs.config.require_auth_for_profile_requests:\n requester = await self.auth.get_user_by_req(request)\n requester_user = requester.user\n\n user = UserID.from_string(user_id)\n\n await self.profile_handler.check_profile_query_allowed(user, requester_user)\n\n displayname = await self.profile_handler.get_displayname(user)\n\n ret = {}\n if displayname is not None:\n ret[\"displayname\"] = displayname\n\n return 200, ret\n\n async def on_PUT(self, request, user_id):\n requester = await self.auth.get_user_by_req(request, allow_guest=True)\n user = UserID.from_string(user_id)\n is_admin = await self.auth.is_server_admin(requester.user)\n\n content = parse_json_object_from_request(request)\n\n try:\n new_name = content[\"displayname\"]\n except Exception:\n return 400, \"Unable to parse name\"\n\n await self.profile_handler.set_displayname(user, requester, new_name, is_admin)\n\n return 200, {}\n\n def on_OPTIONS(self, request, user_id):\n return 200, {}\n\n\nclass ProfileAvatarURLRestServlet(RestServlet):\n PATTERNS = 
client_patterns(\"/profile/(?P<user_id>[^/]*)/avatar_url\", v1=True)\n\n def __init__(self, hs):\n super().__init__()\n self.hs = hs\n self.profile_handler = hs.get_profile_handler()\n self.auth = hs.get_auth()\n\n async def on_GET(self, request, user_id):\n requester_user = None\n\n if self.hs.config.require_auth_for_profile_requests:\n requester = await self.auth.get_user_by_req(request)\n requester_user = requester.user\n\n user = UserID.from_string(user_id)\n\n await self.profile_handler.check_profile_query_allowed(user, requester_user)\n\n avatar_url = await self.profile_handler.get_avatar_url(user)\n\n ret = {}\n if avatar_url is not None:\n ret[\"avatar_url\"] = avatar_url\n\n return 200, ret\n\n async def on_PUT(self, request, user_id):\n requester = await self.auth.get_user_by_req(request)\n user = UserID.from_string(user_id)\n is_admin = await self.auth.is_server_admin(requester.user)\n\n content = parse_json_object_from_request(request)\n try:\n new_avatar_url = content[\"avatar_url\"]\n except KeyError:\n raise SynapseError(\n 400, \"Missing key 'avatar_url'\", errcode=Codes.MISSING_PARAM\n )\n\n await self.profile_handler.set_avatar_url(\n user, requester, new_avatar_url, is_admin\n )\n\n return 200, {}\n\n def on_OPTIONS(self, request, user_id):\n return 200, {}\n\n\nclass ProfileRestServlet(RestServlet):\n PATTERNS = client_patterns(\"/profile/(?P<user_id>[^/]*)\", v1=True)\n\n def __init__(self, hs):\n super().__init__()\n self.hs = hs\n self.profile_handler = hs.get_profile_handler()\n self.auth = hs.get_auth()\n\n async def on_GET(self, request, user_id):\n requester_user = None\n\n if self.hs.config.require_auth_for_profile_requests:\n requester = await self.auth.get_user_by_req(request)\n requester_user = requester.user\n\n user = UserID.from_string(user_id)\n\n await self.profile_handler.check_profile_query_allowed(user, requester_user)\n\n displayname = await self.profile_handler.get_displayname(user)\n avatar_url = await self.profile_handler.get_avatar_url(user)\n\n ret = {}\n if displayname is not None:\n ret[\"displayname\"] = displayname\n if avatar_url is not None:\n ret[\"avatar_url\"] = avatar_url\n\n return 200, ret\n\n\ndef register_servlets(hs, http_server):\n ProfileDisplaynameRestServlet(hs).register(http_server)\n ProfileAvatarURLRestServlet(hs).register(http_server)\n ProfileRestServlet(hs).register(http_server)\n", "path": "synapse/rest/client/v1/profile.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2014-2016 OpenMarket Ltd\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\" This module contains REST servlets to do with profile: /profile/<paths> \"\"\"\n\nfrom synapse.api.errors import Codes, SynapseError\nfrom synapse.http.servlet import RestServlet, parse_json_object_from_request\nfrom synapse.rest.client.v2_alpha._base import client_patterns\nfrom synapse.types import UserID\n\n\nclass ProfileDisplaynameRestServlet(RestServlet):\n PATTERNS = client_patterns(\"/profile/(?P<user_id>[^/]*)/displayname\", v1=True)\n\n def 
__init__(self, hs):\n super().__init__()\n self.hs = hs\n self.profile_handler = hs.get_profile_handler()\n self.auth = hs.get_auth()\n\n async def on_GET(self, request, user_id):\n requester_user = None\n\n if self.hs.config.require_auth_for_profile_requests:\n requester = await self.auth.get_user_by_req(request)\n requester_user = requester.user\n\n user = UserID.from_string(user_id)\n\n await self.profile_handler.check_profile_query_allowed(user, requester_user)\n\n displayname = await self.profile_handler.get_displayname(user)\n\n ret = {}\n if displayname is not None:\n ret[\"displayname\"] = displayname\n\n return 200, ret\n\n async def on_PUT(self, request, user_id):\n requester = await self.auth.get_user_by_req(request, allow_guest=True)\n user = UserID.from_string(user_id)\n is_admin = await self.auth.is_server_admin(requester.user)\n\n content = parse_json_object_from_request(request)\n\n try:\n new_name = content[\"displayname\"]\n except Exception:\n raise SynapseError(\n code=400, msg=\"Unable to parse name\", errcode=Codes.BAD_JSON,\n )\n\n await self.profile_handler.set_displayname(user, requester, new_name, is_admin)\n\n return 200, {}\n\n def on_OPTIONS(self, request, user_id):\n return 200, {}\n\n\nclass ProfileAvatarURLRestServlet(RestServlet):\n PATTERNS = client_patterns(\"/profile/(?P<user_id>[^/]*)/avatar_url\", v1=True)\n\n def __init__(self, hs):\n super().__init__()\n self.hs = hs\n self.profile_handler = hs.get_profile_handler()\n self.auth = hs.get_auth()\n\n async def on_GET(self, request, user_id):\n requester_user = None\n\n if self.hs.config.require_auth_for_profile_requests:\n requester = await self.auth.get_user_by_req(request)\n requester_user = requester.user\n\n user = UserID.from_string(user_id)\n\n await self.profile_handler.check_profile_query_allowed(user, requester_user)\n\n avatar_url = await self.profile_handler.get_avatar_url(user)\n\n ret = {}\n if avatar_url is not None:\n ret[\"avatar_url\"] = avatar_url\n\n return 200, ret\n\n async def on_PUT(self, request, user_id):\n requester = await self.auth.get_user_by_req(request)\n user = UserID.from_string(user_id)\n is_admin = await self.auth.is_server_admin(requester.user)\n\n content = parse_json_object_from_request(request)\n try:\n new_avatar_url = content[\"avatar_url\"]\n except KeyError:\n raise SynapseError(\n 400, \"Missing key 'avatar_url'\", errcode=Codes.MISSING_PARAM\n )\n\n await self.profile_handler.set_avatar_url(\n user, requester, new_avatar_url, is_admin\n )\n\n return 200, {}\n\n def on_OPTIONS(self, request, user_id):\n return 200, {}\n\n\nclass ProfileRestServlet(RestServlet):\n PATTERNS = client_patterns(\"/profile/(?P<user_id>[^/]*)\", v1=True)\n\n def __init__(self, hs):\n super().__init__()\n self.hs = hs\n self.profile_handler = hs.get_profile_handler()\n self.auth = hs.get_auth()\n\n async def on_GET(self, request, user_id):\n requester_user = None\n\n if self.hs.config.require_auth_for_profile_requests:\n requester = await self.auth.get_user_by_req(request)\n requester_user = requester.user\n\n user = UserID.from_string(user_id)\n\n await self.profile_handler.check_profile_query_allowed(user, requester_user)\n\n displayname = await self.profile_handler.get_displayname(user)\n avatar_url = await self.profile_handler.get_avatar_url(user)\n\n ret = {}\n if displayname is not None:\n ret[\"displayname\"] = displayname\n if avatar_url is not None:\n ret[\"avatar_url\"] = avatar_url\n\n return 200, ret\n\n\ndef register_servlets(hs, http_server):\n 
ProfileDisplaynameRestServlet(hs).register(http_server)\n ProfileAvatarURLRestServlet(hs).register(http_server)\n ProfileRestServlet(hs).register(http_server)\n", "path": "synapse/rest/client/v1/profile.py"}]}
| 1,963 | 141 |
gh_patches_debug_23251
|
rasdani/github-patches
|
git_diff
|
cloud-custodian__cloud-custodian-4472
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add delete action for AWS Cloudtrail resource
I don't see a delete action on aws.cloudtrail resource:
```
$ custodian schema aws.cloudtrail.actions
aws.cloudtrail:
actions:
- invoke-lambda
- notify
- post-finding
- put-metric
- set-logging
- update-trail
```
Looking at the code it doesn't look too bad to add a delete action, should be just add something like below to `c7n/resources/cloudtrail.py`:
```python
@CloudTrail.action_registry.register('delete-trail')
class UpdateTrail(Action):
"""Delete trail.
:Example:
.. code-block:: yaml
policies:
- name: cloudtrail-delete
resource: aws.cloudtrail
filters:
- type: value
key: Name
value: delete-me
op: eq
actions:
- type: delete
"""
schema = type_schema(
'delete-trail',
attributes={'type': 'object'},
required=('attributes',))
shape = 'DeleteTrailRequest'
permissions = ('cloudtrail:DeleteTrail',)
def validate(self):
# do the validate
def process(self, resources):
# do the delete
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `c7n/resources/cloudtrail.py`
Content:
```
1 # Copyright 2017-2019 Capital One Services, LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from __future__ import absolute_import, division, print_function, unicode_literals
15
16 import logging
17
18 from c7n.actions import Action
19 from c7n.exceptions import PolicyValidationError
20 from c7n.filters import ValueFilter, Filter
21 from c7n.manager import resources
22 from c7n.query import QueryResourceManager, TypeInfo
23 from c7n.utils import local_session, type_schema
24
25 from .aws import shape_validate, Arn
26
27 log = logging.getLogger('c7n.resources.cloudtrail')
28
29
30 @resources.register('cloudtrail')
31 class CloudTrail(QueryResourceManager):
32
33 class resource_type(TypeInfo):
34 service = 'cloudtrail'
35 enum_spec = ('describe_trails', 'trailList', None)
36 filter_name = 'trailNameList'
37 filter_type = 'list'
38 arn = id = 'TrailARN'
39 name = 'Name'
40 config_type = "AWS::CloudTrail::Trail"
41
42
43 @CloudTrail.filter_registry.register('is-shadow')
44 class IsShadow(Filter):
45 """Identify shadow trails (secondary copies), shadow trails
46 can't be modified directly, the origin trail needs to be modified.
47
48 Shadow trails are created for multi-region trails as well for
49 organizational trails.
50 """
51 schema = type_schema('is-shadow', state={'type': 'boolean'})
52 permissions = ('cloudtrail:DescribeTrails',)
53 embedded = False
54
55 def process(self, resources, event=None):
56 rcount = len(resources)
57 trails = [t for t in resources if (self.is_shadow(t) == self.data.get('state', True))]
58 if len(trails) != rcount and self.embedded:
59 self.log.info("implicitly filtering shadow trails %d -> %d",
60 rcount, len(trails))
61 return trails
62
63 def is_shadow(self, t):
64 if t.get('IsOrganizationTrail') and self.manager.config.account_id not in t['TrailARN']:
65 return True
66 if t.get('IsMultiRegionTrail') and t['HomeRegion'] != self.manager.config.region:
67 return True
68 return False
69
70
71 @CloudTrail.filter_registry.register('status')
72 class Status(ValueFilter):
73 """Filter a cloudtrail by its status.
74
75 :Example:
76
77 .. code-block:: yaml
78
79 policies:
80 - name: cloudtrail-check-status
81 resource: aws.cloudtrail
82 filters:
83 - type: status
84 key: IsLogging
85 value: False
86 """
87
88 schema = type_schema('status', rinherit=ValueFilter.schema)
89 schema_alias = False
90 permissions = ('cloudtrail:GetTrailStatus',)
91 annotation_key = 'c7n:TrailStatus'
92
93 def process(self, resources, event=None):
94 for r in resources:
95 region = self.manager.config.region
96 trail_arn = Arn.parse(r['TrailARN'])
97
98 if (r.get('IsOrganizationTrail') and
99 self.manager.config.account_id != trail_arn.account_id):
100 continue
101 if r.get('HomeRegion') and r['HomeRegion'] != region:
102 region = trail_arn.region
103 if self.annotation_key in r:
104 continue
105 client = local_session(self.manager.session_factory).client(
106 'cloudtrail', region_name=region)
107 status = client.get_trail_status(Name=r['Name'])
108 status.pop('ResponseMetadata')
109 r[self.annotation_key] = status
110
111 return super(Status, self).process(resources)
112
113 def __call__(self, r):
114 return self.match(r['c7n:TrailStatus'])
115
116
117 @CloudTrail.action_registry.register('update-trail')
118 class UpdateTrail(Action):
119 """Update trail attributes.
120
121 :Example:
122
123 .. code-block:: yaml
124
125 policies:
126 - name: cloudtrail-set-log
127 resource: aws.cloudtrail
128 filters:
129 - or:
130 - KmsKeyId: empty
131 - LogFileValidationEnabled: false
132 actions:
133 - type: update-trail
134 attributes:
135 KmsKeyId: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef
136 EnableLogFileValidation: true
137 """
138 schema = type_schema(
139 'update-trail',
140 attributes={'type': 'object'},
141 required=('attributes',))
142 shape = 'UpdateTrailRequest'
143 permissions = ('cloudtrail:UpdateTrail',)
144
145 def validate(self):
146 attrs = dict(self.data['attributes'])
147 if 'Name' in attrs:
148 raise PolicyValidationError(
149 "Can't include Name in update-trail action")
150 attrs['Name'] = 'PolicyValidation'
151 return shape_validate(
152 attrs,
153 self.shape,
154 self.manager.resource_type.service)
155
156 def process(self, resources):
157 client = local_session(self.manager.session_factory).client('cloudtrail')
158 shadow_check = IsShadow({'state': False}, self.manager)
159 shadow_check.embedded = True
160 resources = shadow_check.process(resources)
161
162 for r in resources:
163 client.update_trail(
164 Name=r['Name'],
165 **self.data['attributes'])
166
167
168 @CloudTrail.action_registry.register('set-logging')
169 class SetLogging(Action):
170 """Set the logging state of a trail
171
172 :Example:
173
174 .. code-block:: yaml
175
176 policies:
177 - name: cloudtrail-set-active
178 resource: aws.cloudtrail
179 filters:
180 - type: status
181 key: IsLogging
182 value: False
183 actions:
184 - type: set-logging
185 enabled: True
186 """
187 schema = type_schema(
188 'set-logging', enabled={'type': 'boolean'})
189
190 def get_permissions(self):
191 enable = self.data.get('enabled', True)
192 if enable is True:
193 return ('cloudtrail:StartLogging',)
194 else:
195 return ('cloudtrail:StopLogging',)
196
197 def process(self, resources):
198 client = local_session(self.manager.session_factory).client('cloudtrail')
199 shadow_check = IsShadow({'state': False}, self.manager)
200 shadow_check.embedded = True
201 resources = shadow_check.process(resources)
202 enable = self.data.get('enabled', True)
203
204 for r in resources:
205 if enable:
206 client.start_logging(Name=r['Name'])
207 else:
208 client.stop_logging(Name=r['Name'])
209
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/c7n/resources/cloudtrail.py b/c7n/resources/cloudtrail.py
--- a/c7n/resources/cloudtrail.py
+++ b/c7n/resources/cloudtrail.py
@@ -15,7 +15,7 @@
import logging
-from c7n.actions import Action
+from c7n.actions import Action, BaseAction
from c7n.exceptions import PolicyValidationError
from c7n.filters import ValueFilter, Filter
from c7n.manager import resources
@@ -206,3 +206,35 @@
client.start_logging(Name=r['Name'])
else:
client.stop_logging(Name=r['Name'])
+
+
[email protected]_registry.register('delete')
+class DeleteTrail(BaseAction):
+ """ Delete a cloud trail
+
+ :example:
+
+ .. code-block:: yaml
+
+ policies:
+ - name: delete-cloudtrail
+ resource: aws.cloudtrail
+ filters:
+ - type: value
+ key: Name
+ value: delete-me
+ op: eq
+ actions:
+ - type: delete
+ """
+
+ schema = type_schema('delete')
+ permissions = ('cloudtrail:DeleteTrail',)
+
+ def process(self, resources):
+ client = local_session(self.manager.session_factory).client('cloudtrail')
+ for r in resources:
+ try:
+ client.delete_trail(Name=r['Name'])
+ except client.exceptions.TrailNotFoundException:
+ continue
|
{"golden_diff": "diff --git a/c7n/resources/cloudtrail.py b/c7n/resources/cloudtrail.py\n--- a/c7n/resources/cloudtrail.py\n+++ b/c7n/resources/cloudtrail.py\n@@ -15,7 +15,7 @@\n \n import logging\n \n-from c7n.actions import Action\n+from c7n.actions import Action, BaseAction\n from c7n.exceptions import PolicyValidationError\n from c7n.filters import ValueFilter, Filter\n from c7n.manager import resources\n@@ -206,3 +206,35 @@\n client.start_logging(Name=r['Name'])\n else:\n client.stop_logging(Name=r['Name'])\n+\n+\[email protected]_registry.register('delete')\n+class DeleteTrail(BaseAction):\n+ \"\"\" Delete a cloud trail\n+\n+ :example:\n+\n+ .. code-block:: yaml\n+\n+ policies:\n+ - name: delete-cloudtrail\n+ resource: aws.cloudtrail\n+ filters:\n+ - type: value\n+ key: Name\n+ value: delete-me\n+ op: eq\n+ actions:\n+ - type: delete\n+ \"\"\"\n+\n+ schema = type_schema('delete')\n+ permissions = ('cloudtrail:DeleteTrail',)\n+\n+ def process(self, resources):\n+ client = local_session(self.manager.session_factory).client('cloudtrail')\n+ for r in resources:\n+ try:\n+ client.delete_trail(Name=r['Name'])\n+ except client.exceptions.TrailNotFoundException:\n+ continue\n", "issue": "Add delete action for AWS Cloudtrail resource\nI dont see a delete action on aws.cloudtrail resource:\r\n\r\n```\r\n$ custodian schema aws.cloudtrail.actions\r\naws.cloudtrail:\r\n actions:\r\n - invoke-lambda\r\n - notify\r\n - post-finding\r\n - put-metric\r\n - set-logging\r\n - update-trail\r\n```\r\n\r\nLooking at the code it doesnt look too bad to add a delete action, should be just add something like below to `c7n/resources/cloudtrail.py`:\r\n\r\n```python\r\[email protected]_registry.register('delete-trail')\r\nclass UpdateTrail(Action):\r\n \"\"\"Delete trail.\r\n :Example:\r\n .. 
code-block:: yaml\r\n policies:\r\n - name: cloudtrail-delete\r\n resource: aws.cloudtrail\r\n filters:\r\n - type: value\r\n key: Name\r\n value: delete-me\r\n op: eq\r\n actions:\r\n - type: delete\r\n \"\"\"\r\n schema = type_schema(\r\n 'delete-trail',\r\n attributes={'type': 'object'},\r\n required=('attributes',))\r\n shape = 'DeleteTrailRequest'\r\n permissions = ('cloudtrail:DeleteTrail',)\r\n\r\n def validate(self):\r\n # do the validate\r\n\r\n def process(self, resources):\r\n # do the delete\r\n```\n", "before_files": [{"content": "# Copyright 2017-2019 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport logging\n\nfrom c7n.actions import Action\nfrom c7n.exceptions import PolicyValidationError\nfrom c7n.filters import ValueFilter, Filter\nfrom c7n.manager import resources\nfrom c7n.query import QueryResourceManager, TypeInfo\nfrom c7n.utils import local_session, type_schema\n\nfrom .aws import shape_validate, Arn\n\nlog = logging.getLogger('c7n.resources.cloudtrail')\n\n\[email protected]('cloudtrail')\nclass CloudTrail(QueryResourceManager):\n\n class resource_type(TypeInfo):\n service = 'cloudtrail'\n enum_spec = ('describe_trails', 'trailList', None)\n filter_name = 'trailNameList'\n filter_type = 'list'\n arn = id = 'TrailARN'\n name = 'Name'\n config_type = \"AWS::CloudTrail::Trail\"\n\n\[email protected]_registry.register('is-shadow')\nclass IsShadow(Filter):\n \"\"\"Identify shadow trails (secondary copies), shadow trails\n can't be modified directly, the origin trail needs to be modified.\n\n Shadow trails are created for multi-region trails as well for\n organizational trails.\n \"\"\"\n schema = type_schema('is-shadow', state={'type': 'boolean'})\n permissions = ('cloudtrail:DescribeTrails',)\n embedded = False\n\n def process(self, resources, event=None):\n rcount = len(resources)\n trails = [t for t in resources if (self.is_shadow(t) == self.data.get('state', True))]\n if len(trails) != rcount and self.embedded:\n self.log.info(\"implicitly filtering shadow trails %d -> %d\",\n rcount, len(trails))\n return trails\n\n def is_shadow(self, t):\n if t.get('IsOrganizationTrail') and self.manager.config.account_id not in t['TrailARN']:\n return True\n if t.get('IsMultiRegionTrail') and t['HomeRegion'] != self.manager.config.region:\n return True\n return False\n\n\[email protected]_registry.register('status')\nclass Status(ValueFilter):\n \"\"\"Filter a cloudtrail by its status.\n\n :Example:\n\n .. 
code-block:: yaml\n\n policies:\n - name: cloudtrail-check-status\n resource: aws.cloudtrail\n filters:\n - type: status\n key: IsLogging\n value: False\n \"\"\"\n\n schema = type_schema('status', rinherit=ValueFilter.schema)\n schema_alias = False\n permissions = ('cloudtrail:GetTrailStatus',)\n annotation_key = 'c7n:TrailStatus'\n\n def process(self, resources, event=None):\n for r in resources:\n region = self.manager.config.region\n trail_arn = Arn.parse(r['TrailARN'])\n\n if (r.get('IsOrganizationTrail') and\n self.manager.config.account_id != trail_arn.account_id):\n continue\n if r.get('HomeRegion') and r['HomeRegion'] != region:\n region = trail_arn.region\n if self.annotation_key in r:\n continue\n client = local_session(self.manager.session_factory).client(\n 'cloudtrail', region_name=region)\n status = client.get_trail_status(Name=r['Name'])\n status.pop('ResponseMetadata')\n r[self.annotation_key] = status\n\n return super(Status, self).process(resources)\n\n def __call__(self, r):\n return self.match(r['c7n:TrailStatus'])\n\n\[email protected]_registry.register('update-trail')\nclass UpdateTrail(Action):\n \"\"\"Update trail attributes.\n\n :Example:\n\n .. code-block:: yaml\n\n policies:\n - name: cloudtrail-set-log\n resource: aws.cloudtrail\n filters:\n - or:\n - KmsKeyId: empty\n - LogFileValidationEnabled: false\n actions:\n - type: update-trail\n attributes:\n KmsKeyId: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef\n EnableLogFileValidation: true\n \"\"\"\n schema = type_schema(\n 'update-trail',\n attributes={'type': 'object'},\n required=('attributes',))\n shape = 'UpdateTrailRequest'\n permissions = ('cloudtrail:UpdateTrail',)\n\n def validate(self):\n attrs = dict(self.data['attributes'])\n if 'Name' in attrs:\n raise PolicyValidationError(\n \"Can't include Name in update-trail action\")\n attrs['Name'] = 'PolicyValidation'\n return shape_validate(\n attrs,\n self.shape,\n self.manager.resource_type.service)\n\n def process(self, resources):\n client = local_session(self.manager.session_factory).client('cloudtrail')\n shadow_check = IsShadow({'state': False}, self.manager)\n shadow_check.embedded = True\n resources = shadow_check.process(resources)\n\n for r in resources:\n client.update_trail(\n Name=r['Name'],\n **self.data['attributes'])\n\n\[email protected]_registry.register('set-logging')\nclass SetLogging(Action):\n \"\"\"Set the logging state of a trail\n\n :Example:\n\n .. 
code-block:: yaml\n\n policies:\n - name: cloudtrail-set-active\n resource: aws.cloudtrail\n filters:\n - type: status\n key: IsLogging\n value: False\n actions:\n - type: set-logging\n enabled: True\n \"\"\"\n schema = type_schema(\n 'set-logging', enabled={'type': 'boolean'})\n\n def get_permissions(self):\n enable = self.data.get('enabled', True)\n if enable is True:\n return ('cloudtrail:StartLogging',)\n else:\n return ('cloudtrail:StopLogging',)\n\n def process(self, resources):\n client = local_session(self.manager.session_factory).client('cloudtrail')\n shadow_check = IsShadow({'state': False}, self.manager)\n shadow_check.embedded = True\n resources = shadow_check.process(resources)\n enable = self.data.get('enabled', True)\n\n for r in resources:\n if enable:\n client.start_logging(Name=r['Name'])\n else:\n client.stop_logging(Name=r['Name'])\n", "path": "c7n/resources/cloudtrail.py"}], "after_files": [{"content": "# Copyright 2017-2019 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport logging\n\nfrom c7n.actions import Action, BaseAction\nfrom c7n.exceptions import PolicyValidationError\nfrom c7n.filters import ValueFilter, Filter\nfrom c7n.manager import resources\nfrom c7n.query import QueryResourceManager, TypeInfo\nfrom c7n.utils import local_session, type_schema\n\nfrom .aws import shape_validate, Arn\n\nlog = logging.getLogger('c7n.resources.cloudtrail')\n\n\[email protected]('cloudtrail')\nclass CloudTrail(QueryResourceManager):\n\n class resource_type(TypeInfo):\n service = 'cloudtrail'\n enum_spec = ('describe_trails', 'trailList', None)\n filter_name = 'trailNameList'\n filter_type = 'list'\n arn = id = 'TrailARN'\n name = 'Name'\n config_type = \"AWS::CloudTrail::Trail\"\n\n\[email protected]_registry.register('is-shadow')\nclass IsShadow(Filter):\n \"\"\"Identify shadow trails (secondary copies), shadow trails\n can't be modified directly, the origin trail needs to be modified.\n\n Shadow trails are created for multi-region trails as well for\n organizational trails.\n \"\"\"\n schema = type_schema('is-shadow', state={'type': 'boolean'})\n permissions = ('cloudtrail:DescribeTrails',)\n embedded = False\n\n def process(self, resources, event=None):\n rcount = len(resources)\n trails = [t for t in resources if (self.is_shadow(t) == self.data.get('state', True))]\n if len(trails) != rcount and self.embedded:\n self.log.info(\"implicitly filtering shadow trails %d -> %d\",\n rcount, len(trails))\n return trails\n\n def is_shadow(self, t):\n if t.get('IsOrganizationTrail') and self.manager.config.account_id not in t['TrailARN']:\n return True\n if t.get('IsMultiRegionTrail') and t['HomeRegion'] != self.manager.config.region:\n return True\n return False\n\n\[email protected]_registry.register('status')\nclass Status(ValueFilter):\n \"\"\"Filter a cloudtrail by its status.\n\n :Example:\n\n .. 
code-block:: yaml\n\n policies:\n - name: cloudtrail-check-status\n resource: aws.cloudtrail\n filters:\n - type: status\n key: IsLogging\n value: False\n \"\"\"\n\n schema = type_schema('status', rinherit=ValueFilter.schema)\n schema_alias = False\n permissions = ('cloudtrail:GetTrailStatus',)\n annotation_key = 'c7n:TrailStatus'\n\n def process(self, resources, event=None):\n for r in resources:\n region = self.manager.config.region\n trail_arn = Arn.parse(r['TrailARN'])\n\n if (r.get('IsOrganizationTrail') and\n self.manager.config.account_id != trail_arn.account_id):\n continue\n if r.get('HomeRegion') and r['HomeRegion'] != region:\n region = trail_arn.region\n if self.annotation_key in r:\n continue\n client = local_session(self.manager.session_factory).client(\n 'cloudtrail', region_name=region)\n status = client.get_trail_status(Name=r['Name'])\n status.pop('ResponseMetadata')\n r[self.annotation_key] = status\n\n return super(Status, self).process(resources)\n\n def __call__(self, r):\n return self.match(r['c7n:TrailStatus'])\n\n\[email protected]_registry.register('update-trail')\nclass UpdateTrail(Action):\n \"\"\"Update trail attributes.\n\n :Example:\n\n .. code-block:: yaml\n\n policies:\n - name: cloudtrail-set-log\n resource: aws.cloudtrail\n filters:\n - or:\n - KmsKeyId: empty\n - LogFileValidationEnabled: false\n actions:\n - type: update-trail\n attributes:\n KmsKeyId: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef\n EnableLogFileValidation: true\n \"\"\"\n schema = type_schema(\n 'update-trail',\n attributes={'type': 'object'},\n required=('attributes',))\n shape = 'UpdateTrailRequest'\n permissions = ('cloudtrail:UpdateTrail',)\n\n def validate(self):\n attrs = dict(self.data['attributes'])\n if 'Name' in attrs:\n raise PolicyValidationError(\n \"Can't include Name in update-trail action\")\n attrs['Name'] = 'PolicyValidation'\n return shape_validate(\n attrs,\n self.shape,\n self.manager.resource_type.service)\n\n def process(self, resources):\n client = local_session(self.manager.session_factory).client('cloudtrail')\n shadow_check = IsShadow({'state': False}, self.manager)\n shadow_check.embedded = True\n resources = shadow_check.process(resources)\n\n for r in resources:\n client.update_trail(\n Name=r['Name'],\n **self.data['attributes'])\n\n\[email protected]_registry.register('set-logging')\nclass SetLogging(Action):\n \"\"\"Set the logging state of a trail\n\n :Example:\n\n .. code-block:: yaml\n\n policies:\n - name: cloudtrail-set-active\n resource: aws.cloudtrail\n filters:\n - type: status\n key: IsLogging\n value: False\n actions:\n - type: set-logging\n enabled: True\n \"\"\"\n schema = type_schema(\n 'set-logging', enabled={'type': 'boolean'})\n\n def get_permissions(self):\n enable = self.data.get('enabled', True)\n if enable is True:\n return ('cloudtrail:StartLogging',)\n else:\n return ('cloudtrail:StopLogging',)\n\n def process(self, resources):\n client = local_session(self.manager.session_factory).client('cloudtrail')\n shadow_check = IsShadow({'state': False}, self.manager)\n shadow_check.embedded = True\n resources = shadow_check.process(resources)\n enable = self.data.get('enabled', True)\n\n for r in resources:\n if enable:\n client.start_logging(Name=r['Name'])\n else:\n client.stop_logging(Name=r['Name'])\n\n\[email protected]_registry.register('delete')\nclass DeleteTrail(BaseAction):\n \"\"\" Delete a cloud trail\n\n :example:\n\n .. 
code-block:: yaml\n\n policies:\n - name: delete-cloudtrail\n resource: aws.cloudtrail\n filters:\n - type: value\n key: Name\n value: delete-me\n op: eq\n actions:\n - type: delete\n \"\"\"\n\n schema = type_schema('delete')\n permissions = ('cloudtrail:DeleteTrail',)\n\n def process(self, resources):\n client = local_session(self.manager.session_factory).client('cloudtrail')\n for r in resources:\n try:\n client.delete_trail(Name=r['Name'])\n except client.exceptions.TrailNotFoundException:\n continue\n", "path": "c7n/resources/cloudtrail.py"}]}
| 2,538 | 329 |
gh_patches_debug_9501
|
rasdani/github-patches
|
git_diff
|
sopel-irc__sopel-2522
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Deprecation warnings should be printed for dev versions
### Description
`sopel.logger.get_logger` is [marked](https://github.com/sopel-irc/sopel/blob/eac71b07d60d3bea68fe3f1242e88654594a7df5/sopel/logger.py#L164) as [deprecated](https://sopel.chat/docs/api.html#sopel.logger.get_logger), but no deprecation warning is raised when it's used.
### Reproduction steps
1. Create a Sopel plugin with `from sopel.logger import get_logger; LOGGER=get_logger(__name__)`
2. Start Sopel with that plugin
3. No deprecation warnings are produced
### Expected behavior
A deprecation warning is produced
### Relevant logs
```shell
Tested by replacing the logger in admin.py.
[2022-06-11 13:00:50,089] sopel.bot INFO - Plugin loaded: ping
[2022-06-11 13:00:50,090] sopel.plugins.rules DEBUG - Command registered: <Command admin.join []>
[2022-06-11 13:00:50,090] sopel.plugins.rules DEBUG - Command registered: <Command admin.tmpjoin []>
[2022-06-11 13:00:50,091] sopel.plugins.rules DEBUG - Command registered: <Command admin.part []>
[2022-06-11 13:00:50,091] sopel.plugins.rules DEBUG - Command registered: <Command admin.tmppart []>
[2022-06-11 13:00:50,092] sopel.plugins.rules DEBUG - Command registered: <Command admin.chanlist [channels]>
[2022-06-11 13:00:50,092] sopel.plugins.rules DEBUG - Command registered: <Command admin.restart []>
[2022-06-11 13:00:50,093] sopel.plugins.rules DEBUG - Command registered: <Command admin.quit []>
[2022-06-11 13:00:50,093] sopel.plugins.rules DEBUG - Command registered: <Command admin.raw []>
[2022-06-11 13:00:50,094] sopel.plugins.rules DEBUG - Command registered: <Command admin.say [msg]>
[2022-06-11 13:00:50,094] sopel.plugins.rules DEBUG - Command registered: <Command admin.me []>
[2022-06-11 13:00:50,094] sopel.plugins.rules DEBUG - Rule registered: <Rule admin.invite_join (1)>
[2022-06-11 13:00:50,094] sopel.plugins.rules DEBUG - Rule registered: <Rule admin.hold_ground (1)>
[2022-06-11 13:00:50,095] sopel.plugins.rules DEBUG - Command registered: <Command admin.mode []>
[2022-06-11 13:00:50,095] sopel.plugins.rules DEBUG - Command registered: <Command admin.set []>
[2022-06-11 13:00:50,096] sopel.plugins.rules DEBUG - Command registered: <Command admin.unset []>
[2022-06-11 13:00:50,096] sopel.plugins.rules DEBUG - Command registered: <Command admin.save []>
[2022-06-11 13:00:50,096] sopel.bot INFO - Plugin loaded: admin
```
### Notes
_No response_
### Sopel version
eac71b07d60d3bea68fe3f1242e88654594a7df5
### Installation method
`pip install`
### Python version
3.10.4
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sopel/lifecycle.py`
Content:
```
1 """Deprecation module for Sopel developers and plugin authors.
2
3 .. versionadded:: 8.0
4
5 Previously in :mod:`sopel.tools`, the :func:`deprecated` function has been
6 moved to this newly created module, as it can be used in every part of the
7 Sopel codebase, including :mod:`sopel.tools` itself.
8
9 """
10 from __future__ import annotations
11
12 import functools
13 import inspect
14 import logging
15 import traceback
16 from typing import Callable, Optional
17
18 from packaging.version import parse as parse_version
19
20 from sopel import __version__
21
22
23 def deprecated(
24 reason: Optional[str] = None,
25 version: Optional[str] = None,
26 removed_in: Optional[str] = None,
27 warning_in: Optional[str] = None,
28 stack_frame: int = -1,
29 func: Optional[Callable] = None,
30 ):
31 """Decorator to mark deprecated functions in Sopel's API
32
33 :param reason: optional text added to the deprecation warning
34 :param version: optional version number when the decorated function
35 is deprecated
36 :param removed_in: optional version number when the deprecated function
37 will be removed
38 :param warning_in: optional version number when the decorated function
39 should start emitting a warning when called
40 :param stack_frame: optional stack frame to output; defaults to
41 ``-1``; should almost always be negative
42 :param func: deprecated function
43 :return: a callable that depends on how the decorator is called; either
44 the decorated function, or a decorator with the appropriate
45 parameters
46
47 Any time the decorated ``func`` is called, a deprecation warning will be
48 logged, with the last frame of the traceback. The optional ``warning_in``
49 argument suppresses the warning on Sopel versions older than that, allowing
50 for multi-stage deprecation timelines.
51
52 The decorator can be used with or without arguments::
53
54 from sopel.lifecycle import deprecated
55
56 @deprecated
57 def func1():
58 print('func 1')
59
60 @deprecated()
61 def func2():
62 print('func 2')
63
64 @deprecated(reason='obsolete', version='7.0', removed_in='8.0')
65 def func3():
66 print('func 3')
67
68 which will output the following in a console::
69
70 >>> func1()
71 Deprecated: func1
72 File "<stdin>", line 1, in <module>
73 func 1
74 >>> func2()
75 Deprecated: func2
76 File "<stdin>", line 1, in <module>
77 func 2
78 >>> func3()
79 Deprecated since 7.0, will be removed in 8.0: obsolete
80 File "<stdin>", line 1, in <module>
81 func 3
82
83 The ``stack_frame`` argument can be used to choose which stack frame is
84 logged along with the message text. By default, this decorator logs the
85 most recent stack frame (the last entry in the list, ``-1``), corresponding
86 to where the decorated function itself was called. However, in certain
87 cases such as deprecating conditional behavior within an object
88 constructor, it can be useful to show a less recent stack frame instead.
89
90 .. note::
91
92 This decorator can be also used on callables that are not functions,
93 such as classes and callable objects.
94
95 .. versionadded:: 7.0
96 Parameters ``reason``, ``version``, and ``removed_in``.
97
98 .. versionadded:: 7.1
99 The ``warning_in`` and ``stack_frame`` parameters.
100
101 .. versionchanged:: 8.0
102 Moved out of :mod:`sopel.tools` to resolve circular dependency issues.
103
104 """
105 if not any([reason, version, removed_in, warning_in, func]):
106 # common usage: @deprecated()
107 return deprecated
108
109 if callable(reason):
110 # common usage: @deprecated
111 return deprecated(func=reason)
112
113 if func is None:
114 # common usage: @deprecated(message, version, removed_in)
115 def decorator(func):
116 return deprecated(
117 reason, version, removed_in, warning_in, stack_frame, func)
118 return decorator
119
120 # now, we have everything we need to have:
121 # - message is not a callable (could be None)
122 # - func is not None
123 # - version and removed_in can be None but that's OK
124 # so now we can return the actual decorated function
125
126 message = reason or getattr(func, '__name__', '<anonymous-function>')
127
128 template = 'Deprecated: {message}'
129 if version and removed_in:
130 template = (
131 'Deprecated since {version}, '
132 'will be removed in {removed_in}: '
133 '{message}')
134 elif version:
135 template = 'Deprecated since {version}: {message}'
136 elif removed_in:
137 template = 'Deprecated, will be removed in {removed_in}: {message}'
138
139 text = template.format(
140 message=message, version=version, removed_in=removed_in)
141
142 @functools.wraps(func)
143 def deprecated_func(*args, **kwargs):
144 if not (warning_in and
145 parse_version(warning_in) >= parse_version(__version__)):
146 original_frame = inspect.stack()[-stack_frame]
147 mod = inspect.getmodule(original_frame[0])
148 module_name = None
149 if mod:
150 module_name = mod.__name__
151 if module_name:
152 if module_name.startswith('sopel.'):
153 # core, or core plugin
154 logger = logging.getLogger(module_name)
155 else:
156 # probably a plugin; try to handle most cases sanely
157 if module_name.startswith('sopel_modules.'):
158 # namespace package plugins have a prefix, obviously
159 # they will use Sopel's namespace; other won't
160 module_name = module_name.replace(
161 'sopel_modules.',
162 'sopel.externals.',
163 1,
164 )
165 logger = logging.getLogger(module_name)
166 else:
167 # don't know the module/plugin name, but we want to make sure
168 # the log line is still output, so just get *something*
169 logger = logging.getLogger(__name__)
170
171 # Format only the desired stack frame
172 trace = traceback.extract_stack()
173 trace_frame = traceback.format_list(trace[:-1])[stack_frame][:-1]
174
175 # Warn the user
176 logger.warning(text + "\n" + trace_frame)
177
178 return func(*args, **kwargs)
179
180 return deprecated_func
181
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sopel/lifecycle.py b/sopel/lifecycle.py
--- a/sopel/lifecycle.py
+++ b/sopel/lifecycle.py
@@ -141,8 +141,10 @@
@functools.wraps(func)
def deprecated_func(*args, **kwargs):
- if not (warning_in and
- parse_version(warning_in) >= parse_version(__version__)):
+ warn_ver = warning_in and parse_version(warning_in).release
+ this_ver = parse_version(__version__).release
+
+ if not (warn_ver and warn_ver >= this_ver):
original_frame = inspect.stack()[-stack_frame]
mod = inspect.getmodule(original_frame[0])
module_name = None
|
{"golden_diff": "diff --git a/sopel/lifecycle.py b/sopel/lifecycle.py\n--- a/sopel/lifecycle.py\n+++ b/sopel/lifecycle.py\n@@ -141,8 +141,10 @@\n \n @functools.wraps(func)\n def deprecated_func(*args, **kwargs):\n- if not (warning_in and\n- parse_version(warning_in) >= parse_version(__version__)):\n+ warn_ver = warning_in and parse_version(warning_in).release\n+ this_ver = parse_version(__version__).release\n+\n+ if not (warn_ver and warn_ver >= this_ver):\n original_frame = inspect.stack()[-stack_frame]\n mod = inspect.getmodule(original_frame[0])\n module_name = None\n", "issue": "Deprecation warnings should be printed for dev versions\n### Description\r\n\r\n`sopel.logger.get_logger` is [marked](https://github.com/sopel-irc/sopel/blob/eac71b07d60d3bea68fe3f1242e88654594a7df5/sopel/logger.py#L164) as [deprecated](https://sopel.chat/docs/api.html#sopel.logger.get_logger), but no deprecation warning is raised when it's used.\r\n\r\n### Reproduction steps\r\n\r\n1. Create a Sopel plugin with `from sopel.logger import get_logger; LOGGER=get_logger(__name__)`\r\n2. Start Sopel with that plugin\r\n3. No deprecation warnings are produced\r\n\r\n### Expected behavior\r\n\r\nA deprecation warning is produced\r\n\r\n### Relevant logs\r\n\r\n```shell\r\nTested by replacing the logger in admin.py.\r\n\r\n[2022-06-11 13:00:50,089] sopel.bot INFO - Plugin loaded: ping\r\n[2022-06-11 13:00:50,090] sopel.plugins.rules DEBUG - Command registered: <Command admin.join []>\r\n[2022-06-11 13:00:50,090] sopel.plugins.rules DEBUG - Command registered: <Command admin.tmpjoin []>\r\n[2022-06-11 13:00:50,091] sopel.plugins.rules DEBUG - Command registered: <Command admin.part []>\r\n[2022-06-11 13:00:50,091] sopel.plugins.rules DEBUG - Command registered: <Command admin.tmppart []>\r\n[2022-06-11 13:00:50,092] sopel.plugins.rules DEBUG - Command registered: <Command admin.chanlist [channels]>\r\n[2022-06-11 13:00:50,092] sopel.plugins.rules DEBUG - Command registered: <Command admin.restart []>\r\n[2022-06-11 13:00:50,093] sopel.plugins.rules DEBUG - Command registered: <Command admin.quit []>\r\n[2022-06-11 13:00:50,093] sopel.plugins.rules DEBUG - Command registered: <Command admin.raw []>\r\n[2022-06-11 13:00:50,094] sopel.plugins.rules DEBUG - Command registered: <Command admin.say [msg]>\r\n[2022-06-11 13:00:50,094] sopel.plugins.rules DEBUG - Command registered: <Command admin.me []>\r\n[2022-06-11 13:00:50,094] sopel.plugins.rules DEBUG - Rule registered: <Rule admin.invite_join (1)>\r\n[2022-06-11 13:00:50,094] sopel.plugins.rules DEBUG - Rule registered: <Rule admin.hold_ground (1)>\r\n[2022-06-11 13:00:50,095] sopel.plugins.rules DEBUG - Command registered: <Command admin.mode []>\r\n[2022-06-11 13:00:50,095] sopel.plugins.rules DEBUG - Command registered: <Command admin.set []>\r\n[2022-06-11 13:00:50,096] sopel.plugins.rules DEBUG - Command registered: <Command admin.unset []>\r\n[2022-06-11 13:00:50,096] sopel.plugins.rules DEBUG - Command registered: <Command admin.save []>\r\n[2022-06-11 13:00:50,096] sopel.bot INFO - Plugin loaded: admin\r\n```\r\n\r\n\r\n### Notes\r\n\r\n_No response_\r\n\r\n### Sopel version\r\n\r\neac71b07d60d3bea68fe3f1242e88654594a7df5\r\n\r\n### Installation method\r\n\r\n`pip install`\r\n\r\n### Python version\r\n\r\n3.10.4\n", "before_files": [{"content": "\"\"\"Deprecation module for Sopel developers and plugin authors.\n\n.. 
versionadded:: 8.0\n\n Previously in :mod:`sopel.tools`, the :func:`deprecated` function has been\n moved to this newly created module, as it can be used in every part of the\n Sopel codebase, including :mod:`sopel.tools` itself.\n\n\"\"\"\nfrom __future__ import annotations\n\nimport functools\nimport inspect\nimport logging\nimport traceback\nfrom typing import Callable, Optional\n\nfrom packaging.version import parse as parse_version\n\nfrom sopel import __version__\n\n\ndef deprecated(\n reason: Optional[str] = None,\n version: Optional[str] = None,\n removed_in: Optional[str] = None,\n warning_in: Optional[str] = None,\n stack_frame: int = -1,\n func: Optional[Callable] = None,\n):\n \"\"\"Decorator to mark deprecated functions in Sopel's API\n\n :param reason: optional text added to the deprecation warning\n :param version: optional version number when the decorated function\n is deprecated\n :param removed_in: optional version number when the deprecated function\n will be removed\n :param warning_in: optional version number when the decorated function\n should start emitting a warning when called\n :param stack_frame: optional stack frame to output; defaults to\n ``-1``; should almost always be negative\n :param func: deprecated function\n :return: a callable that depends on how the decorator is called; either\n the decorated function, or a decorator with the appropriate\n parameters\n\n Any time the decorated ``func`` is called, a deprecation warning will be\n logged, with the last frame of the traceback. The optional ``warning_in``\n argument suppresses the warning on Sopel versions older than that, allowing\n for multi-stage deprecation timelines.\n\n The decorator can be used with or without arguments::\n\n from sopel.lifecycle import deprecated\n\n @deprecated\n def func1():\n print('func 1')\n\n @deprecated()\n def func2():\n print('func 2')\n\n @deprecated(reason='obsolete', version='7.0', removed_in='8.0')\n def func3():\n print('func 3')\n\n which will output the following in a console::\n\n >>> func1()\n Deprecated: func1\n File \"<stdin>\", line 1, in <module>\n func 1\n >>> func2()\n Deprecated: func2\n File \"<stdin>\", line 1, in <module>\n func 2\n >>> func3()\n Deprecated since 7.0, will be removed in 8.0: obsolete\n File \"<stdin>\", line 1, in <module>\n func 3\n\n The ``stack_frame`` argument can be used to choose which stack frame is\n logged along with the message text. By default, this decorator logs the\n most recent stack frame (the last entry in the list, ``-1``), corresponding\n to where the decorated function itself was called. However, in certain\n cases such as deprecating conditional behavior within an object\n constructor, it can be useful to show a less recent stack frame instead.\n\n .. note::\n\n This decorator can be also used on callables that are not functions,\n such as classes and callable objects.\n\n .. versionadded:: 7.0\n Parameters ``reason``, ``version``, and ``removed_in``.\n\n .. versionadded:: 7.1\n The ``warning_in`` and ``stack_frame`` parameters.\n\n .. 
versionchanged:: 8.0\n Moved out of :mod:`sopel.tools` to resolve circular dependency issues.\n\n \"\"\"\n if not any([reason, version, removed_in, warning_in, func]):\n # common usage: @deprecated()\n return deprecated\n\n if callable(reason):\n # common usage: @deprecated\n return deprecated(func=reason)\n\n if func is None:\n # common usage: @deprecated(message, version, removed_in)\n def decorator(func):\n return deprecated(\n reason, version, removed_in, warning_in, stack_frame, func)\n return decorator\n\n # now, we have everything we need to have:\n # - message is not a callable (could be None)\n # - func is not None\n # - version and removed_in can be None but that's OK\n # so now we can return the actual decorated function\n\n message = reason or getattr(func, '__name__', '<anonymous-function>')\n\n template = 'Deprecated: {message}'\n if version and removed_in:\n template = (\n 'Deprecated since {version}, '\n 'will be removed in {removed_in}: '\n '{message}')\n elif version:\n template = 'Deprecated since {version}: {message}'\n elif removed_in:\n template = 'Deprecated, will be removed in {removed_in}: {message}'\n\n text = template.format(\n message=message, version=version, removed_in=removed_in)\n\n @functools.wraps(func)\n def deprecated_func(*args, **kwargs):\n if not (warning_in and\n parse_version(warning_in) >= parse_version(__version__)):\n original_frame = inspect.stack()[-stack_frame]\n mod = inspect.getmodule(original_frame[0])\n module_name = None\n if mod:\n module_name = mod.__name__\n if module_name:\n if module_name.startswith('sopel.'):\n # core, or core plugin\n logger = logging.getLogger(module_name)\n else:\n # probably a plugin; try to handle most cases sanely\n if module_name.startswith('sopel_modules.'):\n # namespace package plugins have a prefix, obviously\n # they will use Sopel's namespace; other won't\n module_name = module_name.replace(\n 'sopel_modules.',\n 'sopel.externals.',\n 1,\n )\n logger = logging.getLogger(module_name)\n else:\n # don't know the module/plugin name, but we want to make sure\n # the log line is still output, so just get *something*\n logger = logging.getLogger(__name__)\n\n # Format only the desired stack frame\n trace = traceback.extract_stack()\n trace_frame = traceback.format_list(trace[:-1])[stack_frame][:-1]\n\n # Warn the user\n logger.warning(text + \"\\n\" + trace_frame)\n\n return func(*args, **kwargs)\n\n return deprecated_func\n", "path": "sopel/lifecycle.py"}], "after_files": [{"content": "\"\"\"Deprecation module for Sopel developers and plugin authors.\n\n.. 
versionadded:: 8.0\n\n Previously in :mod:`sopel.tools`, the :func:`deprecated` function has been\n moved to this newly created module, as it can be used in every part of the\n Sopel codebase, including :mod:`sopel.tools` itself.\n\n\"\"\"\nfrom __future__ import annotations\n\nimport functools\nimport inspect\nimport logging\nimport traceback\nfrom typing import Callable, Optional\n\nfrom packaging.version import parse as parse_version\n\nfrom sopel import __version__\n\n\ndef deprecated(\n reason: Optional[str] = None,\n version: Optional[str] = None,\n removed_in: Optional[str] = None,\n warning_in: Optional[str] = None,\n stack_frame: int = -1,\n func: Optional[Callable] = None,\n):\n \"\"\"Decorator to mark deprecated functions in Sopel's API\n\n :param reason: optional text added to the deprecation warning\n :param version: optional version number when the decorated function\n is deprecated\n :param removed_in: optional version number when the deprecated function\n will be removed\n :param warning_in: optional version number when the decorated function\n should start emitting a warning when called\n :param stack_frame: optional stack frame to output; defaults to\n ``-1``; should almost always be negative\n :param func: deprecated function\n :return: a callable that depends on how the decorator is called; either\n the decorated function, or a decorator with the appropriate\n parameters\n\n Any time the decorated ``func`` is called, a deprecation warning will be\n logged, with the last frame of the traceback. The optional ``warning_in``\n argument suppresses the warning on Sopel versions older than that, allowing\n for multi-stage deprecation timelines.\n\n The decorator can be used with or without arguments::\n\n from sopel.lifecycle import deprecated\n\n @deprecated\n def func1():\n print('func 1')\n\n @deprecated()\n def func2():\n print('func 2')\n\n @deprecated(reason='obsolete', version='7.0', removed_in='8.0')\n def func3():\n print('func 3')\n\n which will output the following in a console::\n\n >>> func1()\n Deprecated: func1\n File \"<stdin>\", line 1, in <module>\n func 1\n >>> func2()\n Deprecated: func2\n File \"<stdin>\", line 1, in <module>\n func 2\n >>> func3()\n Deprecated since 7.0, will be removed in 8.0: obsolete\n File \"<stdin>\", line 1, in <module>\n func 3\n\n The ``stack_frame`` argument can be used to choose which stack frame is\n logged along with the message text. By default, this decorator logs the\n most recent stack frame (the last entry in the list, ``-1``), corresponding\n to where the decorated function itself was called. However, in certain\n cases such as deprecating conditional behavior within an object\n constructor, it can be useful to show a less recent stack frame instead.\n\n .. note::\n\n This decorator can be also used on callables that are not functions,\n such as classes and callable objects.\n\n .. versionadded:: 7.0\n Parameters ``reason``, ``version``, and ``removed_in``.\n\n .. versionadded:: 7.1\n The ``warning_in`` and ``stack_frame`` parameters.\n\n .. 
versionchanged:: 8.0\n Moved out of :mod:`sopel.tools` to resolve circular dependency issues.\n\n \"\"\"\n if not any([reason, version, removed_in, warning_in, func]):\n # common usage: @deprecated()\n return deprecated\n\n if callable(reason):\n # common usage: @deprecated\n return deprecated(func=reason)\n\n if func is None:\n # common usage: @deprecated(message, version, removed_in)\n def decorator(func):\n return deprecated(\n reason, version, removed_in, warning_in, stack_frame, func)\n return decorator\n\n # now, we have everything we need to have:\n # - message is not a callable (could be None)\n # - func is not None\n # - version and removed_in can be None but that's OK\n # so now we can return the actual decorated function\n\n message = reason or getattr(func, '__name__', '<anonymous-function>')\n\n template = 'Deprecated: {message}'\n if version and removed_in:\n template = (\n 'Deprecated since {version}, '\n 'will be removed in {removed_in}: '\n '{message}')\n elif version:\n template = 'Deprecated since {version}: {message}'\n elif removed_in:\n template = 'Deprecated, will be removed in {removed_in}: {message}'\n\n text = template.format(\n message=message, version=version, removed_in=removed_in)\n\n @functools.wraps(func)\n def deprecated_func(*args, **kwargs):\n warn_ver = warning_in and parse_version(warning_in).release\n this_ver = parse_version(__version__).release\n\n if not (warn_ver and warn_ver >= this_ver):\n original_frame = inspect.stack()[-stack_frame]\n mod = inspect.getmodule(original_frame[0])\n module_name = None\n if mod:\n module_name = mod.__name__\n if module_name:\n if module_name.startswith('sopel.'):\n # core, or core plugin\n logger = logging.getLogger(module_name)\n else:\n # probably a plugin; try to handle most cases sanely\n if module_name.startswith('sopel_modules.'):\n # namespace package plugins have a prefix, obviously\n # they will use Sopel's namespace; other won't\n module_name = module_name.replace(\n 'sopel_modules.',\n 'sopel.externals.',\n 1,\n )\n logger = logging.getLogger(module_name)\n else:\n # don't know the module/plugin name, but we want to make sure\n # the log line is still output, so just get *something*\n logger = logging.getLogger(__name__)\n\n # Format only the desired stack frame\n trace = traceback.extract_stack()\n trace_frame = traceback.format_list(trace[:-1])[stack_frame][:-1]\n\n # Warn the user\n logger.warning(text + \"\\n\" + trace_frame)\n\n return func(*args, **kwargs)\n\n return deprecated_func\n", "path": "sopel/lifecycle.py"}]}
| 3,140 | 166 |
gh_patches_debug_22582
|
rasdani/github-patches
|
git_diff
|
PaddlePaddle__PaddleSeg-3404
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
tool/model/analyze_model.py: problem analyzing model FLOPs
### 问题确认 Search before asking
- [X] 我已经查询[历史issue](https://github.com/PaddlePaddle/PaddleSeg/issues)(包括open与closed),没有发现相似的bug。I have searched the [open and closed issues](https://github.com/PaddlePaddle/PaddleSeg/issues) and found no similar bug report.
### Bug描述 Describe the Bug
Running tool/model/analyze_model.py produces the following error:
AttributeError: 'Config' object has no attribute 'model'
The source of tool/model/analyze_model.py never instantiates a concrete model from the model config; inside the '_dynamic_flops' function,
the code repeatedly calls functions such as `model.train()` and `model.eval()`.
### 复现环境 Environment
platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.31
Python: 3.9.0 (default, Nov 15 2020, 14:28:56) [GCC 7.3.0]
Paddle compiled with cuda: False
GCC: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
PaddleSeg: 2.8.0
PaddlePaddle: 2.4.2
OpenCV: 4.5.5
### Bug描述确认 Bug description confirmation
- [X] 我确认已经提供了Bug复现步骤、代码改动说明、以及环境信息,确认问题是可以复现的。I confirm that the bug replication steps, code change instructions, and environment information have been provided, and the problem can be reproduced.
### 是否愿意提交PR? Are you willing to submit a PR?
- [ ] 我愿意提交PR!I'd like to help by submitting a PR!
--- END ISSUE ---
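In short, `Config` only parses the YAML and never builds the network, so `_dynamic_flops` receives an object with no `.eval()` or `.train()`. A minimal sketch of the missing step, assuming the `SegBuilder` API that the patch further down relies on (the config path below is a placeholder):
```python
# Minimal sketch (config path is a placeholder): Config only parses the YAML,
# SegBuilder is what actually instantiates the network as a paddle.nn.Layer.
import paddle
from paddleseg.cvlibs import Config, SegBuilder

cfg = Config("path/to/your_config.yml")   # placeholder path
model = SegBuilder(cfg).model             # has .train()/.eval(), unlike cfg
paddle.flops(model, [1, 3, 1024, 1024], print_detail=True)
```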
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/model/analyze_model.py`
Content:
```
1 # Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """
15 Analyze the FLOPs and Params of the model.
16 Usually, just call paddle.flops to output the model information.
17 We use this file for output flops clearly.
18 """
19
20 import argparse
21 import os
22 import sys
23
24 import paddle
25 import numpy as np
26
27 from paddleseg.cvlibs import Config
28 from paddleseg.utils import get_sys_env, logger, op_flops_funs
29 from paddle.hapi.dynamic_flops import (count_parameters, register_hooks,
30 count_io_info)
31 from paddle.hapi.static_flops import Table
32
33
34 def parse_args():
35 parser = argparse.ArgumentParser(description='Model training')
36 parser.add_argument("--config", help="The path of config file.", type=str)
37 parser.add_argument(
38 "--input_shape",
39 nargs='+',
40 type=int,
41 help="Set the input shape, such as --input_shape 1 3 1024 1024",
42 default=[1, 3, 1024, 1024])
43 return parser.parse_args()
44
45
46 # For output flops clearly, modify the dynamic_flops function in paddle/hapi/dynamic_flops.py
47 def _dynamic_flops(model, inputs, custom_ops=None, print_detail=False):
48 handler_collection = []
49 types_collection = set()
50 if custom_ops is None:
51 custom_ops = {}
52
53 def add_hooks(m):
54 if len(list(m.children())) > 0:
55 return
56 m.register_buffer('total_ops', paddle.zeros([1], dtype='int64'))
57 m.register_buffer('total_params', paddle.zeros([1], dtype='int64'))
58 m_type = type(m)
59
60 flops_fn = None
61 if m_type in custom_ops:
62 flops_fn = custom_ops[m_type]
63 if m_type not in types_collection:
64 print("Customize Function has been applied to {}".format(
65 m_type))
66 elif m_type in register_hooks:
67 flops_fn = register_hooks[m_type]
68 if m_type not in types_collection:
69 print("{}'s flops has been counted".format(m_type))
70 else:
71 if m_type not in types_collection:
72 print(
73 "Cannot find suitable count function for {}. Treat it as zero FLOPs."
74 .format(m_type))
75
76 if flops_fn is not None:
77 flops_handler = m.register_forward_post_hook(flops_fn)
78 handler_collection.append(flops_handler)
79 params_handler = m.register_forward_post_hook(count_parameters)
80 io_handler = m.register_forward_post_hook(count_io_info)
81 handler_collection.append(params_handler)
82 handler_collection.append(io_handler)
83 types_collection.add(m_type)
84
85 training = model.training
86
87 model.eval()
88 model.apply(add_hooks)
89
90 with paddle.framework.no_grad():
91 model(inputs)
92
93 total_ops = 0
94 total_params = 0
95 for m in model.sublayers():
96 if len(list(m.children())) > 0:
97 continue
98 if set(['total_ops', 'total_params', 'input_shape',
99 'output_shape']).issubset(set(list(m._buffers.keys()))):
100 total_ops += m.total_ops
101 total_params += m.total_params
102
103 if training:
104 model.train()
105 for handler in handler_collection:
106 handler.remove()
107
108 table = Table(
109 ["Layer Name", "Input Shape", "Output Shape", "Params(M)", "Flops(G)"])
110
111 for n, m in model.named_sublayers():
112 if len(list(m.children())) > 0:
113 continue
114 if set(['total_ops', 'total_params', 'input_shape',
115 'output_shape']).issubset(set(list(m._buffers.keys()))):
116 table.add_row([
117 m.full_name(), list(m.input_shape.numpy()),
118 list(m.output_shape.numpy()),
119 round(float(m.total_params / 1e6), 3),
120 round(float(m.total_ops / 1e9), 3)
121 ])
122 m._buffers.pop("total_ops")
123 m._buffers.pop("total_params")
124 m._buffers.pop('input_shape')
125 m._buffers.pop('output_shape')
126 if print_detail:
127 table.print_table()
128 print('Total Flops: {}G Total Params: {}M'.format(
129 round(float(total_ops / 1e9), 3), round(float(total_params / 1e6), 3)))
130 return int(total_ops)
131
132
133 def analyze(args):
134 env_info = get_sys_env()
135 info = ['{}: {}'.format(k, v) for k, v in env_info.items()]
136 info = '\n'.join(['', format('Environment Information', '-^48s')] + info +
137 ['-' * 48])
138 logger.info(info)
139
140 paddle.set_device('cpu')
141
142 cfg = Config(args.config)
143
144 custom_ops = {paddle.nn.SyncBatchNorm: op_flops_funs.count_syncbn}
145 inputs = paddle.randn(args.input_shape)
146 _dynamic_flops(cfg.model, inputs, custom_ops=custom_ops, print_detail=True)
147
148
149 if __name__ == '__main__':
150 args = parse_args()
151 if not args.config:
152 raise RuntimeError('No configuration file specified.')
153
154 logger.info("config:" + args.config)
155 logger.info("input_shape:")
156 logger.info(args.input_shape)
157 analyze(args)
158
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/tools/model/analyze_model.py b/tools/model/analyze_model.py
--- a/tools/model/analyze_model.py
+++ b/tools/model/analyze_model.py
@@ -21,14 +21,14 @@
import os
import sys
-import paddle
import numpy as np
+import paddle
+from paddle.hapi.dynamic_flops import (count_io_info, count_parameters,
+ register_hooks)
+from paddle.hapi.static_flops import Table
-from paddleseg.cvlibs import Config
+from paddleseg.cvlibs import Config, SegBuilder
from paddleseg.utils import get_sys_env, logger, op_flops_funs
-from paddle.hapi.dynamic_flops import (count_parameters, register_hooks,
- count_io_info)
-from paddle.hapi.static_flops import Table
def parse_args():
@@ -140,10 +140,11 @@
paddle.set_device('cpu')
cfg = Config(args.config)
+ builder = SegBuilder(cfg)
custom_ops = {paddle.nn.SyncBatchNorm: op_flops_funs.count_syncbn}
inputs = paddle.randn(args.input_shape)
- _dynamic_flops(cfg.model, inputs, custom_ops=custom_ops, print_detail=True)
+ _dynamic_flops(builder.model, inputs, custom_ops=custom_ops, print_detail=True)
if __name__ == '__main__':
|
{"golden_diff": "diff --git a/tools/model/analyze_model.py b/tools/model/analyze_model.py\n--- a/tools/model/analyze_model.py\n+++ b/tools/model/analyze_model.py\n@@ -21,14 +21,14 @@\n import os\n import sys\n \n-import paddle\n import numpy as np\n+import paddle\n+from paddle.hapi.dynamic_flops import (count_io_info, count_parameters,\n+ register_hooks)\n+from paddle.hapi.static_flops import Table\n \n-from paddleseg.cvlibs import Config\n+from paddleseg.cvlibs import Config, SegBuilder\n from paddleseg.utils import get_sys_env, logger, op_flops_funs\n-from paddle.hapi.dynamic_flops import (count_parameters, register_hooks,\n- count_io_info)\n-from paddle.hapi.static_flops import Table\n \n \n def parse_args():\n@@ -140,10 +140,11 @@\n paddle.set_device('cpu')\n \n cfg = Config(args.config)\n+ builder = SegBuilder(cfg)\n \n custom_ops = {paddle.nn.SyncBatchNorm: op_flops_funs.count_syncbn}\n inputs = paddle.randn(args.input_shape)\n- _dynamic_flops(cfg.model, inputs, custom_ops=custom_ops, print_detail=True)\n+ _dynamic_flops(builder.model, inputs, custom_ops=custom_ops, print_detail=True)\n \n \n if __name__ == '__main__':\n", "issue": "tool/model/analyze_model.py \u5206\u6790flops\u6587\u4ef6\u95ee\u9898\n### \u95ee\u9898\u786e\u8ba4 Search before asking\n\n- [X] \u6211\u5df2\u7ecf\u67e5\u8be2[\u5386\u53f2issue](https://github.com/PaddlePaddle/PaddleSeg/issues)(\u5305\u62ecopen\u4e0eclosed)\uff0c\u6ca1\u6709\u53d1\u73b0\u76f8\u4f3c\u7684bug\u3002I have searched the [open and closed issues](https://github.com/PaddlePaddle/PaddleSeg/issues) and found no similar bug report.\n\n\n### Bug\u63cf\u8ff0 Describe the Bug\n\n\u5728\u8fd0\u884ctool/model/analyze_model.py\u51fa\u73b0\u9519\u8bef\uff1a\r\nAttributeError: 'Config' object has no attribute 'model' \r\n\r\ntool/model/analyze_model.py\u6e90\u7801\u4e2d\u5e76\u6ca1\u6709\u6839\u636emodel_cfg\u5b9e\u4f8b\u5316\u51fa\u4e00\u4e2a\u5177\u4f53\u7684model\uff0c\u800c\u5728'_dynamic_flops'\u51fd\u6570\u4e2d\uff0c\r\n\u4ee3\u7801\u4e0d\u65ad\u8c03\u7528`model.train()`,'model.eval()'\u7b49\u51fd\u6570\n\n### \u590d\u73b0\u73af\u5883 Environment\n\n\r\nplatform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.31\r\nPython: 3.9.0 (default, Nov 15 2020, 14:28:56) [GCC 7.3.0]\r\nPaddle compiled with cuda: False\r\nGCC: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0\r\nPaddleSeg: 2.8.0\r\nPaddlePaddle: 2.4.2\r\nOpenCV: 4.5.5\n\n### Bug\u63cf\u8ff0\u786e\u8ba4 Bug description confirmation\n\n- [X] \u6211\u786e\u8ba4\u5df2\u7ecf\u63d0\u4f9b\u4e86Bug\u590d\u73b0\u6b65\u9aa4\u3001\u4ee3\u7801\u6539\u52a8\u8bf4\u660e\u3001\u4ee5\u53ca\u73af\u5883\u4fe1\u606f\uff0c\u786e\u8ba4\u95ee\u9898\u662f\u53ef\u4ee5\u590d\u73b0\u7684\u3002I confirm that the bug replication steps, code change instructions, and environment information have been provided, and the problem can be reproduced.\n\n\n### \u662f\u5426\u613f\u610f\u63d0\u4ea4PR\uff1f Are you willing to submit a PR?\n\n- [ ] \u6211\u613f\u610f\u63d0\u4ea4PR\uff01I'd like to help by submitting a PR!\n", "before_files": [{"content": "# Copyright (c) 2021 PaddlePaddle Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nAnalyze the FLOPs and Params of the model.\nUsually, just call paddle.flops to output the model information.\nWe use this file for output flops clearly.\n\"\"\"\n\nimport argparse\nimport os\nimport sys\n\nimport paddle\nimport numpy as np\n\nfrom paddleseg.cvlibs import Config\nfrom paddleseg.utils import get_sys_env, logger, op_flops_funs\nfrom paddle.hapi.dynamic_flops import (count_parameters, register_hooks,\n count_io_info)\nfrom paddle.hapi.static_flops import Table\n\n\ndef parse_args():\n parser = argparse.ArgumentParser(description='Model training')\n parser.add_argument(\"--config\", help=\"The path of config file.\", type=str)\n parser.add_argument(\n \"--input_shape\",\n nargs='+',\n type=int,\n help=\"Set the input shape, such as --input_shape 1 3 1024 1024\",\n default=[1, 3, 1024, 1024])\n return parser.parse_args()\n\n\n# For output flops clearly, modify the dynamic_flops function in paddle/hapi/dynamic_flops.py\ndef _dynamic_flops(model, inputs, custom_ops=None, print_detail=False):\n handler_collection = []\n types_collection = set()\n if custom_ops is None:\n custom_ops = {}\n\n def add_hooks(m):\n if len(list(m.children())) > 0:\n return\n m.register_buffer('total_ops', paddle.zeros([1], dtype='int64'))\n m.register_buffer('total_params', paddle.zeros([1], dtype='int64'))\n m_type = type(m)\n\n flops_fn = None\n if m_type in custom_ops:\n flops_fn = custom_ops[m_type]\n if m_type not in types_collection:\n print(\"Customize Function has been applied to {}\".format(\n m_type))\n elif m_type in register_hooks:\n flops_fn = register_hooks[m_type]\n if m_type not in types_collection:\n print(\"{}'s flops has been counted\".format(m_type))\n else:\n if m_type not in types_collection:\n print(\n \"Cannot find suitable count function for {}. 
Treat it as zero FLOPs.\"\n .format(m_type))\n\n if flops_fn is not None:\n flops_handler = m.register_forward_post_hook(flops_fn)\n handler_collection.append(flops_handler)\n params_handler = m.register_forward_post_hook(count_parameters)\n io_handler = m.register_forward_post_hook(count_io_info)\n handler_collection.append(params_handler)\n handler_collection.append(io_handler)\n types_collection.add(m_type)\n\n training = model.training\n\n model.eval()\n model.apply(add_hooks)\n\n with paddle.framework.no_grad():\n model(inputs)\n\n total_ops = 0\n total_params = 0\n for m in model.sublayers():\n if len(list(m.children())) > 0:\n continue\n if set(['total_ops', 'total_params', 'input_shape',\n 'output_shape']).issubset(set(list(m._buffers.keys()))):\n total_ops += m.total_ops\n total_params += m.total_params\n\n if training:\n model.train()\n for handler in handler_collection:\n handler.remove()\n\n table = Table(\n [\"Layer Name\", \"Input Shape\", \"Output Shape\", \"Params(M)\", \"Flops(G)\"])\n\n for n, m in model.named_sublayers():\n if len(list(m.children())) > 0:\n continue\n if set(['total_ops', 'total_params', 'input_shape',\n 'output_shape']).issubset(set(list(m._buffers.keys()))):\n table.add_row([\n m.full_name(), list(m.input_shape.numpy()),\n list(m.output_shape.numpy()),\n round(float(m.total_params / 1e6), 3),\n round(float(m.total_ops / 1e9), 3)\n ])\n m._buffers.pop(\"total_ops\")\n m._buffers.pop(\"total_params\")\n m._buffers.pop('input_shape')\n m._buffers.pop('output_shape')\n if print_detail:\n table.print_table()\n print('Total Flops: {}G Total Params: {}M'.format(\n round(float(total_ops / 1e9), 3), round(float(total_params / 1e6), 3)))\n return int(total_ops)\n\n\ndef analyze(args):\n env_info = get_sys_env()\n info = ['{}: {}'.format(k, v) for k, v in env_info.items()]\n info = '\\n'.join(['', format('Environment Information', '-^48s')] + info +\n ['-' * 48])\n logger.info(info)\n\n paddle.set_device('cpu')\n\n cfg = Config(args.config)\n\n custom_ops = {paddle.nn.SyncBatchNorm: op_flops_funs.count_syncbn}\n inputs = paddle.randn(args.input_shape)\n _dynamic_flops(cfg.model, inputs, custom_ops=custom_ops, print_detail=True)\n\n\nif __name__ == '__main__':\n args = parse_args()\n if not args.config:\n raise RuntimeError('No configuration file specified.')\n\n logger.info(\"config:\" + args.config)\n logger.info(\"input_shape:\")\n logger.info(args.input_shape)\n analyze(args)\n", "path": "tools/model/analyze_model.py"}], "after_files": [{"content": "# Copyright (c) 2021 PaddlePaddle Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nAnalyze the FLOPs and Params of the model.\nUsually, just call paddle.flops to output the model information.\nWe use this file for output flops clearly.\n\"\"\"\n\nimport argparse\nimport os\nimport sys\n\nimport numpy as np\nimport paddle\nfrom paddle.hapi.dynamic_flops import (count_io_info, count_parameters,\n register_hooks)\nfrom paddle.hapi.static_flops import Table\n\nfrom paddleseg.cvlibs import Config, SegBuilder\nfrom paddleseg.utils import get_sys_env, logger, op_flops_funs\n\n\ndef parse_args():\n parser = argparse.ArgumentParser(description='Model training')\n parser.add_argument(\"--config\", help=\"The path of config file.\", type=str)\n parser.add_argument(\n \"--input_shape\",\n nargs='+',\n type=int,\n help=\"Set the input shape, such as --input_shape 1 3 1024 1024\",\n default=[1, 3, 1024, 1024])\n return parser.parse_args()\n\n\n# For output flops clearly, modify the dynamic_flops function in paddle/hapi/dynamic_flops.py\ndef _dynamic_flops(model, inputs, custom_ops=None, print_detail=False):\n handler_collection = []\n types_collection = set()\n if custom_ops is None:\n custom_ops = {}\n\n def add_hooks(m):\n if len(list(m.children())) > 0:\n return\n m.register_buffer('total_ops', paddle.zeros([1], dtype='int64'))\n m.register_buffer('total_params', paddle.zeros([1], dtype='int64'))\n m_type = type(m)\n\n flops_fn = None\n if m_type in custom_ops:\n flops_fn = custom_ops[m_type]\n if m_type not in types_collection:\n print(\"Customize Function has been applied to {}\".format(\n m_type))\n elif m_type in register_hooks:\n flops_fn = register_hooks[m_type]\n if m_type not in types_collection:\n print(\"{}'s flops has been counted\".format(m_type))\n else:\n if m_type not in types_collection:\n print(\n \"Cannot find suitable count function for {}. 
Treat it as zero FLOPs.\"\n .format(m_type))\n\n if flops_fn is not None:\n flops_handler = m.register_forward_post_hook(flops_fn)\n handler_collection.append(flops_handler)\n params_handler = m.register_forward_post_hook(count_parameters)\n io_handler = m.register_forward_post_hook(count_io_info)\n handler_collection.append(params_handler)\n handler_collection.append(io_handler)\n types_collection.add(m_type)\n\n training = model.training\n\n model.eval()\n model.apply(add_hooks)\n\n with paddle.framework.no_grad():\n model(inputs)\n\n total_ops = 0\n total_params = 0\n for m in model.sublayers():\n if len(list(m.children())) > 0:\n continue\n if set(['total_ops', 'total_params', 'input_shape',\n 'output_shape']).issubset(set(list(m._buffers.keys()))):\n total_ops += m.total_ops\n total_params += m.total_params\n\n if training:\n model.train()\n for handler in handler_collection:\n handler.remove()\n\n table = Table(\n [\"Layer Name\", \"Input Shape\", \"Output Shape\", \"Params(M)\", \"Flops(G)\"])\n\n for n, m in model.named_sublayers():\n if len(list(m.children())) > 0:\n continue\n if set(['total_ops', 'total_params', 'input_shape',\n 'output_shape']).issubset(set(list(m._buffers.keys()))):\n table.add_row([\n m.full_name(), list(m.input_shape.numpy()),\n list(m.output_shape.numpy()),\n round(float(m.total_params / 1e6), 3),\n round(float(m.total_ops / 1e9), 3)\n ])\n m._buffers.pop(\"total_ops\")\n m._buffers.pop(\"total_params\")\n m._buffers.pop('input_shape')\n m._buffers.pop('output_shape')\n if print_detail:\n table.print_table()\n print('Total Flops: {}G Total Params: {}M'.format(\n round(float(total_ops / 1e9), 3), round(float(total_params / 1e6), 3)))\n return int(total_ops)\n\n\ndef analyze(args):\n env_info = get_sys_env()\n info = ['{}: {}'.format(k, v) for k, v in env_info.items()]\n info = '\\n'.join(['', format('Environment Information', '-^48s')] + info +\n ['-' * 48])\n logger.info(info)\n\n paddle.set_device('cpu')\n\n cfg = Config(args.config)\n builder = SegBuilder(cfg)\n\n custom_ops = {paddle.nn.SyncBatchNorm: op_flops_funs.count_syncbn}\n inputs = paddle.randn(args.input_shape)\n _dynamic_flops(builder.model, inputs, custom_ops=custom_ops, print_detail=True)\n\n\nif __name__ == '__main__':\n args = parse_args()\n if not args.config:\n raise RuntimeError('No configuration file specified.')\n\n logger.info(\"config:\" + args.config)\n logger.info(\"input_shape:\")\n logger.info(args.input_shape)\n analyze(args)\n", "path": "tools/model/analyze_model.py"}]}
| 2,310 | 295 |
gh_patches_debug_29094
|
rasdani/github-patches
|
git_diff
|
wright-group__WrightTools-897
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
from_PyCMDS should check version and system to determine if d1, d2, and d3 variables need to be scaled
Helps to bypass legacy problems involving wright-group/PyCMDS#210 .
--- END ISSUE ---
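A minimal sketch of the gating logic such a check could use; the header keys, the 0.10 version cutoff, and the correction factor are taken from the patch shown later in this record. Here `headers` is the parsed tidy_headers dict and `data` the partially built WrightTools Data object:
```python
# Sketch only: keys, cutoff and constant come from the patch below.
NEWPORT_MFA_CORRECTION = 6000.671281903963041 / 6671.281903963041

def apply_legacy_delay_correction(headers, data):
    major, minor = (int(x) for x in headers["PyCMDS version"].split(".")[:2])
    if headers["system name"] != "fs" or (major, minor) >= (0, 10):
        return  # newer acquisitions already record delays in the right units
    for delay in ("d1", "d2", "d3", "d1_points", "d2_points", "d3_points"):
        if delay in data.variable_names:
            data[delay][:] *= NEWPORT_MFA_CORRECTION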
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `WrightTools/data/_pycmds.py`
Content:
```
1 """PyCMDS."""
2
3
4 # --- import --------------------------------------------------------------------------------------
5
6
7 import itertools
8 import os
9 import pathlib
10
11 import numpy as np
12
13 import tidy_headers
14
15 from ._data import Data
16 from .. import kit as wt_kit
17 from .. import units as wt_units
18
19
20 # --- define --------------------------------------------------------------------------------------
21
22
23 __all__ = ["from_PyCMDS"]
24
25
26 # --- from function -------------------------------------------------------------------------------
27
28
29 def from_PyCMDS(filepath, name=None, parent=None, verbose=True) -> Data:
30 """Create a data object from a single PyCMDS output file.
31
32 Parameters
33 ----------
34 filepath : path-like
35 Path to the .data file
36 Can be either a local or remote file (http/ftp).
37 Can be compressed with gz/bz2, decompression based on file name.
38 name : str or None (optional)
39 The name to be applied to the new data object. If None, name is read
40 from file.
41 parent : WrightTools.Collection (optional)
42 Collection to place new data object within. Default is None.
43 verbose : bool (optional)
44 Toggle talkback. Default is True.
45
46 Returns
47 -------
48 data
49 A Data instance.
50 """
51 filestr = os.fspath(filepath)
52 filepath = pathlib.Path(filepath)
53
54 # header
55 ds = np.DataSource(None)
56 file_ = ds.open(filestr, "rt")
57 headers = tidy_headers.read(file_)
58 file_.seek(0)
59 # name
60 if name is None: # name not given in method arguments
61 data_name = headers["data name"]
62 else:
63 data_name = name
64 if data_name == "": # name not given in PyCMDS
65 data_name = headers["data origin"]
66 # create data object
67 kwargs = {
68 "name": data_name,
69 "kind": "PyCMDS",
70 "source": filestr,
71 "created": headers["file created"],
72 }
73 if parent is not None:
74 data = parent.create_data(**kwargs)
75 else:
76 data = Data(**kwargs)
77 # array
78 arr = np.genfromtxt(file_).T
79 file_.close()
80 # get axes and scanned variables
81 axes = []
82 for name, identity, units in zip(
83 headers["axis names"], headers["axis identities"], headers["axis units"]
84 ):
85 # points and centers
86 points = np.array(headers[name + " points"])
87 if name + " centers" in headers.keys():
88 centers = headers[name + " centers"]
89 else:
90 centers = None
91 # create
92 axis = {
93 "points": points,
94 "units": units,
95 "name": name,
96 "identity": identity,
97 "centers": centers,
98 }
99 axes.append(axis)
100 shape = tuple([a["points"].size for a in axes])
101 for i, ax in enumerate(axes):
102 sh = [1] * len(shape)
103 sh[i] = len(ax["points"])
104 data.create_variable(
105 name=ax["name"] + "_points", values=np.array(ax["points"]).reshape(sh)
106 )
107 if ax["centers"] is not None:
108 sh = list(shape)
109 sh[i] = 1
110 data.create_variable(
111 name=ax["name"] + "_centers", values=np.array(ax["centers"]).reshape(sh)
112 )
113 # get assorted remaining things
114 # variables and channels
115 try:
116 signed = iter(headers["channel signed"])
117 except KeyError:
118 signed = itertools.repeat(False)
119 for index, kind, name in zip(range(len(arr)), headers["kind"], headers["name"]):
120 values = np.full(np.prod(shape), np.nan)
121 values[: len(arr[index])] = arr[index]
122 values.shape = shape
123 if name == "time":
124 data.create_variable(name="labtime", values=values)
125 if kind == "hardware":
126 # sadly, recorded tolerances are not reliable
127 # so a bit of hard-coded hacking is needed
128 # if this ends up being too fragile, we might have to use the points arrays
129 # ---Blaise 2018-01-09
130 units = headers["units"][index]
131 label = headers["label"][index]
132 if (
133 "w" in name
134 and name.startswith(tuple(data.variable_names))
135 and name not in headers["axis names"]
136 ):
137 inherited_shape = data[name.split("_")[0]].shape
138 for i, s in enumerate(inherited_shape):
139 if s == 1:
140 values = np.mean(values, axis=i)
141 values = np.expand_dims(values, i)
142 else:
143 tolerance = headers["tolerance"][index]
144 units = headers["units"][index]
145 for i in range(len(shape)):
146 if tolerance is None:
147 break
148 if "d" in name:
149 # This is a hack because delay is particularly
150 # unreliable in tolerance. And 3 fs vs 3 ps is a huge
151 # difference... KFS 2019-2-27
152 if units == "fs":
153 tolerance = 3.
154 else:
155 tolerance = 0.1
156 if "zero" in name:
157 tolerance = 1e-10
158 try:
159 assert i == headers["axis names"].index(name)
160 tolerance = 0
161 except (ValueError, AssertionError):
162 if (
163 name in headers["axis names"]
164 and "%s_centers" % name not in data.variable_names
165 ):
166 tolerance = np.inf
167 mean = np.nanmean(values, axis=i)
168 mean = np.expand_dims(mean, i)
169 values, meanexp = wt_kit.share_nans(values, mean)
170 if np.allclose(meanexp, values, atol=tolerance, equal_nan=True):
171 values = mean
172 if name in headers["axis names"]:
173 points = np.array(headers[name + " points"])
174 pointsshape = [1] * values.ndim
175 for i, ax in enumerate(axes):
176 if ax["name"] == name:
177 pointsshape[i] = len(points)
178 break
179 points.shape = pointsshape
180 points = wt_units.converter(points, headers["axis units"][i], units)
181 for i in range(points.ndim):
182 if points.shape[i] == 1:
183 points = np.repeat(points, values.shape[i], axis=i)
184 if points.size <= values.size:
185 values[np.isnan(values)] = points[np.isnan(values)]
186 data.create_variable(name, values=values, units=units, label=label)
187 if kind == "channel":
188 data.create_channel(name=name, values=values, shape=values.shape, signed=next(signed))
189 # axes
190 for a in axes:
191 expression = a["identity"]
192 if expression.startswith("D"):
193 expression = expression[1:]
194 expression.replace("=D", "=")
195 a["expression"] = expression
196 data.transform(*[a["expression"] for a in axes])
197 for a, u in zip(data.axes, headers["axis units"]):
198 if u is not None:
199 a.convert(u)
200 # return
201 if verbose:
202 print("data created at {0}".format(data.fullpath))
203 print(" axes: {0}".format(data.axis_names))
204 print(" shape: {0}".format(data.shape))
205 return data
206
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/WrightTools/data/_pycmds.py b/WrightTools/data/_pycmds.py
--- a/WrightTools/data/_pycmds.py
+++ b/WrightTools/data/_pycmds.py
@@ -146,7 +146,7 @@
if tolerance is None:
break
if "d" in name:
- # This is a hack because delay is particularly
+ # This is a hack because delay is particularly
# unreliable in tolerance. And 3 fs vs 3 ps is a huge
# difference... KFS 2019-2-27
if units == "fs":
@@ -197,6 +197,23 @@
for a, u in zip(data.axes, headers["axis units"]):
if u is not None:
a.convert(u)
+ if (
+ headers["system name"] == "fs"
+ and int(headers["PyCMDS version"].split(".")[0]) == 0
+ and int(headers["PyCMDS version"].split(".")[1]) < 10
+ ):
+ # in versions of PyCMDS up to (and including) 0.9.0
+ # there was an incorrect hard-coded conversion factor between mm and fs
+ # this ONLY applied to Newport MFA stages
+ # we apply this correction knowing that Newport MFAs were only used on the "fs" system
+ # and knowing that the Newport MFAs were always assigned as "d1", "d2" and "d3"
+ # ---Blaise 2019-04-09
+ for delay in ("d1", "d2", "d3", "d1_points", "d2_points", "d3_points"):
+ if delay not in data.variable_names:
+ continue
+ data[delay][:] *= 6000.671281903963041 / 6671.281903963041
+ if verbose:
+ print(f"Correction factor applied to {delay}")
# return
if verbose:
print("data created at {0}".format(data.fullpath))
|
{"golden_diff": "diff --git a/WrightTools/data/_pycmds.py b/WrightTools/data/_pycmds.py\n--- a/WrightTools/data/_pycmds.py\n+++ b/WrightTools/data/_pycmds.py\n@@ -146,7 +146,7 @@\n if tolerance is None:\n break\n if \"d\" in name:\n- # This is a hack because delay is particularly \n+ # This is a hack because delay is particularly\n # unreliable in tolerance. And 3 fs vs 3 ps is a huge\n # difference... KFS 2019-2-27\n if units == \"fs\":\n@@ -197,6 +197,23 @@\n for a, u in zip(data.axes, headers[\"axis units\"]):\n if u is not None:\n a.convert(u)\n+ if (\n+ headers[\"system name\"] == \"fs\"\n+ and int(headers[\"PyCMDS version\"].split(\".\")[0]) == 0\n+ and int(headers[\"PyCMDS version\"].split(\".\")[1]) < 10\n+ ):\n+ # in versions of PyCMDS up to (and including) 0.9.0\n+ # there was an incorrect hard-coded conversion factor between mm and fs\n+ # this ONLY applied to Newport MFA stages\n+ # we apply this correction knowing that Newport MFAs were only used on the \"fs\" system\n+ # and knowing that the Newport MFAs were always assigned as \"d1\", \"d2\" and \"d3\"\n+ # ---Blaise 2019-04-09\n+ for delay in (\"d1\", \"d2\", \"d3\", \"d1_points\", \"d2_points\", \"d3_points\"):\n+ if delay not in data.variable_names:\n+ continue\n+ data[delay][:] *= 6000.671281903963041 / 6671.281903963041\n+ if verbose:\n+ print(f\"Correction factor applied to {delay}\")\n # return\n if verbose:\n print(\"data created at {0}\".format(data.fullpath))\n", "issue": "from_PyCMDS should check version and system to determine if d1, d2, and d3 variables need to be scaled \nHelps to bypass legacy problems involving wright-group/PyCMDS#210 .\n", "before_files": [{"content": "\"\"\"PyCMDS.\"\"\"\n\n\n# --- import --------------------------------------------------------------------------------------\n\n\nimport itertools\nimport os\nimport pathlib\n\nimport numpy as np\n\nimport tidy_headers\n\nfrom ._data import Data\nfrom .. import kit as wt_kit\nfrom .. import units as wt_units\n\n\n# --- define --------------------------------------------------------------------------------------\n\n\n__all__ = [\"from_PyCMDS\"]\n\n\n# --- from function -------------------------------------------------------------------------------\n\n\ndef from_PyCMDS(filepath, name=None, parent=None, verbose=True) -> Data:\n \"\"\"Create a data object from a single PyCMDS output file.\n\n Parameters\n ----------\n filepath : path-like\n Path to the .data file\n Can be either a local or remote file (http/ftp).\n Can be compressed with gz/bz2, decompression based on file name.\n name : str or None (optional)\n The name to be applied to the new data object. If None, name is read\n from file.\n parent : WrightTools.Collection (optional)\n Collection to place new data object within. Default is None.\n verbose : bool (optional)\n Toggle talkback. 
Default is True.\n\n Returns\n -------\n data\n A Data instance.\n \"\"\"\n filestr = os.fspath(filepath)\n filepath = pathlib.Path(filepath)\n\n # header\n ds = np.DataSource(None)\n file_ = ds.open(filestr, \"rt\")\n headers = tidy_headers.read(file_)\n file_.seek(0)\n # name\n if name is None: # name not given in method arguments\n data_name = headers[\"data name\"]\n else:\n data_name = name\n if data_name == \"\": # name not given in PyCMDS\n data_name = headers[\"data origin\"]\n # create data object\n kwargs = {\n \"name\": data_name,\n \"kind\": \"PyCMDS\",\n \"source\": filestr,\n \"created\": headers[\"file created\"],\n }\n if parent is not None:\n data = parent.create_data(**kwargs)\n else:\n data = Data(**kwargs)\n # array\n arr = np.genfromtxt(file_).T\n file_.close()\n # get axes and scanned variables\n axes = []\n for name, identity, units in zip(\n headers[\"axis names\"], headers[\"axis identities\"], headers[\"axis units\"]\n ):\n # points and centers\n points = np.array(headers[name + \" points\"])\n if name + \" centers\" in headers.keys():\n centers = headers[name + \" centers\"]\n else:\n centers = None\n # create\n axis = {\n \"points\": points,\n \"units\": units,\n \"name\": name,\n \"identity\": identity,\n \"centers\": centers,\n }\n axes.append(axis)\n shape = tuple([a[\"points\"].size for a in axes])\n for i, ax in enumerate(axes):\n sh = [1] * len(shape)\n sh[i] = len(ax[\"points\"])\n data.create_variable(\n name=ax[\"name\"] + \"_points\", values=np.array(ax[\"points\"]).reshape(sh)\n )\n if ax[\"centers\"] is not None:\n sh = list(shape)\n sh[i] = 1\n data.create_variable(\n name=ax[\"name\"] + \"_centers\", values=np.array(ax[\"centers\"]).reshape(sh)\n )\n # get assorted remaining things\n # variables and channels\n try:\n signed = iter(headers[\"channel signed\"])\n except KeyError:\n signed = itertools.repeat(False)\n for index, kind, name in zip(range(len(arr)), headers[\"kind\"], headers[\"name\"]):\n values = np.full(np.prod(shape), np.nan)\n values[: len(arr[index])] = arr[index]\n values.shape = shape\n if name == \"time\":\n data.create_variable(name=\"labtime\", values=values)\n if kind == \"hardware\":\n # sadly, recorded tolerances are not reliable\n # so a bit of hard-coded hacking is needed\n # if this ends up being too fragile, we might have to use the points arrays\n # ---Blaise 2018-01-09\n units = headers[\"units\"][index]\n label = headers[\"label\"][index]\n if (\n \"w\" in name\n and name.startswith(tuple(data.variable_names))\n and name not in headers[\"axis names\"]\n ):\n inherited_shape = data[name.split(\"_\")[0]].shape\n for i, s in enumerate(inherited_shape):\n if s == 1:\n values = np.mean(values, axis=i)\n values = np.expand_dims(values, i)\n else:\n tolerance = headers[\"tolerance\"][index]\n units = headers[\"units\"][index]\n for i in range(len(shape)):\n if tolerance is None:\n break\n if \"d\" in name:\n # This is a hack because delay is particularly \n # unreliable in tolerance. And 3 fs vs 3 ps is a huge\n # difference... 
KFS 2019-2-27\n if units == \"fs\":\n tolerance = 3.\n else:\n tolerance = 0.1\n if \"zero\" in name:\n tolerance = 1e-10\n try:\n assert i == headers[\"axis names\"].index(name)\n tolerance = 0\n except (ValueError, AssertionError):\n if (\n name in headers[\"axis names\"]\n and \"%s_centers\" % name not in data.variable_names\n ):\n tolerance = np.inf\n mean = np.nanmean(values, axis=i)\n mean = np.expand_dims(mean, i)\n values, meanexp = wt_kit.share_nans(values, mean)\n if np.allclose(meanexp, values, atol=tolerance, equal_nan=True):\n values = mean\n if name in headers[\"axis names\"]:\n points = np.array(headers[name + \" points\"])\n pointsshape = [1] * values.ndim\n for i, ax in enumerate(axes):\n if ax[\"name\"] == name:\n pointsshape[i] = len(points)\n break\n points.shape = pointsshape\n points = wt_units.converter(points, headers[\"axis units\"][i], units)\n for i in range(points.ndim):\n if points.shape[i] == 1:\n points = np.repeat(points, values.shape[i], axis=i)\n if points.size <= values.size:\n values[np.isnan(values)] = points[np.isnan(values)]\n data.create_variable(name, values=values, units=units, label=label)\n if kind == \"channel\":\n data.create_channel(name=name, values=values, shape=values.shape, signed=next(signed))\n # axes\n for a in axes:\n expression = a[\"identity\"]\n if expression.startswith(\"D\"):\n expression = expression[1:]\n expression.replace(\"=D\", \"=\")\n a[\"expression\"] = expression\n data.transform(*[a[\"expression\"] for a in axes])\n for a, u in zip(data.axes, headers[\"axis units\"]):\n if u is not None:\n a.convert(u)\n # return\n if verbose:\n print(\"data created at {0}\".format(data.fullpath))\n print(\" axes: {0}\".format(data.axis_names))\n print(\" shape: {0}\".format(data.shape))\n return data\n", "path": "WrightTools/data/_pycmds.py"}], "after_files": [{"content": "\"\"\"PyCMDS.\"\"\"\n\n\n# --- import --------------------------------------------------------------------------------------\n\n\nimport itertools\nimport os\nimport pathlib\n\nimport numpy as np\n\nimport tidy_headers\n\nfrom ._data import Data\nfrom .. import kit as wt_kit\nfrom .. import units as wt_units\n\n\n# --- define --------------------------------------------------------------------------------------\n\n\n__all__ = [\"from_PyCMDS\"]\n\n\n# --- from function -------------------------------------------------------------------------------\n\n\ndef from_PyCMDS(filepath, name=None, parent=None, verbose=True) -> Data:\n \"\"\"Create a data object from a single PyCMDS output file.\n\n Parameters\n ----------\n filepath : path-like\n Path to the .data file\n Can be either a local or remote file (http/ftp).\n Can be compressed with gz/bz2, decompression based on file name.\n name : str or None (optional)\n The name to be applied to the new data object. If None, name is read\n from file.\n parent : WrightTools.Collection (optional)\n Collection to place new data object within. Default is None.\n verbose : bool (optional)\n Toggle talkback. 
Default is True.\n\n Returns\n -------\n data\n A Data instance.\n \"\"\"\n filestr = os.fspath(filepath)\n filepath = pathlib.Path(filepath)\n\n # header\n ds = np.DataSource(None)\n file_ = ds.open(filestr, \"rt\")\n headers = tidy_headers.read(file_)\n file_.seek(0)\n # name\n if name is None: # name not given in method arguments\n data_name = headers[\"data name\"]\n else:\n data_name = name\n if data_name == \"\": # name not given in PyCMDS\n data_name = headers[\"data origin\"]\n # create data object\n kwargs = {\n \"name\": data_name,\n \"kind\": \"PyCMDS\",\n \"source\": filestr,\n \"created\": headers[\"file created\"],\n }\n if parent is not None:\n data = parent.create_data(**kwargs)\n else:\n data = Data(**kwargs)\n # array\n arr = np.genfromtxt(file_).T\n file_.close()\n # get axes and scanned variables\n axes = []\n for name, identity, units in zip(\n headers[\"axis names\"], headers[\"axis identities\"], headers[\"axis units\"]\n ):\n # points and centers\n points = np.array(headers[name + \" points\"])\n if name + \" centers\" in headers.keys():\n centers = headers[name + \" centers\"]\n else:\n centers = None\n # create\n axis = {\n \"points\": points,\n \"units\": units,\n \"name\": name,\n \"identity\": identity,\n \"centers\": centers,\n }\n axes.append(axis)\n shape = tuple([a[\"points\"].size for a in axes])\n for i, ax in enumerate(axes):\n sh = [1] * len(shape)\n sh[i] = len(ax[\"points\"])\n data.create_variable(\n name=ax[\"name\"] + \"_points\", values=np.array(ax[\"points\"]).reshape(sh)\n )\n if ax[\"centers\"] is not None:\n sh = list(shape)\n sh[i] = 1\n data.create_variable(\n name=ax[\"name\"] + \"_centers\", values=np.array(ax[\"centers\"]).reshape(sh)\n )\n # get assorted remaining things\n # variables and channels\n try:\n signed = iter(headers[\"channel signed\"])\n except KeyError:\n signed = itertools.repeat(False)\n for index, kind, name in zip(range(len(arr)), headers[\"kind\"], headers[\"name\"]):\n values = np.full(np.prod(shape), np.nan)\n values[: len(arr[index])] = arr[index]\n values.shape = shape\n if name == \"time\":\n data.create_variable(name=\"labtime\", values=values)\n if kind == \"hardware\":\n # sadly, recorded tolerances are not reliable\n # so a bit of hard-coded hacking is needed\n # if this ends up being too fragile, we might have to use the points arrays\n # ---Blaise 2018-01-09\n units = headers[\"units\"][index]\n label = headers[\"label\"][index]\n if (\n \"w\" in name\n and name.startswith(tuple(data.variable_names))\n and name not in headers[\"axis names\"]\n ):\n inherited_shape = data[name.split(\"_\")[0]].shape\n for i, s in enumerate(inherited_shape):\n if s == 1:\n values = np.mean(values, axis=i)\n values = np.expand_dims(values, i)\n else:\n tolerance = headers[\"tolerance\"][index]\n units = headers[\"units\"][index]\n for i in range(len(shape)):\n if tolerance is None:\n break\n if \"d\" in name:\n # This is a hack because delay is particularly\n # unreliable in tolerance. And 3 fs vs 3 ps is a huge\n # difference... 
KFS 2019-2-27\n if units == \"fs\":\n tolerance = 3.\n else:\n tolerance = 0.1\n if \"zero\" in name:\n tolerance = 1e-10\n try:\n assert i == headers[\"axis names\"].index(name)\n tolerance = 0\n except (ValueError, AssertionError):\n if (\n name in headers[\"axis names\"]\n and \"%s_centers\" % name not in data.variable_names\n ):\n tolerance = np.inf\n mean = np.nanmean(values, axis=i)\n mean = np.expand_dims(mean, i)\n values, meanexp = wt_kit.share_nans(values, mean)\n if np.allclose(meanexp, values, atol=tolerance, equal_nan=True):\n values = mean\n if name in headers[\"axis names\"]:\n points = np.array(headers[name + \" points\"])\n pointsshape = [1] * values.ndim\n for i, ax in enumerate(axes):\n if ax[\"name\"] == name:\n pointsshape[i] = len(points)\n break\n points.shape = pointsshape\n points = wt_units.converter(points, headers[\"axis units\"][i], units)\n for i in range(points.ndim):\n if points.shape[i] == 1:\n points = np.repeat(points, values.shape[i], axis=i)\n if points.size <= values.size:\n values[np.isnan(values)] = points[np.isnan(values)]\n data.create_variable(name, values=values, units=units, label=label)\n if kind == \"channel\":\n data.create_channel(name=name, values=values, shape=values.shape, signed=next(signed))\n # axes\n for a in axes:\n expression = a[\"identity\"]\n if expression.startswith(\"D\"):\n expression = expression[1:]\n expression.replace(\"=D\", \"=\")\n a[\"expression\"] = expression\n data.transform(*[a[\"expression\"] for a in axes])\n for a, u in zip(data.axes, headers[\"axis units\"]):\n if u is not None:\n a.convert(u)\n if (\n headers[\"system name\"] == \"fs\"\n and int(headers[\"PyCMDS version\"].split(\".\")[0]) == 0\n and int(headers[\"PyCMDS version\"].split(\".\")[1]) < 10\n ):\n # in versions of PyCMDS up to (and including) 0.9.0\n # there was an incorrect hard-coded conversion factor between mm and fs\n # this ONLY applied to Newport MFA stages\n # we apply this correction knowing that Newport MFAs were only used on the \"fs\" system\n # and knowing that the Newport MFAs were always assigned as \"d1\", \"d2\" and \"d3\"\n # ---Blaise 2019-04-09\n for delay in (\"d1\", \"d2\", \"d3\", \"d1_points\", \"d2_points\", \"d3_points\"):\n if delay not in data.variable_names:\n continue\n data[delay][:] *= 6000.671281903963041 / 6671.281903963041\n if verbose:\n print(f\"Correction factor applied to {delay}\")\n # return\n if verbose:\n print(\"data created at {0}\".format(data.fullpath))\n print(\" axes: {0}\".format(data.axis_names))\n print(\" shape: {0}\".format(data.shape))\n return data\n", "path": "WrightTools/data/_pycmds.py"}]}
| 2,405 | 494 |
gh_patches_debug_28511
|
rasdani/github-patches
|
git_diff
|
pytorch__ignite-2272
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
A utility for hashing a checkpoint file
### 🚀 Feature
By default Ignite saves a checkpoint with the following structure `{filename_prefix}_{name}_{suffix}.{ext}`, which is great.
At first, I wanted to add an argument for hashing a checkpoint to the `Checkpoint` class. But that could incur a read + write overhead if we turn on hashing, which may not be used very often during development.
So instead, I would like to propose adding a utility in `ignite.utils` for hashing a saved checkpoint file so that we could use `torch.hub.load_state_dict_from_url` with `check_hash=True`.
Implementation (stripped down and untested version):
```py
def hash_checkpoint(checkpoint):
with open(checkpoint, 'rb') as f:
hash = hashlib.sha256(f.read()).hexdigest()
checkpoint_prefix = checkpoint.strip('.pt')
    filename = '-'.join([checkpoint_prefix, hash[:8]]) + '.pt'
shutil.move(checkpoint, filename)
return filename
```
Implementation in Code-Generator:
https://github.com/pytorch-ignite/code-generator/blob/a2ed5f771e7cf8a57c6f68b5a3797792f4676433/templates/single/utils.py#L134
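For illustration, a minimal consumer-side sketch of why the hash suffix matters (the URL and filename below are hypothetical, not from the issue): `torch.hub.load_state_dict_from_url` only verifies the download when `check_hash=True` and the filename embeds the first 8+ hex digits of the file's SHA256.
```py
import torch

# hypothetical URL; "model-9c8e1a2b.pt" is assumed to carry the first 8 hex digits
# of its own SHA256, which is exactly what the proposed utility would produce
state_dict = torch.hub.load_state_dict_from_url(
    "https://example.com/weights/model-9c8e1a2b.pt",
    check_hash=True,  # raises if the downloaded file's SHA256 prefix does not match the filename
)
```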
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ignite/utils.py`
Content:
```
1 import collections.abc as collections
2 import functools
3 import logging
4 import random
5 import warnings
6 from typing import Any, Callable, Dict, Optional, TextIO, Tuple, Type, TypeVar, Union, cast
7
8 import torch
9
10 __all__ = ["convert_tensor", "apply_to_tensor", "apply_to_type", "to_onehot", "setup_logger", "manual_seed"]
11
12
13 def convert_tensor(
14 x: Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes],
15 device: Optional[Union[str, torch.device]] = None,
16 non_blocking: bool = False,
17 ) -> Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes]:
18 """Move tensors to relevant device.
19
20 Args:
21 x: input tensor or mapping, or sequence of tensors.
22 device: device type to move ``x``.
23 non_blocking: convert a CPU Tensor with pinned memory to a CUDA Tensor
24 asynchronously with respect to the host if possible
25 """
26
27 def _func(tensor: torch.Tensor) -> torch.Tensor:
28 return tensor.to(device=device, non_blocking=non_blocking) if device is not None else tensor
29
30 return apply_to_tensor(x, _func)
31
32
33 def apply_to_tensor(
34 x: Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes], func: Callable
35 ) -> Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes]:
36 """Apply a function on a tensor or mapping, or sequence of tensors.
37
38 Args:
39 x: input tensor or mapping, or sequence of tensors.
40 func: the function to apply on ``x``.
41 """
42 return apply_to_type(x, torch.Tensor, func)
43
44
45 def apply_to_type(
46 x: Union[Any, collections.Sequence, collections.Mapping, str, bytes],
47 input_type: Union[Type, Tuple[Type[Any], Any]],
48 func: Callable,
49 ) -> Union[Any, collections.Sequence, collections.Mapping, str, bytes]:
50 """Apply a function on an object of `input_type` or mapping, or sequence of objects of `input_type`.
51
52 Args:
53 x: object or mapping or sequence.
54 input_type: data type of ``x``.
55 func: the function to apply on ``x``.
56 """
57 if isinstance(x, input_type):
58 return func(x)
59 if isinstance(x, (str, bytes)):
60 return x
61 if isinstance(x, collections.Mapping):
62 return cast(Callable, type(x))({k: apply_to_type(sample, input_type, func) for k, sample in x.items()})
63 if isinstance(x, tuple) and hasattr(x, "_fields"): # namedtuple
64 return cast(Callable, type(x))(*(apply_to_type(sample, input_type, func) for sample in x))
65 if isinstance(x, collections.Sequence):
66 return cast(Callable, type(x))([apply_to_type(sample, input_type, func) for sample in x])
67 raise TypeError((f"x must contain {input_type}, dicts or lists; found {type(x)}"))
68
69
70 def to_onehot(indices: torch.Tensor, num_classes: int) -> torch.Tensor:
71 """Convert a tensor of indices of any shape `(N, ...)` to a
72 tensor of one-hot indicators of shape `(N, num_classes, ...)` and of type uint8. Output's device is equal to the
73 input's device`.
74
75 Args:
76 indices: input tensor to convert.
77 num_classes: number of classes for one-hot tensor.
78
79 .. versionchanged:: 0.4.3
80 This functions is now torchscriptable.
81 """
82 new_shape = (indices.shape[0], num_classes) + indices.shape[1:]
83 onehot = torch.zeros(new_shape, dtype=torch.uint8, device=indices.device)
84 return onehot.scatter_(1, indices.unsqueeze(1), 1)
85
86
87 def setup_logger(
88 name: Optional[str] = "ignite",
89 level: int = logging.INFO,
90 stream: Optional[TextIO] = None,
91 format: str = "%(asctime)s %(name)s %(levelname)s: %(message)s",
92 filepath: Optional[str] = None,
93 distributed_rank: Optional[int] = None,
94 reset: bool = False,
95 ) -> logging.Logger:
96 """Setups logger: name, level, format etc.
97
98 Args:
99 name: new name for the logger. If None, the standard logger is used.
100 level: logging level, e.g. CRITICAL, ERROR, WARNING, INFO, DEBUG.
101 stream: logging stream. If None, the standard stream is used (sys.stderr).
102 format: logging format. By default, `%(asctime)s %(name)s %(levelname)s: %(message)s`.
103 filepath: Optional logging file path. If not None, logs are written to the file.
104 distributed_rank: Optional, rank in distributed configuration to avoid logger setup for workers.
105 If None, distributed_rank is initialized to the rank of process.
106 reset: if True, reset an existing logger rather than keep format, handlers, and level.
107
108 Returns:
109 logging.Logger
110
111 Examples:
112 Improve logs readability when training with a trainer and evaluator:
113
114 .. code-block:: python
115
116 from ignite.utils import setup_logger
117
118 trainer = ...
119 evaluator = ...
120
121 trainer.logger = setup_logger("trainer")
122 evaluator.logger = setup_logger("evaluator")
123
124 trainer.run(data, max_epochs=10)
125
126 # Logs will look like
127 # 2020-01-21 12:46:07,356 trainer INFO: Engine run starting with max_epochs=5.
128 # 2020-01-21 12:46:07,358 trainer INFO: Epoch[1] Complete. Time taken: 00:5:23
129 # 2020-01-21 12:46:07,358 evaluator INFO: Engine run starting with max_epochs=1.
130 # 2020-01-21 12:46:07,358 evaluator INFO: Epoch[1] Complete. Time taken: 00:01:02
131 # ...
132
133 Every existing logger can be reset if needed
134
135 .. code-block:: python
136
137 logger = setup_logger(name="my-logger", format="=== %(name)s %(message)s")
138 logger.info("first message")
139 setup_logger(name="my-logger", format="+++ %(name)s %(message)s", reset=True)
140 logger.info("second message")
141
142 # Logs will look like
143 # === my-logger first message
144 # +++ my-logger second message
145
146 Change the level of an existing internal logger
147
148 .. code-block:: python
149
150 setup_logger(
151 name="ignite.distributed.launcher.Parallel",
152 level=logging.WARNING
153 )
154
155 .. versionchanged:: 0.4.3
156 Added ``stream`` parameter.
157
158 .. versionchanged:: 0.4.5
159 Added ``reset`` parameter.
160 """
161 # check if the logger already exists
162 existing = name is None or name in logging.root.manager.loggerDict
163
164 # if existing, get the logger otherwise create a new one
165 logger = logging.getLogger(name)
166
167 if distributed_rank is None:
168 import ignite.distributed as idist
169
170 distributed_rank = idist.get_rank()
171
172 # Remove previous handlers
173 if distributed_rank > 0 or reset:
174
175 if logger.hasHandlers():
176 for h in list(logger.handlers):
177 logger.removeHandler(h)
178
179 if distributed_rank > 0:
180
181 # Add null handler to avoid multiple parallel messages
182 logger.addHandler(logging.NullHandler())
183
184 # Keep the existing configuration if not reset
185 if existing and not reset:
186 return logger
187
188 if distributed_rank == 0:
189 logger.setLevel(level)
190
191 formatter = logging.Formatter(format)
192
193 ch = logging.StreamHandler(stream=stream)
194 ch.setLevel(level)
195 ch.setFormatter(formatter)
196 logger.addHandler(ch)
197
198 if filepath is not None:
199 fh = logging.FileHandler(filepath)
200 fh.setLevel(level)
201 fh.setFormatter(formatter)
202 logger.addHandler(fh)
203
204 # don't propagate to ancestors
205 # the problem here is to attach handlers to loggers
206 # should we provide a default configuration less open ?
207 if name is not None:
208 logger.propagate = False
209
210 return logger
211
212
213 def manual_seed(seed: int) -> None:
214 """Setup random state from a seed for `torch`, `random` and optionally `numpy` (if can be imported).
215
216 Args:
217 seed: Random state seed
218
219 .. versionchanged:: 0.4.3
220 Added ``torch.cuda.manual_seed_all(seed)``.
221
222 .. versionchanged:: 0.4.5
223 Added ``torch_xla.core.xla_model.set_rng_state(seed)``.
224 """
225 random.seed(seed)
226 torch.manual_seed(seed)
227
228 if torch.cuda.is_available():
229 torch.cuda.manual_seed_all(seed)
230
231 try:
232 import torch_xla.core.xla_model as xm
233
234 xm.set_rng_state(seed)
235 except ImportError:
236 pass
237
238 try:
239 import numpy as np
240
241 np.random.seed(seed)
242 except ImportError:
243 pass
244
245
246 def deprecated(
247 deprecated_in: str, removed_in: str = "", reasons: Tuple[str, ...] = (), raise_exception: bool = False
248 ) -> Callable:
249
250 F = TypeVar("F", bound=Callable[..., Any])
251
252 def decorator(func: F) -> F:
253 func_doc = func.__doc__ if func.__doc__ else ""
254 deprecation_warning = (
255 f"This function has been deprecated since version {deprecated_in}"
256 + (f" and will be removed in version {removed_in}" if removed_in else "")
257 + ".\n Please refer to the documentation for more details."
258 )
259
260 @functools.wraps(func)
261 def wrapper(*args: Any, **kwargs: Dict[str, Any]) -> Callable:
262 if raise_exception:
263 raise DeprecationWarning(deprecation_warning)
264 warnings.warn(deprecation_warning, DeprecationWarning, stacklevel=2)
265 return func(*args, **kwargs)
266
267 appended_doc = f".. deprecated:: {deprecated_in}" + ("\n\n\t" if len(reasons) > 0 else "")
268
269 for reason in reasons:
270 appended_doc += "\n\t- " + reason
271 wrapper.__doc__ = f"**Deprecated function**.\n\n {func_doc}{appended_doc}"
272 return cast(F, wrapper)
273
274 return decorator
275
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ignite/utils.py b/ignite/utils.py
--- a/ignite/utils.py
+++ b/ignite/utils.py
@@ -1,13 +1,24 @@
import collections.abc as collections
import functools
+import hashlib
import logging
import random
+import shutil
import warnings
+from pathlib import Path
from typing import Any, Callable, Dict, Optional, TextIO, Tuple, Type, TypeVar, Union, cast
import torch
-__all__ = ["convert_tensor", "apply_to_tensor", "apply_to_type", "to_onehot", "setup_logger", "manual_seed"]
+__all__ = [
+ "convert_tensor",
+ "apply_to_tensor",
+ "apply_to_type",
+ "to_onehot",
+ "setup_logger",
+ "manual_seed",
+ "hash_checkpoint",
+]
def convert_tensor(
@@ -272,3 +283,34 @@
return cast(F, wrapper)
return decorator
+
+
+def hash_checkpoint(checkpoint_path: Union[str, Path], output_dir: Union[str, Path],) -> Tuple[Path, str]:
+ """
+ Hash the checkpoint file in the format of ``<filename>-<hash>.<ext>``
+ to be used with ``check_hash`` of :func:`torch.hub.load_state_dict_from_url`.
+
+ Args:
+ checkpoint_path: Path to the checkpoint file.
+ output_dir: Output directory to store the hashed checkpoint file.
+
+ Returns:
+ Path to the hashed checkpoint file, The 8 digits of SHA256 hash.
+
+ .. versionadded:: 0.5.0
+ """
+
+ if isinstance(checkpoint_path, str):
+ checkpoint_path = Path(checkpoint_path)
+
+ if isinstance(output_dir, str):
+ output_dir = Path(output_dir)
+
+ sha_hash = hashlib.sha256(checkpoint_path.read_bytes()).hexdigest()
+ old_filename = checkpoint_path.stem
+ new_filename = "-".join((old_filename, sha_hash[:8])) + ".pt"
+
+ hash_checkpoint_path = output_dir / new_filename
+ shutil.move(str(checkpoint_path), hash_checkpoint_path)
+
+ return hash_checkpoint_path, sha_hash
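A possible usage of the new helper (hypothetical paths; the function name and return values are taken from the patch above, and the output directory is assumed to already exist):
```py
from ignite.utils import hash_checkpoint

# e.g. renames checkpoints/best_model.pt -> release/best_model-1a2b3c4d.pt
hashed_path, sha256 = hash_checkpoint("checkpoints/best_model.pt", "release")
print(hashed_path, sha256[:8])
```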
|
{"golden_diff": "diff --git a/ignite/utils.py b/ignite/utils.py\n--- a/ignite/utils.py\n+++ b/ignite/utils.py\n@@ -1,13 +1,24 @@\n import collections.abc as collections\n import functools\n+import hashlib\n import logging\n import random\n+import shutil\n import warnings\n+from pathlib import Path\n from typing import Any, Callable, Dict, Optional, TextIO, Tuple, Type, TypeVar, Union, cast\n \n import torch\n \n-__all__ = [\"convert_tensor\", \"apply_to_tensor\", \"apply_to_type\", \"to_onehot\", \"setup_logger\", \"manual_seed\"]\n+__all__ = [\n+ \"convert_tensor\",\n+ \"apply_to_tensor\",\n+ \"apply_to_type\",\n+ \"to_onehot\",\n+ \"setup_logger\",\n+ \"manual_seed\",\n+ \"hash_checkpoint\",\n+]\n \n \n def convert_tensor(\n@@ -272,3 +283,34 @@\n return cast(F, wrapper)\n \n return decorator\n+\n+\n+def hash_checkpoint(checkpoint_path: Union[str, Path], output_dir: Union[str, Path],) -> Tuple[Path, str]:\n+ \"\"\"\n+ Hash the checkpoint file in the format of ``<filename>-<hash>.<ext>``\n+ to be used with ``check_hash`` of :func:`torch.hub.load_state_dict_from_url`.\n+\n+ Args:\n+ checkpoint_path: Path to the checkpoint file.\n+ output_dir: Output directory to store the hashed checkpoint file.\n+\n+ Returns:\n+ Path to the hashed checkpoint file, The 8 digits of SHA256 hash.\n+\n+ .. versionadded:: 0.5.0\n+ \"\"\"\n+\n+ if isinstance(checkpoint_path, str):\n+ checkpoint_path = Path(checkpoint_path)\n+\n+ if isinstance(output_dir, str):\n+ output_dir = Path(output_dir)\n+\n+ sha_hash = hashlib.sha256(checkpoint_path.read_bytes()).hexdigest()\n+ old_filename = checkpoint_path.stem\n+ new_filename = \"-\".join((old_filename, sha_hash[:8])) + \".pt\"\n+\n+ hash_checkpoint_path = output_dir / new_filename\n+ shutil.move(str(checkpoint_path), hash_checkpoint_path)\n+\n+ return hash_checkpoint_path, sha_hash\n", "issue": "A utility for hashing a checkpoint file\n### \ud83d\ude80 Feature\r\n\r\nBy default Ignite save a checkpoint with the following structure `{filename_prefix}_{name}_{suffix}.{ext}` which is great.\r\nAt first, I would like to add an argument for hashing a checkpoint in `Checkpoint` class. 
But that could occur a read + write overhead if we turn on hashing which may not be used very often during development.\r\n\r\nSo instead, I would like to propose to add a utility in `ignite.util` for hashing a saved checkpoint file so that we could use `torch.hub.load_state_dict_from_url` with `check_hash=True`.\r\n\r\nImplementation (stripped down and untested version):\r\n\r\n```py\r\ndef hash_checkpoint(checkpoint):\r\n with open(checkpoint, 'rb') as f:\r\n hash = hashlib.sha256(f.read()).hexdigest()\r\n\r\n checkpoint_prefix = checkpoint.strip('.pt')\r\n filname = '-'.join([checkpoint_prefix, hash[:8]]) + '.pt'\r\n shutil.move(checkpoint, filename)\r\n return filename\r\n```\r\n\r\nImplementation in Code-Generator:\r\n\r\nhttps://github.com/pytorch-ignite/code-generator/blob/a2ed5f771e7cf8a57c6f68b5a3797792f4676433/templates/single/utils.py#L134\n", "before_files": [{"content": "import collections.abc as collections\nimport functools\nimport logging\nimport random\nimport warnings\nfrom typing import Any, Callable, Dict, Optional, TextIO, Tuple, Type, TypeVar, Union, cast\n\nimport torch\n\n__all__ = [\"convert_tensor\", \"apply_to_tensor\", \"apply_to_type\", \"to_onehot\", \"setup_logger\", \"manual_seed\"]\n\n\ndef convert_tensor(\n x: Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes],\n device: Optional[Union[str, torch.device]] = None,\n non_blocking: bool = False,\n) -> Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes]:\n \"\"\"Move tensors to relevant device.\n\n Args:\n x: input tensor or mapping, or sequence of tensors.\n device: device type to move ``x``.\n non_blocking: convert a CPU Tensor with pinned memory to a CUDA Tensor\n asynchronously with respect to the host if possible\n \"\"\"\n\n def _func(tensor: torch.Tensor) -> torch.Tensor:\n return tensor.to(device=device, non_blocking=non_blocking) if device is not None else tensor\n\n return apply_to_tensor(x, _func)\n\n\ndef apply_to_tensor(\n x: Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes], func: Callable\n) -> Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes]:\n \"\"\"Apply a function on a tensor or mapping, or sequence of tensors.\n\n Args:\n x: input tensor or mapping, or sequence of tensors.\n func: the function to apply on ``x``.\n \"\"\"\n return apply_to_type(x, torch.Tensor, func)\n\n\ndef apply_to_type(\n x: Union[Any, collections.Sequence, collections.Mapping, str, bytes],\n input_type: Union[Type, Tuple[Type[Any], Any]],\n func: Callable,\n) -> Union[Any, collections.Sequence, collections.Mapping, str, bytes]:\n \"\"\"Apply a function on an object of `input_type` or mapping, or sequence of objects of `input_type`.\n\n Args:\n x: object or mapping or sequence.\n input_type: data type of ``x``.\n func: the function to apply on ``x``.\n \"\"\"\n if isinstance(x, input_type):\n return func(x)\n if isinstance(x, (str, bytes)):\n return x\n if isinstance(x, collections.Mapping):\n return cast(Callable, type(x))({k: apply_to_type(sample, input_type, func) for k, sample in x.items()})\n if isinstance(x, tuple) and hasattr(x, \"_fields\"): # namedtuple\n return cast(Callable, type(x))(*(apply_to_type(sample, input_type, func) for sample in x))\n if isinstance(x, collections.Sequence):\n return cast(Callable, type(x))([apply_to_type(sample, input_type, func) for sample in x])\n raise TypeError((f\"x must contain {input_type}, dicts or lists; found {type(x)}\"))\n\n\ndef to_onehot(indices: torch.Tensor, num_classes: 
int) -> torch.Tensor:\n \"\"\"Convert a tensor of indices of any shape `(N, ...)` to a\n tensor of one-hot indicators of shape `(N, num_classes, ...)` and of type uint8. Output's device is equal to the\n input's device`.\n\n Args:\n indices: input tensor to convert.\n num_classes: number of classes for one-hot tensor.\n\n .. versionchanged:: 0.4.3\n This functions is now torchscriptable.\n \"\"\"\n new_shape = (indices.shape[0], num_classes) + indices.shape[1:]\n onehot = torch.zeros(new_shape, dtype=torch.uint8, device=indices.device)\n return onehot.scatter_(1, indices.unsqueeze(1), 1)\n\n\ndef setup_logger(\n name: Optional[str] = \"ignite\",\n level: int = logging.INFO,\n stream: Optional[TextIO] = None,\n format: str = \"%(asctime)s %(name)s %(levelname)s: %(message)s\",\n filepath: Optional[str] = None,\n distributed_rank: Optional[int] = None,\n reset: bool = False,\n) -> logging.Logger:\n \"\"\"Setups logger: name, level, format etc.\n\n Args:\n name: new name for the logger. If None, the standard logger is used.\n level: logging level, e.g. CRITICAL, ERROR, WARNING, INFO, DEBUG.\n stream: logging stream. If None, the standard stream is used (sys.stderr).\n format: logging format. By default, `%(asctime)s %(name)s %(levelname)s: %(message)s`.\n filepath: Optional logging file path. If not None, logs are written to the file.\n distributed_rank: Optional, rank in distributed configuration to avoid logger setup for workers.\n If None, distributed_rank is initialized to the rank of process.\n reset: if True, reset an existing logger rather than keep format, handlers, and level.\n\n Returns:\n logging.Logger\n\n Examples:\n Improve logs readability when training with a trainer and evaluator:\n\n .. code-block:: python\n\n from ignite.utils import setup_logger\n\n trainer = ...\n evaluator = ...\n\n trainer.logger = setup_logger(\"trainer\")\n evaluator.logger = setup_logger(\"evaluator\")\n\n trainer.run(data, max_epochs=10)\n\n # Logs will look like\n # 2020-01-21 12:46:07,356 trainer INFO: Engine run starting with max_epochs=5.\n # 2020-01-21 12:46:07,358 trainer INFO: Epoch[1] Complete. Time taken: 00:5:23\n # 2020-01-21 12:46:07,358 evaluator INFO: Engine run starting with max_epochs=1.\n # 2020-01-21 12:46:07,358 evaluator INFO: Epoch[1] Complete. Time taken: 00:01:02\n # ...\n\n Every existing logger can be reset if needed\n\n .. code-block:: python\n\n logger = setup_logger(name=\"my-logger\", format=\"=== %(name)s %(message)s\")\n logger.info(\"first message\")\n setup_logger(name=\"my-logger\", format=\"+++ %(name)s %(message)s\", reset=True)\n logger.info(\"second message\")\n\n # Logs will look like\n # === my-logger first message\n # +++ my-logger second message\n\n Change the level of an existing internal logger\n\n .. code-block:: python\n\n setup_logger(\n name=\"ignite.distributed.launcher.Parallel\",\n level=logging.WARNING\n )\n\n .. versionchanged:: 0.4.3\n Added ``stream`` parameter.\n\n .. 
versionchanged:: 0.4.5\n Added ``reset`` parameter.\n \"\"\"\n # check if the logger already exists\n existing = name is None or name in logging.root.manager.loggerDict\n\n # if existing, get the logger otherwise create a new one\n logger = logging.getLogger(name)\n\n if distributed_rank is None:\n import ignite.distributed as idist\n\n distributed_rank = idist.get_rank()\n\n # Remove previous handlers\n if distributed_rank > 0 or reset:\n\n if logger.hasHandlers():\n for h in list(logger.handlers):\n logger.removeHandler(h)\n\n if distributed_rank > 0:\n\n # Add null handler to avoid multiple parallel messages\n logger.addHandler(logging.NullHandler())\n\n # Keep the existing configuration if not reset\n if existing and not reset:\n return logger\n\n if distributed_rank == 0:\n logger.setLevel(level)\n\n formatter = logging.Formatter(format)\n\n ch = logging.StreamHandler(stream=stream)\n ch.setLevel(level)\n ch.setFormatter(formatter)\n logger.addHandler(ch)\n\n if filepath is not None:\n fh = logging.FileHandler(filepath)\n fh.setLevel(level)\n fh.setFormatter(formatter)\n logger.addHandler(fh)\n\n # don't propagate to ancestors\n # the problem here is to attach handlers to loggers\n # should we provide a default configuration less open ?\n if name is not None:\n logger.propagate = False\n\n return logger\n\n\ndef manual_seed(seed: int) -> None:\n \"\"\"Setup random state from a seed for `torch`, `random` and optionally `numpy` (if can be imported).\n\n Args:\n seed: Random state seed\n\n .. versionchanged:: 0.4.3\n Added ``torch.cuda.manual_seed_all(seed)``.\n\n .. versionchanged:: 0.4.5\n Added ``torch_xla.core.xla_model.set_rng_state(seed)``.\n \"\"\"\n random.seed(seed)\n torch.manual_seed(seed)\n\n if torch.cuda.is_available():\n torch.cuda.manual_seed_all(seed)\n\n try:\n import torch_xla.core.xla_model as xm\n\n xm.set_rng_state(seed)\n except ImportError:\n pass\n\n try:\n import numpy as np\n\n np.random.seed(seed)\n except ImportError:\n pass\n\n\ndef deprecated(\n deprecated_in: str, removed_in: str = \"\", reasons: Tuple[str, ...] = (), raise_exception: bool = False\n) -> Callable:\n\n F = TypeVar(\"F\", bound=Callable[..., Any])\n\n def decorator(func: F) -> F:\n func_doc = func.__doc__ if func.__doc__ else \"\"\n deprecation_warning = (\n f\"This function has been deprecated since version {deprecated_in}\"\n + (f\" and will be removed in version {removed_in}\" if removed_in else \"\")\n + \".\\n Please refer to the documentation for more details.\"\n )\n\n @functools.wraps(func)\n def wrapper(*args: Any, **kwargs: Dict[str, Any]) -> Callable:\n if raise_exception:\n raise DeprecationWarning(deprecation_warning)\n warnings.warn(deprecation_warning, DeprecationWarning, stacklevel=2)\n return func(*args, **kwargs)\n\n appended_doc = f\".. 
deprecated:: {deprecated_in}\" + (\"\\n\\n\\t\" if len(reasons) > 0 else \"\")\n\n for reason in reasons:\n appended_doc += \"\\n\\t- \" + reason\n wrapper.__doc__ = f\"**Deprecated function**.\\n\\n {func_doc}{appended_doc}\"\n return cast(F, wrapper)\n\n return decorator\n", "path": "ignite/utils.py"}], "after_files": [{"content": "import collections.abc as collections\nimport functools\nimport hashlib\nimport logging\nimport random\nimport shutil\nimport warnings\nfrom pathlib import Path\nfrom typing import Any, Callable, Dict, Optional, TextIO, Tuple, Type, TypeVar, Union, cast\n\nimport torch\n\n__all__ = [\n \"convert_tensor\",\n \"apply_to_tensor\",\n \"apply_to_type\",\n \"to_onehot\",\n \"setup_logger\",\n \"manual_seed\",\n \"hash_checkpoint\",\n]\n\n\ndef convert_tensor(\n x: Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes],\n device: Optional[Union[str, torch.device]] = None,\n non_blocking: bool = False,\n) -> Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes]:\n \"\"\"Move tensors to relevant device.\n\n Args:\n x: input tensor or mapping, or sequence of tensors.\n device: device type to move ``x``.\n non_blocking: convert a CPU Tensor with pinned memory to a CUDA Tensor\n asynchronously with respect to the host if possible\n \"\"\"\n\n def _func(tensor: torch.Tensor) -> torch.Tensor:\n return tensor.to(device=device, non_blocking=non_blocking) if device is not None else tensor\n\n return apply_to_tensor(x, _func)\n\n\ndef apply_to_tensor(\n x: Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes], func: Callable\n) -> Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes]:\n \"\"\"Apply a function on a tensor or mapping, or sequence of tensors.\n\n Args:\n x: input tensor or mapping, or sequence of tensors.\n func: the function to apply on ``x``.\n \"\"\"\n return apply_to_type(x, torch.Tensor, func)\n\n\ndef apply_to_type(\n x: Union[Any, collections.Sequence, collections.Mapping, str, bytes],\n input_type: Union[Type, Tuple[Type[Any], Any]],\n func: Callable,\n) -> Union[Any, collections.Sequence, collections.Mapping, str, bytes]:\n \"\"\"Apply a function on an object of `input_type` or mapping, or sequence of objects of `input_type`.\n\n Args:\n x: object or mapping or sequence.\n input_type: data type of ``x``.\n func: the function to apply on ``x``.\n \"\"\"\n if isinstance(x, input_type):\n return func(x)\n if isinstance(x, (str, bytes)):\n return x\n if isinstance(x, collections.Mapping):\n return cast(Callable, type(x))({k: apply_to_type(sample, input_type, func) for k, sample in x.items()})\n if isinstance(x, tuple) and hasattr(x, \"_fields\"): # namedtuple\n return cast(Callable, type(x))(*(apply_to_type(sample, input_type, func) for sample in x))\n if isinstance(x, collections.Sequence):\n return cast(Callable, type(x))([apply_to_type(sample, input_type, func) for sample in x])\n raise TypeError((f\"x must contain {input_type}, dicts or lists; found {type(x)}\"))\n\n\ndef to_onehot(indices: torch.Tensor, num_classes: int) -> torch.Tensor:\n \"\"\"Convert a tensor of indices of any shape `(N, ...)` to a\n tensor of one-hot indicators of shape `(N, num_classes, ...)` and of type uint8. Output's device is equal to the\n input's device`.\n\n Args:\n indices: input tensor to convert.\n num_classes: number of classes for one-hot tensor.\n\n .. 
versionchanged:: 0.4.3\n This functions is now torchscriptable.\n \"\"\"\n new_shape = (indices.shape[0], num_classes) + indices.shape[1:]\n onehot = torch.zeros(new_shape, dtype=torch.uint8, device=indices.device)\n return onehot.scatter_(1, indices.unsqueeze(1), 1)\n\n\ndef setup_logger(\n name: Optional[str] = \"ignite\",\n level: int = logging.INFO,\n stream: Optional[TextIO] = None,\n format: str = \"%(asctime)s %(name)s %(levelname)s: %(message)s\",\n filepath: Optional[str] = None,\n distributed_rank: Optional[int] = None,\n reset: bool = False,\n) -> logging.Logger:\n \"\"\"Setups logger: name, level, format etc.\n\n Args:\n name: new name for the logger. If None, the standard logger is used.\n level: logging level, e.g. CRITICAL, ERROR, WARNING, INFO, DEBUG.\n stream: logging stream. If None, the standard stream is used (sys.stderr).\n format: logging format. By default, `%(asctime)s %(name)s %(levelname)s: %(message)s`.\n filepath: Optional logging file path. If not None, logs are written to the file.\n distributed_rank: Optional, rank in distributed configuration to avoid logger setup for workers.\n If None, distributed_rank is initialized to the rank of process.\n reset: if True, reset an existing logger rather than keep format, handlers, and level.\n\n Returns:\n logging.Logger\n\n Examples:\n Improve logs readability when training with a trainer and evaluator:\n\n .. code-block:: python\n\n from ignite.utils import setup_logger\n\n trainer = ...\n evaluator = ...\n\n trainer.logger = setup_logger(\"trainer\")\n evaluator.logger = setup_logger(\"evaluator\")\n\n trainer.run(data, max_epochs=10)\n\n # Logs will look like\n # 2020-01-21 12:46:07,356 trainer INFO: Engine run starting with max_epochs=5.\n # 2020-01-21 12:46:07,358 trainer INFO: Epoch[1] Complete. Time taken: 00:5:23\n # 2020-01-21 12:46:07,358 evaluator INFO: Engine run starting with max_epochs=1.\n # 2020-01-21 12:46:07,358 evaluator INFO: Epoch[1] Complete. Time taken: 00:01:02\n # ...\n\n Every existing logger can be reset if needed\n\n .. code-block:: python\n\n logger = setup_logger(name=\"my-logger\", format=\"=== %(name)s %(message)s\")\n logger.info(\"first message\")\n setup_logger(name=\"my-logger\", format=\"+++ %(name)s %(message)s\", reset=True)\n logger.info(\"second message\")\n\n # Logs will look like\n # === my-logger first message\n # +++ my-logger second message\n\n Change the level of an existing internal logger\n\n .. code-block:: python\n\n setup_logger(\n name=\"ignite.distributed.launcher.Parallel\",\n level=logging.WARNING\n )\n\n .. versionchanged:: 0.4.3\n Added ``stream`` parameter.\n\n .. 
versionchanged:: 0.4.5\n Added ``reset`` parameter.\n \"\"\"\n # check if the logger already exists\n existing = name is None or name in logging.root.manager.loggerDict\n\n # if existing, get the logger otherwise create a new one\n logger = logging.getLogger(name)\n\n if distributed_rank is None:\n import ignite.distributed as idist\n\n distributed_rank = idist.get_rank()\n\n # Remove previous handlers\n if distributed_rank > 0 or reset:\n\n if logger.hasHandlers():\n for h in list(logger.handlers):\n logger.removeHandler(h)\n\n if distributed_rank > 0:\n\n # Add null handler to avoid multiple parallel messages\n logger.addHandler(logging.NullHandler())\n\n # Keep the existing configuration if not reset\n if existing and not reset:\n return logger\n\n if distributed_rank == 0:\n logger.setLevel(level)\n\n formatter = logging.Formatter(format)\n\n ch = logging.StreamHandler(stream=stream)\n ch.setLevel(level)\n ch.setFormatter(formatter)\n logger.addHandler(ch)\n\n if filepath is not None:\n fh = logging.FileHandler(filepath)\n fh.setLevel(level)\n fh.setFormatter(formatter)\n logger.addHandler(fh)\n\n # don't propagate to ancestors\n # the problem here is to attach handlers to loggers\n # should we provide a default configuration less open ?\n if name is not None:\n logger.propagate = False\n\n return logger\n\n\ndef manual_seed(seed: int) -> None:\n \"\"\"Setup random state from a seed for `torch`, `random` and optionally `numpy` (if can be imported).\n\n Args:\n seed: Random state seed\n\n .. versionchanged:: 0.4.3\n Added ``torch.cuda.manual_seed_all(seed)``.\n\n .. versionchanged:: 0.4.5\n Added ``torch_xla.core.xla_model.set_rng_state(seed)``.\n \"\"\"\n random.seed(seed)\n torch.manual_seed(seed)\n\n if torch.cuda.is_available():\n torch.cuda.manual_seed_all(seed)\n\n try:\n import torch_xla.core.xla_model as xm\n\n xm.set_rng_state(seed)\n except ImportError:\n pass\n\n try:\n import numpy as np\n\n np.random.seed(seed)\n except ImportError:\n pass\n\n\ndef deprecated(\n deprecated_in: str, removed_in: str = \"\", reasons: Tuple[str, ...] = (), raise_exception: bool = False\n) -> Callable:\n\n F = TypeVar(\"F\", bound=Callable[..., Any])\n\n def decorator(func: F) -> F:\n func_doc = func.__doc__ if func.__doc__ else \"\"\n deprecation_warning = (\n f\"This function has been deprecated since version {deprecated_in}\"\n + (f\" and will be removed in version {removed_in}\" if removed_in else \"\")\n + \".\\n Please refer to the documentation for more details.\"\n )\n\n @functools.wraps(func)\n def wrapper(*args: Any, **kwargs: Dict[str, Any]) -> Callable:\n if raise_exception:\n raise DeprecationWarning(deprecation_warning)\n warnings.warn(deprecation_warning, DeprecationWarning, stacklevel=2)\n return func(*args, **kwargs)\n\n appended_doc = f\".. 
deprecated:: {deprecated_in}\" + (\"\\n\\n\\t\" if len(reasons) > 0 else \"\")\n\n for reason in reasons:\n appended_doc += \"\\n\\t- \" + reason\n wrapper.__doc__ = f\"**Deprecated function**.\\n\\n {func_doc}{appended_doc}\"\n return cast(F, wrapper)\n\n return decorator\n\n\ndef hash_checkpoint(checkpoint_path: Union[str, Path], output_dir: Union[str, Path],) -> Tuple[Path, str]:\n \"\"\"\n Hash the checkpoint file in the format of ``<filename>-<hash>.<ext>``\n to be used with ``check_hash`` of :func:`torch.hub.load_state_dict_from_url`.\n\n Args:\n checkpoint_path: Path to the checkpoint file.\n output_dir: Output directory to store the hashed checkpoint file.\n\n Returns:\n Path to the hashed checkpoint file, The 8 digits of SHA256 hash.\n\n .. versionadded:: 0.5.0\n \"\"\"\n\n if isinstance(checkpoint_path, str):\n checkpoint_path = Path(checkpoint_path)\n\n if isinstance(output_dir, str):\n output_dir = Path(output_dir)\n\n sha_hash = hashlib.sha256(checkpoint_path.read_bytes()).hexdigest()\n old_filename = checkpoint_path.stem\n new_filename = \"-\".join((old_filename, sha_hash[:8])) + \".pt\"\n\n hash_checkpoint_path = output_dir / new_filename\n shutil.move(str(checkpoint_path), hash_checkpoint_path)\n\n return hash_checkpoint_path, sha_hash\n", "path": "ignite/utils.py"}]}
| 3,543 | 494 |
gh_patches_debug_12430
|
rasdani/github-patches
|
git_diff
|
scrapy__scrapy-2065
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
IOError, 'Not a gzipped file'
While trying to access a sitemap listed in robots.txt, Scrapy fails with an **IOError, 'Not a gzipped file'** error.
Not sure if this issue is related to the following issue(s):
https://github.com/scrapy/scrapy/issues/193 -> closed issue
https://github.com/scrapy/scrapy/pull/660 -> merged pull request to address issue 193
https://github.com/scrapy/scrapy/issues/951 -> open issue
> Line where the code fails (gzip.py, line 197):
>
> ``` python
> def _read_gzip_header(self):
> magic = self.fileobj.read(2)
> if magic != '\037\213':
> raise IOError, 'Not a gzipped file'
> ```
# Response Header
```
Content-Encoding: gzip
Accept-Ranges: bytes
X-Amz-Request-Id: BFFF010DDE6268DA
Vary: Accept-Encoding
Server: AmazonS3
Last-Modified: Wed, 15 Jun 2016 19:02:20 GMT
Etag: "300bb71d6897cb2a22bba0bd07978c84"
Cache-Control: no-transform
Date: Sun, 19 Jun 2016 10:54:53 GMT
Content-Type: binary/octet-stream
```
Error Log:
``` log
Traceback (most recent call last):
File "c:\venv\scrapy1.0\lib\site-packages\scrapy\utils\defer.py", line 102, in iter_errback
yield next(it)
File "c:\venv\scrapy1.0\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in process_spider_output
for x in result:
File "c:\venv\scrapy1.0\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 22, in <genexpr>
return (_set_referer(r) for r in result or ())
File "c:\venv\scrapy1.0\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
return (r for r in result or () if _filter(r))
File "c:\venv\scrapy1.0\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
return (r for r in result or () if _filter(r))
File "D:\projects\sitemap_spider\sitemap_spider\spiders\mainspider.py", line 31, in _parse_sitemap
body = self._get_sitemap_body(response)
File "c:\venv\scrapy1.0\lib\site-packages\scrapy\spiders\sitemap.py", line 67, in _get_sitemap_body
return gunzip(response.body)
File "c:\venv\scrapy1.0\lib\site-packages\scrapy\utils\gz.py", line 37, in gunzip
chunk = read1(f, 8196)
File "c:\venv\scrapy1.0\lib\site-packages\scrapy\utils\gz.py", line 21, in read1
return gzf.read(size)
File "c:\python27\Lib\gzip.py", line 268, in read
self._read(readsize)
File "c:\python27\Lib\gzip.py", line 303, in _read
self._read_gzip_header()
File "c:\python27\Lib\gzip.py", line 197, in _read_gzip_header
raise IOError, 'Not a gzipped file'
```
I downloaded the file manually and was able to extract the content, so it is not as if the file is corrupted.
As an example sitemap URL, you can follow Amazon's robots.txt.
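One quick way to confirm the diagnosis (an ad-hoc sketch, not part of Scrapy; the URL is hypothetical): fetch the body without transparent transfer decompression and look at the first two bytes — a payload that has already been gunzipped by the HTTP compression middleware no longer starts with the gzip magic number that `gzip.py` checks for.
```py
import urllib.request

req = urllib.request.Request(
    "https://example.com/sitemap.xml.gz",       # hypothetical URL
    headers={"Accept-Encoding": "identity"},    # ask the server not to gzip the transfer itself
)
body = urllib.request.urlopen(req).read()
print(body[:2] == b"\x1f\x8b")  # True only if the payload itself is still gzip-compressed
```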
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/utils/gz.py`
Content:
```
1 import struct
2
3 try:
4 from cStringIO import StringIO as BytesIO
5 except ImportError:
6 from io import BytesIO
7 from gzip import GzipFile
8
9 import six
10 import re
11
12 # - Python>=3.5 GzipFile's read() has issues returning leftover
13 # uncompressed data when input is corrupted
14 # (regression or bug-fix compared to Python 3.4)
15 # - read1(), which fetches data before raising EOFError on next call
16 # works here but is only available from Python>=3.3
17 # - scrapy does not support Python 3.2
18 # - Python 2.7 GzipFile works fine with standard read() + extrabuf
19 if six.PY2:
20 def read1(gzf, size=-1):
21 return gzf.read(size)
22 else:
23 def read1(gzf, size=-1):
24 return gzf.read1(size)
25
26
27 def gunzip(data):
28 """Gunzip the given data and return as much data as possible.
29
30 This is resilient to CRC checksum errors.
31 """
32 f = GzipFile(fileobj=BytesIO(data))
33 output = b''
34 chunk = b'.'
35 while chunk:
36 try:
37 chunk = read1(f, 8196)
38 output += chunk
39 except (IOError, EOFError, struct.error):
40 # complete only if there is some data, otherwise re-raise
41 # see issue 87 about catching struct.error
42 # some pages are quite small so output is '' and f.extrabuf
43 # contains the whole page content
44 if output or getattr(f, 'extrabuf', None):
45 try:
46 output += f.extrabuf
47 finally:
48 break
49 else:
50 raise
51 return output
52
53 _is_gzipped_re = re.compile(br'^application/(x-)?gzip\b', re.I)
54
55 def is_gzipped(response):
56 """Return True if the response is gzipped, or False otherwise"""
57 ctype = response.headers.get('Content-Type', b'')
58 return _is_gzipped_re.search(ctype) is not None
59
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scrapy/utils/gz.py b/scrapy/utils/gz.py
--- a/scrapy/utils/gz.py
+++ b/scrapy/utils/gz.py
@@ -50,9 +50,12 @@
raise
return output
-_is_gzipped_re = re.compile(br'^application/(x-)?gzip\b', re.I)
+_is_gzipped = re.compile(br'^application/(x-)?gzip\b', re.I).search
+_is_octetstream = re.compile(br'^(application|binary)/octet-stream\b', re.I).search
def is_gzipped(response):
"""Return True if the response is gzipped, or False otherwise"""
ctype = response.headers.get('Content-Type', b'')
- return _is_gzipped_re.search(ctype) is not None
+ cenc = response.headers.get('Content-Encoding', b'').lower()
+ return (_is_gzipped(ctype) or
+ (_is_octetstream(ctype) and cenc in (b'gzip', b'x-gzip')))
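A small sketch of the behaviour after this change (hypothetical headers; assumes the patch above is applied):
```py
from scrapy.http import Response
from scrapy.utils.gz import is_gzipped

r = Response(
    "http://example.com/sitemap.xml.gz",
    headers={"Content-Type": "binary/octet-stream", "Content-Encoding": "gzip"},
)
print(is_gzipped(r))  # True with the patch; previously only application/(x-)gzip matched
```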
|
{"golden_diff": "diff --git a/scrapy/utils/gz.py b/scrapy/utils/gz.py\n--- a/scrapy/utils/gz.py\n+++ b/scrapy/utils/gz.py\n@@ -50,9 +50,12 @@\n raise\n return output\n \n-_is_gzipped_re = re.compile(br'^application/(x-)?gzip\\b', re.I)\n+_is_gzipped = re.compile(br'^application/(x-)?gzip\\b', re.I).search\n+_is_octetstream = re.compile(br'^(application|binary)/octet-stream\\b', re.I).search\n \n def is_gzipped(response):\n \"\"\"Return True if the response is gzipped, or False otherwise\"\"\"\n ctype = response.headers.get('Content-Type', b'')\n- return _is_gzipped_re.search(ctype) is not None\n+ cenc = response.headers.get('Content-Encoding', b'').lower()\n+ return (_is_gzipped(ctype) or\n+ (_is_octetstream(ctype) and cenc in (b'gzip', b'x-gzip')))\n", "issue": "IOError, 'Not a gzipped file'\nwhile trying to access sitemap from robots.txt , Scrapy fails with **IOError, 'Not a gzipped file'** error\n\nnot sure if this issue is related to following issue(s)\nhttps://github.com/scrapy/scrapy/issues/193 -> closed issue\nhttps://github.com/scrapy/scrapy/pull/660 -> merged pull request to address issue 193\nhttps://github.com/scrapy/scrapy/issues/951 -> open issue\n\n> line where code fails in gzip.py at line # 197\n> \n> ``` python\n> def _read_gzip_header(self):\n> magic = self.fileobj.read(2)\n> if magic != '\\037\\213':\n> raise IOError, 'Not a gzipped file'\n> ```\n# Response Header\n\n```\nContent-Encoding: gzip\nAccept-Ranges: bytes\nX-Amz-Request-Id: BFFF010DDE6268DA\nVary: Accept-Encoding\nServer: AmazonS3\nLast-Modified: Wed, 15 Jun 2016 19:02:20 GMT\nEtag: \"300bb71d6897cb2a22bba0bd07978c84\"\nCache-Control: no-transform\nDate: Sun, 19 Jun 2016 10:54:53 GMT\nContent-Type: binary/octet-stream\n```\n\nError Log:\n\n``` log\n Traceback (most recent call last):\n File \"c:\\venv\\scrapy1.0\\lib\\site-packages\\scrapy\\utils\\defer.py\", line 102, in iter_errback\n yield next(it)\n File \"c:\\venv\\scrapy1.0\\lib\\site-packages\\scrapy\\spidermiddlewares\\offsite.py\", line 29, in process_spider_output\n for x in result:\n File \"c:\\venv\\scrapy1.0\\lib\\site-packages\\scrapy\\spidermiddlewares\\referer.py\", line 22, in <genexpr>\n return (_set_referer(r) for r in result or ())\n File \"c:\\venv\\scrapy1.0\\lib\\site-packages\\scrapy\\spidermiddlewares\\urllength.py\", line 37, in <genexpr>\n return (r for r in result or () if _filter(r))\n File \"c:\\venv\\scrapy1.0\\lib\\site-packages\\scrapy\\spidermiddlewares\\depth.py\", line 58, in <genexpr>\n return (r for r in result or () if _filter(r))\n File \"D:\\projects\\sitemap_spider\\sitemap_spider\\spiders\\mainspider.py\", line 31, in _parse_sitemap\n body = self._get_sitemap_body(response)\n File \"c:\\venv\\scrapy1.0\\lib\\site-packages\\scrapy\\spiders\\sitemap.py\", line 67, in _get_sitemap_body\n return gunzip(response.body)\n File \"c:\\venv\\scrapy1.0\\lib\\site-packages\\scrapy\\utils\\gz.py\", line 37, in gunzip\n chunk = read1(f, 8196)\n File \"c:\\venv\\scrapy1.0\\lib\\site-packages\\scrapy\\utils\\gz.py\", line 21, in read1\n return gzf.read(size)\n File \"c:\\python27\\Lib\\gzip.py\", line 268, in read\n self._read(readsize)\n File \"c:\\python27\\Lib\\gzip.py\", line 303, in _read\n self._read_gzip_header()\n File \"c:\\python27\\Lib\\gzip.py\", line 197, in _read_gzip_header\n raise IOError, 'Not a gzipped file'\n```\n\ni did download file manually and was able to extract the content so it is not like file is corrupted \n\nas an example sitemap url : you can follow amazon robots.txt\n\n", "before_files": [{"content": 
"import struct\n\ntry:\n from cStringIO import StringIO as BytesIO\nexcept ImportError:\n from io import BytesIO\nfrom gzip import GzipFile\n\nimport six\nimport re\n\n# - Python>=3.5 GzipFile's read() has issues returning leftover\n# uncompressed data when input is corrupted\n# (regression or bug-fix compared to Python 3.4)\n# - read1(), which fetches data before raising EOFError on next call\n# works here but is only available from Python>=3.3\n# - scrapy does not support Python 3.2\n# - Python 2.7 GzipFile works fine with standard read() + extrabuf\nif six.PY2:\n def read1(gzf, size=-1):\n return gzf.read(size)\nelse:\n def read1(gzf, size=-1):\n return gzf.read1(size)\n\n\ndef gunzip(data):\n \"\"\"Gunzip the given data and return as much data as possible.\n\n This is resilient to CRC checksum errors.\n \"\"\"\n f = GzipFile(fileobj=BytesIO(data))\n output = b''\n chunk = b'.'\n while chunk:\n try:\n chunk = read1(f, 8196)\n output += chunk\n except (IOError, EOFError, struct.error):\n # complete only if there is some data, otherwise re-raise\n # see issue 87 about catching struct.error\n # some pages are quite small so output is '' and f.extrabuf\n # contains the whole page content\n if output or getattr(f, 'extrabuf', None):\n try:\n output += f.extrabuf\n finally:\n break\n else:\n raise\n return output\n\n_is_gzipped_re = re.compile(br'^application/(x-)?gzip\\b', re.I)\n\ndef is_gzipped(response):\n \"\"\"Return True if the response is gzipped, or False otherwise\"\"\"\n ctype = response.headers.get('Content-Type', b'')\n return _is_gzipped_re.search(ctype) is not None\n", "path": "scrapy/utils/gz.py"}], "after_files": [{"content": "import struct\n\ntry:\n from cStringIO import StringIO as BytesIO\nexcept ImportError:\n from io import BytesIO\nfrom gzip import GzipFile\n\nimport six\nimport re\n\n# - Python>=3.5 GzipFile's read() has issues returning leftover\n# uncompressed data when input is corrupted\n# (regression or bug-fix compared to Python 3.4)\n# - read1(), which fetches data before raising EOFError on next call\n# works here but is only available from Python>=3.3\n# - scrapy does not support Python 3.2\n# - Python 2.7 GzipFile works fine with standard read() + extrabuf\nif six.PY2:\n def read1(gzf, size=-1):\n return gzf.read(size)\nelse:\n def read1(gzf, size=-1):\n return gzf.read1(size)\n\n\ndef gunzip(data):\n \"\"\"Gunzip the given data and return as much data as possible.\n\n This is resilient to CRC checksum errors.\n \"\"\"\n f = GzipFile(fileobj=BytesIO(data))\n output = b''\n chunk = b'.'\n while chunk:\n try:\n chunk = read1(f, 8196)\n output += chunk\n except (IOError, EOFError, struct.error):\n # complete only if there is some data, otherwise re-raise\n # see issue 87 about catching struct.error\n # some pages are quite small so output is '' and f.extrabuf\n # contains the whole page content\n if output or getattr(f, 'extrabuf', None):\n try:\n output += f.extrabuf\n finally:\n break\n else:\n raise\n return output\n\n_is_gzipped = re.compile(br'^application/(x-)?gzip\\b', re.I).search\n_is_octetstream = re.compile(br'^(application|binary)/octet-stream\\b', re.I).search\n\ndef is_gzipped(response):\n \"\"\"Return True if the response is gzipped, or False otherwise\"\"\"\n ctype = response.headers.get('Content-Type', b'')\n cenc = response.headers.get('Content-Encoding', b'').lower()\n return (_is_gzipped(ctype) or\n (_is_octetstream(ctype) and cenc in (b'gzip', b'x-gzip')))\n", "path": "scrapy/utils/gz.py"}]}
| 1,743 | 233 |
gh_patches_debug_19955
|
rasdani/github-patches
|
git_diff
|
mkdocs__mkdocs-134
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Wrong paths to extra media files
Incorrect paths are generated for files in `extra_css` and `extra_javascript`. For example, `test.js` will be referenced on the index page as `./test.js`, on the next page as `.././test.js`, on the next page as `../.././test.js`, etc.
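For illustration, a tiny helper (hypothetical, not MkDocs code) showing the expected behaviour — the media URL should depend only on how deeply the current page is nested, never on how many pages were rendered before it:
```py
import posixpath

def media_url(page_url, media_path):
    # relative URL from the current page's directory to a site-root media file
    return posixpath.relpath("/" + media_path, start=posixpath.dirname(page_url))

print(media_url("/index.html", "test.js"))        # test.js
print(media_url("/about/index.html", "test.js"))  # ../test.js
```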
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mkdocs/build.py`
Content:
```
1 # coding: utf-8
2 from __future__ import print_function
3
4 from mkdocs import nav, toc, utils
5 from mkdocs.compat import urljoin, urlparse, urlunparse, PY2
6 import jinja2
7 import markdown
8 import os
9 import re
10
11
12 class PathToURL(object):
13 def __init__(self, nav=None):
14 self.nav = nav
15
16 def __call__(self, match):
17 url = match.groups()[0]
18 scheme, netloc, path, query, query, fragment = urlparse(url)
19
20 if (scheme or netloc or not utils.is_markdown_file(path)):
21 # Ignore URLs unless they are a relative link to a markdown file.
22 return 'a href="%s"' % url
23
24 if self.nav:
25 # If the site navigation has been provided, then validate
26 # the internal hyperlink, making sure the target actually exists.
27 target_file = self.nav.file_context.make_absolute(path)
28 if target_file not in self.nav.source_files:
29 source_file = self.nav.file_context.current_file
30 msg = (
31 'The page "%s" contained a hyperlink to "%s" which '
32 'is not listed in the "pages" configuration.'
33 )
34 assert False, msg % (source_file, target_file)
35 path = utils.get_url_path(target_file)
36 path = self.nav.url_context.make_relative(path)
37 else:
38 path = utils.get_url_path(path).lstrip('/')
39
40 # Convert the .md hyperlink to a relative hyperlink to the HTML page.
41 url = urlunparse((scheme, netloc, path, query, query, fragment))
42 return 'a href="%s"' % url
43
44
45 def convert_markdown(markdown_source, extensions=()):
46 """
47 Convert the Markdown source file to HTML content, and additionally
48 return the parsed table of contents, and a dictionary of any metadata
49 that was specified in the Markdown file.
50
51 `extensions` is an optional sequence of Python Markdown extensions to add
52 to the default set.
53 """
54
55 # Prepend a table of contents marker for the TOC extension
56 markdown_source = toc.pre_process(markdown_source)
57
58 # Generate the HTML from the markdown source
59 md = markdown.Markdown(
60 extensions=['meta', 'toc', 'tables', 'fenced_code'] + list(extensions)
61 )
62 html_content = md.convert(markdown_source)
63 meta = md.Meta
64
65 # Strip out the generated table of contents
66 (html_content, toc_html) = toc.post_process(html_content)
67
68 # Post process the generated table of contents into a data structure
69 table_of_contents = toc.TableOfContents(toc_html)
70
71 return (html_content, table_of_contents, meta)
72
73
74 def post_process_html(html_content, nav=None):
75 html_content = re.sub(r'a href="([^"]*)"', PathToURL(nav), html_content)
76 html_content = re.sub('<pre>', '<pre class="prettyprint well">', html_content)
77 return html_content
78
79
80 def get_context(page, content, nav, toc, meta, config):
81 site_name = config['site_name']
82
83 if page.is_homepage or page.title is None:
84 page_title = site_name
85 else:
86 page_title = page.title + ' - ' + site_name
87
88 if page.is_homepage:
89 page_description = config['site_description']
90 else:
91 page_description = None
92
93 if config['site_url']:
94 base = config['site_url']
95 if not base.endswith('/'):
96 base += '/'
97 canonical_url = urljoin(base, page.abs_url.lstrip('/'))
98 else:
99 canonical_url = None
100
101 if config['site_favicon']:
102 site_favicon = nav.url_context.make_relative('/' + config['site_favicon'])
103 else:
104 site_favicon = None
105
106 if config['extra_javascript']:
107 config['extra_javascript'] = utils.create_media_urls(nav=nav, url_list=config['extra_javascript'])
108
109 if config['extra_css']:
110 config['extra_css'] = utils.create_media_urls(nav=nav, url_list=config['extra_css'])
111
112 return {
113 'site_name': site_name,
114 'site_author': config['site_author'],
115 'favicon': site_favicon,
116
117 'page_title': page_title,
118 'page_description': page_description,
119
120 'content': content,
121 'toc': toc,
122 'nav': nav,
123 'meta': meta,
124
125 'base_url': nav.url_context.make_relative('/'),
126 'homepage_url': nav.homepage.url,
127 'canonical_url': canonical_url,
128
129 'current_page': page,
130 'previous_page': page.previous_page,
131 'next_page': page.next_page,
132
133 # Note that there's intentionally repetition here. Rather than simply
134 # provide the config dictionary we instead pass everything explicitly.
135 #
136 # This helps ensure that we can throughly document the context that
137 # gets passed to themes.
138 'repo_url': config['repo_url'],
139 'repo_name': config['repo_name'],
140
141 'extra_css': config['extra_css'],
142 'extra_javascript': config['extra_javascript'],
143
144 'include_nav': config['include_nav'],
145 'include_next_prev': config['include_next_prev'],
146 'include_search': config['include_search'],
147
148 'copyright': config['copyright'],
149 'google-analytics': config['google-analytics']
150 }
151
152
153 def build_pages(config):
154 """
155 Builds all the pages and writes them into the build directory.
156 """
157 site_navigation = nav.SiteNavigation(config['pages'])
158 loader = jinja2.FileSystemLoader(config['theme_dir'])
159 env = jinja2.Environment(loader=loader)
160
161 for page in site_navigation.walk_pages():
162 # Read the input file
163 input_path = os.path.join(config['docs_dir'], page.input_path)
164 input_content = open(input_path, 'r').read()
165 if PY2:
166 input_content = input_content.decode('utf-8')
167
168 # Process the markdown text
169 html_content, table_of_contents, meta = convert_markdown(
170 input_content, extensions=config['markdown_extensions']
171 )
172 html_content = post_process_html(html_content, site_navigation)
173
174 context = get_context(
175 page, html_content, site_navigation,
176 table_of_contents, meta, config
177 )
178
179 # Allow 'template:' override in md source files.
180 if 'template' in meta:
181 template = env.get_template(meta['template'][0])
182 else:
183 template = env.get_template('base.html')
184
185 # Render the template.
186 output_content = template.render(context)
187
188 # Write the output file.
189 output_path = os.path.join(config['site_dir'], page.output_path)
190 utils.write_file(output_content.encode('utf-8'), output_path)
191
192
193 def build(config, live_server=False):
194 """
195 Perform a full site build.
196 """
197 if not live_server:
198 print("Building documentation to directory: %s" % config['site_dir'])
199 utils.copy_media_files(config['theme_dir'], config['site_dir'])
200 utils.copy_media_files(config['docs_dir'], config['site_dir'])
201 build_pages(config)
202
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mkdocs/build.py b/mkdocs/build.py
--- a/mkdocs/build.py
+++ b/mkdocs/build.py
@@ -103,11 +103,9 @@
else:
site_favicon = None
- if config['extra_javascript']:
- config['extra_javascript'] = utils.create_media_urls(nav=nav, url_list=config['extra_javascript'])
+ extra_javascript = utils.create_media_urls(nav=nav, url_list=config['extra_javascript'])
- if config['extra_css']:
- config['extra_css'] = utils.create_media_urls(nav=nav, url_list=config['extra_css'])
+ extra_css = utils.create_media_urls(nav=nav, url_list=config['extra_css'])
return {
'site_name': site_name,
@@ -138,8 +136,8 @@
'repo_url': config['repo_url'],
'repo_name': config['repo_name'],
- 'extra_css': config['extra_css'],
- 'extra_javascript': config['extra_javascript'],
+ 'extra_css': extra_css,
+ 'extra_javascript': extra_javascript,
'include_nav': config['include_nav'],
'include_next_prev': config['include_next_prev'],
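A note on the design choice behind the patch: the old code wrote the page-relative URLs back into `config`, so every subsequent page re-relativized an already relative path; computing `extra_css`/`extra_javascript` into local variables keeps `config` holding the original root-relative paths. A hypothetical reconstruction of the compounding, matching the paths quoted in the issue:
```py
# each rendered page prepends another level because the stored value is reused
extra = "test.js"
for page in range(3):
    extra = ("./" if page == 0 else "../") + extra
    print(extra)  # ./test.js, .././test.js, ../.././test.js
```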
|
{"golden_diff": "diff --git a/mkdocs/build.py b/mkdocs/build.py\n--- a/mkdocs/build.py\n+++ b/mkdocs/build.py\n@@ -103,11 +103,9 @@\n else:\n site_favicon = None\n \n- if config['extra_javascript']:\n- config['extra_javascript'] = utils.create_media_urls(nav=nav, url_list=config['extra_javascript'])\n+ extra_javascript = utils.create_media_urls(nav=nav, url_list=config['extra_javascript'])\n \n- if config['extra_css']:\n- config['extra_css'] = utils.create_media_urls(nav=nav, url_list=config['extra_css'])\n+ extra_css = utils.create_media_urls(nav=nav, url_list=config['extra_css'])\n \n return {\n 'site_name': site_name,\n@@ -138,8 +136,8 @@\n 'repo_url': config['repo_url'],\n 'repo_name': config['repo_name'],\n \n- 'extra_css': config['extra_css'],\n- 'extra_javascript': config['extra_javascript'],\n+ 'extra_css': extra_css,\n+ 'extra_javascript': extra_javascript,\n \n 'include_nav': config['include_nav'],\n 'include_next_prev': config['include_next_prev'],\n", "issue": "Wrong paths to extra media files\nIncorrect paths are generated for files in `extra_css` and `extra_javascript`. For example, `test.js` will be referenced on the index page as `./test.js`, on the next page as `.././test.js`, on the next page as `../.././test.js`, etc.\n\n", "before_files": [{"content": "# coding: utf-8\nfrom __future__ import print_function\n\nfrom mkdocs import nav, toc, utils\nfrom mkdocs.compat import urljoin, urlparse, urlunparse, PY2\nimport jinja2\nimport markdown\nimport os\nimport re\n\n\nclass PathToURL(object):\n def __init__(self, nav=None):\n self.nav = nav\n\n def __call__(self, match):\n url = match.groups()[0]\n scheme, netloc, path, query, query, fragment = urlparse(url)\n\n if (scheme or netloc or not utils.is_markdown_file(path)):\n # Ignore URLs unless they are a relative link to a markdown file.\n return 'a href=\"%s\"' % url\n\n if self.nav:\n # If the site navigation has been provided, then validate\n # the internal hyperlink, making sure the target actually exists.\n target_file = self.nav.file_context.make_absolute(path)\n if target_file not in self.nav.source_files:\n source_file = self.nav.file_context.current_file\n msg = (\n 'The page \"%s\" contained a hyperlink to \"%s\" which '\n 'is not listed in the \"pages\" configuration.'\n )\n assert False, msg % (source_file, target_file)\n path = utils.get_url_path(target_file)\n path = self.nav.url_context.make_relative(path)\n else:\n path = utils.get_url_path(path).lstrip('/')\n\n # Convert the .md hyperlink to a relative hyperlink to the HTML page.\n url = urlunparse((scheme, netloc, path, query, query, fragment))\n return 'a href=\"%s\"' % url\n\n\ndef convert_markdown(markdown_source, extensions=()):\n \"\"\"\n Convert the Markdown source file to HTML content, and additionally\n return the parsed table of contents, and a dictionary of any metadata\n that was specified in the Markdown file.\n\n `extensions` is an optional sequence of Python Markdown extensions to add\n to the default set.\n \"\"\"\n\n # Prepend a table of contents marker for the TOC extension\n markdown_source = toc.pre_process(markdown_source)\n\n # Generate the HTML from the markdown source\n md = markdown.Markdown(\n extensions=['meta', 'toc', 'tables', 'fenced_code'] + list(extensions)\n )\n html_content = md.convert(markdown_source)\n meta = md.Meta\n\n # Strip out the generated table of contents\n (html_content, toc_html) = toc.post_process(html_content)\n\n # Post process the generated table of contents into a data structure\n table_of_contents = 
toc.TableOfContents(toc_html)\n\n return (html_content, table_of_contents, meta)\n\n\ndef post_process_html(html_content, nav=None):\n html_content = re.sub(r'a href=\"([^\"]*)\"', PathToURL(nav), html_content)\n html_content = re.sub('<pre>', '<pre class=\"prettyprint well\">', html_content)\n return html_content\n\n\ndef get_context(page, content, nav, toc, meta, config):\n site_name = config['site_name']\n\n if page.is_homepage or page.title is None:\n page_title = site_name\n else:\n page_title = page.title + ' - ' + site_name\n\n if page.is_homepage:\n page_description = config['site_description']\n else:\n page_description = None\n\n if config['site_url']:\n base = config['site_url']\n if not base.endswith('/'):\n base += '/'\n canonical_url = urljoin(base, page.abs_url.lstrip('/'))\n else:\n canonical_url = None\n\n if config['site_favicon']:\n site_favicon = nav.url_context.make_relative('/' + config['site_favicon'])\n else:\n site_favicon = None\n\n if config['extra_javascript']:\n config['extra_javascript'] = utils.create_media_urls(nav=nav, url_list=config['extra_javascript'])\n\n if config['extra_css']:\n config['extra_css'] = utils.create_media_urls(nav=nav, url_list=config['extra_css'])\n\n return {\n 'site_name': site_name,\n 'site_author': config['site_author'],\n 'favicon': site_favicon,\n\n 'page_title': page_title,\n 'page_description': page_description,\n\n 'content': content,\n 'toc': toc,\n 'nav': nav,\n 'meta': meta,\n\n 'base_url': nav.url_context.make_relative('/'),\n 'homepage_url': nav.homepage.url,\n 'canonical_url': canonical_url,\n\n 'current_page': page,\n 'previous_page': page.previous_page,\n 'next_page': page.next_page,\n\n # Note that there's intentionally repetition here. Rather than simply\n # provide the config dictionary we instead pass everything explicitly.\n #\n # This helps ensure that we can throughly document the context that\n # gets passed to themes.\n 'repo_url': config['repo_url'],\n 'repo_name': config['repo_name'],\n\n 'extra_css': config['extra_css'],\n 'extra_javascript': config['extra_javascript'],\n\n 'include_nav': config['include_nav'],\n 'include_next_prev': config['include_next_prev'],\n 'include_search': config['include_search'],\n\n 'copyright': config['copyright'],\n 'google-analytics': config['google-analytics']\n }\n\n\ndef build_pages(config):\n \"\"\"\n Builds all the pages and writes them into the build directory.\n \"\"\"\n site_navigation = nav.SiteNavigation(config['pages'])\n loader = jinja2.FileSystemLoader(config['theme_dir'])\n env = jinja2.Environment(loader=loader)\n\n for page in site_navigation.walk_pages():\n # Read the input file\n input_path = os.path.join(config['docs_dir'], page.input_path)\n input_content = open(input_path, 'r').read()\n if PY2:\n input_content = input_content.decode('utf-8')\n\n # Process the markdown text\n html_content, table_of_contents, meta = convert_markdown(\n input_content, extensions=config['markdown_extensions']\n )\n html_content = post_process_html(html_content, site_navigation)\n\n context = get_context(\n page, html_content, site_navigation,\n table_of_contents, meta, config\n )\n\n # Allow 'template:' override in md source files.\n if 'template' in meta:\n template = env.get_template(meta['template'][0])\n else:\n template = env.get_template('base.html')\n\n # Render the template.\n output_content = template.render(context)\n\n # Write the output file.\n output_path = os.path.join(config['site_dir'], page.output_path)\n utils.write_file(output_content.encode('utf-8'), 
output_path)\n\n\ndef build(config, live_server=False):\n \"\"\"\n Perform a full site build.\n \"\"\"\n if not live_server:\n print(\"Building documentation to directory: %s\" % config['site_dir'])\n utils.copy_media_files(config['theme_dir'], config['site_dir'])\n utils.copy_media_files(config['docs_dir'], config['site_dir'])\n build_pages(config)\n", "path": "mkdocs/build.py"}], "after_files": [{"content": "# coding: utf-8\nfrom __future__ import print_function\n\nfrom mkdocs import nav, toc, utils\nfrom mkdocs.compat import urljoin, urlparse, urlunparse, PY2\nimport jinja2\nimport markdown\nimport os\nimport re\n\n\nclass PathToURL(object):\n def __init__(self, nav=None):\n self.nav = nav\n\n def __call__(self, match):\n url = match.groups()[0]\n scheme, netloc, path, query, query, fragment = urlparse(url)\n\n if (scheme or netloc or not utils.is_markdown_file(path)):\n # Ignore URLs unless they are a relative link to a markdown file.\n return 'a href=\"%s\"' % url\n\n if self.nav:\n # If the site navigation has been provided, then validate\n # the internal hyperlink, making sure the target actually exists.\n target_file = self.nav.file_context.make_absolute(path)\n if target_file not in self.nav.source_files:\n source_file = self.nav.file_context.current_file\n msg = (\n 'The page \"%s\" contained a hyperlink to \"%s\" which '\n 'is not listed in the \"pages\" configuration.'\n )\n assert False, msg % (source_file, target_file)\n path = utils.get_url_path(target_file)\n path = self.nav.url_context.make_relative(path)\n else:\n path = utils.get_url_path(path).lstrip('/')\n\n # Convert the .md hyperlink to a relative hyperlink to the HTML page.\n url = urlunparse((scheme, netloc, path, query, query, fragment))\n return 'a href=\"%s\"' % url\n\n\ndef convert_markdown(markdown_source, extensions=()):\n \"\"\"\n Convert the Markdown source file to HTML content, and additionally\n return the parsed table of contents, and a dictionary of any metadata\n that was specified in the Markdown file.\n\n `extensions` is an optional sequence of Python Markdown extensions to add\n to the default set.\n \"\"\"\n\n # Prepend a table of contents marker for the TOC extension\n markdown_source = toc.pre_process(markdown_source)\n\n # Generate the HTML from the markdown source\n md = markdown.Markdown(\n extensions=['meta', 'toc', 'tables', 'fenced_code'] + list(extensions)\n )\n html_content = md.convert(markdown_source)\n meta = md.Meta\n\n # Strip out the generated table of contents\n (html_content, toc_html) = toc.post_process(html_content)\n\n # Post process the generated table of contents into a data structure\n table_of_contents = toc.TableOfContents(toc_html)\n\n return (html_content, table_of_contents, meta)\n\n\ndef post_process_html(html_content, nav=None):\n html_content = re.sub(r'a href=\"([^\"]*)\"', PathToURL(nav), html_content)\n html_content = re.sub('<pre>', '<pre class=\"prettyprint well\">', html_content)\n return html_content\n\n\ndef get_context(page, content, nav, toc, meta, config):\n site_name = config['site_name']\n\n if page.is_homepage or page.title is None:\n page_title = site_name\n else:\n page_title = page.title + ' - ' + site_name\n\n if page.is_homepage:\n page_description = config['site_description']\n else:\n page_description = None\n\n if config['site_url']:\n base = config['site_url']\n if not base.endswith('/'):\n base += '/'\n canonical_url = urljoin(base, page.abs_url.lstrip('/'))\n else:\n canonical_url = None\n\n if config['site_favicon']:\n site_favicon = 
nav.url_context.make_relative('/' + config['site_favicon'])\n else:\n site_favicon = None\n\n extra_javascript = utils.create_media_urls(nav=nav, url_list=config['extra_javascript'])\n\n extra_css = utils.create_media_urls(nav=nav, url_list=config['extra_css'])\n\n return {\n 'site_name': site_name,\n 'site_author': config['site_author'],\n 'favicon': site_favicon,\n\n 'page_title': page_title,\n 'page_description': page_description,\n\n 'content': content,\n 'toc': toc,\n 'nav': nav,\n 'meta': meta,\n\n 'base_url': nav.url_context.make_relative('/'),\n 'homepage_url': nav.homepage.url,\n 'canonical_url': canonical_url,\n\n 'current_page': page,\n 'previous_page': page.previous_page,\n 'next_page': page.next_page,\n\n # Note that there's intentionally repetition here. Rather than simply\n # provide the config dictionary we instead pass everything explicitly.\n #\n # This helps ensure that we can throughly document the context that\n # gets passed to themes.\n 'repo_url': config['repo_url'],\n 'repo_name': config['repo_name'],\n\n 'extra_css': extra_css,\n 'extra_javascript': extra_javascript,\n\n 'include_nav': config['include_nav'],\n 'include_next_prev': config['include_next_prev'],\n 'include_search': config['include_search'],\n\n 'copyright': config['copyright'],\n 'google-analytics': config['google-analytics']\n }\n\n\ndef build_pages(config):\n \"\"\"\n Builds all the pages and writes them into the build directory.\n \"\"\"\n site_navigation = nav.SiteNavigation(config['pages'])\n loader = jinja2.FileSystemLoader(config['theme_dir'])\n env = jinja2.Environment(loader=loader)\n\n for page in site_navigation.walk_pages():\n # Read the input file\n input_path = os.path.join(config['docs_dir'], page.input_path)\n input_content = open(input_path, 'r').read()\n if PY2:\n input_content = input_content.decode('utf-8')\n\n # Process the markdown text\n html_content, table_of_contents, meta = convert_markdown(\n input_content, extensions=config['markdown_extensions']\n )\n html_content = post_process_html(html_content, site_navigation)\n\n context = get_context(\n page, html_content, site_navigation,\n table_of_contents, meta, config\n )\n\n # Allow 'template:' override in md source files.\n if 'template' in meta:\n template = env.get_template(meta['template'][0])\n else:\n template = env.get_template('base.html')\n\n # Render the template.\n output_content = template.render(context)\n\n # Write the output file.\n output_path = os.path.join(config['site_dir'], page.output_path)\n utils.write_file(output_content.encode('utf-8'), output_path)\n\n\ndef build(config, live_server=False):\n \"\"\"\n Perform a full site build.\n \"\"\"\n if not live_server:\n print(\"Building documentation to directory: %s\" % config['site_dir'])\n utils.copy_media_files(config['theme_dir'], config['site_dir'])\n utils.copy_media_files(config['docs_dir'], config['site_dir'])\n build_pages(config)\n", "path": "mkdocs/build.py"}]}
| 2,359 | 279 |
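For context on the mkdocs record above: the golden diff stops writing the rewritten URL lists back into the shared `config` dict and instead keeps them in per-page locals (`extra_css`, `extra_javascript`). A minimal sketch of the failure mode follows; `make_relative` is a hypothetical stand-in for `nav.url_context.make_relative`, and nothing beyond what the diff and issue text show is assumed about mkdocs internals.

```python
def make_relative(url, depth):
    # hypothetical stand-in for nav.url_context.make_relative()
    return "../" * depth + url

config = {"extra_javascript": ["test.js"]}

for depth in range(3):  # index page, then two progressively deeper pages
    # BUG: writing the per-page result back into config compounds the prefix,
    # because the next page rewrites an already-rewritten URL.
    config["extra_javascript"] = [
        make_relative(u, depth) for u in config["extra_javascript"]
    ]
    print(config["extra_javascript"])
# ['test.js'] -> ['../test.js'] -> ['../../../test.js']  (prefixes accumulate)
# Fix, as in the diff: build a fresh local list per page and leave config untouched.
```

This matches the reported symptom of `./test.js`, then `.././test.js`, then `../.././test.js` on successive pages.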
gh_patches_debug_27658
|
rasdani/github-patches
|
git_diff
|
sopel-irc__sopel-1058
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
.setlocation gives an error and fails: AttributeError: 'NoneType' object has no attribute 'get'
yoosi: `.setlocation london`
sopel: `AttributeError: 'NoneType' object has no attribute 'get' (file "/home/sopel/sopel/sopel/modules/weather.py", line 181, in update_woeid)`
--- END ISSUE ---
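The traceback points at the chained `.get('#text')` calls near line 181 of `weather.py`: when the Yahoo place result lacks a field such as `postal`, the outer `.get` returns `None` and the inner call raises. A minimal sketch of that pattern with made-up data (not a real API response) is below; the merged fix in the golden diff further down simply drops the postal component instead of guarding it.

```python
# Made-up place record: 'postal' is simply absent, as it is for many places.
first_result = {
    "woeid": "44418",
    "admin1": {"#text": "England"},
    "country": {"#text": "United Kingdom"},
}

# Buggy pattern (weather.py line 181):
#   uzip = first_result.get('postal').get('#text') or ''   # AttributeError on None

# Defensive variant: only descend when the outer lookup succeeded.
postal = first_result.get("postal") or {}
uzip = postal.get("#text", "")
print(uzip or "<no postal code>")
```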
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sopel/modules/weather.py`
Content:
```
1 # coding=utf-8
2 # Copyright 2008, Sean B. Palmer, inamidst.com
3 # Copyright 2012, Elsie Powell, embolalia.com
4 # Licensed under the Eiffel Forum License 2.
5 from __future__ import unicode_literals, absolute_import, print_function, division
6
7 from sopel import web
8 from sopel.module import commands, example, NOLIMIT
9
10 import xmltodict
11
12
13 def woeid_search(query):
14 """
15 Find the first Where On Earth ID for the given query. Result is the etree
16 node for the result, so that location data can still be retrieved. Returns
17 None if there is no result, or the woeid field is empty.
18 """
19 query = 'q=select * from geo.places where text="%s"' % query
20 body = web.get('http://query.yahooapis.com/v1/public/yql?' + query,
21 dont_decode=True)
22 parsed = xmltodict.parse(body).get('query')
23 results = parsed.get('results')
24 if results is None or results.get('place') is None:
25 return None
26 if type(results.get('place')) is list:
27 return results.get('place')[0]
28 return results.get('place')
29
30
31 def get_cover(parsed):
32 try:
33 condition = parsed['channel']['item']['yweather:condition']
34 except KeyError:
35 return 'unknown'
36 text = condition['@text']
37 # code = int(condition['code'])
38 # TODO parse code to get those little icon thingies.
39 return text
40
41
42 def get_temp(parsed):
43 try:
44 condition = parsed['channel']['item']['yweather:condition']
45 temp = int(condition['@temp'])
46 except (KeyError, ValueError):
47 return 'unknown'
48 f = round((temp * 1.8) + 32, 2)
49 return (u'%d\u00B0C (%d\u00B0F)' % (temp, f))
50
51
52 def get_humidity(parsed):
53 try:
54 humidity = parsed['channel']['yweather:atmosphere']['@humidity']
55 except (KeyError, ValueError):
56 return 'unknown'
57 return "Humidity: %s%%" % humidity
58
59
60 def get_wind(parsed):
61 try:
62 wind_data = parsed['channel']['yweather:wind']
63 kph = float(wind_data['@speed'])
64 m_s = float(round(kph / 3.6, 1))
65 speed = int(round(kph / 1.852, 0))
66 degrees = int(wind_data['@direction'])
67 except (KeyError, ValueError):
68 return 'unknown'
69
70 if speed < 1:
71 description = 'Calm'
72 elif speed < 4:
73 description = 'Light air'
74 elif speed < 7:
75 description = 'Light breeze'
76 elif speed < 11:
77 description = 'Gentle breeze'
78 elif speed < 16:
79 description = 'Moderate breeze'
80 elif speed < 22:
81 description = 'Fresh breeze'
82 elif speed < 28:
83 description = 'Strong breeze'
84 elif speed < 34:
85 description = 'Near gale'
86 elif speed < 41:
87 description = 'Gale'
88 elif speed < 48:
89 description = 'Strong gale'
90 elif speed < 56:
91 description = 'Storm'
92 elif speed < 64:
93 description = 'Violent storm'
94 else:
95 description = 'Hurricane'
96
97 if (degrees <= 22.5) or (degrees > 337.5):
98 degrees = u'\u2193'
99 elif (degrees > 22.5) and (degrees <= 67.5):
100 degrees = u'\u2199'
101 elif (degrees > 67.5) and (degrees <= 112.5):
102 degrees = u'\u2190'
103 elif (degrees > 112.5) and (degrees <= 157.5):
104 degrees = u'\u2196'
105 elif (degrees > 157.5) and (degrees <= 202.5):
106 degrees = u'\u2191'
107 elif (degrees > 202.5) and (degrees <= 247.5):
108 degrees = u'\u2197'
109 elif (degrees > 247.5) and (degrees <= 292.5):
110 degrees = u'\u2192'
111 elif (degrees > 292.5) and (degrees <= 337.5):
112 degrees = u'\u2198'
113
114 return description + ' ' + str(m_s) + 'm/s (' + degrees + ')'
115
116
117 @commands('weather', 'wea')
118 @example('.weather London')
119 def weather(bot, trigger):
120 """.weather location - Show the weather at the given location."""
121
122 location = trigger.group(2)
123 woeid = ''
124 if not location:
125 woeid = bot.db.get_nick_value(trigger.nick, 'woeid')
126 if not woeid:
127 return bot.msg(trigger.sender, "I don't know where you live. " +
128 'Give me a location, like .weather London, or tell me where you live by saying .setlocation London, for example.')
129 else:
130 location = location.strip()
131 woeid = bot.db.get_nick_value(location, 'woeid')
132 if woeid is None:
133 first_result = woeid_search(location)
134 if first_result is not None:
135 woeid = first_result.get('woeid')
136
137 if not woeid:
138 return bot.reply("I don't know where that is.")
139
140 query = web.urlencode({'w': woeid, 'u': 'c'})
141 raw = web.get('http://weather.yahooapis.com/forecastrss?' + query,
142 dont_decode=True)
143 parsed = xmltodict.parse(raw).get('rss')
144 location = parsed.get('channel').get('title')
145
146 cover = get_cover(parsed)
147 temp = get_temp(parsed)
148 humidity = get_humidity(parsed)
149 wind = get_wind(parsed)
150 bot.say(u'%s: %s, %s, %s, %s' % (location, cover, temp, humidity, wind))
151
152
153 @commands('setlocation', 'setwoeid')
154 @example('.setlocation Columbus, OH')
155 def update_woeid(bot, trigger):
156 """Set your default weather location."""
157 if not trigger.group(2):
158 bot.reply('Give me a location, like "Washington, DC" or "London".')
159 return NOLIMIT
160
161 first_result = woeid_search(trigger.group(2))
162 if first_result is None:
163 return bot.reply("I don't know where that is.")
164
165 woeid = first_result.get('woeid')
166
167 bot.db.set_nick_value(trigger.nick, 'woeid', woeid)
168
169 neighborhood = first_result.get('locality2') or ''
170 if neighborhood:
171 neighborhood = neighborhood.get('#text') + ', '
172 city = first_result.get('locality1') or ''
173 # This is to catch cases like 'Bawlf, Alberta' where the location is
174 # thought to be a "LocalAdmin" rather than a "Town"
175 if city:
176 city = city.get('#text')
177 else:
178 city = first_result.get('name')
179 state = first_result.get('admin1').get('#text') or ''
180 country = first_result.get('country').get('#text') or ''
181 uzip = first_result.get('postal').get('#text') or ''
182 bot.reply('I now have you at WOEID %s (%s%s, %s, %s %s)' %
183 (woeid, neighborhood, city, state, country, uzip))
184
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sopel/modules/weather.py b/sopel/modules/weather.py
--- a/sopel/modules/weather.py
+++ b/sopel/modules/weather.py
@@ -137,16 +137,16 @@
if not woeid:
return bot.reply("I don't know where that is.")
- query = web.urlencode({'w': woeid, 'u': 'c'})
- raw = web.get('http://weather.yahooapis.com/forecastrss?' + query,
+ query = 'q=select * from weather.forecast where woeid="%s" and u=\'c\'' % woeid
+ body = web.get('http://query.yahooapis.com/v1/public/yql?' + query,
dont_decode=True)
- parsed = xmltodict.parse(raw).get('rss')
- location = parsed.get('channel').get('title')
-
- cover = get_cover(parsed)
- temp = get_temp(parsed)
- humidity = get_humidity(parsed)
- wind = get_wind(parsed)
+ parsed = xmltodict.parse(body).get('query')
+ results = parsed.get('results')
+ location = results.get('channel').get('title')
+ cover = get_cover(results)
+ temp = get_temp(results)
+ humidity = get_humidity(results)
+ wind = get_wind(results)
bot.say(u'%s: %s, %s, %s, %s' % (location, cover, temp, humidity, wind))
@@ -178,6 +178,5 @@
city = first_result.get('name')
state = first_result.get('admin1').get('#text') or ''
country = first_result.get('country').get('#text') or ''
- uzip = first_result.get('postal').get('#text') or ''
- bot.reply('I now have you at WOEID %s (%s%s, %s, %s %s)' %
- (woeid, neighborhood, city, state, country, uzip))
+ bot.reply('I now have you at WOEID %s (%s%s, %s, %s)' %
+ (woeid, neighborhood, city, state, country))
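A sketch of what the patched `weather()` now parses: a YQL `weather.forecast` response is unwrapped as `query -> results`, and the helper functions receive `results` (which contains `channel`) instead of the old `rss` root. The XML below is a made-up, heavily trimmed stand-in kept only to show the lookup path, and it assumes `xmltodict` is installed.

```python
import xmltodict

body = """
<query>
  <results>
    <channel>
      <title>Yahoo! Weather - London, GB</title>
      <item>
        <yweather:condition
            xmlns:yweather="http://xml.weather.yahoo.com/ns/rss/1.0"
            text="Cloudy" temp="14"/>
      </item>
    </channel>
  </results>
</query>
"""

parsed = xmltodict.parse(body).get("query")
results = parsed.get("results")                      # what weather() now passes on
location = results.get("channel").get("title")
condition = results["channel"]["item"]["yweather:condition"]
print(location, condition["@text"], condition["@temp"] + "C")
```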
|
{"golden_diff": "diff --git a/sopel/modules/weather.py b/sopel/modules/weather.py\n--- a/sopel/modules/weather.py\n+++ b/sopel/modules/weather.py\n@@ -137,16 +137,16 @@\n if not woeid:\n return bot.reply(\"I don't know where that is.\")\n \n- query = web.urlencode({'w': woeid, 'u': 'c'})\n- raw = web.get('http://weather.yahooapis.com/forecastrss?' + query,\n+ query = 'q=select * from weather.forecast where woeid=\"%s\" and u=\\'c\\'' % woeid\n+ body = web.get('http://query.yahooapis.com/v1/public/yql?' + query,\n dont_decode=True)\n- parsed = xmltodict.parse(raw).get('rss')\n- location = parsed.get('channel').get('title')\n-\n- cover = get_cover(parsed)\n- temp = get_temp(parsed)\n- humidity = get_humidity(parsed)\n- wind = get_wind(parsed)\n+ parsed = xmltodict.parse(body).get('query')\n+ results = parsed.get('results')\n+ location = results.get('channel').get('title')\n+ cover = get_cover(results)\n+ temp = get_temp(results)\n+ humidity = get_humidity(results)\n+ wind = get_wind(results)\n bot.say(u'%s: %s, %s, %s, %s' % (location, cover, temp, humidity, wind))\n \n \n@@ -178,6 +178,5 @@\n city = first_result.get('name')\n state = first_result.get('admin1').get('#text') or ''\n country = first_result.get('country').get('#text') or ''\n- uzip = first_result.get('postal').get('#text') or ''\n- bot.reply('I now have you at WOEID %s (%s%s, %s, %s %s)' %\n- (woeid, neighborhood, city, state, country, uzip))\n+ bot.reply('I now have you at WOEID %s (%s%s, %s, %s)' %\n+ (woeid, neighborhood, city, state, country))\n", "issue": ".setlocation gives an error and fails: AttributeError: 'NoneType' object has no attribute 'get'\nyoosi: `.setlocation london`\n\nsopel: `AttributeError: 'NoneType' object has no attribute 'get' (file \"/home/sopel/sopel/sopel/modules/weather.py\", line 181, in update_woeid)`\n\n", "before_files": [{"content": "# coding=utf-8\n# Copyright 2008, Sean B. Palmer, inamidst.com\n# Copyright 2012, Elsie Powell, embolalia.com\n# Licensed under the Eiffel Forum License 2.\nfrom __future__ import unicode_literals, absolute_import, print_function, division\n\nfrom sopel import web\nfrom sopel.module import commands, example, NOLIMIT\n\nimport xmltodict\n\n\ndef woeid_search(query):\n \"\"\"\n Find the first Where On Earth ID for the given query. Result is the etree\n node for the result, so that location data can still be retrieved. Returns\n None if there is no result, or the woeid field is empty.\n \"\"\"\n query = 'q=select * from geo.places where text=\"%s\"' % query\n body = web.get('http://query.yahooapis.com/v1/public/yql?' 
+ query,\n dont_decode=True)\n parsed = xmltodict.parse(body).get('query')\n results = parsed.get('results')\n if results is None or results.get('place') is None:\n return None\n if type(results.get('place')) is list:\n return results.get('place')[0]\n return results.get('place')\n\n\ndef get_cover(parsed):\n try:\n condition = parsed['channel']['item']['yweather:condition']\n except KeyError:\n return 'unknown'\n text = condition['@text']\n # code = int(condition['code'])\n # TODO parse code to get those little icon thingies.\n return text\n\n\ndef get_temp(parsed):\n try:\n condition = parsed['channel']['item']['yweather:condition']\n temp = int(condition['@temp'])\n except (KeyError, ValueError):\n return 'unknown'\n f = round((temp * 1.8) + 32, 2)\n return (u'%d\\u00B0C (%d\\u00B0F)' % (temp, f))\n\n\ndef get_humidity(parsed):\n try:\n humidity = parsed['channel']['yweather:atmosphere']['@humidity']\n except (KeyError, ValueError):\n return 'unknown'\n return \"Humidity: %s%%\" % humidity\n\n\ndef get_wind(parsed):\n try:\n wind_data = parsed['channel']['yweather:wind']\n kph = float(wind_data['@speed'])\n m_s = float(round(kph / 3.6, 1))\n speed = int(round(kph / 1.852, 0))\n degrees = int(wind_data['@direction'])\n except (KeyError, ValueError):\n return 'unknown'\n\n if speed < 1:\n description = 'Calm'\n elif speed < 4:\n description = 'Light air'\n elif speed < 7:\n description = 'Light breeze'\n elif speed < 11:\n description = 'Gentle breeze'\n elif speed < 16:\n description = 'Moderate breeze'\n elif speed < 22:\n description = 'Fresh breeze'\n elif speed < 28:\n description = 'Strong breeze'\n elif speed < 34:\n description = 'Near gale'\n elif speed < 41:\n description = 'Gale'\n elif speed < 48:\n description = 'Strong gale'\n elif speed < 56:\n description = 'Storm'\n elif speed < 64:\n description = 'Violent storm'\n else:\n description = 'Hurricane'\n\n if (degrees <= 22.5) or (degrees > 337.5):\n degrees = u'\\u2193'\n elif (degrees > 22.5) and (degrees <= 67.5):\n degrees = u'\\u2199'\n elif (degrees > 67.5) and (degrees <= 112.5):\n degrees = u'\\u2190'\n elif (degrees > 112.5) and (degrees <= 157.5):\n degrees = u'\\u2196'\n elif (degrees > 157.5) and (degrees <= 202.5):\n degrees = u'\\u2191'\n elif (degrees > 202.5) and (degrees <= 247.5):\n degrees = u'\\u2197'\n elif (degrees > 247.5) and (degrees <= 292.5):\n degrees = u'\\u2192'\n elif (degrees > 292.5) and (degrees <= 337.5):\n degrees = u'\\u2198'\n\n return description + ' ' + str(m_s) + 'm/s (' + degrees + ')'\n\n\n@commands('weather', 'wea')\n@example('.weather London')\ndef weather(bot, trigger):\n \"\"\".weather location - Show the weather at the given location.\"\"\"\n\n location = trigger.group(2)\n woeid = ''\n if not location:\n woeid = bot.db.get_nick_value(trigger.nick, 'woeid')\n if not woeid:\n return bot.msg(trigger.sender, \"I don't know where you live. \" +\n 'Give me a location, like .weather London, or tell me where you live by saying .setlocation London, for example.')\n else:\n location = location.strip()\n woeid = bot.db.get_nick_value(location, 'woeid')\n if woeid is None:\n first_result = woeid_search(location)\n if first_result is not None:\n woeid = first_result.get('woeid')\n\n if not woeid:\n return bot.reply(\"I don't know where that is.\")\n\n query = web.urlencode({'w': woeid, 'u': 'c'})\n raw = web.get('http://weather.yahooapis.com/forecastrss?' 
+ query,\n dont_decode=True)\n parsed = xmltodict.parse(raw).get('rss')\n location = parsed.get('channel').get('title')\n\n cover = get_cover(parsed)\n temp = get_temp(parsed)\n humidity = get_humidity(parsed)\n wind = get_wind(parsed)\n bot.say(u'%s: %s, %s, %s, %s' % (location, cover, temp, humidity, wind))\n\n\n@commands('setlocation', 'setwoeid')\n@example('.setlocation Columbus, OH')\ndef update_woeid(bot, trigger):\n \"\"\"Set your default weather location.\"\"\"\n if not trigger.group(2):\n bot.reply('Give me a location, like \"Washington, DC\" or \"London\".')\n return NOLIMIT\n\n first_result = woeid_search(trigger.group(2))\n if first_result is None:\n return bot.reply(\"I don't know where that is.\")\n\n woeid = first_result.get('woeid')\n\n bot.db.set_nick_value(trigger.nick, 'woeid', woeid)\n\n neighborhood = first_result.get('locality2') or ''\n if neighborhood:\n neighborhood = neighborhood.get('#text') + ', '\n city = first_result.get('locality1') or ''\n # This is to catch cases like 'Bawlf, Alberta' where the location is\n # thought to be a \"LocalAdmin\" rather than a \"Town\"\n if city:\n city = city.get('#text')\n else:\n city = first_result.get('name')\n state = first_result.get('admin1').get('#text') or ''\n country = first_result.get('country').get('#text') or ''\n uzip = first_result.get('postal').get('#text') or ''\n bot.reply('I now have you at WOEID %s (%s%s, %s, %s %s)' %\n (woeid, neighborhood, city, state, country, uzip))\n", "path": "sopel/modules/weather.py"}], "after_files": [{"content": "# coding=utf-8\n# Copyright 2008, Sean B. Palmer, inamidst.com\n# Copyright 2012, Elsie Powell, embolalia.com\n# Licensed under the Eiffel Forum License 2.\nfrom __future__ import unicode_literals, absolute_import, print_function, division\n\nfrom sopel import web\nfrom sopel.module import commands, example, NOLIMIT\n\nimport xmltodict\n\n\ndef woeid_search(query):\n \"\"\"\n Find the first Where On Earth ID for the given query. Result is the etree\n node for the result, so that location data can still be retrieved. Returns\n None if there is no result, or the woeid field is empty.\n \"\"\"\n query = 'q=select * from geo.places where text=\"%s\"' % query\n body = web.get('http://query.yahooapis.com/v1/public/yql?' 
+ query,\n dont_decode=True)\n parsed = xmltodict.parse(body).get('query')\n results = parsed.get('results')\n if results is None or results.get('place') is None:\n return None\n if type(results.get('place')) is list:\n return results.get('place')[0]\n return results.get('place')\n\n\ndef get_cover(parsed):\n try:\n condition = parsed['channel']['item']['yweather:condition']\n except KeyError:\n return 'unknown'\n text = condition['@text']\n # code = int(condition['code'])\n # TODO parse code to get those little icon thingies.\n return text\n\n\ndef get_temp(parsed):\n try:\n condition = parsed['channel']['item']['yweather:condition']\n temp = int(condition['@temp'])\n except (KeyError, ValueError):\n return 'unknown'\n f = round((temp * 1.8) + 32, 2)\n return (u'%d\\u00B0C (%d\\u00B0F)' % (temp, f))\n\n\ndef get_humidity(parsed):\n try:\n humidity = parsed['channel']['yweather:atmosphere']['@humidity']\n except (KeyError, ValueError):\n return 'unknown'\n return \"Humidity: %s%%\" % humidity\n\n\ndef get_wind(parsed):\n try:\n wind_data = parsed['channel']['yweather:wind']\n kph = float(wind_data['@speed'])\n m_s = float(round(kph / 3.6, 1))\n speed = int(round(kph / 1.852, 0))\n degrees = int(wind_data['@direction'])\n except (KeyError, ValueError):\n return 'unknown'\n\n if speed < 1:\n description = 'Calm'\n elif speed < 4:\n description = 'Light air'\n elif speed < 7:\n description = 'Light breeze'\n elif speed < 11:\n description = 'Gentle breeze'\n elif speed < 16:\n description = 'Moderate breeze'\n elif speed < 22:\n description = 'Fresh breeze'\n elif speed < 28:\n description = 'Strong breeze'\n elif speed < 34:\n description = 'Near gale'\n elif speed < 41:\n description = 'Gale'\n elif speed < 48:\n description = 'Strong gale'\n elif speed < 56:\n description = 'Storm'\n elif speed < 64:\n description = 'Violent storm'\n else:\n description = 'Hurricane'\n\n if (degrees <= 22.5) or (degrees > 337.5):\n degrees = u'\\u2193'\n elif (degrees > 22.5) and (degrees <= 67.5):\n degrees = u'\\u2199'\n elif (degrees > 67.5) and (degrees <= 112.5):\n degrees = u'\\u2190'\n elif (degrees > 112.5) and (degrees <= 157.5):\n degrees = u'\\u2196'\n elif (degrees > 157.5) and (degrees <= 202.5):\n degrees = u'\\u2191'\n elif (degrees > 202.5) and (degrees <= 247.5):\n degrees = u'\\u2197'\n elif (degrees > 247.5) and (degrees <= 292.5):\n degrees = u'\\u2192'\n elif (degrees > 292.5) and (degrees <= 337.5):\n degrees = u'\\u2198'\n\n return description + ' ' + str(m_s) + 'm/s (' + degrees + ')'\n\n\n@commands('weather', 'wea')\n@example('.weather London')\ndef weather(bot, trigger):\n \"\"\".weather location - Show the weather at the given location.\"\"\"\n\n location = trigger.group(2)\n woeid = ''\n if not location:\n woeid = bot.db.get_nick_value(trigger.nick, 'woeid')\n if not woeid:\n return bot.msg(trigger.sender, \"I don't know where you live. \" +\n 'Give me a location, like .weather London, or tell me where you live by saying .setlocation London, for example.')\n else:\n location = location.strip()\n woeid = bot.db.get_nick_value(location, 'woeid')\n if woeid is None:\n first_result = woeid_search(location)\n if first_result is not None:\n woeid = first_result.get('woeid')\n\n if not woeid:\n return bot.reply(\"I don't know where that is.\")\n\n query = 'q=select * from weather.forecast where woeid=\"%s\" and u=\\'c\\'' % woeid\n body = web.get('http://query.yahooapis.com/v1/public/yql?' 
+ query,\n dont_decode=True)\n parsed = xmltodict.parse(body).get('query')\n results = parsed.get('results')\n location = results.get('channel').get('title')\n cover = get_cover(results)\n temp = get_temp(results)\n humidity = get_humidity(results)\n wind = get_wind(results)\n bot.say(u'%s: %s, %s, %s, %s' % (location, cover, temp, humidity, wind))\n\n\n@commands('setlocation', 'setwoeid')\n@example('.setlocation Columbus, OH')\ndef update_woeid(bot, trigger):\n \"\"\"Set your default weather location.\"\"\"\n if not trigger.group(2):\n bot.reply('Give me a location, like \"Washington, DC\" or \"London\".')\n return NOLIMIT\n\n first_result = woeid_search(trigger.group(2))\n if first_result is None:\n return bot.reply(\"I don't know where that is.\")\n\n woeid = first_result.get('woeid')\n\n bot.db.set_nick_value(trigger.nick, 'woeid', woeid)\n\n neighborhood = first_result.get('locality2') or ''\n if neighborhood:\n neighborhood = neighborhood.get('#text') + ', '\n city = first_result.get('locality1') or ''\n # This is to catch cases like 'Bawlf, Alberta' where the location is\n # thought to be a \"LocalAdmin\" rather than a \"Town\"\n if city:\n city = city.get('#text')\n else:\n city = first_result.get('name')\n state = first_result.get('admin1').get('#text') or ''\n country = first_result.get('country').get('#text') or ''\n bot.reply('I now have you at WOEID %s (%s%s, %s, %s)' %\n (woeid, neighborhood, city, state, country))\n", "path": "sopel/modules/weather.py"}]}
| 2,516 | 491 |
gh_patches_debug_4126
|
rasdani/github-patches
|
git_diff
|
geopandas__geopandas-762
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Value error using sjoin with pandas v0.23
I use the `sjoin` function to add the region name (polygons) to every point within a region. Some points are not in any region, so I filter these points out and buffer them step by step; the layer of points without an intersection therefore becomes smaller and smaller. When only one row is left, I get the following error with pandas v0.23 that I did not get before (pandas < v0.23), using geopandas v0.3.0.
My call:
```python
new = gpd.sjoin(rest_points, polygons, how='left', op='intersects')
```
Error message:
```python
ValueError: You are trying to merge on object and int64 columns.
If you wish to proceed you should use pd.concat
```
class: `GeoDataFrame`
method: `merge(self, *args, **kwargs)`
line: `result = DataFrame.merge(self, *args, **kwargs)`
I do not understand the error and why it happens only with the last point (last row) and only with the newest pandas version. I had a look at "What's New" but could not find anything.
Full message:
```python
File "virtualenv/lib/python3.5/site-packages/geopandas/tools/sjoin.py", line 140,
in sjoin suffixes=('_%s' % lsuffix, '_%s' % rsuffix))
File "virtualenv/lib/python3.5/site-packages/geopandas/geodataframe.py", line 418,
in merge result = DataFrame.merge(self, *args, **kwargs)
File "virtualenv/lib/python3.5/site-packages/pandas/core/frame.py", line 6379,
in merge copy=copy, indicator=indicator, validate=validate)
File "virtualenv/lib/python3.5/site-packages/pandas/core/reshape/merge.py", line 60,
in mergevalidate=validate)
File "virtualenv/lib/python3.5/site-packages/pandas/core/reshape/merge.py", line 554,
in __init__self._maybe_coerce_merge_keys()
File "virtualenv/lib/python3.5/site-packages/pandas/core/reshape/merge.py", line 980,
in _maybe_coerce_merge_keys
raise ValueError(msg)
ValueError: You are trying to merge on object and int64 columns.
If you wish to proceed you should use pd.concat
```
--- END ISSUE ---
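The failure mode here: when no geometries overlap, `sjoin` builds its intermediate `result` frame with no data, so the `_key_left`/`_key_right` columns default to object dtype, and merging such a column against an integer index is what pandas 0.23 started rejecting. A standalone sketch of the mismatch and of the `dtype=float` fix adopted in the golden diff below; the behaviour is checked against the error text quoted in the issue, not against every pandas release.

```python
import pandas as pd

# Empty frame built with no data: both key columns come out as object dtype.
result = pd.DataFrame(columns=["_key_left", "_key_right"])
right = pd.DataFrame({"value": [10, 20]})            # RangeIndex -> int64

try:
    result.merge(right, left_on="_key_right", right_index=True, how="right")
except ValueError as err:
    print(err)   # "You are trying to merge on object and int64 columns..."

# Fix from the golden diff: give the empty key columns a numeric dtype so the
# merge-key coercion check sees float vs int64, which pandas accepts.
result = pd.DataFrame(columns=["_key_left", "_key_right"], dtype=float)
print(result.merge(right, left_on="_key_right", right_index=True, how="right"))
```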
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `geopandas/tools/sjoin.py`
Content:
```
1 from warnings import warn
2
3 import numpy as np
4 import pandas as pd
5 from shapely import prepared
6
7
8 def sjoin(left_df, right_df, how='inner', op='intersects',
9 lsuffix='left', rsuffix='right'):
10 """Spatial join of two GeoDataFrames.
11
12 Parameters
13 ----------
14 left_df, right_df : GeoDataFrames
15 how : string, default 'inner'
16 The type of join:
17
18 * 'left': use keys from left_df; retain only left_df geometry column
19 * 'right': use keys from right_df; retain only right_df geometry column
20 * 'inner': use intersection of keys from both dfs; retain only
21 left_df geometry column
22 op : string, default 'intersection'
23 Binary predicate, one of {'intersects', 'contains', 'within'}.
24 See http://toblerity.org/shapely/manual.html#binary-predicates.
25 lsuffix : string, default 'left'
26 Suffix to apply to overlapping column names (left GeoDataFrame).
27 rsuffix : string, default 'right'
28 Suffix to apply to overlapping column names (right GeoDataFrame).
29
30 """
31 import rtree
32
33 allowed_hows = ['left', 'right', 'inner']
34 if how not in allowed_hows:
35 raise ValueError("`how` was \"%s\" but is expected to be in %s" %
36 (how, allowed_hows))
37
38 allowed_ops = ['contains', 'within', 'intersects']
39 if op not in allowed_ops:
40 raise ValueError("`op` was \"%s\" but is expected to be in %s" %
41 (op, allowed_ops))
42
43 if left_df.crs != right_df.crs:
44 warn('CRS of frames being joined does not match!')
45
46 index_left = 'index_%s' % lsuffix
47 index_right = 'index_%s' % rsuffix
48
49 # due to GH 352
50 if (any(left_df.columns.isin([index_left, index_right]))
51 or any(right_df.columns.isin([index_left, index_right]))):
52 raise ValueError("'{0}' and '{1}' cannot be names in the frames being"
53 " joined".format(index_left, index_right))
54
55 # the rtree spatial index only allows limited (numeric) index types, but an
56 # index in geopandas may be any arbitrary dtype. so reset both indices now
57 # and store references to the original indices, to be reaffixed later.
58 # GH 352
59 left_df = left_df.copy(deep=True)
60 left_df.index = left_df.index.rename(index_left)
61 left_df = left_df.reset_index()
62 right_df = right_df.copy(deep=True)
63 right_df.index = right_df.index.rename(index_right)
64 right_df = right_df.reset_index()
65
66 if op == "within":
67 # within implemented as the inverse of contains; swap names
68 left_df, right_df = right_df, left_df
69
70 # insert the bounds in the rtree spatial index
71 right_df_bounds = right_df.geometry.apply(lambda x: x.bounds)
72 stream = ((i, b, None) for i, b in enumerate(right_df_bounds))
73 tree_idx = rtree.index.Index(stream)
74
75 idxmatch = (left_df.geometry.apply(lambda x: x.bounds)
76 .apply(lambda x: list(tree_idx.intersection(x))))
77 idxmatch = idxmatch[idxmatch.apply(len) > 0]
78
79 if idxmatch.shape[0] > 0:
80 # if output from join has overlapping geometries
81 r_idx = np.concatenate(idxmatch.values)
82 l_idx = np.concatenate([[i] * len(v) for i, v in idxmatch.iteritems()])
83
84 # Vectorize predicate operations
85 def find_intersects(a1, a2):
86 return a1.intersects(a2)
87
88 def find_contains(a1, a2):
89 return a1.contains(a2)
90
91 predicate_d = {'intersects': find_intersects,
92 'contains': find_contains,
93 'within': find_contains}
94
95 check_predicates = np.vectorize(predicate_d[op])
96
97 result = (
98 pd.DataFrame(
99 np.column_stack(
100 [l_idx,
101 r_idx,
102 check_predicates(
103 left_df.geometry
104 .apply(lambda x: prepared.prep(x))[l_idx],
105 right_df[right_df.geometry.name][r_idx])
106 ]))
107 )
108
109 result.columns = ['_key_left', '_key_right', 'match_bool']
110 result = (
111 pd.DataFrame(result[result['match_bool']==1])
112 .drop('match_bool', axis=1)
113 )
114
115 else:
116 # when output from the join has no overlapping geometries
117 result = pd.DataFrame(columns=['_key_left', '_key_right'])
118
119 if op == "within":
120 # within implemented as the inverse of contains; swap names
121 left_df, right_df = right_df, left_df
122 result = result.rename(columns={'_key_left': '_key_right',
123 '_key_right': '_key_left'})
124
125
126 if how == 'inner':
127 result = result.set_index('_key_left')
128 joined = (
129 left_df
130 .merge(result, left_index=True, right_index=True)
131 .merge(right_df.drop(right_df.geometry.name, axis=1),
132 left_on='_key_right', right_index=True,
133 suffixes=('_%s' % lsuffix, '_%s' % rsuffix))
134 )
135 joined = joined.set_index(index_left).drop(['_key_right'], axis=1)
136 joined.index.name = None
137 elif how == 'left':
138 result = result.set_index('_key_left')
139 joined = (
140 left_df
141 .merge(result, left_index=True, right_index=True, how='left')
142 .merge(right_df.drop(right_df.geometry.name, axis=1),
143 how='left', left_on='_key_right', right_index=True,
144 suffixes=('_%s' % lsuffix, '_%s' % rsuffix))
145 )
146 joined = joined.set_index(index_left).drop(['_key_right'], axis=1)
147 joined.index.name = None
148 else: # how == 'right':
149 joined = (
150 left_df
151 .drop(left_df.geometry.name, axis=1)
152 .merge(result.merge(right_df,
153 left_on='_key_right', right_index=True,
154 how='right'), left_index=True,
155 right_on='_key_left', how='right')
156 .set_index(index_right)
157 )
158 joined = joined.drop(['_key_left', '_key_right'], axis=1)
159
160 return joined
161
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/geopandas/tools/sjoin.py b/geopandas/tools/sjoin.py
--- a/geopandas/tools/sjoin.py
+++ b/geopandas/tools/sjoin.py
@@ -114,7 +114,7 @@
else:
# when output from the join has no overlapping geometries
- result = pd.DataFrame(columns=['_key_left', '_key_right'])
+ result = pd.DataFrame(columns=['_key_left', '_key_right'], dtype=float)
if op == "within":
# within implemented as the inverse of contains; swap names
|
{"golden_diff": "diff --git a/geopandas/tools/sjoin.py b/geopandas/tools/sjoin.py\n--- a/geopandas/tools/sjoin.py\n+++ b/geopandas/tools/sjoin.py\n@@ -114,7 +114,7 @@\n \n else:\n # when output from the join has no overlapping geometries\n- result = pd.DataFrame(columns=['_key_left', '_key_right'])\n+ result = pd.DataFrame(columns=['_key_left', '_key_right'], dtype=float)\n \n if op == \"within\":\n # within implemented as the inverse of contains; swap names\n", "issue": "Value error using sjoin with pandas v0.23\nI use the `sjoin` function to add the region name (polygons) to every point within the region. Some points are not in any region, therefore I filter these points and buffer them step by step. So the points layer without intersection becomes smaller and smaller. If there is only one row left I get the following error in pandas v0.23 which I did not get before (pandas < v0.23). Using geopandas v0.3.0.\r\n\r\nMy call:\r\n```python\r\nnew = gpd.sjoin(rest_points, polygons, how='left', op='intersects')\r\n```\r\n\r\nError message:\r\n```python\r\nValueError: You are trying to merge on object and int64 columns.\r\nIf you wish to proceed you should use pd.concat\r\n```\r\nclass: `GeoDataFrame`\r\nmethod: `merge(self, *args, **kwargs)`\r\nline: `result = DataFrame.merge(self, *args, **kwargs)`\r\n\r\nI do not understand the error and why it happens only with the last point (last row) and only with the newest pandas version. I had a look at \"What's New\" but could not find anything.\r\n\r\nFull message:\r\n```python\r\n File \"virtualenv/lib/python3.5/site-packages/geopandas/tools/sjoin.py\", line 140,\r\n in sjoin suffixes=('_%s' % lsuffix, '_%s' % rsuffix))\r\n File \"virtualenv/lib/python3.5/site-packages/geopandas/geodataframe.py\", line 418,\r\n in merge result = DataFrame.merge(self, *args, **kwargs)\r\n File \"virtualenv/lib/python3.5/site-packages/pandas/core/frame.py\", line 6379,\r\n in merge copy=copy, indicator=indicator, validate=validate)\r\n File \"virtualenv/lib/python3.5/site-packages/pandas/core/reshape/merge.py\", line 60,\r\n in mergevalidate=validate)\r\n File \"virtualenv/lib/python3.5/site-packages/pandas/core/reshape/merge.py\", line 554,\r\n in __init__self._maybe_coerce_merge_keys()\r\n File \"virtualenv/lib/python3.5/site-packages/pandas/core/reshape/merge.py\", line 980,\r\n in _maybe_coerce_merge_keys\r\n raise ValueError(msg)\r\nValueError: You are trying to merge on object and int64 columns.\r\nIf you wish to proceed you should use pd.concat\r\n```\n", "before_files": [{"content": "from warnings import warn\n\nimport numpy as np\nimport pandas as pd\nfrom shapely import prepared\n\n\ndef sjoin(left_df, right_df, how='inner', op='intersects',\n lsuffix='left', rsuffix='right'):\n \"\"\"Spatial join of two GeoDataFrames.\n\n Parameters\n ----------\n left_df, right_df : GeoDataFrames\n how : string, default 'inner'\n The type of join:\n\n * 'left': use keys from left_df; retain only left_df geometry column\n * 'right': use keys from right_df; retain only right_df geometry column\n * 'inner': use intersection of keys from both dfs; retain only\n left_df geometry column\n op : string, default 'intersection'\n Binary predicate, one of {'intersects', 'contains', 'within'}.\n See http://toblerity.org/shapely/manual.html#binary-predicates.\n lsuffix : string, default 'left'\n Suffix to apply to overlapping column names (left GeoDataFrame).\n rsuffix : string, default 'right'\n Suffix to apply to overlapping column names (right GeoDataFrame).\n\n \"\"\"\n import 
rtree\n\n allowed_hows = ['left', 'right', 'inner']\n if how not in allowed_hows:\n raise ValueError(\"`how` was \\\"%s\\\" but is expected to be in %s\" %\n (how, allowed_hows))\n\n allowed_ops = ['contains', 'within', 'intersects']\n if op not in allowed_ops:\n raise ValueError(\"`op` was \\\"%s\\\" but is expected to be in %s\" %\n (op, allowed_ops))\n\n if left_df.crs != right_df.crs:\n warn('CRS of frames being joined does not match!')\n\n index_left = 'index_%s' % lsuffix\n index_right = 'index_%s' % rsuffix\n\n # due to GH 352\n if (any(left_df.columns.isin([index_left, index_right]))\n or any(right_df.columns.isin([index_left, index_right]))):\n raise ValueError(\"'{0}' and '{1}' cannot be names in the frames being\"\n \" joined\".format(index_left, index_right))\n\n # the rtree spatial index only allows limited (numeric) index types, but an\n # index in geopandas may be any arbitrary dtype. so reset both indices now\n # and store references to the original indices, to be reaffixed later.\n # GH 352\n left_df = left_df.copy(deep=True)\n left_df.index = left_df.index.rename(index_left)\n left_df = left_df.reset_index()\n right_df = right_df.copy(deep=True)\n right_df.index = right_df.index.rename(index_right)\n right_df = right_df.reset_index()\n\n if op == \"within\":\n # within implemented as the inverse of contains; swap names\n left_df, right_df = right_df, left_df\n\n # insert the bounds in the rtree spatial index\n right_df_bounds = right_df.geometry.apply(lambda x: x.bounds)\n stream = ((i, b, None) for i, b in enumerate(right_df_bounds))\n tree_idx = rtree.index.Index(stream)\n\n idxmatch = (left_df.geometry.apply(lambda x: x.bounds)\n .apply(lambda x: list(tree_idx.intersection(x))))\n idxmatch = idxmatch[idxmatch.apply(len) > 0]\n\n if idxmatch.shape[0] > 0:\n # if output from join has overlapping geometries\n r_idx = np.concatenate(idxmatch.values)\n l_idx = np.concatenate([[i] * len(v) for i, v in idxmatch.iteritems()])\n\n # Vectorize predicate operations\n def find_intersects(a1, a2):\n return a1.intersects(a2)\n\n def find_contains(a1, a2):\n return a1.contains(a2)\n\n predicate_d = {'intersects': find_intersects,\n 'contains': find_contains,\n 'within': find_contains}\n\n check_predicates = np.vectorize(predicate_d[op])\n\n result = (\n pd.DataFrame(\n np.column_stack(\n [l_idx,\n r_idx,\n check_predicates(\n left_df.geometry\n .apply(lambda x: prepared.prep(x))[l_idx],\n right_df[right_df.geometry.name][r_idx])\n ]))\n )\n\n result.columns = ['_key_left', '_key_right', 'match_bool']\n result = (\n pd.DataFrame(result[result['match_bool']==1])\n .drop('match_bool', axis=1)\n )\n\n else:\n # when output from the join has no overlapping geometries\n result = pd.DataFrame(columns=['_key_left', '_key_right'])\n\n if op == \"within\":\n # within implemented as the inverse of contains; swap names\n left_df, right_df = right_df, left_df\n result = result.rename(columns={'_key_left': '_key_right',\n '_key_right': '_key_left'})\n\n\n if how == 'inner':\n result = result.set_index('_key_left')\n joined = (\n left_df\n .merge(result, left_index=True, right_index=True)\n .merge(right_df.drop(right_df.geometry.name, axis=1),\n left_on='_key_right', right_index=True,\n suffixes=('_%s' % lsuffix, '_%s' % rsuffix))\n )\n joined = joined.set_index(index_left).drop(['_key_right'], axis=1)\n joined.index.name = None\n elif how == 'left':\n result = result.set_index('_key_left')\n joined = (\n left_df\n .merge(result, left_index=True, right_index=True, how='left')\n 
.merge(right_df.drop(right_df.geometry.name, axis=1),\n how='left', left_on='_key_right', right_index=True,\n suffixes=('_%s' % lsuffix, '_%s' % rsuffix))\n )\n joined = joined.set_index(index_left).drop(['_key_right'], axis=1)\n joined.index.name = None\n else: # how == 'right':\n joined = (\n left_df\n .drop(left_df.geometry.name, axis=1)\n .merge(result.merge(right_df,\n left_on='_key_right', right_index=True,\n how='right'), left_index=True,\n right_on='_key_left', how='right')\n .set_index(index_right)\n )\n joined = joined.drop(['_key_left', '_key_right'], axis=1)\n\n return joined\n", "path": "geopandas/tools/sjoin.py"}], "after_files": [{"content": "from warnings import warn\n\nimport numpy as np\nimport pandas as pd\nfrom shapely import prepared\n\n\ndef sjoin(left_df, right_df, how='inner', op='intersects',\n lsuffix='left', rsuffix='right'):\n \"\"\"Spatial join of two GeoDataFrames.\n\n Parameters\n ----------\n left_df, right_df : GeoDataFrames\n how : string, default 'inner'\n The type of join:\n\n * 'left': use keys from left_df; retain only left_df geometry column\n * 'right': use keys from right_df; retain only right_df geometry column\n * 'inner': use intersection of keys from both dfs; retain only\n left_df geometry column\n op : string, default 'intersection'\n Binary predicate, one of {'intersects', 'contains', 'within'}.\n See http://toblerity.org/shapely/manual.html#binary-predicates.\n lsuffix : string, default 'left'\n Suffix to apply to overlapping column names (left GeoDataFrame).\n rsuffix : string, default 'right'\n Suffix to apply to overlapping column names (right GeoDataFrame).\n\n \"\"\"\n import rtree\n\n allowed_hows = ['left', 'right', 'inner']\n if how not in allowed_hows:\n raise ValueError(\"`how` was \\\"%s\\\" but is expected to be in %s\" %\n (how, allowed_hows))\n\n allowed_ops = ['contains', 'within', 'intersects']\n if op not in allowed_ops:\n raise ValueError(\"`op` was \\\"%s\\\" but is expected to be in %s\" %\n (op, allowed_ops))\n\n if left_df.crs != right_df.crs:\n warn('CRS of frames being joined does not match!')\n\n index_left = 'index_%s' % lsuffix\n index_right = 'index_%s' % rsuffix\n\n # due to GH 352\n if (any(left_df.columns.isin([index_left, index_right]))\n or any(right_df.columns.isin([index_left, index_right]))):\n raise ValueError(\"'{0}' and '{1}' cannot be names in the frames being\"\n \" joined\".format(index_left, index_right))\n\n # the rtree spatial index only allows limited (numeric) index types, but an\n # index in geopandas may be any arbitrary dtype. 
so reset both indices now\n # and store references to the original indices, to be reaffixed later.\n # GH 352\n left_df = left_df.copy(deep=True)\n left_df.index = left_df.index.rename(index_left)\n left_df = left_df.reset_index()\n right_df = right_df.copy(deep=True)\n right_df.index = right_df.index.rename(index_right)\n right_df = right_df.reset_index()\n\n if op == \"within\":\n # within implemented as the inverse of contains; swap names\n left_df, right_df = right_df, left_df\n\n # insert the bounds in the rtree spatial index\n right_df_bounds = right_df.geometry.apply(lambda x: x.bounds)\n stream = ((i, b, None) for i, b in enumerate(right_df_bounds))\n tree_idx = rtree.index.Index(stream)\n\n idxmatch = (left_df.geometry.apply(lambda x: x.bounds)\n .apply(lambda x: list(tree_idx.intersection(x))))\n idxmatch = idxmatch[idxmatch.apply(len) > 0]\n\n if idxmatch.shape[0] > 0:\n # if output from join has overlapping geometries\n r_idx = np.concatenate(idxmatch.values)\n l_idx = np.concatenate([[i] * len(v) for i, v in idxmatch.iteritems()])\n\n # Vectorize predicate operations\n def find_intersects(a1, a2):\n return a1.intersects(a2)\n\n def find_contains(a1, a2):\n return a1.contains(a2)\n\n predicate_d = {'intersects': find_intersects,\n 'contains': find_contains,\n 'within': find_contains}\n\n check_predicates = np.vectorize(predicate_d[op])\n\n result = (\n pd.DataFrame(\n np.column_stack(\n [l_idx,\n r_idx,\n check_predicates(\n left_df.geometry\n .apply(lambda x: prepared.prep(x))[l_idx],\n right_df[right_df.geometry.name][r_idx])\n ]))\n )\n\n result.columns = ['_key_left', '_key_right', 'match_bool']\n result = (\n pd.DataFrame(result[result['match_bool']==1])\n .drop('match_bool', axis=1)\n )\n\n else:\n # when output from the join has no overlapping geometries\n result = pd.DataFrame(columns=['_key_left', '_key_right'], dtype=float)\n\n if op == \"within\":\n # within implemented as the inverse of contains; swap names\n left_df, right_df = right_df, left_df\n result = result.rename(columns={'_key_left': '_key_right',\n '_key_right': '_key_left'})\n\n\n if how == 'inner':\n result = result.set_index('_key_left')\n joined = (\n left_df\n .merge(result, left_index=True, right_index=True)\n .merge(right_df.drop(right_df.geometry.name, axis=1),\n left_on='_key_right', right_index=True,\n suffixes=('_%s' % lsuffix, '_%s' % rsuffix))\n )\n joined = joined.set_index(index_left).drop(['_key_right'], axis=1)\n joined.index.name = None\n elif how == 'left':\n result = result.set_index('_key_left')\n joined = (\n left_df\n .merge(result, left_index=True, right_index=True, how='left')\n .merge(right_df.drop(right_df.geometry.name, axis=1),\n how='left', left_on='_key_right', right_index=True,\n suffixes=('_%s' % lsuffix, '_%s' % rsuffix))\n )\n joined = joined.set_index(index_left).drop(['_key_right'], axis=1)\n joined.index.name = None\n else: # how == 'right':\n joined = (\n left_df\n .drop(left_df.geometry.name, axis=1)\n .merge(result.merge(right_df,\n left_on='_key_right', right_index=True,\n how='right'), left_index=True,\n right_on='_key_left', how='right')\n .set_index(index_right)\n )\n joined = joined.drop(['_key_left', '_key_right'], axis=1)\n\n return joined\n", "path": "geopandas/tools/sjoin.py"}]}
| 2,594 | 127 |
gh_patches_debug_32401
|
rasdani/github-patches
|
git_diff
|
sunpy__sunpy-1961
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
tests create figure hashes directory
when running `py.test` on a single file, it creates unneeded tmp directories.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sunpy/conftest.py`
Content:
```
1 from __future__ import absolute_import, print_function
2 from functools import partial
3
4 import os
5 import socket
6 import tempfile
7 import json
8
9 from sunpy.extern.six.moves.urllib.request import urlopen
10 from sunpy.extern.six.moves.urllib.error import URLError
11
12 import pytest
13
14
15 # Force MPL to use non-gui backends for testing.
16 try:
17 import matplotlib
18 except ImportError:
19 pass
20 else:
21 matplotlib.use('Agg')
22
23 from astropy.tests import disable_internet
24
25 from sunpy.tests import hash
26
27 hash_library_original_len = len(hash.hash_library)
28
29 GOOGLE_URL = 'http://www.google.com'
30
31
32 def site_reachable(url):
33 try:
34 urlopen(url, timeout=1)
35 except (URLError, socket.timeout):
36 return False
37 else:
38 return True
39
40
41 is_online = partial(site_reachable, GOOGLE_URL)
42
43
44 def pytest_runtest_setup(item):
45 """
46 pytest hook to skip all tests that have the mark 'online' if the
47 client is online (simply detected by checking whether http://www.google.com
48 can be requested).
49 """
50 if isinstance(item, item.Function):
51 if 'online' in item.keywords and not is_online():
52 msg = 'skipping test {0} (reason: client seems to be offline)'
53 pytest.skip(msg.format(item.name))
54
55 if 'online' not in item.keywords:
56 disable_internet.turn_off_internet()
57
58
59 def pytest_runtest_teardown(item, nextitem):
60 disable_internet.turn_on_internet()
61
62
63 def pytest_unconfigure(config):
64 tempdir = tempfile.mkdtemp(suffix="_figures")
65 # the hash_library is indexed by the name of the test but we want to look
66 # things up with the hash value
67 inv_hash_library = {v: k for k, v in hash.hash_library.items()}
68
69 for h in hash.file_list:
70 test_name = inv_hash_library.get(h, '')
71 if test_name != '':
72 os.rename(hash.file_list[h], os.path.join(tempdir,
73 test_name + '.png'))
74 print('All test files for figure hashes can be found in {0}'.format(
75 tempdir))
76
77 # Check if additions have been made to the hash library
78 if len(hash.hash_library) > hash_library_original_len:
79 # Write the new hash library in JSON
80 tempdir = tempfile.mkdtemp()
81 hashfile = os.path.join(tempdir, hash.HASH_LIBRARY_NAME)
82 with open(hashfile, 'wb') as outfile:
83 json.dump(
84 hash.hash_library,
85 outfile,
86 sort_keys=True,
87 indent=4,
88 separators=(',', ': '))
89 print(
90 "The hash library has expanded and should be copied to sunpy/tests/")
91 print(" " + hashfile)
92
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sunpy/conftest.py b/sunpy/conftest.py
--- a/sunpy/conftest.py
+++ b/sunpy/conftest.py
@@ -22,9 +22,8 @@
from astropy.tests import disable_internet
-from sunpy.tests import hash
-
-hash_library_original_len = len(hash.hash_library)
+from sunpy.tests.hash import HASH_LIBRARY_NAME
+from sunpy.tests.helpers import new_hash_library, figure_test_pngfiles
GOOGLE_URL = 'http://www.google.com'
@@ -61,31 +60,17 @@
def pytest_unconfigure(config):
- tempdir = tempfile.mkdtemp(suffix="_figures")
- # the hash_library is indexed by the name of the test but we want to look
- # things up with the hash value
- inv_hash_library = {v: k for k, v in hash.hash_library.items()}
-
- for h in hash.file_list:
- test_name = inv_hash_library.get(h, '')
- if test_name != '':
- os.rename(hash.file_list[h], os.path.join(tempdir,
- test_name + '.png'))
- print('All test files for figure hashes can be found in {0}'.format(
- tempdir))
-
- # Check if additions have been made to the hash library
- if len(hash.hash_library) > hash_library_original_len:
+ if len(figure_test_pngfiles) > 0:
+ tempdir = tempfile.mkdtemp(suffix="_figures")
+
+ # Rename each PNG with the name of the corresponding test
+ for test_name in figure_test_pngfiles:
+ os.rename(figure_test_pngfiles[test_name], os.path.join(tempdir, test_name + '.png'))
+
# Write the new hash library in JSON
- tempdir = tempfile.mkdtemp()
- hashfile = os.path.join(tempdir, hash.HASH_LIBRARY_NAME)
- with open(hashfile, 'wb') as outfile:
- json.dump(
- hash.hash_library,
- outfile,
- sort_keys=True,
- indent=4,
- separators=(',', ': '))
- print(
- "The hash library has expanded and should be copied to sunpy/tests/")
- print(" " + hashfile)
+ hashfile = os.path.join(tempdir, HASH_LIBRARY_NAME)
+ with open(hashfile, 'w') as outfile:
+ json.dump(new_hash_library, outfile, sort_keys=True, indent=4, separators=(',', ': '))
+
+ print('All test files for figure hashes can be found in {0}'.format(tempdir))
+ print("The corresponding hash library is {0}".format(hashfile))
|
{"golden_diff": "diff --git a/sunpy/conftest.py b/sunpy/conftest.py\n--- a/sunpy/conftest.py\n+++ b/sunpy/conftest.py\n@@ -22,9 +22,8 @@\n \n from astropy.tests import disable_internet\n \n-from sunpy.tests import hash\n-\n-hash_library_original_len = len(hash.hash_library)\n+from sunpy.tests.hash import HASH_LIBRARY_NAME\n+from sunpy.tests.helpers import new_hash_library, figure_test_pngfiles\n \n GOOGLE_URL = 'http://www.google.com'\n \n@@ -61,31 +60,17 @@\n \n \n def pytest_unconfigure(config):\n- tempdir = tempfile.mkdtemp(suffix=\"_figures\")\n- # the hash_library is indexed by the name of the test but we want to look\n- # things up with the hash value\n- inv_hash_library = {v: k for k, v in hash.hash_library.items()}\n-\n- for h in hash.file_list:\n- test_name = inv_hash_library.get(h, '')\n- if test_name != '':\n- os.rename(hash.file_list[h], os.path.join(tempdir,\n- test_name + '.png'))\n- print('All test files for figure hashes can be found in {0}'.format(\n- tempdir))\n-\n- # Check if additions have been made to the hash library\n- if len(hash.hash_library) > hash_library_original_len:\n+ if len(figure_test_pngfiles) > 0:\n+ tempdir = tempfile.mkdtemp(suffix=\"_figures\")\n+\n+ # Rename each PNG with the name of the corresponding test\n+ for test_name in figure_test_pngfiles:\n+ os.rename(figure_test_pngfiles[test_name], os.path.join(tempdir, test_name + '.png'))\n+\n # Write the new hash library in JSON\n- tempdir = tempfile.mkdtemp()\n- hashfile = os.path.join(tempdir, hash.HASH_LIBRARY_NAME)\n- with open(hashfile, 'wb') as outfile:\n- json.dump(\n- hash.hash_library,\n- outfile,\n- sort_keys=True,\n- indent=4,\n- separators=(',', ': '))\n- print(\n- \"The hash library has expanded and should be copied to sunpy/tests/\")\n- print(\" \" + hashfile)\n+ hashfile = os.path.join(tempdir, HASH_LIBRARY_NAME)\n+ with open(hashfile, 'w') as outfile:\n+ json.dump(new_hash_library, outfile, sort_keys=True, indent=4, separators=(',', ': '))\n+\n+ print('All test files for figure hashes can be found in {0}'.format(tempdir))\n+ print(\"The corresponding hash library is {0}\".format(hashfile))\n", "issue": "tests create figure hashes directory\nwhen running `py.tests` on single file it creates unneeded tmp directories.\n\n", "before_files": [{"content": "from __future__ import absolute_import, print_function\nfrom functools import partial\n\nimport os\nimport socket\nimport tempfile\nimport json\n\nfrom sunpy.extern.six.moves.urllib.request import urlopen\nfrom sunpy.extern.six.moves.urllib.error import URLError\n\nimport pytest\n\n\n# Force MPL to use non-gui backends for testing.\ntry:\n import matplotlib\nexcept ImportError:\n pass\nelse:\n matplotlib.use('Agg')\n\nfrom astropy.tests import disable_internet\n\nfrom sunpy.tests import hash\n\nhash_library_original_len = len(hash.hash_library)\n\nGOOGLE_URL = 'http://www.google.com'\n\n\ndef site_reachable(url):\n try:\n urlopen(url, timeout=1)\n except (URLError, socket.timeout):\n return False\n else:\n return True\n\n\nis_online = partial(site_reachable, GOOGLE_URL)\n\n\ndef pytest_runtest_setup(item):\n \"\"\"\n pytest hook to skip all tests that have the mark 'online' if the\n client is online (simply detected by checking whether http://www.google.com\n can be requested).\n \"\"\"\n if isinstance(item, item.Function):\n if 'online' in item.keywords and not is_online():\n msg = 'skipping test {0} (reason: client seems to be offline)'\n pytest.skip(msg.format(item.name))\n\n if 'online' not in item.keywords:\n 
disable_internet.turn_off_internet()\n\n\ndef pytest_runtest_teardown(item, nextitem):\n disable_internet.turn_on_internet()\n\n\ndef pytest_unconfigure(config):\n tempdir = tempfile.mkdtemp(suffix=\"_figures\")\n # the hash_library is indexed by the name of the test but we want to look\n # things up with the hash value\n inv_hash_library = {v: k for k, v in hash.hash_library.items()}\n\n for h in hash.file_list:\n test_name = inv_hash_library.get(h, '')\n if test_name != '':\n os.rename(hash.file_list[h], os.path.join(tempdir,\n test_name + '.png'))\n print('All test files for figure hashes can be found in {0}'.format(\n tempdir))\n\n # Check if additions have been made to the hash library\n if len(hash.hash_library) > hash_library_original_len:\n # Write the new hash library in JSON\n tempdir = tempfile.mkdtemp()\n hashfile = os.path.join(tempdir, hash.HASH_LIBRARY_NAME)\n with open(hashfile, 'wb') as outfile:\n json.dump(\n hash.hash_library,\n outfile,\n sort_keys=True,\n indent=4,\n separators=(',', ': '))\n print(\n \"The hash library has expanded and should be copied to sunpy/tests/\")\n print(\" \" + hashfile)\n", "path": "sunpy/conftest.py"}], "after_files": [{"content": "from __future__ import absolute_import, print_function\nfrom functools import partial\n\nimport os\nimport socket\nimport tempfile\nimport json\n\nfrom sunpy.extern.six.moves.urllib.request import urlopen\nfrom sunpy.extern.six.moves.urllib.error import URLError\n\nimport pytest\n\n\n# Force MPL to use non-gui backends for testing.\ntry:\n import matplotlib\nexcept ImportError:\n pass\nelse:\n matplotlib.use('Agg')\n\nfrom astropy.tests import disable_internet\n\nfrom sunpy.tests.hash import HASH_LIBRARY_NAME\nfrom sunpy.tests.helpers import new_hash_library, figure_test_pngfiles\n\nGOOGLE_URL = 'http://www.google.com'\n\n\ndef site_reachable(url):\n try:\n urlopen(url, timeout=1)\n except (URLError, socket.timeout):\n return False\n else:\n return True\n\n\nis_online = partial(site_reachable, GOOGLE_URL)\n\n\ndef pytest_runtest_setup(item):\n \"\"\"\n pytest hook to skip all tests that have the mark 'online' if the\n client is online (simply detected by checking whether http://www.google.com\n can be requested).\n \"\"\"\n if isinstance(item, item.Function):\n if 'online' in item.keywords and not is_online():\n msg = 'skipping test {0} (reason: client seems to be offline)'\n pytest.skip(msg.format(item.name))\n\n if 'online' not in item.keywords:\n disable_internet.turn_off_internet()\n\n\ndef pytest_runtest_teardown(item, nextitem):\n disable_internet.turn_on_internet()\n\n\ndef pytest_unconfigure(config):\n if len(figure_test_pngfiles) > 0:\n tempdir = tempfile.mkdtemp(suffix=\"_figures\")\n\n # Rename each PNG with the name of the corresponding test\n for test_name in figure_test_pngfiles:\n os.rename(figure_test_pngfiles[test_name], os.path.join(tempdir, test_name + '.png'))\n\n # Write the new hash library in JSON\n hashfile = os.path.join(tempdir, HASH_LIBRARY_NAME)\n with open(hashfile, 'w') as outfile:\n json.dump(new_hash_library, outfile, sort_keys=True, indent=4, separators=(',', ': '))\n\n print('All test files for figure hashes can be found in {0}'.format(tempdir))\n print(\"The corresponding hash library is {0}\".format(hashfile))\n", "path": "sunpy/conftest.py"}]}
| 1,056 | 589 |
gh_patches_debug_8984
|
rasdani/github-patches
|
git_diff
|
gammapy__gammapy-2896
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Install fails when Cython is not already installed
**Gammapy version**
Every I tested
**Bug description**
The setup.py hard imports Cython, so setuptools has no chance to install it.
**Expected behavior**
Install cython if needed before attempting to import it
**To Reproduce**
```
$ python -m venv test_gammapy
$ . test_gammapy/bin/activate
$ pip install gammapy
```
**Other information**
There is a very simple fix: provide a pyproject.toml that requires cython for the build system.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Licensed under a 3-clause BSD style license - see LICENSE.rst
2 import sys
3 import setuptools
4 from distutils.version import LooseVersion
5
6 if LooseVersion(setuptools.__version__) < "30.3":
7 sys.stderr.write("ERROR: setuptools 30.3 or later is required by gammapy\n")
8 sys.exit(1)
9
10 # TODO: check if setuptools_scm, numpy, ... are OK
11 # Exit with good error message telling people to install those first if not
12
13
14 from Cython.Build import cythonize
15 from distutils.extension import Extension
16 import numpy as np
17
18
19 def make_cython_extension(filename):
20 return Extension(
21 filename.strip(".pyx").replace("/", "."),
22 [filename],
23 include_dirs=[np.get_include()],
24 )
25
26
27 cython_files = [
28 "gammapy/stats/fit_statistics_cython.pyx",
29 ]
30
31 ext_modules = cythonize([make_cython_extension(_) for _ in cython_files])
32
33 setuptools.setup(use_scm_version=True, ext_modules=ext_modules)
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -1,16 +1,5 @@
# Licensed under a 3-clause BSD style license - see LICENSE.rst
-import sys
import setuptools
-from distutils.version import LooseVersion
-
-if LooseVersion(setuptools.__version__) < "30.3":
- sys.stderr.write("ERROR: setuptools 30.3 or later is required by gammapy\n")
- sys.exit(1)
-
-# TODO: check if setuptools_scm, numpy, ... are OK
-# Exit with good error message telling people to install those first if not
-
-
from Cython.Build import cythonize
from distutils.extension import Extension
import numpy as np
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,16 +1,5 @@\n # Licensed under a 3-clause BSD style license - see LICENSE.rst\n-import sys\n import setuptools\n-from distutils.version import LooseVersion\n-\n-if LooseVersion(setuptools.__version__) < \"30.3\":\n- sys.stderr.write(\"ERROR: setuptools 30.3 or later is required by gammapy\\n\")\n- sys.exit(1)\n-\n-# TODO: check if setuptools_scm, numpy, ... are OK\n-# Exit with good error message telling people to install those first if not\n-\n-\n from Cython.Build import cythonize\n from distutils.extension import Extension\n import numpy as np\n", "issue": "Install fails when Cython is not already installed\n**Gammapy version**\r\n\r\nEvery I tested\r\n\r\n**Bug description**\r\n\r\nThe setup.py hard imports Cython, so setuptools has no chance to install it.\r\n\r\n**Expected behavior**\r\n\r\nInstall cython if needed before attempting to import it\r\n\r\n**To Reproduce**\r\n\r\n\r\n```\r\n$ python -m venv test_gammapy\r\n$ . test_gammapy/bin/activate\r\n$ pip install gammapy\r\n```\r\n\r\n\r\n**Other information**\r\nThere is a very simple fix: provide a pyproject.toml that requires cython for the build system.\r\n\n", "before_files": [{"content": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\nimport sys\nimport setuptools\nfrom distutils.version import LooseVersion\n\nif LooseVersion(setuptools.__version__) < \"30.3\":\n sys.stderr.write(\"ERROR: setuptools 30.3 or later is required by gammapy\\n\")\n sys.exit(1)\n\n# TODO: check if setuptools_scm, numpy, ... are OK\n# Exit with good error message telling people to install those first if not\n\n\nfrom Cython.Build import cythonize\nfrom distutils.extension import Extension\nimport numpy as np\n\n\ndef make_cython_extension(filename):\n return Extension(\n filename.strip(\".pyx\").replace(\"/\", \".\"),\n [filename],\n include_dirs=[np.get_include()],\n )\n\n\ncython_files = [\n \"gammapy/stats/fit_statistics_cython.pyx\",\n]\n\next_modules = cythonize([make_cython_extension(_) for _ in cython_files])\n\nsetuptools.setup(use_scm_version=True, ext_modules=ext_modules)\n", "path": "setup.py"}], "after_files": [{"content": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\nimport setuptools\nfrom Cython.Build import cythonize\nfrom distutils.extension import Extension\nimport numpy as np\n\n\ndef make_cython_extension(filename):\n return Extension(\n filename.strip(\".pyx\").replace(\"/\", \".\"),\n [filename],\n include_dirs=[np.get_include()],\n )\n\n\ncython_files = [\n \"gammapy/stats/fit_statistics_cython.pyx\",\n]\n\next_modules = cythonize([make_cython_extension(_) for _ in cython_files])\n\nsetuptools.setup(use_scm_version=True, ext_modules=ext_modules)\n", "path": "setup.py"}]}
| 666 | 165 |
gh_patches_debug_7143
|
rasdani/github-patches
|
git_diff
|
pypi__warehouse-13499
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Circular import for warehouse.oidc
Surfaced via:
```shellsession
python -c "import warehouse.oidc"
```
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "workpath/warehouse/warehouse/oidc/__init__.py", line 13, in <module>
from warehouse.oidc.interfaces import IOIDCPublisherService
File "workpath/warehouse/warehouse/oidc/interfaces.py", line 19, in <module>
from warehouse.packaging.models import Project
File "workpath/warehouse/warehouse/packaging/__init__.py", line 17, in <module>
from warehouse.accounts.models import Email, User
File "workpath/warehouse/warehouse/accounts/__init__.py", line 22, in <module>
from warehouse.accounts.security_policy import (
File "workpath/warehouse/warehouse/accounts/security_policy.py", line 36, in <module>
from warehouse.utils.security_policy import AuthenticationMethod
File "workpath/warehouse/warehouse/utils/security_policy.py", line 21, in <module>
from warehouse.oidc.models import OIDCPublisher
File "workpath/warehouse/warehouse/oidc/models.py", line 24, in <module>
from warehouse.oidc.interfaces import SignedClaims
ImportError: cannot import name 'SignedClaims' from partially initialized module 'warehouse.oidc.interfaces' (most likely due to a circular import) (workpath/warehouse/warehouse/oidc/interfaces.py)
```
This isn't posing a problem in production (yet). I found it when I tried to filter out the warnings from `InsecureOIDCPublisherWarning` - but we should figure out how to not have a circular import.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `warehouse/oidc/interfaces.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 from __future__ import annotations
14
15 from typing import TYPE_CHECKING, Any, NewType
16
17 from zope.interface import Interface
18
19 from warehouse.packaging.models import Project
20 from warehouse.rate_limiting.interfaces import RateLimiterException
21
22 if TYPE_CHECKING:
23 from warehouse.oidc.models import PendingOIDCPublisher # pragma: no cover
24
25 SignedClaims = NewType("SignedClaims", dict[str, Any])
26
27
28 class IOIDCPublisherService(Interface):
29 def verify_jwt_signature(unverified_token: str):
30 """
31 Verify the given JWT's signature, returning its signed claims if
32 valid. If the signature is invalid, `None` is returned.
33
34 This method does **not** verify the claim set itself -- the API
35 consumer is responsible for evaluating the claim set.
36 """
37 pass
38
39 def find_publisher(signed_claims: SignedClaims, *, pending: bool = False):
40 """
41 Given a mapping of signed claims produced by `verify_jwt_signature`,
42 attempt to find and return either a `OIDCPublisher` or `PendingOIDCPublisher`
43 that matches them, depending on the value of `pending`.
44
45 If no publisher matches the claims, `None` is returned.
46 """
47 pass
48
49 def reify_pending_publisher(
50 pending_publisher: PendingOIDCPublisher, project: Project
51 ):
52 """
53 Reify the given pending `PendingOIDCPublisher` into an `OIDCPublisher`,
54 adding it to the given project (presumed newly created) in the process.
55
56 Returns the reified publisher.
57 """
58 pass
59
60
61 class TooManyOIDCRegistrations(RateLimiterException):
62 pass
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/warehouse/oidc/interfaces.py b/warehouse/oidc/interfaces.py
--- a/warehouse/oidc/interfaces.py
+++ b/warehouse/oidc/interfaces.py
@@ -16,11 +16,11 @@
from zope.interface import Interface
-from warehouse.packaging.models import Project
from warehouse.rate_limiting.interfaces import RateLimiterException
if TYPE_CHECKING:
- from warehouse.oidc.models import PendingOIDCPublisher # pragma: no cover
+ from warehouse.oidc.models import PendingOIDCPublisher
+ from warehouse.packaging.models import Project
SignedClaims = NewType("SignedClaims", dict[str, Any])
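
The patch works because `typing.TYPE_CHECKING` is `False` at runtime, so the `Project` import is only seen by static type checkers and can no longer take part in the import cycle, while `from __future__ import annotations` (already at the top of the file) keeps the annotations that mention `Project` unevaluated. A minimal sketch of the pattern, using an illustrative module name rather than warehouse's real one:

```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Evaluated only by type checkers such as mypy; never executed at runtime,
    # so the module below does not even need to be importable here.
    from some_cyclic_module import Project


def reify(project: Project) -> None:
    # With postponed evaluation, the annotation above stays a plain string at
    # runtime, so no real Project class is required to import this module.
    print("reifying", project)
```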
|
{"golden_diff": "diff --git a/warehouse/oidc/interfaces.py b/warehouse/oidc/interfaces.py\n--- a/warehouse/oidc/interfaces.py\n+++ b/warehouse/oidc/interfaces.py\n@@ -16,11 +16,11 @@\n \n from zope.interface import Interface\n \n-from warehouse.packaging.models import Project\n from warehouse.rate_limiting.interfaces import RateLimiterException\n \n if TYPE_CHECKING:\n- from warehouse.oidc.models import PendingOIDCPublisher # pragma: no cover\n+ from warehouse.oidc.models import PendingOIDCPublisher\n+ from warehouse.packaging.models import Project\n \n SignedClaims = NewType(\"SignedClaims\", dict[str, Any])\n", "issue": "Circular import for warehouse.oidc\nSurfaced via:\r\n\r\n```shellsession\r\npython -c \"import warehouse.oidc\"\r\n```\r\n\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"workpath/warehouse/warehouse/oidc/__init__.py\", line 13, in <module>\r\n from warehouse.oidc.interfaces import IOIDCPublisherService\r\n File \"workpath/warehouse/warehouse/oidc/interfaces.py\", line 19, in <module>\r\n from warehouse.packaging.models import Project\r\n File \"workpath/warehouse/warehouse/packaging/__init__.py\", line 17, in <module>\r\n from warehouse.accounts.models import Email, User\r\n File \"workpath/warehouse/warehouse/accounts/__init__.py\", line 22, in <module>\r\n from warehouse.accounts.security_policy import (\r\n File \"workpath/warehouse/warehouse/accounts/security_policy.py\", line 36, in <module>\r\n from warehouse.utils.security_policy import AuthenticationMethod\r\n File \"workpath/warehouse/warehouse/utils/security_policy.py\", line 21, in <module>\r\n from warehouse.oidc.models import OIDCPublisher\r\n File \"workpath/warehouse/warehouse/oidc/models.py\", line 24, in <module>\r\n from warehouse.oidc.interfaces import SignedClaims\r\nImportError: cannot import name 'SignedClaims' from partially initialized module 'warehouse.oidc.interfaces' (most likely due to a circular import) (workpath/warehouse/warehouse/oidc/interfaces.py)\r\n```\r\n\r\nThis isn't posing a problem in production (yet). I found it when I tried to filter out the warnings from `InsecureOIDCPublisherWarning` - but we should figure out how to not have a circular import.\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Any, NewType\n\nfrom zope.interface import Interface\n\nfrom warehouse.packaging.models import Project\nfrom warehouse.rate_limiting.interfaces import RateLimiterException\n\nif TYPE_CHECKING:\n from warehouse.oidc.models import PendingOIDCPublisher # pragma: no cover\n\nSignedClaims = NewType(\"SignedClaims\", dict[str, Any])\n\n\nclass IOIDCPublisherService(Interface):\n def verify_jwt_signature(unverified_token: str):\n \"\"\"\n Verify the given JWT's signature, returning its signed claims if\n valid. 
If the signature is invalid, `None` is returned.\n\n This method does **not** verify the claim set itself -- the API\n consumer is responsible for evaluating the claim set.\n \"\"\"\n pass\n\n def find_publisher(signed_claims: SignedClaims, *, pending: bool = False):\n \"\"\"\n Given a mapping of signed claims produced by `verify_jwt_signature`,\n attempt to find and return either a `OIDCPublisher` or `PendingOIDCPublisher`\n that matches them, depending on the value of `pending`.\n\n If no publisher matches the claims, `None` is returned.\n \"\"\"\n pass\n\n def reify_pending_publisher(\n pending_publisher: PendingOIDCPublisher, project: Project\n ):\n \"\"\"\n Reify the given pending `PendingOIDCPublisher` into an `OIDCPublisher`,\n adding it to the given project (presumed newly created) in the process.\n\n Returns the reified publisher.\n \"\"\"\n pass\n\n\nclass TooManyOIDCRegistrations(RateLimiterException):\n pass\n", "path": "warehouse/oidc/interfaces.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Any, NewType\n\nfrom zope.interface import Interface\n\nfrom warehouse.rate_limiting.interfaces import RateLimiterException\n\nif TYPE_CHECKING:\n from warehouse.oidc.models import PendingOIDCPublisher\n from warehouse.packaging.models import Project\n\nSignedClaims = NewType(\"SignedClaims\", dict[str, Any])\n\n\nclass IOIDCPublisherService(Interface):\n def verify_jwt_signature(unverified_token: str):\n \"\"\"\n Verify the given JWT's signature, returning its signed claims if\n valid. If the signature is invalid, `None` is returned.\n\n This method does **not** verify the claim set itself -- the API\n consumer is responsible for evaluating the claim set.\n \"\"\"\n pass\n\n def find_publisher(signed_claims: SignedClaims, *, pending: bool = False):\n \"\"\"\n Given a mapping of signed claims produced by `verify_jwt_signature`,\n attempt to find and return either a `OIDCPublisher` or `PendingOIDCPublisher`\n that matches them, depending on the value of `pending`.\n\n If no publisher matches the claims, `None` is returned.\n \"\"\"\n pass\n\n def reify_pending_publisher(\n pending_publisher: PendingOIDCPublisher, project: Project\n ):\n \"\"\"\n Reify the given pending `PendingOIDCPublisher` into an `OIDCPublisher`,\n adding it to the given project (presumed newly created) in the process.\n\n Returns the reified publisher.\n \"\"\"\n pass\n\n\nclass TooManyOIDCRegistrations(RateLimiterException):\n pass\n", "path": "warehouse/oidc/interfaces.py"}]}
| 1,237 | 145 |
gh_patches_debug_54008
|
rasdani/github-patches
|
git_diff
|
sublimelsp__LSP-555
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TypeError: expected string or buffer
https://github.com/tomv564/LSP/blob/e37c4e6d7d959890c465cada35dff7fef22feb6e/plugin/core/types.py#L50-L54
It happened only once so far, when `plugin_loaded` was called on start up. After restarting Sublime Text right away, it did not happened again.
```
Traceback (most recent call last):
File "F:\SublimeText\sublime_plugin.py", line 298, in on_api_ready
plc()
File "F:\SublimeText\Data\Packages\LSP\boot.py", line 30, in plugin_loaded
startup()
File "F:\SublimeText\Data\Packages\LSP\plugin\core\main.py", line 25, in startup
start_active_window()
File "F:\SublimeText\Data\Packages\LSP\plugin\core\main.py", line 44, in start_active_window
windows.lookup(window).start_active_views()
File "F:\SublimeText\Data\Packages\LSP\plugin\core\windows.py", line 336, in start_active_views
self._initialize_on_open(view)
File "F:\SublimeText\Data\Packages\LSP\plugin\core\windows.py", line 348, in _initialize_on_open
self._configs.syntax_configs(view))
File "F:\SublimeText\Data\Packages\LSP\plugin\core\configurations.py", line 120, in syntax_configs
return list(filter(lambda c: config_supports_syntax(c, syntax) and c.enabled, self.all))
File "F:\SublimeText\Data\Packages\LSP\plugin\core\configurations.py", line 120, in <lambda>
return list(filter(lambda c: config_supports_syntax(c, syntax) and c.enabled, self.all))
File "F:\SublimeText\Data\Packages\LSP\plugin\core\types.py", line 81, in config_supports_syntax
if re.search(r'|'.join(r'\b%s\b' % re.escape(s) for s in language.syntaxes), syntax, re.IGNORECASE):
File "./python3.3/re.py", line 161, in search
TypeError: expected string or buffer
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugin/completion.py`
Content:
```
1 import sublime
2 import sublime_plugin
3
4 try:
5 from typing import Any, List, Dict, Tuple, Callable, Optional
6 assert Any and List and Dict and Tuple and Callable and Optional
7 except ImportError:
8 pass
9
10 from .core.protocol import Request
11 from .core.events import global_events
12 from .core.settings import settings, client_configs
13 from .core.logging import debug
14 from .core.completion import parse_completion_response
15 from .core.registry import session_for_view, client_for_view
16 from .core.configurations import is_supported_syntax
17 from .core.documents import get_document_position
18 from .core.sessions import Session
19
20 NO_COMPLETION_SCOPES = 'comment, string'
21
22
23 class CompletionState(object):
24 IDLE = 0
25 REQUESTING = 1
26 APPLYING = 2
27 CANCELLING = 3
28
29
30 last_text_command = None
31
32
33 class CompletionHelper(sublime_plugin.EventListener):
34 def on_text_command(self, view, command_name, args):
35 global last_text_command
36 last_text_command = command_name
37
38
39 class CompletionHandler(sublime_plugin.ViewEventListener):
40 def __init__(self, view):
41 self.view = view
42 self.initialized = False
43 self.enabled = False
44 self.trigger_chars = [] # type: List[str]
45 self.state = CompletionState.IDLE
46 self.completions = [] # type: List[Any]
47 self.next_request = None # type: Optional[Tuple[str, List[int]]]
48 self.last_prefix = ""
49 self.last_location = 0
50
51 @classmethod
52 def is_applicable(cls, settings):
53 syntax = settings.get('syntax')
54 if syntax is not None:
55 return is_supported_syntax(syntax, client_configs.all)
56 else:
57 return False
58
59 def initialize(self):
60 self.initialized = True
61 session = session_for_view(self.view)
62 if session:
63 completionProvider = session.get_capability(
64 'completionProvider')
65 if completionProvider:
66 self.enabled = True
67 self.trigger_chars = completionProvider.get(
68 'triggerCharacters') or []
69 if self.trigger_chars:
70 self.register_trigger_chars(session)
71
72 def _view_language(self, config_name: str) -> 'Optional[str]':
73 languages = self.view.settings().get('lsp_language')
74 return languages.get(config_name) if languages else None
75
76 def register_trigger_chars(self, session: Session) -> None:
77 completion_triggers = self.view.settings().get('auto_complete_triggers', [])
78 view_language = self._view_language(session.config.name)
79 if view_language:
80 for language in session.config.languages:
81 if language.id == view_language:
82 for scope in language.scopes:
83 # debug("registering", self.trigger_chars, "for", scope)
84 scope_trigger = next(
85 (trigger for trigger in completion_triggers if trigger.get('selector', None) == scope),
86 None
87 )
88 if scope_trigger:
89 scope_trigger['characters'] = "".join(self.trigger_chars)
90 else:
91 completion_triggers.append({
92 'characters': "".join(self.trigger_chars),
93 'selector': scope
94 })
95
96 self.view.settings().set('auto_complete_triggers', completion_triggers)
97
98 def is_after_trigger_character(self, location):
99 if location > 0:
100 prev_char = self.view.substr(location - 1)
101 return prev_char in self.trigger_chars
102
103 def is_same_completion(self, prefix, locations):
104 # completion requests from the same location with the same prefix are cached.
105 current_start = locations[0] - len(prefix)
106 last_start = self.last_location - len(self.last_prefix)
107 return prefix.startswith(self.last_prefix) and current_start == last_start
108
109 def on_modified(self):
110 # hide completion when backspacing past last completion.
111 if self.view.sel()[0].begin() < self.last_location:
112 self.last_location = 0
113 self.view.run_command("hide_auto_complete")
114 # cancel current completion if the previous input is an space
115 prev_char = self.view.substr(self.view.sel()[0].begin() - 1)
116 if self.state == CompletionState.REQUESTING and prev_char.isspace():
117 self.state = CompletionState.CANCELLING
118
119 def on_query_completions(self, prefix, locations):
120 if prefix != "" and self.view.match_selector(locations[0], NO_COMPLETION_SCOPES):
121 # debug('discarding completion because no completion scope with prefix {}'.format(prefix))
122 return (
123 [],
124 sublime.INHIBIT_WORD_COMPLETIONS | sublime.INHIBIT_EXPLICIT_COMPLETIONS
125 )
126
127 if not self.initialized:
128 self.initialize()
129
130 if self.enabled:
131 reuse_completion = self.is_same_completion(prefix, locations)
132 if self.state == CompletionState.IDLE:
133 if not reuse_completion:
134 self.last_prefix = prefix
135 self.last_location = locations[0]
136 self.do_request(prefix, locations)
137 self.completions = []
138
139 elif self.state in (CompletionState.REQUESTING, CompletionState.CANCELLING):
140 self.next_request = (prefix, locations)
141 self.state = CompletionState.CANCELLING
142
143 elif self.state == CompletionState.APPLYING:
144 self.state = CompletionState.IDLE
145
146 return (
147 self.completions,
148 0 if not settings.only_show_lsp_completions
149 else sublime.INHIBIT_WORD_COMPLETIONS | sublime.INHIBIT_EXPLICIT_COMPLETIONS
150 )
151
152 def do_request(self, prefix: str, locations: 'List[int]'):
153 self.next_request = None
154 view = self.view
155
156 # don't store client so we can handle restarts
157 client = client_for_view(view)
158 if not client:
159 return
160
161 if settings.complete_all_chars or self.is_after_trigger_character(locations[0]):
162 global_events.publish("view.on_purge_changes", self.view)
163 document_position = get_document_position(view, locations[0])
164 if document_position:
165 client.send_request(
166 Request.complete(document_position),
167 self.handle_response,
168 self.handle_error)
169 self.state = CompletionState.REQUESTING
170
171 def handle_response(self, response: 'Optional[Dict]'):
172
173 if self.state == CompletionState.REQUESTING:
174 last_start = self.last_location - len(self.last_prefix)
175 last_row, last_col = self.view.rowcol(last_start)
176 self.completions = parse_completion_response(response, last_col, settings)
177
178 # if insert_best_completion was just ran, undo it before presenting new completions.
179 prev_char = self.view.substr(self.view.sel()[0].begin() - 1)
180 if prev_char.isspace():
181 if last_text_command == "insert_best_completion":
182 self.view.run_command("undo")
183
184 self.state = CompletionState.APPLYING
185 self.view.run_command("hide_auto_complete")
186 self.run_auto_complete()
187 elif self.state == CompletionState.CANCELLING:
188 self.state = CompletionState.IDLE
189 if self.next_request:
190 prefix, locations = self.next_request
191 self.do_request(prefix, locations)
192 else:
193 debug('Got unexpected response while in state {}'.format(self.state))
194
195 def handle_error(self, error: dict):
196 sublime.status_message('Completion error: ' + str(error.get('message')))
197 self.state = CompletionState.IDLE
198
199 def run_auto_complete(self):
200 self.view.run_command(
201 "auto_complete", {
202 'disable_auto_insert': True,
203 'api_completions_only': settings.only_show_lsp_completions,
204 'next_completion_if_showing': False
205 })
206
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/plugin/completion.py b/plugin/completion.py
--- a/plugin/completion.py
+++ b/plugin/completion.py
@@ -51,10 +51,7 @@
@classmethod
def is_applicable(cls, settings):
syntax = settings.get('syntax')
- if syntax is not None:
- return is_supported_syntax(syntax, client_configs.all)
- else:
- return False
+ return is_supported_syntax(syntax) if syntax else False
def initialize(self):
self.initialized = True
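
The exception in the report comes from `re.search`, which only accepts a string (or buffer) to search, so any code path that hands it a `None` syntax value fails this way; the patch above avoids calling `is_supported_syntax` at all when the view has no syntax set. The failure mode itself is easy to reproduce in isolation:

```python
import re

syntax = None  # what a view's 'syntax' setting can be for some transient views

try:
    re.search(r"\bPython\b", syntax, re.IGNORECASE)
except TypeError as exc:
    # Python 3.3 phrases this as "expected string or buffer"; newer versions
    # say "expected string or bytes-like object".
    print("reproduced:", exc)
```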
|
{"golden_diff": "diff --git a/plugin/completion.py b/plugin/completion.py\n--- a/plugin/completion.py\n+++ b/plugin/completion.py\n@@ -51,10 +51,7 @@\n @classmethod\n def is_applicable(cls, settings):\n syntax = settings.get('syntax')\n- if syntax is not None:\n- return is_supported_syntax(syntax, client_configs.all)\n- else:\n- return False\n+ return is_supported_syntax(syntax) if syntax else False\n \n def initialize(self):\n self.initialized = True\n", "issue": "TypeError: expected string or buffer\nhttps://github.com/tomv564/LSP/blob/e37c4e6d7d959890c465cada35dff7fef22feb6e/plugin/core/types.py#L50-L54\r\n\r\nIt happened only once so far, when `plugin_loaded` was called on start up. After restarting Sublime Text right away, it did not happened again.\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"F:\\SublimeText\\sublime_plugin.py\", line 298, in on_api_ready\r\n plc()\r\n File \"F:\\SublimeText\\Data\\Packages\\LSP\\boot.py\", line 30, in plugin_loaded\r\n startup()\r\n File \"F:\\SublimeText\\Data\\Packages\\LSP\\plugin\\core\\main.py\", line 25, in startup\r\n start_active_window()\r\n File \"F:\\SublimeText\\Data\\Packages\\LSP\\plugin\\core\\main.py\", line 44, in start_active_window\r\n windows.lookup(window).start_active_views()\r\n File \"F:\\SublimeText\\Data\\Packages\\LSP\\plugin\\core\\windows.py\", line 336, in start_active_views\r\n self._initialize_on_open(view)\r\n File \"F:\\SublimeText\\Data\\Packages\\LSP\\plugin\\core\\windows.py\", line 348, in _initialize_on_open\r\n self._configs.syntax_configs(view))\r\n File \"F:\\SublimeText\\Data\\Packages\\LSP\\plugin\\core\\configurations.py\", line 120, in syntax_configs\r\n return list(filter(lambda c: config_supports_syntax(c, syntax) and c.enabled, self.all))\r\n File \"F:\\SublimeText\\Data\\Packages\\LSP\\plugin\\core\\configurations.py\", line 120, in <lambda>\r\n return list(filter(lambda c: config_supports_syntax(c, syntax) and c.enabled, self.all))\r\n File \"F:\\SublimeText\\Data\\Packages\\LSP\\plugin\\core\\types.py\", line 81, in config_supports_syntax\r\n if re.search(r'|'.join(r'\\b%s\\b' % re.escape(s) for s in language.syntaxes), syntax, re.IGNORECASE):\r\n File \"./python3.3/re.py\", line 161, in search\r\nTypeError: expected string or buffer\r\n```\n", "before_files": [{"content": "import sublime\nimport sublime_plugin\n\ntry:\n from typing import Any, List, Dict, Tuple, Callable, Optional\n assert Any and List and Dict and Tuple and Callable and Optional\nexcept ImportError:\n pass\n\nfrom .core.protocol import Request\nfrom .core.events import global_events\nfrom .core.settings import settings, client_configs\nfrom .core.logging import debug\nfrom .core.completion import parse_completion_response\nfrom .core.registry import session_for_view, client_for_view\nfrom .core.configurations import is_supported_syntax\nfrom .core.documents import get_document_position\nfrom .core.sessions import Session\n\nNO_COMPLETION_SCOPES = 'comment, string'\n\n\nclass CompletionState(object):\n IDLE = 0\n REQUESTING = 1\n APPLYING = 2\n CANCELLING = 3\n\n\nlast_text_command = None\n\n\nclass CompletionHelper(sublime_plugin.EventListener):\n def on_text_command(self, view, command_name, args):\n global last_text_command\n last_text_command = command_name\n\n\nclass CompletionHandler(sublime_plugin.ViewEventListener):\n def __init__(self, view):\n self.view = view\n self.initialized = False\n self.enabled = False\n self.trigger_chars = [] # type: List[str]\n self.state = CompletionState.IDLE\n self.completions 
= [] # type: List[Any]\n self.next_request = None # type: Optional[Tuple[str, List[int]]]\n self.last_prefix = \"\"\n self.last_location = 0\n\n @classmethod\n def is_applicable(cls, settings):\n syntax = settings.get('syntax')\n if syntax is not None:\n return is_supported_syntax(syntax, client_configs.all)\n else:\n return False\n\n def initialize(self):\n self.initialized = True\n session = session_for_view(self.view)\n if session:\n completionProvider = session.get_capability(\n 'completionProvider')\n if completionProvider:\n self.enabled = True\n self.trigger_chars = completionProvider.get(\n 'triggerCharacters') or []\n if self.trigger_chars:\n self.register_trigger_chars(session)\n\n def _view_language(self, config_name: str) -> 'Optional[str]':\n languages = self.view.settings().get('lsp_language')\n return languages.get(config_name) if languages else None\n\n def register_trigger_chars(self, session: Session) -> None:\n completion_triggers = self.view.settings().get('auto_complete_triggers', [])\n view_language = self._view_language(session.config.name)\n if view_language:\n for language in session.config.languages:\n if language.id == view_language:\n for scope in language.scopes:\n # debug(\"registering\", self.trigger_chars, \"for\", scope)\n scope_trigger = next(\n (trigger for trigger in completion_triggers if trigger.get('selector', None) == scope),\n None\n )\n if scope_trigger:\n scope_trigger['characters'] = \"\".join(self.trigger_chars)\n else:\n completion_triggers.append({\n 'characters': \"\".join(self.trigger_chars),\n 'selector': scope\n })\n\n self.view.settings().set('auto_complete_triggers', completion_triggers)\n\n def is_after_trigger_character(self, location):\n if location > 0:\n prev_char = self.view.substr(location - 1)\n return prev_char in self.trigger_chars\n\n def is_same_completion(self, prefix, locations):\n # completion requests from the same location with the same prefix are cached.\n current_start = locations[0] - len(prefix)\n last_start = self.last_location - len(self.last_prefix)\n return prefix.startswith(self.last_prefix) and current_start == last_start\n\n def on_modified(self):\n # hide completion when backspacing past last completion.\n if self.view.sel()[0].begin() < self.last_location:\n self.last_location = 0\n self.view.run_command(\"hide_auto_complete\")\n # cancel current completion if the previous input is an space\n prev_char = self.view.substr(self.view.sel()[0].begin() - 1)\n if self.state == CompletionState.REQUESTING and prev_char.isspace():\n self.state = CompletionState.CANCELLING\n\n def on_query_completions(self, prefix, locations):\n if prefix != \"\" and self.view.match_selector(locations[0], NO_COMPLETION_SCOPES):\n # debug('discarding completion because no completion scope with prefix {}'.format(prefix))\n return (\n [],\n sublime.INHIBIT_WORD_COMPLETIONS | sublime.INHIBIT_EXPLICIT_COMPLETIONS\n )\n\n if not self.initialized:\n self.initialize()\n\n if self.enabled:\n reuse_completion = self.is_same_completion(prefix, locations)\n if self.state == CompletionState.IDLE:\n if not reuse_completion:\n self.last_prefix = prefix\n self.last_location = locations[0]\n self.do_request(prefix, locations)\n self.completions = []\n\n elif self.state in (CompletionState.REQUESTING, CompletionState.CANCELLING):\n self.next_request = (prefix, locations)\n self.state = CompletionState.CANCELLING\n\n elif self.state == CompletionState.APPLYING:\n self.state = CompletionState.IDLE\n\n return (\n self.completions,\n 0 if not 
settings.only_show_lsp_completions\n else sublime.INHIBIT_WORD_COMPLETIONS | sublime.INHIBIT_EXPLICIT_COMPLETIONS\n )\n\n def do_request(self, prefix: str, locations: 'List[int]'):\n self.next_request = None\n view = self.view\n\n # don't store client so we can handle restarts\n client = client_for_view(view)\n if not client:\n return\n\n if settings.complete_all_chars or self.is_after_trigger_character(locations[0]):\n global_events.publish(\"view.on_purge_changes\", self.view)\n document_position = get_document_position(view, locations[0])\n if document_position:\n client.send_request(\n Request.complete(document_position),\n self.handle_response,\n self.handle_error)\n self.state = CompletionState.REQUESTING\n\n def handle_response(self, response: 'Optional[Dict]'):\n\n if self.state == CompletionState.REQUESTING:\n last_start = self.last_location - len(self.last_prefix)\n last_row, last_col = self.view.rowcol(last_start)\n self.completions = parse_completion_response(response, last_col, settings)\n\n # if insert_best_completion was just ran, undo it before presenting new completions.\n prev_char = self.view.substr(self.view.sel()[0].begin() - 1)\n if prev_char.isspace():\n if last_text_command == \"insert_best_completion\":\n self.view.run_command(\"undo\")\n\n self.state = CompletionState.APPLYING\n self.view.run_command(\"hide_auto_complete\")\n self.run_auto_complete()\n elif self.state == CompletionState.CANCELLING:\n self.state = CompletionState.IDLE\n if self.next_request:\n prefix, locations = self.next_request\n self.do_request(prefix, locations)\n else:\n debug('Got unexpected response while in state {}'.format(self.state))\n\n def handle_error(self, error: dict):\n sublime.status_message('Completion error: ' + str(error.get('message')))\n self.state = CompletionState.IDLE\n\n def run_auto_complete(self):\n self.view.run_command(\n \"auto_complete\", {\n 'disable_auto_insert': True,\n 'api_completions_only': settings.only_show_lsp_completions,\n 'next_completion_if_showing': False\n })\n", "path": "plugin/completion.py"}], "after_files": [{"content": "import sublime\nimport sublime_plugin\n\ntry:\n from typing import Any, List, Dict, Tuple, Callable, Optional\n assert Any and List and Dict and Tuple and Callable and Optional\nexcept ImportError:\n pass\n\nfrom .core.protocol import Request\nfrom .core.events import global_events\nfrom .core.settings import settings, client_configs\nfrom .core.logging import debug\nfrom .core.completion import parse_completion_response\nfrom .core.registry import session_for_view, client_for_view\nfrom .core.configurations import is_supported_syntax\nfrom .core.documents import get_document_position\nfrom .core.sessions import Session\n\nNO_COMPLETION_SCOPES = 'comment, string'\n\n\nclass CompletionState(object):\n IDLE = 0\n REQUESTING = 1\n APPLYING = 2\n CANCELLING = 3\n\n\nlast_text_command = None\n\n\nclass CompletionHelper(sublime_plugin.EventListener):\n def on_text_command(self, view, command_name, args):\n global last_text_command\n last_text_command = command_name\n\n\nclass CompletionHandler(sublime_plugin.ViewEventListener):\n def __init__(self, view):\n self.view = view\n self.initialized = False\n self.enabled = False\n self.trigger_chars = [] # type: List[str]\n self.state = CompletionState.IDLE\n self.completions = [] # type: List[Any]\n self.next_request = None # type: Optional[Tuple[str, List[int]]]\n self.last_prefix = \"\"\n self.last_location = 0\n\n @classmethod\n def is_applicable(cls, settings):\n syntax = 
settings.get('syntax')\n return is_supported_syntax(syntax) if syntax else False\n\n def initialize(self):\n self.initialized = True\n session = session_for_view(self.view)\n if session:\n completionProvider = session.get_capability(\n 'completionProvider')\n if completionProvider:\n self.enabled = True\n self.trigger_chars = completionProvider.get(\n 'triggerCharacters') or []\n if self.trigger_chars:\n self.register_trigger_chars(session)\n\n def _view_language(self, config_name: str) -> 'Optional[str]':\n languages = self.view.settings().get('lsp_language')\n return languages.get(config_name) if languages else None\n\n def register_trigger_chars(self, session: Session) -> None:\n completion_triggers = self.view.settings().get('auto_complete_triggers', [])\n view_language = self._view_language(session.config.name)\n if view_language:\n for language in session.config.languages:\n if language.id == view_language:\n for scope in language.scopes:\n # debug(\"registering\", self.trigger_chars, \"for\", scope)\n scope_trigger = next(\n (trigger for trigger in completion_triggers if trigger.get('selector', None) == scope),\n None\n )\n if scope_trigger:\n scope_trigger['characters'] = \"\".join(self.trigger_chars)\n else:\n completion_triggers.append({\n 'characters': \"\".join(self.trigger_chars),\n 'selector': scope\n })\n\n self.view.settings().set('auto_complete_triggers', completion_triggers)\n\n def is_after_trigger_character(self, location):\n if location > 0:\n prev_char = self.view.substr(location - 1)\n return prev_char in self.trigger_chars\n\n def is_same_completion(self, prefix, locations):\n # completion requests from the same location with the same prefix are cached.\n current_start = locations[0] - len(prefix)\n last_start = self.last_location - len(self.last_prefix)\n return prefix.startswith(self.last_prefix) and current_start == last_start\n\n def on_modified(self):\n # hide completion when backspacing past last completion.\n if self.view.sel()[0].begin() < self.last_location:\n self.last_location = 0\n self.view.run_command(\"hide_auto_complete\")\n # cancel current completion if the previous input is an space\n prev_char = self.view.substr(self.view.sel()[0].begin() - 1)\n if self.state == CompletionState.REQUESTING and prev_char.isspace():\n self.state = CompletionState.CANCELLING\n\n def on_query_completions(self, prefix, locations):\n if prefix != \"\" and self.view.match_selector(locations[0], NO_COMPLETION_SCOPES):\n # debug('discarding completion because no completion scope with prefix {}'.format(prefix))\n return (\n [],\n sublime.INHIBIT_WORD_COMPLETIONS | sublime.INHIBIT_EXPLICIT_COMPLETIONS\n )\n\n if not self.initialized:\n self.initialize()\n\n if self.enabled:\n reuse_completion = self.is_same_completion(prefix, locations)\n if self.state == CompletionState.IDLE:\n if not reuse_completion:\n self.last_prefix = prefix\n self.last_location = locations[0]\n self.do_request(prefix, locations)\n self.completions = []\n\n elif self.state in (CompletionState.REQUESTING, CompletionState.CANCELLING):\n self.next_request = (prefix, locations)\n self.state = CompletionState.CANCELLING\n\n elif self.state == CompletionState.APPLYING:\n self.state = CompletionState.IDLE\n\n return (\n self.completions,\n 0 if not settings.only_show_lsp_completions\n else sublime.INHIBIT_WORD_COMPLETIONS | sublime.INHIBIT_EXPLICIT_COMPLETIONS\n )\n\n def do_request(self, prefix: str, locations: 'List[int]'):\n self.next_request = None\n view = self.view\n\n # don't store client so we can 
handle restarts\n client = client_for_view(view)\n if not client:\n return\n\n if settings.complete_all_chars or self.is_after_trigger_character(locations[0]):\n global_events.publish(\"view.on_purge_changes\", self.view)\n document_position = get_document_position(view, locations[0])\n if document_position:\n client.send_request(\n Request.complete(document_position),\n self.handle_response,\n self.handle_error)\n self.state = CompletionState.REQUESTING\n\n def handle_response(self, response: 'Optional[Dict]'):\n\n if self.state == CompletionState.REQUESTING:\n last_start = self.last_location - len(self.last_prefix)\n last_row, last_col = self.view.rowcol(last_start)\n self.completions = parse_completion_response(response, last_col, settings)\n\n # if insert_best_completion was just ran, undo it before presenting new completions.\n prev_char = self.view.substr(self.view.sel()[0].begin() - 1)\n if prev_char.isspace():\n if last_text_command == \"insert_best_completion\":\n self.view.run_command(\"undo\")\n\n self.state = CompletionState.APPLYING\n self.view.run_command(\"hide_auto_complete\")\n self.run_auto_complete()\n elif self.state == CompletionState.CANCELLING:\n self.state = CompletionState.IDLE\n if self.next_request:\n prefix, locations = self.next_request\n self.do_request(prefix, locations)\n else:\n debug('Got unexpected response while in state {}'.format(self.state))\n\n def handle_error(self, error: dict):\n sublime.status_message('Completion error: ' + str(error.get('message')))\n self.state = CompletionState.IDLE\n\n def run_auto_complete(self):\n self.view.run_command(\n \"auto_complete\", {\n 'disable_auto_insert': True,\n 'api_completions_only': settings.only_show_lsp_completions,\n 'next_completion_if_showing': False\n })\n", "path": "plugin/completion.py"}]}
| 2,908 | 118 |
gh_patches_debug_32870
|
rasdani/github-patches
|
git_diff
|
netbox-community__netbox-2941
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Values for Selection Custom Fields via API
<!--
Before opening a new issue, please search through the existing issues to
see if your topic has already been addressed. Note that you may need to
remove the "is:open" filter from the search bar to include closed issues.
Check the appropriate type for your issue below by placing an x between the
brackets. For assistance with installation issues, or for any other issues
other than those listed below, please raise your topic for discussion on
our mailing list:
https://groups.google.com/forum/#!forum/netbox-discuss
Please note that issues which do not fall under any of the below categories
will be closed. Due to an excessive backlog of feature requests, we are
not currently accepting any proposals which extend NetBox's feature scope.
Do not prepend any sort of tag to your issue's title. An administrator will
review your issue and assign labels as appropriate.
--->
### Issue type
[X] Feature request <!-- An enhancement of existing functionality -->
[ ] Bug report <!-- Unexpected or erroneous behavior -->
[ ] Documentation <!-- A modification to the documentation -->
<!--
Please describe the environment in which you are running NetBox. (Be sure
to verify that you are running the latest stable release of NetBox before
submitting a bug report.) If you are submitting a bug report and have made
any changes to the code base, please first validate that your bug can be
recreated while running an official release.
-->
### Environment
* Python version: 3.6
* NetBox version: 2.2.8
<!--
BUG REPORTS must include:
* A list of the steps needed for someone else to reproduce the bug
* A description of the expected and observed behavior
* Any relevant error messages (screenshots may also help)
FEATURE REQUESTS must include:
* A detailed description of the proposed functionality
* A use case for the new feature
* A rough description of any necessary changes to the database schema
* Any relevant third-party libraries which would be needed
-->
### Description
As I [already told on the mailing list](https://groups.google.com/forum/#!topic/netbox-discuss/4j7ODlKWNk0), I would like to fill in the values of _selection custom fields_ via the API.
I expected to provide the actual value of the field, i.e. the text I see in the dropdown of the GUI. But the API expects to get the ID of the option, rather than the value, and there seems to be no way of looking up the available (value,id)-pairs for a given _selection custom fields_. Also I would like to avoid hardcoding any (value,id)-pairs in my application, as it's not consistent across our different Netbox instances (local, staging, production) and it would be very fragile.
I see two solutions:
* When I set the value of such a selection custom field, Netbox accepts the String value and converts it to the respective ID internally.
* There is a new endpoint to fetch the option pairs of selection custom fields, similar to the other __options_ endpoints.
IMO both options could even live in parallel to each other: Whenever an integer is presented as value for a selection custom field, we can assume it's the ID of the value. Whenever it's a string, we can assume it's the text value and first has to be converted to the corresponding integer id.
Allowing to submit text values via the API instead of the actual IDs might imply that we don't allow the same text value on one given selection custom field twice.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `netbox/extras/api/views.py`
Content:
```
1 from django.contrib.contenttypes.models import ContentType
2 from django.db.models import Count
3 from django.http import Http404, HttpResponse
4 from django.shortcuts import get_object_or_404
5 from rest_framework.decorators import action
6 from rest_framework.exceptions import PermissionDenied
7 from rest_framework.response import Response
8 from rest_framework.viewsets import ReadOnlyModelViewSet, ViewSet
9
10 from extras import filters
11 from extras.models import (
12 ConfigContext, CustomField, ExportTemplate, Graph, ImageAttachment, ObjectChange, ReportResult, TopologyMap,
13 Tag
14 )
15 from extras.reports import get_report, get_reports
16 from utilities.api import FieldChoicesViewSet, IsAuthenticatedOrLoginNotRequired, ModelViewSet
17 from . import serializers
18
19
20 #
21 # Field choices
22 #
23
24 class ExtrasFieldChoicesViewSet(FieldChoicesViewSet):
25 fields = (
26 (CustomField, ['type']),
27 (Graph, ['type']),
28 )
29
30
31 #
32 # Custom fields
33 #
34
35 class CustomFieldModelViewSet(ModelViewSet):
36 """
37 Include the applicable set of CustomFields in the ModelViewSet context.
38 """
39
40 def get_serializer_context(self):
41
42 # Gather all custom fields for the model
43 content_type = ContentType.objects.get_for_model(self.queryset.model)
44 custom_fields = content_type.custom_fields.prefetch_related('choices')
45
46 # Cache all relevant CustomFieldChoices. This saves us from having to do a lookup per select field per object.
47 custom_field_choices = {}
48 for field in custom_fields:
49 for cfc in field.choices.all():
50 custom_field_choices[cfc.id] = cfc.value
51 custom_field_choices = custom_field_choices
52
53 context = super().get_serializer_context()
54 context.update({
55 'custom_fields': custom_fields,
56 'custom_field_choices': custom_field_choices,
57 })
58 return context
59
60 def get_queryset(self):
61 # Prefetch custom field values
62 return super().get_queryset().prefetch_related('custom_field_values__field')
63
64
65 #
66 # Graphs
67 #
68
69 class GraphViewSet(ModelViewSet):
70 queryset = Graph.objects.all()
71 serializer_class = serializers.GraphSerializer
72 filterset_class = filters.GraphFilter
73
74
75 #
76 # Export templates
77 #
78
79 class ExportTemplateViewSet(ModelViewSet):
80 queryset = ExportTemplate.objects.all()
81 serializer_class = serializers.ExportTemplateSerializer
82 filterset_class = filters.ExportTemplateFilter
83
84
85 #
86 # Topology maps
87 #
88
89 class TopologyMapViewSet(ModelViewSet):
90 queryset = TopologyMap.objects.select_related('site')
91 serializer_class = serializers.TopologyMapSerializer
92 filterset_class = filters.TopologyMapFilter
93
94 @action(detail=True)
95 def render(self, request, pk):
96
97 tmap = get_object_or_404(TopologyMap, pk=pk)
98 img_format = 'png'
99
100 try:
101 data = tmap.render(img_format=img_format)
102 except Exception as e:
103 return HttpResponse(
104 "There was an error generating the requested graph: %s" % e
105 )
106
107 response = HttpResponse(data, content_type='image/{}'.format(img_format))
108 response['Content-Disposition'] = 'inline; filename="{}.{}"'.format(tmap.slug, img_format)
109
110 return response
111
112
113 #
114 # Tags
115 #
116
117 class TagViewSet(ModelViewSet):
118 queryset = Tag.objects.annotate(tagged_items=Count('extras_taggeditem_items'))
119 serializer_class = serializers.TagSerializer
120 filterset_class = filters.TagFilter
121
122
123 #
124 # Image attachments
125 #
126
127 class ImageAttachmentViewSet(ModelViewSet):
128 queryset = ImageAttachment.objects.all()
129 serializer_class = serializers.ImageAttachmentSerializer
130
131
132 #
133 # Config contexts
134 #
135
136 class ConfigContextViewSet(ModelViewSet):
137 queryset = ConfigContext.objects.prefetch_related(
138 'regions', 'sites', 'roles', 'platforms', 'tenant_groups', 'tenants',
139 )
140 serializer_class = serializers.ConfigContextSerializer
141 filterset_class = filters.ConfigContextFilter
142
143
144 #
145 # Reports
146 #
147
148 class ReportViewSet(ViewSet):
149 permission_classes = [IsAuthenticatedOrLoginNotRequired]
150 _ignore_model_permissions = True
151 exclude_from_schema = True
152 lookup_value_regex = '[^/]+' # Allow dots
153
154 def _retrieve_report(self, pk):
155
156 # Read the PK as "<module>.<report>"
157 if '.' not in pk:
158 raise Http404
159 module_name, report_name = pk.split('.', 1)
160
161 # Raise a 404 on an invalid Report module/name
162 report = get_report(module_name, report_name)
163 if report is None:
164 raise Http404
165
166 return report
167
168 def list(self, request):
169 """
170 Compile all reports and their related results (if any). Result data is deferred in the list view.
171 """
172 report_list = []
173
174 # Iterate through all available Reports.
175 for module_name, reports in get_reports():
176 for report in reports:
177
178 # Attach the relevant ReportResult (if any) to each Report.
179 report.result = ReportResult.objects.filter(report=report.full_name).defer('data').first()
180 report_list.append(report)
181
182 serializer = serializers.ReportSerializer(report_list, many=True, context={
183 'request': request,
184 })
185
186 return Response(serializer.data)
187
188 def retrieve(self, request, pk):
189 """
190 Retrieve a single Report identified as "<module>.<report>".
191 """
192
193 # Retrieve the Report and ReportResult, if any.
194 report = self._retrieve_report(pk)
195 report.result = ReportResult.objects.filter(report=report.full_name).first()
196
197 serializer = serializers.ReportDetailSerializer(report)
198
199 return Response(serializer.data)
200
201 @action(detail=True, methods=['post'])
202 def run(self, request, pk):
203 """
204 Run a Report and create a new ReportResult, overwriting any previous result for the Report.
205 """
206
207 # Check that the user has permission to run reports.
208 if not request.user.has_perm('extras.add_reportresult'):
209 raise PermissionDenied("This user does not have permission to run reports.")
210
211 # Retrieve and run the Report. This will create a new ReportResult.
212 report = self._retrieve_report(pk)
213 report.run()
214
215 serializer = serializers.ReportDetailSerializer(report)
216
217 return Response(serializer.data)
218
219
220 #
221 # Change logging
222 #
223
224 class ObjectChangeViewSet(ReadOnlyModelViewSet):
225 """
226 Retrieve a list of recent changes.
227 """
228 queryset = ObjectChange.objects.select_related('user')
229 serializer_class = serializers.ObjectChangeSerializer
230 filterset_class = filters.ObjectChangeFilter
231
```
Path: `netbox/extras/api/urls.py`
Content:
```
1 from rest_framework import routers
2
3 from . import views
4
5
6 class ExtrasRootView(routers.APIRootView):
7 """
8 Extras API root view
9 """
10 def get_view_name(self):
11 return 'Extras'
12
13
14 router = routers.DefaultRouter()
15 router.APIRootView = ExtrasRootView
16
17 # Field choices
18 router.register(r'_choices', views.ExtrasFieldChoicesViewSet, basename='field-choice')
19
20 # Graphs
21 router.register(r'graphs', views.GraphViewSet)
22
23 # Export templates
24 router.register(r'export-templates', views.ExportTemplateViewSet)
25
26 # Topology maps
27 router.register(r'topology-maps', views.TopologyMapViewSet)
28
29 # Tags
30 router.register(r'tags', views.TagViewSet)
31
32 # Image attachments
33 router.register(r'image-attachments', views.ImageAttachmentViewSet)
34
35 # Config contexts
36 router.register(r'config-contexts', views.ConfigContextViewSet)
37
38 # Reports
39 router.register(r'reports', views.ReportViewSet, basename='report')
40
41 # Change logging
42 router.register(r'object-changes', views.ObjectChangeViewSet)
43
44 app_name = 'extras-api'
45 urlpatterns = router.urls
46
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/netbox/extras/api/urls.py b/netbox/extras/api/urls.py
--- a/netbox/extras/api/urls.py
+++ b/netbox/extras/api/urls.py
@@ -17,6 +17,9 @@
# Field choices
router.register(r'_choices', views.ExtrasFieldChoicesViewSet, basename='field-choice')
+# Custom field choices
+router.register(r'_custom_field_choices', views.CustomFieldChoicesViewSet, base_name='custom-field-choice')
+
# Graphs
router.register(r'graphs', views.GraphViewSet)
diff --git a/netbox/extras/api/views.py b/netbox/extras/api/views.py
--- a/netbox/extras/api/views.py
+++ b/netbox/extras/api/views.py
@@ -1,3 +1,5 @@
+from collections import OrderedDict
+
from django.contrib.contenttypes.models import ContentType
from django.db.models import Count
from django.http import Http404, HttpResponse
@@ -9,8 +11,8 @@
from extras import filters
from extras.models import (
- ConfigContext, CustomField, ExportTemplate, Graph, ImageAttachment, ObjectChange, ReportResult, TopologyMap,
- Tag
+ ConfigContext, CustomField, CustomFieldChoice, ExportTemplate, Graph, ImageAttachment, ObjectChange, ReportResult, TopologyMap,
+ Tag,
)
from extras.reports import get_report, get_reports
from utilities.api import FieldChoicesViewSet, IsAuthenticatedOrLoginNotRequired, ModelViewSet
@@ -28,6 +30,36 @@
)
+#
+# Custom field choices
+#
+
+class CustomFieldChoicesViewSet(ViewSet):
+ """
+ """
+ permission_classes = [IsAuthenticatedOrLoginNotRequired]
+
+ def __init__(self, *args, **kwargs):
+ super(CustomFieldChoicesViewSet, self).__init__(*args, **kwargs)
+
+ self._fields = OrderedDict()
+
+ for cfc in CustomFieldChoice.objects.all():
+ self._fields.setdefault(cfc.field.name, {})
+ self._fields[cfc.field.name][cfc.value] = cfc.pk
+
+ def list(self, request):
+ return Response(self._fields)
+
+ def retrieve(self, request, pk):
+ if pk not in self._fields:
+ raise Http404
+ return Response(self._fields[pk])
+
+ def get_view_name(self):
+ return "Custom Field choices"
+
+
#
# Custom fields
#
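
A brief usage sketch (not part of the patch above): assuming the `_custom_field_choices` route registered in the diff, a client could resolve a selection field's text value to its internal choice ID before writing it back. The instance URL, token, field name (`environment`) and value (`Production`) below are illustrative assumptions, not values taken from the issue.

```python
# Hypothetical client-side lookup against the new endpoint sketched in the diff.
import requests

NETBOX = "https://netbox.example.com"                  # assumed instance URL
HEADERS = {"Authorization": "Token 0123456789abcdef"}  # assumed API token

# The list view returns {field_name: {text_value: choice_id, ...}, ...}
choices = requests.get(
    f"{NETBOX}/api/extras/_custom_field_choices/", headers=HEADERS
).json()

# Resolve the human-readable value to the ID the API currently expects.
env_id = choices["environment"]["Production"]

# Write the custom field using the resolved ID.
requests.patch(
    f"{NETBOX}/api/dcim/devices/123/",
    headers=HEADERS,
    json={"custom_fields": {"environment": env_id}},
)
```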
|
{"golden_diff": "diff --git a/netbox/extras/api/urls.py b/netbox/extras/api/urls.py\n--- a/netbox/extras/api/urls.py\n+++ b/netbox/extras/api/urls.py\n@@ -17,6 +17,9 @@\n # Field choices\n router.register(r'_choices', views.ExtrasFieldChoicesViewSet, basename='field-choice')\n \n+# Custom field choices\n+router.register(r'_custom_field_choices', views.CustomFieldChoicesViewSet, base_name='custom-field-choice')\n+\n # Graphs\n router.register(r'graphs', views.GraphViewSet)\n \ndiff --git a/netbox/extras/api/views.py b/netbox/extras/api/views.py\n--- a/netbox/extras/api/views.py\n+++ b/netbox/extras/api/views.py\n@@ -1,3 +1,5 @@\n+from collections import OrderedDict\n+\n from django.contrib.contenttypes.models import ContentType\n from django.db.models import Count\n from django.http import Http404, HttpResponse\n@@ -9,8 +11,8 @@\n \n from extras import filters\n from extras.models import (\n- ConfigContext, CustomField, ExportTemplate, Graph, ImageAttachment, ObjectChange, ReportResult, TopologyMap,\n- Tag\n+ ConfigContext, CustomField, CustomFieldChoice, ExportTemplate, Graph, ImageAttachment, ObjectChange, ReportResult, TopologyMap,\n+ Tag,\n )\n from extras.reports import get_report, get_reports\n from utilities.api import FieldChoicesViewSet, IsAuthenticatedOrLoginNotRequired, ModelViewSet\n@@ -28,6 +30,36 @@\n )\n \n \n+#\n+# Custom field choices\n+#\n+\n+class CustomFieldChoicesViewSet(ViewSet):\n+ \"\"\"\n+ \"\"\"\n+ permission_classes = [IsAuthenticatedOrLoginNotRequired]\n+\n+ def __init__(self, *args, **kwargs):\n+ super(CustomFieldChoicesViewSet, self).__init__(*args, **kwargs)\n+\n+ self._fields = OrderedDict()\n+\n+ for cfc in CustomFieldChoice.objects.all():\n+ self._fields.setdefault(cfc.field.name, {})\n+ self._fields[cfc.field.name][cfc.value] = cfc.pk\n+\n+ def list(self, request):\n+ return Response(self._fields)\n+\n+ def retrieve(self, request, pk):\n+ if pk not in self._fields:\n+ raise Http404\n+ return Response(self._fields[pk])\n+\n+ def get_view_name(self):\n+ return \"Custom Field choices\"\n+\n+\n #\n # Custom fields\n #\n", "issue": "Values for Selection Custom Fields via API\n<!--\r\n Before opening a new issue, please search through the existing issues to\r\n see if your topic has already been addressed. Note that you may need to\r\n remove the \"is:open\" filter from the search bar to include closed issues.\r\n\r\n Check the appropriate type for your issue below by placing an x between the\r\n brackets. For assistance with installation issues, or for any other issues\r\n other than those listed below, please raise your topic for discussion on\r\n our mailing list:\r\n\r\n https://groups.google.com/forum/#!forum/netbox-discuss\r\n\r\n Please note that issues which do not fall under any of the below categories\r\n will be closed. Due to an excessive backlog of feature requests, we are\r\n not currently accepting any proposals which extend NetBox's feature scope.\r\n\r\n Do not prepend any sort of tag to your issue's title. An administrator will\r\n review your issue and assign labels as appropriate.\r\n--->\r\n### Issue type\r\n[X] Feature request <!-- An enhancement of existing functionality -->\r\n[ ] Bug report <!-- Unexpected or erroneous behavior -->\r\n[ ] Documentation <!-- A modification to the documentation -->\r\n\r\n<!--\r\n Please describe the environment in which you are running NetBox. (Be sure\r\n to verify that you are running the latest stable release of NetBox before\r\n submitting a bug report.) 
If you are submitting a bug report and have made\r\n any changes to the code base, please first validate that your bug can be\r\n recreated while running an official release.\r\n-->\r\n### Environment\r\n* Python version: 3.6\r\n* NetBox version: 2.2.8\r\n\r\n<!--\r\n BUG REPORTS must include:\r\n * A list of the steps needed for someone else to reproduce the bug\r\n * A description of the expected and observed behavior\r\n * Any relevant error messages (screenshots may also help)\r\n\r\n FEATURE REQUESTS must include:\r\n * A detailed description of the proposed functionality\r\n * A use case for the new feature\r\n * A rough description of any necessary changes to the database schema\r\n * Any relevant third-party libraries which would be needed\r\n-->\r\n### Description\r\n\r\nAs I [already told on the mailing list](https://groups.google.com/forum/#!topic/netbox-discuss/4j7ODlKWNk0), I would like to fill in the values of _selection custom fields_ via the API.\r\n\r\nI expected to provide the actual value of the field, i.e. the text I see in the dropdown of the GUI. But the API expects to get the ID of the option, rather than the value, and there seems to be no way of looking up the available (value,id)-pairs for a given _selection custom fields_. Also I would like to avoid hardcoding any (value,id)-pairs in my application, as it's not consistent across our different Netbox instances (local, staging, production) and it would be very fragile.\r\n\r\nI see two solutions:\r\n* When I set the value of such a selection custom field, Netbox accepts the String value and converts it to the respective ID internally.\r\n* There is a new endpoint to fetch the option pairs of selection custom fields, similar to the other __options_ endpoints.\r\n\r\nIMO both options could even live in parallel to each other: Whenever an integer is presented as value for a selection custom field, we can assume it's the ID of the value. Whenever it's a string, we can assume it's the text value and first has to be converted to the corresponding integer id.\r\n\r\nAllowing to submit text values via the API instead of the actual IDs might imply that we don't allow the same text value on one given selection custom field twice.\r\n \n", "before_files": [{"content": "from django.contrib.contenttypes.models import ContentType\nfrom django.db.models import Count\nfrom django.http import Http404, HttpResponse\nfrom django.shortcuts import get_object_or_404\nfrom rest_framework.decorators import action\nfrom rest_framework.exceptions import PermissionDenied\nfrom rest_framework.response import Response\nfrom rest_framework.viewsets import ReadOnlyModelViewSet, ViewSet\n\nfrom extras import filters\nfrom extras.models import (\n ConfigContext, CustomField, ExportTemplate, Graph, ImageAttachment, ObjectChange, ReportResult, TopologyMap,\n Tag\n)\nfrom extras.reports import get_report, get_reports\nfrom utilities.api import FieldChoicesViewSet, IsAuthenticatedOrLoginNotRequired, ModelViewSet\nfrom . 
import serializers\n\n\n#\n# Field choices\n#\n\nclass ExtrasFieldChoicesViewSet(FieldChoicesViewSet):\n fields = (\n (CustomField, ['type']),\n (Graph, ['type']),\n )\n\n\n#\n# Custom fields\n#\n\nclass CustomFieldModelViewSet(ModelViewSet):\n \"\"\"\n Include the applicable set of CustomFields in the ModelViewSet context.\n \"\"\"\n\n def get_serializer_context(self):\n\n # Gather all custom fields for the model\n content_type = ContentType.objects.get_for_model(self.queryset.model)\n custom_fields = content_type.custom_fields.prefetch_related('choices')\n\n # Cache all relevant CustomFieldChoices. This saves us from having to do a lookup per select field per object.\n custom_field_choices = {}\n for field in custom_fields:\n for cfc in field.choices.all():\n custom_field_choices[cfc.id] = cfc.value\n custom_field_choices = custom_field_choices\n\n context = super().get_serializer_context()\n context.update({\n 'custom_fields': custom_fields,\n 'custom_field_choices': custom_field_choices,\n })\n return context\n\n def get_queryset(self):\n # Prefetch custom field values\n return super().get_queryset().prefetch_related('custom_field_values__field')\n\n\n#\n# Graphs\n#\n\nclass GraphViewSet(ModelViewSet):\n queryset = Graph.objects.all()\n serializer_class = serializers.GraphSerializer\n filterset_class = filters.GraphFilter\n\n\n#\n# Export templates\n#\n\nclass ExportTemplateViewSet(ModelViewSet):\n queryset = ExportTemplate.objects.all()\n serializer_class = serializers.ExportTemplateSerializer\n filterset_class = filters.ExportTemplateFilter\n\n\n#\n# Topology maps\n#\n\nclass TopologyMapViewSet(ModelViewSet):\n queryset = TopologyMap.objects.select_related('site')\n serializer_class = serializers.TopologyMapSerializer\n filterset_class = filters.TopologyMapFilter\n\n @action(detail=True)\n def render(self, request, pk):\n\n tmap = get_object_or_404(TopologyMap, pk=pk)\n img_format = 'png'\n\n try:\n data = tmap.render(img_format=img_format)\n except Exception as e:\n return HttpResponse(\n \"There was an error generating the requested graph: %s\" % e\n )\n\n response = HttpResponse(data, content_type='image/{}'.format(img_format))\n response['Content-Disposition'] = 'inline; filename=\"{}.{}\"'.format(tmap.slug, img_format)\n\n return response\n\n\n#\n# Tags\n#\n\nclass TagViewSet(ModelViewSet):\n queryset = Tag.objects.annotate(tagged_items=Count('extras_taggeditem_items'))\n serializer_class = serializers.TagSerializer\n filterset_class = filters.TagFilter\n\n\n#\n# Image attachments\n#\n\nclass ImageAttachmentViewSet(ModelViewSet):\n queryset = ImageAttachment.objects.all()\n serializer_class = serializers.ImageAttachmentSerializer\n\n\n#\n# Config contexts\n#\n\nclass ConfigContextViewSet(ModelViewSet):\n queryset = ConfigContext.objects.prefetch_related(\n 'regions', 'sites', 'roles', 'platforms', 'tenant_groups', 'tenants',\n )\n serializer_class = serializers.ConfigContextSerializer\n filterset_class = filters.ConfigContextFilter\n\n\n#\n# Reports\n#\n\nclass ReportViewSet(ViewSet):\n permission_classes = [IsAuthenticatedOrLoginNotRequired]\n _ignore_model_permissions = True\n exclude_from_schema = True\n lookup_value_regex = '[^/]+' # Allow dots\n\n def _retrieve_report(self, pk):\n\n # Read the PK as \"<module>.<report>\"\n if '.' 
not in pk:\n raise Http404\n module_name, report_name = pk.split('.', 1)\n\n # Raise a 404 on an invalid Report module/name\n report = get_report(module_name, report_name)\n if report is None:\n raise Http404\n\n return report\n\n def list(self, request):\n \"\"\"\n Compile all reports and their related results (if any). Result data is deferred in the list view.\n \"\"\"\n report_list = []\n\n # Iterate through all available Reports.\n for module_name, reports in get_reports():\n for report in reports:\n\n # Attach the relevant ReportResult (if any) to each Report.\n report.result = ReportResult.objects.filter(report=report.full_name).defer('data').first()\n report_list.append(report)\n\n serializer = serializers.ReportSerializer(report_list, many=True, context={\n 'request': request,\n })\n\n return Response(serializer.data)\n\n def retrieve(self, request, pk):\n \"\"\"\n Retrieve a single Report identified as \"<module>.<report>\".\n \"\"\"\n\n # Retrieve the Report and ReportResult, if any.\n report = self._retrieve_report(pk)\n report.result = ReportResult.objects.filter(report=report.full_name).first()\n\n serializer = serializers.ReportDetailSerializer(report)\n\n return Response(serializer.data)\n\n @action(detail=True, methods=['post'])\n def run(self, request, pk):\n \"\"\"\n Run a Report and create a new ReportResult, overwriting any previous result for the Report.\n \"\"\"\n\n # Check that the user has permission to run reports.\n if not request.user.has_perm('extras.add_reportresult'):\n raise PermissionDenied(\"This user does not have permission to run reports.\")\n\n # Retrieve and run the Report. This will create a new ReportResult.\n report = self._retrieve_report(pk)\n report.run()\n\n serializer = serializers.ReportDetailSerializer(report)\n\n return Response(serializer.data)\n\n\n#\n# Change logging\n#\n\nclass ObjectChangeViewSet(ReadOnlyModelViewSet):\n \"\"\"\n Retrieve a list of recent changes.\n \"\"\"\n queryset = ObjectChange.objects.select_related('user')\n serializer_class = serializers.ObjectChangeSerializer\n filterset_class = filters.ObjectChangeFilter\n", "path": "netbox/extras/api/views.py"}, {"content": "from rest_framework import routers\n\nfrom . 
import views\n\n\nclass ExtrasRootView(routers.APIRootView):\n \"\"\"\n Extras API root view\n \"\"\"\n def get_view_name(self):\n return 'Extras'\n\n\nrouter = routers.DefaultRouter()\nrouter.APIRootView = ExtrasRootView\n\n# Field choices\nrouter.register(r'_choices', views.ExtrasFieldChoicesViewSet, basename='field-choice')\n\n# Graphs\nrouter.register(r'graphs', views.GraphViewSet)\n\n# Export templates\nrouter.register(r'export-templates', views.ExportTemplateViewSet)\n\n# Topology maps\nrouter.register(r'topology-maps', views.TopologyMapViewSet)\n\n# Tags\nrouter.register(r'tags', views.TagViewSet)\n\n# Image attachments\nrouter.register(r'image-attachments', views.ImageAttachmentViewSet)\n\n# Config contexts\nrouter.register(r'config-contexts', views.ConfigContextViewSet)\n\n# Reports\nrouter.register(r'reports', views.ReportViewSet, basename='report')\n\n# Change logging\nrouter.register(r'object-changes', views.ObjectChangeViewSet)\n\napp_name = 'extras-api'\nurlpatterns = router.urls\n", "path": "netbox/extras/api/urls.py"}], "after_files": [{"content": "from collections import OrderedDict\n\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.db.models import Count\nfrom django.http import Http404, HttpResponse\nfrom django.shortcuts import get_object_or_404\nfrom rest_framework.decorators import action\nfrom rest_framework.exceptions import PermissionDenied\nfrom rest_framework.response import Response\nfrom rest_framework.viewsets import ReadOnlyModelViewSet, ViewSet\n\nfrom extras import filters\nfrom extras.models import (\n ConfigContext, CustomField, CustomFieldChoice, ExportTemplate, Graph, ImageAttachment, ObjectChange, ReportResult, TopologyMap,\n Tag,\n)\nfrom extras.reports import get_report, get_reports\nfrom utilities.api import FieldChoicesViewSet, IsAuthenticatedOrLoginNotRequired, ModelViewSet\nfrom . import serializers\n\n\n#\n# Field choices\n#\n\nclass ExtrasFieldChoicesViewSet(FieldChoicesViewSet):\n fields = (\n (CustomField, ['type']),\n (Graph, ['type']),\n )\n\n\n#\n# Custom field choices\n#\n\nclass CustomFieldChoicesViewSet(ViewSet):\n \"\"\"\n \"\"\"\n permission_classes = [IsAuthenticatedOrLoginNotRequired]\n\n def __init__(self, *args, **kwargs):\n super(CustomFieldChoicesViewSet, self).__init__(*args, **kwargs)\n\n self._fields = OrderedDict()\n\n for cfc in CustomFieldChoice.objects.all():\n self._fields.setdefault(cfc.field.name, {})\n self._fields[cfc.field.name][cfc.value] = cfc.pk\n\n def list(self, request):\n return Response(self._fields)\n\n def retrieve(self, request, pk):\n if pk not in self._fields:\n raise Http404\n return Response(self._fields[pk])\n\n def get_view_name(self):\n return \"Custom Field choices\"\n\n\n#\n# Custom fields\n#\n\nclass CustomFieldModelViewSet(ModelViewSet):\n \"\"\"\n Include the applicable set of CustomFields in the ModelViewSet context.\n \"\"\"\n\n def get_serializer_context(self):\n\n # Gather all custom fields for the model\n content_type = ContentType.objects.get_for_model(self.queryset.model)\n custom_fields = content_type.custom_fields.prefetch_related('choices')\n\n # Cache all relevant CustomFieldChoices. 
This saves us from having to do a lookup per select field per object.\n custom_field_choices = {}\n for field in custom_fields:\n for cfc in field.choices.all():\n custom_field_choices[cfc.id] = cfc.value\n custom_field_choices = custom_field_choices\n\n context = super().get_serializer_context()\n context.update({\n 'custom_fields': custom_fields,\n 'custom_field_choices': custom_field_choices,\n })\n return context\n\n def get_queryset(self):\n # Prefetch custom field values\n return super().get_queryset().prefetch_related('custom_field_values__field')\n\n\n#\n# Graphs\n#\n\nclass GraphViewSet(ModelViewSet):\n queryset = Graph.objects.all()\n serializer_class = serializers.GraphSerializer\n filterset_class = filters.GraphFilter\n\n\n#\n# Export templates\n#\n\nclass ExportTemplateViewSet(ModelViewSet):\n queryset = ExportTemplate.objects.all()\n serializer_class = serializers.ExportTemplateSerializer\n filterset_class = filters.ExportTemplateFilter\n\n\n#\n# Topology maps\n#\n\nclass TopologyMapViewSet(ModelViewSet):\n queryset = TopologyMap.objects.select_related('site')\n serializer_class = serializers.TopologyMapSerializer\n filterset_class = filters.TopologyMapFilter\n\n @action(detail=True)\n def render(self, request, pk):\n\n tmap = get_object_or_404(TopologyMap, pk=pk)\n img_format = 'png'\n\n try:\n data = tmap.render(img_format=img_format)\n except Exception as e:\n return HttpResponse(\n \"There was an error generating the requested graph: %s\" % e\n )\n\n response = HttpResponse(data, content_type='image/{}'.format(img_format))\n response['Content-Disposition'] = 'inline; filename=\"{}.{}\"'.format(tmap.slug, img_format)\n\n return response\n\n\n#\n# Tags\n#\n\nclass TagViewSet(ModelViewSet):\n queryset = Tag.objects.annotate(tagged_items=Count('extras_taggeditem_items'))\n serializer_class = serializers.TagSerializer\n filterset_class = filters.TagFilter\n\n\n#\n# Image attachments\n#\n\nclass ImageAttachmentViewSet(ModelViewSet):\n queryset = ImageAttachment.objects.all()\n serializer_class = serializers.ImageAttachmentSerializer\n\n\n#\n# Config contexts\n#\n\nclass ConfigContextViewSet(ModelViewSet):\n queryset = ConfigContext.objects.prefetch_related(\n 'regions', 'sites', 'roles', 'platforms', 'tenant_groups', 'tenants',\n )\n serializer_class = serializers.ConfigContextSerializer\n filterset_class = filters.ConfigContextFilter\n\n\n#\n# Reports\n#\n\nclass ReportViewSet(ViewSet):\n permission_classes = [IsAuthenticatedOrLoginNotRequired]\n _ignore_model_permissions = True\n exclude_from_schema = True\n lookup_value_regex = '[^/]+' # Allow dots\n\n def _retrieve_report(self, pk):\n\n # Read the PK as \"<module>.<report>\"\n if '.' not in pk:\n raise Http404\n module_name, report_name = pk.split('.', 1)\n\n # Raise a 404 on an invalid Report module/name\n report = get_report(module_name, report_name)\n if report is None:\n raise Http404\n\n return report\n\n def list(self, request):\n \"\"\"\n Compile all reports and their related results (if any). 
Result data is deferred in the list view.\n \"\"\"\n report_list = []\n\n # Iterate through all available Reports.\n for module_name, reports in get_reports():\n for report in reports:\n\n # Attach the relevant ReportResult (if any) to each Report.\n report.result = ReportResult.objects.filter(report=report.full_name).defer('data').first()\n report_list.append(report)\n\n serializer = serializers.ReportSerializer(report_list, many=True, context={\n 'request': request,\n })\n\n return Response(serializer.data)\n\n def retrieve(self, request, pk):\n \"\"\"\n Retrieve a single Report identified as \"<module>.<report>\".\n \"\"\"\n\n # Retrieve the Report and ReportResult, if any.\n report = self._retrieve_report(pk)\n report.result = ReportResult.objects.filter(report=report.full_name).first()\n\n serializer = serializers.ReportDetailSerializer(report)\n\n return Response(serializer.data)\n\n @action(detail=True, methods=['post'])\n def run(self, request, pk):\n \"\"\"\n Run a Report and create a new ReportResult, overwriting any previous result for the Report.\n \"\"\"\n\n # Check that the user has permission to run reports.\n if not request.user.has_perm('extras.add_reportresult'):\n raise PermissionDenied(\"This user does not have permission to run reports.\")\n\n # Retrieve and run the Report. This will create a new ReportResult.\n report = self._retrieve_report(pk)\n report.run()\n\n serializer = serializers.ReportDetailSerializer(report)\n\n return Response(serializer.data)\n\n\n#\n# Change logging\n#\n\nclass ObjectChangeViewSet(ReadOnlyModelViewSet):\n \"\"\"\n Retrieve a list of recent changes.\n \"\"\"\n queryset = ObjectChange.objects.select_related('user')\n serializer_class = serializers.ObjectChangeSerializer\n filterset_class = filters.ObjectChangeFilter\n", "path": "netbox/extras/api/views.py"}, {"content": "from rest_framework import routers\n\nfrom . import views\n\n\nclass ExtrasRootView(routers.APIRootView):\n \"\"\"\n Extras API root view\n \"\"\"\n def get_view_name(self):\n return 'Extras'\n\n\nrouter = routers.DefaultRouter()\nrouter.APIRootView = ExtrasRootView\n\n# Field choices\nrouter.register(r'_choices', views.ExtrasFieldChoicesViewSet, basename='field-choice')\n\n# Custom field choices\nrouter.register(r'_custom_field_choices', views.CustomFieldChoicesViewSet, base_name='custom-field-choice')\n\n# Graphs\nrouter.register(r'graphs', views.GraphViewSet)\n\n# Export templates\nrouter.register(r'export-templates', views.ExportTemplateViewSet)\n\n# Topology maps\nrouter.register(r'topology-maps', views.TopologyMapViewSet)\n\n# Tags\nrouter.register(r'tags', views.TagViewSet)\n\n# Image attachments\nrouter.register(r'image-attachments', views.ImageAttachmentViewSet)\n\n# Config contexts\nrouter.register(r'config-contexts', views.ConfigContextViewSet)\n\n# Reports\nrouter.register(r'reports', views.ReportViewSet, basename='report')\n\n# Change logging\nrouter.register(r'object-changes', views.ObjectChangeViewSet)\n\napp_name = 'extras-api'\nurlpatterns = router.urls\n", "path": "netbox/extras/api/urls.py"}]}
| 3,338 | 543 |
gh_patches_debug_6745
|
rasdani/github-patches
|
git_diff
|
ipython__ipython-9228
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Output for semicolon-terminated lines with comments
cf https://github.com/JuliaLang/IJulia.jl/issues/109
The output of a semicolon-terminated line with a comment is not suppressed in IPython.
```
b = 1
b; # foo
```
gives `Out[13]: 1`
```
b = 1
b;
```
does not.
Haven't found such an issue on our tracker.
@takluyver you are the best with the input machinery and coroutines; thoughts?
(Because of the comment by @stevengj in the above-cited issue, marking as prio-high, but won't mind if the priority is decreased later or the issue is not fixed)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `IPython/core/displayhook.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """Displayhook for IPython.
3
4 This defines a callable class that IPython uses for `sys.displayhook`.
5 """
6
7 # Copyright (c) IPython Development Team.
8 # Distributed under the terms of the Modified BSD License.
9
10 from __future__ import print_function
11
12 import sys
13 import io as _io
14 import tokenize
15
16 from IPython.core.formatters import _safe_get_formatter_method
17 from traitlets.config.configurable import Configurable
18 from IPython.utils import io
19 from IPython.utils.py3compat import builtin_mod, cast_unicode_py2
20 from traitlets import Instance, Float
21 from IPython.utils.warn import warn
22
23 # TODO: Move the various attributes (cache_size, [others now moved]). Some
24 # of these are also attributes of InteractiveShell. They should be on ONE object
25 # only and the other objects should ask that one object for their values.
26
27 class DisplayHook(Configurable):
28 """The custom IPython displayhook to replace sys.displayhook.
29
30 This class does many things, but the basic idea is that it is a callable
31 that gets called anytime user code returns a value.
32 """
33
34 shell = Instance('IPython.core.interactiveshell.InteractiveShellABC',
35 allow_none=True)
36 exec_result = Instance('IPython.core.interactiveshell.ExecutionResult',
37 allow_none=True)
38 cull_fraction = Float(0.2)
39
40 def __init__(self, shell=None, cache_size=1000, **kwargs):
41 super(DisplayHook, self).__init__(shell=shell, **kwargs)
42 cache_size_min = 3
43 if cache_size <= 0:
44 self.do_full_cache = 0
45 cache_size = 0
46 elif cache_size < cache_size_min:
47 self.do_full_cache = 0
48 cache_size = 0
49 warn('caching was disabled (min value for cache size is %s).' %
50 cache_size_min,level=3)
51 else:
52 self.do_full_cache = 1
53
54 self.cache_size = cache_size
55
56 # we need a reference to the user-level namespace
57 self.shell = shell
58
59 self._,self.__,self.___ = '','',''
60
61 # these are deliberately global:
62 to_user_ns = {'_':self._,'__':self.__,'___':self.___}
63 self.shell.user_ns.update(to_user_ns)
64
65 @property
66 def prompt_count(self):
67 return self.shell.execution_count
68
69 #-------------------------------------------------------------------------
70 # Methods used in __call__. Override these methods to modify the behavior
71 # of the displayhook.
72 #-------------------------------------------------------------------------
73
74 def check_for_underscore(self):
75 """Check if the user has set the '_' variable by hand."""
76 # If something injected a '_' variable in __builtin__, delete
77 # ipython's automatic one so we don't clobber that. gettext() in
78 # particular uses _, so we need to stay away from it.
79 if '_' in builtin_mod.__dict__:
80 try:
81 del self.shell.user_ns['_']
82 except KeyError:
83 pass
84
85 def quiet(self):
86 """Should we silence the display hook because of ';'?"""
87 # do not print output if input ends in ';'
88
89 try:
90 cell = cast_unicode_py2(self.shell.history_manager.input_hist_parsed[-1])
91 except IndexError:
92 # some uses of ipshellembed may fail here
93 return False
94
95 sio = _io.StringIO(cell)
96 tokens = list(tokenize.generate_tokens(sio.readline))
97
98 for token in reversed(tokens):
99 if token[0] in (tokenize.ENDMARKER, tokenize.COMMENT):
100 continue
101 if (token[0] == tokenize.OP) and (token[1] == ';'):
102 return True
103 else:
104 return False
105
106 def start_displayhook(self):
107 """Start the displayhook, initializing resources."""
108 pass
109
110 def write_output_prompt(self):
111 """Write the output prompt.
112
113 The default implementation simply writes the prompt to
114 ``io.stdout``.
115 """
116 # Use write, not print which adds an extra space.
117 io.stdout.write(self.shell.separate_out)
118 outprompt = self.shell.prompt_manager.render('out')
119 if self.do_full_cache:
120 io.stdout.write(outprompt)
121
122 def compute_format_data(self, result):
123 """Compute format data of the object to be displayed.
124
125 The format data is a generalization of the :func:`repr` of an object.
126 In the default implementation the format data is a :class:`dict` of
127 key value pair where the keys are valid MIME types and the values
128 are JSON'able data structure containing the raw data for that MIME
129 type. It is up to frontends to determine pick a MIME to to use and
130 display that data in an appropriate manner.
131
132 This method only computes the format data for the object and should
133 NOT actually print or write that to a stream.
134
135 Parameters
136 ----------
137 result : object
138 The Python object passed to the display hook, whose format will be
139 computed.
140
141 Returns
142 -------
143 (format_dict, md_dict) : dict
144 format_dict is a :class:`dict` whose keys are valid MIME types and values are
145 JSON'able raw data for that MIME type. It is recommended that
146 all return values of this should always include the "text/plain"
147 MIME type representation of the object.
148 md_dict is a :class:`dict` with the same MIME type keys
149 of metadata associated with each output.
150
151 """
152 return self.shell.display_formatter.format(result)
153
154 def write_format_data(self, format_dict, md_dict=None):
155 """Write the format data dict to the frontend.
156
157 This default version of this method simply writes the plain text
158 representation of the object to ``io.stdout``. Subclasses should
159 override this method to send the entire `format_dict` to the
160 frontends.
161
162 Parameters
163 ----------
164 format_dict : dict
165 The format dict for the object passed to `sys.displayhook`.
166 md_dict : dict (optional)
167 The metadata dict to be associated with the display data.
168 """
169 if 'text/plain' not in format_dict:
170 # nothing to do
171 return
172 # We want to print because we want to always make sure we have a
173 # newline, even if all the prompt separators are ''. This is the
174 # standard IPython behavior.
175 result_repr = format_dict['text/plain']
176 if '\n' in result_repr:
177 # So that multi-line strings line up with the left column of
178 # the screen, instead of having the output prompt mess up
179 # their first line.
180 # We use the prompt template instead of the expanded prompt
181 # because the expansion may add ANSI escapes that will interfere
182 # with our ability to determine whether or not we should add
183 # a newline.
184 prompt_template = self.shell.prompt_manager.out_template
185 if prompt_template and not prompt_template.endswith('\n'):
186 # But avoid extraneous empty lines.
187 result_repr = '\n' + result_repr
188
189 print(result_repr, file=io.stdout)
190
191 def update_user_ns(self, result):
192 """Update user_ns with various things like _, __, _1, etc."""
193
194 # Avoid recursive reference when displaying _oh/Out
195 if result is not self.shell.user_ns['_oh']:
196 if len(self.shell.user_ns['_oh']) >= self.cache_size and self.do_full_cache:
197 self.cull_cache()
198 # Don't overwrite '_' and friends if '_' is in __builtin__ (otherwise
199 # we cause buggy behavior for things like gettext).
200
201 if '_' not in builtin_mod.__dict__:
202 self.___ = self.__
203 self.__ = self._
204 self._ = result
205 self.shell.push({'_':self._,
206 '__':self.__,
207 '___':self.___}, interactive=False)
208
209 # hackish access to top-level namespace to create _1,_2... dynamically
210 to_main = {}
211 if self.do_full_cache:
212 new_result = '_'+repr(self.prompt_count)
213 to_main[new_result] = result
214 self.shell.push(to_main, interactive=False)
215 self.shell.user_ns['_oh'][self.prompt_count] = result
216
217 def fill_exec_result(self, result):
218 if self.exec_result is not None:
219 self.exec_result.result = result
220
221 def log_output(self, format_dict):
222 """Log the output."""
223 if 'text/plain' not in format_dict:
224 # nothing to do
225 return
226 if self.shell.logger.log_output:
227 self.shell.logger.log_write(format_dict['text/plain'], 'output')
228 self.shell.history_manager.output_hist_reprs[self.prompt_count] = \
229 format_dict['text/plain']
230
231 def finish_displayhook(self):
232 """Finish up all displayhook activities."""
233 io.stdout.write(self.shell.separate_out2)
234 io.stdout.flush()
235
236 def __call__(self, result=None):
237 """Printing with history cache management.
238
239 This is invoked everytime the interpreter needs to print, and is
240 activated by setting the variable sys.displayhook to it.
241 """
242 self.check_for_underscore()
243 if result is not None and not self.quiet():
244 self.start_displayhook()
245 self.write_output_prompt()
246 format_dict, md_dict = self.compute_format_data(result)
247 self.update_user_ns(result)
248 self.fill_exec_result(result)
249 if format_dict:
250 self.write_format_data(format_dict, md_dict)
251 self.log_output(format_dict)
252 self.finish_displayhook()
253
254 def cull_cache(self):
255 """Output cache is full, cull the oldest entries"""
256 oh = self.shell.user_ns.get('_oh', {})
257 sz = len(oh)
258 cull_count = max(int(sz * self.cull_fraction), 2)
259 warn('Output cache limit (currently {sz} entries) hit.\n'
260 'Flushing oldest {cull_count} entries.'.format(sz=sz, cull_count=cull_count))
261
262 for i, n in enumerate(sorted(oh)):
263 if i >= cull_count:
264 break
265 self.shell.user_ns.pop('_%i' % n, None)
266 oh.pop(n, None)
267
268
269 def flush(self):
270 if not self.do_full_cache:
271 raise ValueError("You shouldn't have reached the cache flush "
272 "if full caching is not enabled!")
273 # delete auto-generated vars from global namespace
274
275 for n in range(1,self.prompt_count + 1):
276 key = '_'+repr(n)
277 try:
278 del self.shell.user_ns[key]
279 except: pass
280 # In some embedded circumstances, the user_ns doesn't have the
281 # '_oh' key set up.
282 oh = self.shell.user_ns.get('_oh', None)
283 if oh is not None:
284 oh.clear()
285
286 # Release our own references to objects:
287 self._, self.__, self.___ = '', '', ''
288
289 if '_' not in builtin_mod.__dict__:
290 self.shell.user_ns.update({'_':None,'__':None, '___':None})
291 import gc
292 # TODO: Is this really needed?
293 # IronPython blocks here forever
294 if sys.platform != "cli":
295 gc.collect()
296
297
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/IPython/core/displayhook.py b/IPython/core/displayhook.py
--- a/IPython/core/displayhook.py
+++ b/IPython/core/displayhook.py
@@ -96,7 +96,7 @@
tokens = list(tokenize.generate_tokens(sio.readline))
for token in reversed(tokens):
- if token[0] in (tokenize.ENDMARKER, tokenize.COMMENT):
+ if token[0] in (tokenize.ENDMARKER, tokenize.NL, tokenize.NEWLINE, tokenize.COMMENT):
continue
if (token[0] == tokenize.OP) and (token[1] == ';'):
return True
|
{"golden_diff": "diff --git a/IPython/core/displayhook.py b/IPython/core/displayhook.py\n--- a/IPython/core/displayhook.py\n+++ b/IPython/core/displayhook.py\n@@ -96,7 +96,7 @@\n tokens = list(tokenize.generate_tokens(sio.readline))\n \n for token in reversed(tokens):\n- if token[0] in (tokenize.ENDMARKER, tokenize.COMMENT):\n+ if token[0] in (tokenize.ENDMARKER, tokenize.NL, tokenize.NEWLINE, tokenize.COMMENT):\n continue\n if (token[0] == tokenize.OP) and (token[1] == ';'):\n return True\n", "issue": "Output for semicolon-terminated lines with comments\ncf https://github.com/JuliaLang/IJulia.jl/issues/109\n\nThe output of a semicolon-terminated line with comment is not suppressed in IPython.\n\n```\nb = 1\nb; # foo\n```\n\ngive `Out[13]: 1`\n\n```\nb = 1\nb;\n```\n\ndoes not.\n\nHavent found such and issue on our tracker.\n\n @takluyver you are the best with the input machinery and coroutines; Thoughts ?\n\n(Because of the comment @stevengj in sus-cited issue, marking as prio-high, but won't mind if priority decreased later, or not fixed)\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Displayhook for IPython.\n\nThis defines a callable class that IPython uses for `sys.displayhook`.\n\"\"\"\n\n# Copyright (c) IPython Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nfrom __future__ import print_function\n\nimport sys\nimport io as _io\nimport tokenize\n\nfrom IPython.core.formatters import _safe_get_formatter_method\nfrom traitlets.config.configurable import Configurable\nfrom IPython.utils import io\nfrom IPython.utils.py3compat import builtin_mod, cast_unicode_py2\nfrom traitlets import Instance, Float\nfrom IPython.utils.warn import warn\n\n# TODO: Move the various attributes (cache_size, [others now moved]). Some\n# of these are also attributes of InteractiveShell. They should be on ONE object\n# only and the other objects should ask that one object for their values.\n\nclass DisplayHook(Configurable):\n \"\"\"The custom IPython displayhook to replace sys.displayhook.\n\n This class does many things, but the basic idea is that it is a callable\n that gets called anytime user code returns a value.\n \"\"\"\n\n shell = Instance('IPython.core.interactiveshell.InteractiveShellABC',\n allow_none=True)\n exec_result = Instance('IPython.core.interactiveshell.ExecutionResult',\n allow_none=True)\n cull_fraction = Float(0.2)\n\n def __init__(self, shell=None, cache_size=1000, **kwargs):\n super(DisplayHook, self).__init__(shell=shell, **kwargs)\n cache_size_min = 3\n if cache_size <= 0:\n self.do_full_cache = 0\n cache_size = 0\n elif cache_size < cache_size_min:\n self.do_full_cache = 0\n cache_size = 0\n warn('caching was disabled (min value for cache size is %s).' %\n cache_size_min,level=3)\n else:\n self.do_full_cache = 1\n\n self.cache_size = cache_size\n\n # we need a reference to the user-level namespace\n self.shell = shell\n \n self._,self.__,self.___ = '','',''\n\n # these are deliberately global:\n to_user_ns = {'_':self._,'__':self.__,'___':self.___}\n self.shell.user_ns.update(to_user_ns)\n\n @property\n def prompt_count(self):\n return self.shell.execution_count\n\n #-------------------------------------------------------------------------\n # Methods used in __call__. 
Override these methods to modify the behavior\n # of the displayhook.\n #-------------------------------------------------------------------------\n\n def check_for_underscore(self):\n \"\"\"Check if the user has set the '_' variable by hand.\"\"\"\n # If something injected a '_' variable in __builtin__, delete\n # ipython's automatic one so we don't clobber that. gettext() in\n # particular uses _, so we need to stay away from it.\n if '_' in builtin_mod.__dict__:\n try:\n del self.shell.user_ns['_']\n except KeyError:\n pass\n\n def quiet(self):\n \"\"\"Should we silence the display hook because of ';'?\"\"\"\n # do not print output if input ends in ';'\n \n try:\n cell = cast_unicode_py2(self.shell.history_manager.input_hist_parsed[-1])\n except IndexError:\n # some uses of ipshellembed may fail here\n return False\n \n sio = _io.StringIO(cell)\n tokens = list(tokenize.generate_tokens(sio.readline))\n\n for token in reversed(tokens):\n if token[0] in (tokenize.ENDMARKER, tokenize.COMMENT):\n continue\n if (token[0] == tokenize.OP) and (token[1] == ';'):\n return True\n else:\n return False\n\n def start_displayhook(self):\n \"\"\"Start the displayhook, initializing resources.\"\"\"\n pass\n\n def write_output_prompt(self):\n \"\"\"Write the output prompt.\n\n The default implementation simply writes the prompt to\n ``io.stdout``.\n \"\"\"\n # Use write, not print which adds an extra space.\n io.stdout.write(self.shell.separate_out)\n outprompt = self.shell.prompt_manager.render('out')\n if self.do_full_cache:\n io.stdout.write(outprompt)\n\n def compute_format_data(self, result):\n \"\"\"Compute format data of the object to be displayed.\n\n The format data is a generalization of the :func:`repr` of an object.\n In the default implementation the format data is a :class:`dict` of\n key value pair where the keys are valid MIME types and the values\n are JSON'able data structure containing the raw data for that MIME\n type. It is up to frontends to determine pick a MIME to to use and\n display that data in an appropriate manner.\n\n This method only computes the format data for the object and should\n NOT actually print or write that to a stream.\n\n Parameters\n ----------\n result : object\n The Python object passed to the display hook, whose format will be\n computed.\n\n Returns\n -------\n (format_dict, md_dict) : dict\n format_dict is a :class:`dict` whose keys are valid MIME types and values are\n JSON'able raw data for that MIME type. It is recommended that\n all return values of this should always include the \"text/plain\"\n MIME type representation of the object.\n md_dict is a :class:`dict` with the same MIME type keys\n of metadata associated with each output.\n \n \"\"\"\n return self.shell.display_formatter.format(result)\n\n def write_format_data(self, format_dict, md_dict=None):\n \"\"\"Write the format data dict to the frontend.\n\n This default version of this method simply writes the plain text\n representation of the object to ``io.stdout``. Subclasses should\n override this method to send the entire `format_dict` to the\n frontends.\n\n Parameters\n ----------\n format_dict : dict\n The format dict for the object passed to `sys.displayhook`.\n md_dict : dict (optional)\n The metadata dict to be associated with the display data.\n \"\"\"\n if 'text/plain' not in format_dict:\n # nothing to do\n return\n # We want to print because we want to always make sure we have a\n # newline, even if all the prompt separators are ''. 
This is the\n # standard IPython behavior.\n result_repr = format_dict['text/plain']\n if '\\n' in result_repr:\n # So that multi-line strings line up with the left column of\n # the screen, instead of having the output prompt mess up\n # their first line.\n # We use the prompt template instead of the expanded prompt\n # because the expansion may add ANSI escapes that will interfere\n # with our ability to determine whether or not we should add\n # a newline.\n prompt_template = self.shell.prompt_manager.out_template\n if prompt_template and not prompt_template.endswith('\\n'):\n # But avoid extraneous empty lines.\n result_repr = '\\n' + result_repr\n\n print(result_repr, file=io.stdout)\n\n def update_user_ns(self, result):\n \"\"\"Update user_ns with various things like _, __, _1, etc.\"\"\"\n\n # Avoid recursive reference when displaying _oh/Out\n if result is not self.shell.user_ns['_oh']:\n if len(self.shell.user_ns['_oh']) >= self.cache_size and self.do_full_cache:\n self.cull_cache()\n # Don't overwrite '_' and friends if '_' is in __builtin__ (otherwise\n # we cause buggy behavior for things like gettext).\n\n if '_' not in builtin_mod.__dict__:\n self.___ = self.__\n self.__ = self._\n self._ = result\n self.shell.push({'_':self._,\n '__':self.__,\n '___':self.___}, interactive=False)\n\n # hackish access to top-level namespace to create _1,_2... dynamically\n to_main = {}\n if self.do_full_cache:\n new_result = '_'+repr(self.prompt_count)\n to_main[new_result] = result\n self.shell.push(to_main, interactive=False)\n self.shell.user_ns['_oh'][self.prompt_count] = result\n\n def fill_exec_result(self, result):\n if self.exec_result is not None:\n self.exec_result.result = result\n\n def log_output(self, format_dict):\n \"\"\"Log the output.\"\"\"\n if 'text/plain' not in format_dict:\n # nothing to do\n return\n if self.shell.logger.log_output:\n self.shell.logger.log_write(format_dict['text/plain'], 'output')\n self.shell.history_manager.output_hist_reprs[self.prompt_count] = \\\n format_dict['text/plain']\n\n def finish_displayhook(self):\n \"\"\"Finish up all displayhook activities.\"\"\"\n io.stdout.write(self.shell.separate_out2)\n io.stdout.flush()\n\n def __call__(self, result=None):\n \"\"\"Printing with history cache management.\n\n This is invoked everytime the interpreter needs to print, and is\n activated by setting the variable sys.displayhook to it.\n \"\"\"\n self.check_for_underscore()\n if result is not None and not self.quiet():\n self.start_displayhook()\n self.write_output_prompt()\n format_dict, md_dict = self.compute_format_data(result)\n self.update_user_ns(result)\n self.fill_exec_result(result)\n if format_dict:\n self.write_format_data(format_dict, md_dict)\n self.log_output(format_dict)\n self.finish_displayhook()\n\n def cull_cache(self):\n \"\"\"Output cache is full, cull the oldest entries\"\"\"\n oh = self.shell.user_ns.get('_oh', {})\n sz = len(oh)\n cull_count = max(int(sz * self.cull_fraction), 2)\n warn('Output cache limit (currently {sz} entries) hit.\\n'\n 'Flushing oldest {cull_count} entries.'.format(sz=sz, cull_count=cull_count))\n \n for i, n in enumerate(sorted(oh)):\n if i >= cull_count:\n break\n self.shell.user_ns.pop('_%i' % n, None)\n oh.pop(n, None)\n \n\n def flush(self):\n if not self.do_full_cache:\n raise ValueError(\"You shouldn't have reached the cache flush \"\n \"if full caching is not enabled!\")\n # delete auto-generated vars from global namespace\n\n for n in range(1,self.prompt_count + 1):\n key = '_'+repr(n)\n try:\n del 
self.shell.user_ns[key]\n except: pass\n # In some embedded circumstances, the user_ns doesn't have the\n # '_oh' key set up.\n oh = self.shell.user_ns.get('_oh', None)\n if oh is not None:\n oh.clear()\n\n # Release our own references to objects:\n self._, self.__, self.___ = '', '', ''\n\n if '_' not in builtin_mod.__dict__:\n self.shell.user_ns.update({'_':None,'__':None, '___':None})\n import gc\n # TODO: Is this really needed?\n # IronPython blocks here forever\n if sys.platform != \"cli\":\n gc.collect()\n\n", "path": "IPython/core/displayhook.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Displayhook for IPython.\n\nThis defines a callable class that IPython uses for `sys.displayhook`.\n\"\"\"\n\n# Copyright (c) IPython Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nfrom __future__ import print_function\n\nimport sys\nimport io as _io\nimport tokenize\n\nfrom IPython.core.formatters import _safe_get_formatter_method\nfrom traitlets.config.configurable import Configurable\nfrom IPython.utils import io\nfrom IPython.utils.py3compat import builtin_mod, cast_unicode_py2\nfrom traitlets import Instance, Float\nfrom IPython.utils.warn import warn\n\n# TODO: Move the various attributes (cache_size, [others now moved]). Some\n# of these are also attributes of InteractiveShell. They should be on ONE object\n# only and the other objects should ask that one object for their values.\n\nclass DisplayHook(Configurable):\n \"\"\"The custom IPython displayhook to replace sys.displayhook.\n\n This class does many things, but the basic idea is that it is a callable\n that gets called anytime user code returns a value.\n \"\"\"\n\n shell = Instance('IPython.core.interactiveshell.InteractiveShellABC',\n allow_none=True)\n exec_result = Instance('IPython.core.interactiveshell.ExecutionResult',\n allow_none=True)\n cull_fraction = Float(0.2)\n\n def __init__(self, shell=None, cache_size=1000, **kwargs):\n super(DisplayHook, self).__init__(shell=shell, **kwargs)\n cache_size_min = 3\n if cache_size <= 0:\n self.do_full_cache = 0\n cache_size = 0\n elif cache_size < cache_size_min:\n self.do_full_cache = 0\n cache_size = 0\n warn('caching was disabled (min value for cache size is %s).' %\n cache_size_min,level=3)\n else:\n self.do_full_cache = 1\n\n self.cache_size = cache_size\n\n # we need a reference to the user-level namespace\n self.shell = shell\n \n self._,self.__,self.___ = '','',''\n\n # these are deliberately global:\n to_user_ns = {'_':self._,'__':self.__,'___':self.___}\n self.shell.user_ns.update(to_user_ns)\n\n @property\n def prompt_count(self):\n return self.shell.execution_count\n\n #-------------------------------------------------------------------------\n # Methods used in __call__. Override these methods to modify the behavior\n # of the displayhook.\n #-------------------------------------------------------------------------\n\n def check_for_underscore(self):\n \"\"\"Check if the user has set the '_' variable by hand.\"\"\"\n # If something injected a '_' variable in __builtin__, delete\n # ipython's automatic one so we don't clobber that. 
gettext() in\n # particular uses _, so we need to stay away from it.\n if '_' in builtin_mod.__dict__:\n try:\n del self.shell.user_ns['_']\n except KeyError:\n pass\n\n def quiet(self):\n \"\"\"Should we silence the display hook because of ';'?\"\"\"\n # do not print output if input ends in ';'\n \n try:\n cell = cast_unicode_py2(self.shell.history_manager.input_hist_parsed[-1])\n except IndexError:\n # some uses of ipshellembed may fail here\n return False\n \n sio = _io.StringIO(cell)\n tokens = list(tokenize.generate_tokens(sio.readline))\n\n for token in reversed(tokens):\n if token[0] in (tokenize.ENDMARKER, tokenize.NL, tokenize.NEWLINE, tokenize.COMMENT):\n continue\n if (token[0] == tokenize.OP) and (token[1] == ';'):\n return True\n else:\n return False\n\n def start_displayhook(self):\n \"\"\"Start the displayhook, initializing resources.\"\"\"\n pass\n\n def write_output_prompt(self):\n \"\"\"Write the output prompt.\n\n The default implementation simply writes the prompt to\n ``io.stdout``.\n \"\"\"\n # Use write, not print which adds an extra space.\n io.stdout.write(self.shell.separate_out)\n outprompt = self.shell.prompt_manager.render('out')\n if self.do_full_cache:\n io.stdout.write(outprompt)\n\n def compute_format_data(self, result):\n \"\"\"Compute format data of the object to be displayed.\n\n The format data is a generalization of the :func:`repr` of an object.\n In the default implementation the format data is a :class:`dict` of\n key value pair where the keys are valid MIME types and the values\n are JSON'able data structure containing the raw data for that MIME\n type. It is up to frontends to determine pick a MIME to to use and\n display that data in an appropriate manner.\n\n This method only computes the format data for the object and should\n NOT actually print or write that to a stream.\n\n Parameters\n ----------\n result : object\n The Python object passed to the display hook, whose format will be\n computed.\n\n Returns\n -------\n (format_dict, md_dict) : dict\n format_dict is a :class:`dict` whose keys are valid MIME types and values are\n JSON'able raw data for that MIME type. It is recommended that\n all return values of this should always include the \"text/plain\"\n MIME type representation of the object.\n md_dict is a :class:`dict` with the same MIME type keys\n of metadata associated with each output.\n \n \"\"\"\n return self.shell.display_formatter.format(result)\n\n def write_format_data(self, format_dict, md_dict=None):\n \"\"\"Write the format data dict to the frontend.\n\n This default version of this method simply writes the plain text\n representation of the object to ``io.stdout``. Subclasses should\n override this method to send the entire `format_dict` to the\n frontends.\n\n Parameters\n ----------\n format_dict : dict\n The format dict for the object passed to `sys.displayhook`.\n md_dict : dict (optional)\n The metadata dict to be associated with the display data.\n \"\"\"\n if 'text/plain' not in format_dict:\n # nothing to do\n return\n # We want to print because we want to always make sure we have a\n # newline, even if all the prompt separators are ''. 
This is the\n # standard IPython behavior.\n result_repr = format_dict['text/plain']\n if '\\n' in result_repr:\n # So that multi-line strings line up with the left column of\n # the screen, instead of having the output prompt mess up\n # their first line.\n # We use the prompt template instead of the expanded prompt\n # because the expansion may add ANSI escapes that will interfere\n # with our ability to determine whether or not we should add\n # a newline.\n prompt_template = self.shell.prompt_manager.out_template\n if prompt_template and not prompt_template.endswith('\\n'):\n # But avoid extraneous empty lines.\n result_repr = '\\n' + result_repr\n\n print(result_repr, file=io.stdout)\n\n def update_user_ns(self, result):\n \"\"\"Update user_ns with various things like _, __, _1, etc.\"\"\"\n\n # Avoid recursive reference when displaying _oh/Out\n if result is not self.shell.user_ns['_oh']:\n if len(self.shell.user_ns['_oh']) >= self.cache_size and self.do_full_cache:\n self.cull_cache()\n # Don't overwrite '_' and friends if '_' is in __builtin__ (otherwise\n # we cause buggy behavior for things like gettext).\n\n if '_' not in builtin_mod.__dict__:\n self.___ = self.__\n self.__ = self._\n self._ = result\n self.shell.push({'_':self._,\n '__':self.__,\n '___':self.___}, interactive=False)\n\n # hackish access to top-level namespace to create _1,_2... dynamically\n to_main = {}\n if self.do_full_cache:\n new_result = '_'+repr(self.prompt_count)\n to_main[new_result] = result\n self.shell.push(to_main, interactive=False)\n self.shell.user_ns['_oh'][self.prompt_count] = result\n\n def fill_exec_result(self, result):\n if self.exec_result is not None:\n self.exec_result.result = result\n\n def log_output(self, format_dict):\n \"\"\"Log the output.\"\"\"\n if 'text/plain' not in format_dict:\n # nothing to do\n return\n if self.shell.logger.log_output:\n self.shell.logger.log_write(format_dict['text/plain'], 'output')\n self.shell.history_manager.output_hist_reprs[self.prompt_count] = \\\n format_dict['text/plain']\n\n def finish_displayhook(self):\n \"\"\"Finish up all displayhook activities.\"\"\"\n io.stdout.write(self.shell.separate_out2)\n io.stdout.flush()\n\n def __call__(self, result=None):\n \"\"\"Printing with history cache management.\n\n This is invoked everytime the interpreter needs to print, and is\n activated by setting the variable sys.displayhook to it.\n \"\"\"\n self.check_for_underscore()\n if result is not None and not self.quiet():\n self.start_displayhook()\n self.write_output_prompt()\n format_dict, md_dict = self.compute_format_data(result)\n self.update_user_ns(result)\n self.fill_exec_result(result)\n if format_dict:\n self.write_format_data(format_dict, md_dict)\n self.log_output(format_dict)\n self.finish_displayhook()\n\n def cull_cache(self):\n \"\"\"Output cache is full, cull the oldest entries\"\"\"\n oh = self.shell.user_ns.get('_oh', {})\n sz = len(oh)\n cull_count = max(int(sz * self.cull_fraction), 2)\n warn('Output cache limit (currently {sz} entries) hit.\\n'\n 'Flushing oldest {cull_count} entries.'.format(sz=sz, cull_count=cull_count))\n \n for i, n in enumerate(sorted(oh)):\n if i >= cull_count:\n break\n self.shell.user_ns.pop('_%i' % n, None)\n oh.pop(n, None)\n \n\n def flush(self):\n if not self.do_full_cache:\n raise ValueError(\"You shouldn't have reached the cache flush \"\n \"if full caching is not enabled!\")\n # delete auto-generated vars from global namespace\n\n for n in range(1,self.prompt_count + 1):\n key = '_'+repr(n)\n try:\n del 
self.shell.user_ns[key]\n except: pass\n # In some embedded circumstances, the user_ns doesn't have the\n # '_oh' key set up.\n oh = self.shell.user_ns.get('_oh', None)\n if oh is not None:\n oh.clear()\n\n # Release our own references to objects:\n self._, self.__, self.___ = '', '', ''\n\n if '_' not in builtin_mod.__dict__:\n self.shell.user_ns.update({'_':None,'__':None, '___':None})\n import gc\n # TODO: Is this really needed?\n # IronPython blocks here forever\n if sys.platform != \"cli\":\n gc.collect()\n\n", "path": "IPython/core/displayhook.py"}]}
| 3,627 | 141 |
gh_patches_debug_1780
|
rasdani/github-patches
|
git_diff
|
localstack__localstack-4075
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support ruby2.7 runtime
AWS supports the following Ruby runtimes:
Name | Identifier | SDK for Ruby | Operating system
-- | -- | -- | --
Ruby 2.7 | ruby2.7 | 3.0.1 | Amazon Linux 2
Ruby 2.5 | ruby2.5 | 3.0.1 | Amazon Linux
Currently, `localstack/lambda` only contains the `ruby2.5` tag. Will the 2.7 runtime be supported in the (near) future?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `localstack/services/awslambda/lambda_utils.py`
Content:
```
1 import os
2 from collections import defaultdict
3 from localstack.utils.common import to_str
4
5 # Lambda runtime constants
6 LAMBDA_RUNTIME_PYTHON27 = 'python2.7'
7 LAMBDA_RUNTIME_PYTHON36 = 'python3.6'
8 LAMBDA_RUNTIME_PYTHON37 = 'python3.7'
9 LAMBDA_RUNTIME_PYTHON38 = 'python3.8'
10 LAMBDA_RUNTIME_NODEJS = 'nodejs'
11 LAMBDA_RUNTIME_NODEJS43 = 'nodejs4.3'
12 LAMBDA_RUNTIME_NODEJS610 = 'nodejs6.10'
13 LAMBDA_RUNTIME_NODEJS810 = 'nodejs8.10'
14 LAMBDA_RUNTIME_NODEJS10X = 'nodejs10.x'
15 LAMBDA_RUNTIME_NODEJS12X = 'nodejs12.x'
16 LAMBDA_RUNTIME_NODEJS14X = 'nodejs14.x'
17 LAMBDA_RUNTIME_JAVA8 = 'java8'
18 LAMBDA_RUNTIME_JAVA11 = 'java11'
19 LAMBDA_RUNTIME_DOTNETCORE2 = 'dotnetcore2.0'
20 LAMBDA_RUNTIME_DOTNETCORE21 = 'dotnetcore2.1'
21 LAMBDA_RUNTIME_DOTNETCORE31 = 'dotnetcore3.1'
22 LAMBDA_RUNTIME_GOLANG = 'go1.x'
23 LAMBDA_RUNTIME_RUBY = 'ruby'
24 LAMBDA_RUNTIME_RUBY25 = 'ruby2.5'
25 LAMBDA_RUNTIME_PROVIDED = 'provided'
26
27 # default handler and runtime
28 LAMBDA_DEFAULT_HANDLER = 'handler.handler'
29 LAMBDA_DEFAULT_RUNTIME = 'python3.8'
30 LAMBDA_DEFAULT_STARTING_POSITION = 'LATEST'
31
32 # List of Dotnet Lambda runtime names
33 DOTNET_LAMBDA_RUNTIMES = [LAMBDA_RUNTIME_DOTNETCORE2, LAMBDA_RUNTIME_DOTNETCORE21, LAMBDA_RUNTIME_DOTNETCORE31]
34
35
36 def multi_value_dict_for_list(elements):
37 temp_mv_dict = defaultdict(list)
38 for key in elements:
39 if isinstance(key, (list, tuple)):
40 key, value = key
41 else:
42 value = elements[key]
43 key = to_str(key)
44 temp_mv_dict[key].append(value)
45
46 return dict((k, tuple(v)) for k, v in temp_mv_dict.items())
47
48
49 def get_handler_file_from_name(handler_name, runtime=LAMBDA_DEFAULT_RUNTIME):
50 if runtime.startswith(LAMBDA_RUNTIME_PROVIDED):
51 return 'bootstrap'
52 delimiter = '.'
53 if runtime.startswith(LAMBDA_RUNTIME_NODEJS):
54 file_ext = '.js'
55 elif runtime.startswith(LAMBDA_RUNTIME_GOLANG):
56 file_ext = ''
57 elif runtime.startswith(tuple(DOTNET_LAMBDA_RUNTIMES)):
58 file_ext = '.dll'
59 delimiter = ':'
60 elif runtime.startswith(LAMBDA_RUNTIME_RUBY):
61 file_ext = '.rb'
62 else:
63 handler_name = handler_name.rpartition(delimiter)[0].replace(delimiter, os.path.sep)
64 file_ext = '.py'
65 return '%s%s' % (handler_name.split(delimiter)[0], file_ext)
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/localstack/services/awslambda/lambda_utils.py b/localstack/services/awslambda/lambda_utils.py
--- a/localstack/services/awslambda/lambda_utils.py
+++ b/localstack/services/awslambda/lambda_utils.py
@@ -22,6 +22,7 @@
LAMBDA_RUNTIME_GOLANG = 'go1.x'
LAMBDA_RUNTIME_RUBY = 'ruby'
LAMBDA_RUNTIME_RUBY25 = 'ruby2.5'
+LAMBDA_RUNTIME_RUBY27 = 'ruby2.7'
LAMBDA_RUNTIME_PROVIDED = 'provided'
# default handler and runtime
|
{"golden_diff": "diff --git a/localstack/services/awslambda/lambda_utils.py b/localstack/services/awslambda/lambda_utils.py\n--- a/localstack/services/awslambda/lambda_utils.py\n+++ b/localstack/services/awslambda/lambda_utils.py\n@@ -22,6 +22,7 @@\n LAMBDA_RUNTIME_GOLANG = 'go1.x'\n LAMBDA_RUNTIME_RUBY = 'ruby'\n LAMBDA_RUNTIME_RUBY25 = 'ruby2.5'\n+LAMBDA_RUNTIME_RUBY27 = 'ruby2.7'\n LAMBDA_RUNTIME_PROVIDED = 'provided'\n \n # default handler and runtime\n", "issue": "Support ruby2.7 runtime\nAWS supports the following Ruby runtimes:\r\n\r\n<!--StartFragment-->\r\nName | Identifier | SDK for Ruby | Operating system\r\n-- | -- | -- | --\r\nRuby 2.7 | ruby2.7 | 3.0.1 | Amazon Linux 2\r\nRuby 2.5 | ruby2.5 | 3.0.1 | Amazon Linux\r\n\r\n<!--EndFragment-->\r\n\r\nCurrently, `localstack/lambda` only contains the `ruby2.5` tag. Will the 2.7 runtime be supported in the (near) future?\n", "before_files": [{"content": "import os\nfrom collections import defaultdict\nfrom localstack.utils.common import to_str\n\n# Lambda runtime constants\nLAMBDA_RUNTIME_PYTHON27 = 'python2.7'\nLAMBDA_RUNTIME_PYTHON36 = 'python3.6'\nLAMBDA_RUNTIME_PYTHON37 = 'python3.7'\nLAMBDA_RUNTIME_PYTHON38 = 'python3.8'\nLAMBDA_RUNTIME_NODEJS = 'nodejs'\nLAMBDA_RUNTIME_NODEJS43 = 'nodejs4.3'\nLAMBDA_RUNTIME_NODEJS610 = 'nodejs6.10'\nLAMBDA_RUNTIME_NODEJS810 = 'nodejs8.10'\nLAMBDA_RUNTIME_NODEJS10X = 'nodejs10.x'\nLAMBDA_RUNTIME_NODEJS12X = 'nodejs12.x'\nLAMBDA_RUNTIME_NODEJS14X = 'nodejs14.x'\nLAMBDA_RUNTIME_JAVA8 = 'java8'\nLAMBDA_RUNTIME_JAVA11 = 'java11'\nLAMBDA_RUNTIME_DOTNETCORE2 = 'dotnetcore2.0'\nLAMBDA_RUNTIME_DOTNETCORE21 = 'dotnetcore2.1'\nLAMBDA_RUNTIME_DOTNETCORE31 = 'dotnetcore3.1'\nLAMBDA_RUNTIME_GOLANG = 'go1.x'\nLAMBDA_RUNTIME_RUBY = 'ruby'\nLAMBDA_RUNTIME_RUBY25 = 'ruby2.5'\nLAMBDA_RUNTIME_PROVIDED = 'provided'\n\n# default handler and runtime\nLAMBDA_DEFAULT_HANDLER = 'handler.handler'\nLAMBDA_DEFAULT_RUNTIME = 'python3.8'\nLAMBDA_DEFAULT_STARTING_POSITION = 'LATEST'\n\n# List of Dotnet Lambda runtime names\nDOTNET_LAMBDA_RUNTIMES = [LAMBDA_RUNTIME_DOTNETCORE2, LAMBDA_RUNTIME_DOTNETCORE21, LAMBDA_RUNTIME_DOTNETCORE31]\n\n\ndef multi_value_dict_for_list(elements):\n temp_mv_dict = defaultdict(list)\n for key in elements:\n if isinstance(key, (list, tuple)):\n key, value = key\n else:\n value = elements[key]\n key = to_str(key)\n temp_mv_dict[key].append(value)\n\n return dict((k, tuple(v)) for k, v in temp_mv_dict.items())\n\n\ndef get_handler_file_from_name(handler_name, runtime=LAMBDA_DEFAULT_RUNTIME):\n if runtime.startswith(LAMBDA_RUNTIME_PROVIDED):\n return 'bootstrap'\n delimiter = '.'\n if runtime.startswith(LAMBDA_RUNTIME_NODEJS):\n file_ext = '.js'\n elif runtime.startswith(LAMBDA_RUNTIME_GOLANG):\n file_ext = ''\n elif runtime.startswith(tuple(DOTNET_LAMBDA_RUNTIMES)):\n file_ext = '.dll'\n delimiter = ':'\n elif runtime.startswith(LAMBDA_RUNTIME_RUBY):\n file_ext = '.rb'\n else:\n handler_name = handler_name.rpartition(delimiter)[0].replace(delimiter, os.path.sep)\n file_ext = '.py'\n return '%s%s' % (handler_name.split(delimiter)[0], file_ext)\n", "path": "localstack/services/awslambda/lambda_utils.py"}], "after_files": [{"content": "import os\nfrom collections import defaultdict\nfrom localstack.utils.common import to_str\n\n# Lambda runtime constants\nLAMBDA_RUNTIME_PYTHON27 = 'python2.7'\nLAMBDA_RUNTIME_PYTHON36 = 'python3.6'\nLAMBDA_RUNTIME_PYTHON37 = 'python3.7'\nLAMBDA_RUNTIME_PYTHON38 = 'python3.8'\nLAMBDA_RUNTIME_NODEJS = 'nodejs'\nLAMBDA_RUNTIME_NODEJS43 = 
'nodejs4.3'\nLAMBDA_RUNTIME_NODEJS610 = 'nodejs6.10'\nLAMBDA_RUNTIME_NODEJS810 = 'nodejs8.10'\nLAMBDA_RUNTIME_NODEJS10X = 'nodejs10.x'\nLAMBDA_RUNTIME_NODEJS12X = 'nodejs12.x'\nLAMBDA_RUNTIME_NODEJS14X = 'nodejs14.x'\nLAMBDA_RUNTIME_JAVA8 = 'java8'\nLAMBDA_RUNTIME_JAVA11 = 'java11'\nLAMBDA_RUNTIME_DOTNETCORE2 = 'dotnetcore2.0'\nLAMBDA_RUNTIME_DOTNETCORE21 = 'dotnetcore2.1'\nLAMBDA_RUNTIME_DOTNETCORE31 = 'dotnetcore3.1'\nLAMBDA_RUNTIME_GOLANG = 'go1.x'\nLAMBDA_RUNTIME_RUBY = 'ruby'\nLAMBDA_RUNTIME_RUBY25 = 'ruby2.5'\nLAMBDA_RUNTIME_RUBY27 = 'ruby2.7'\nLAMBDA_RUNTIME_PROVIDED = 'provided'\n\n# default handler and runtime\nLAMBDA_DEFAULT_HANDLER = 'handler.handler'\nLAMBDA_DEFAULT_RUNTIME = 'python3.8'\nLAMBDA_DEFAULT_STARTING_POSITION = 'LATEST'\n\n# List of Dotnet Lambda runtime names\nDOTNET_LAMBDA_RUNTIMES = [LAMBDA_RUNTIME_DOTNETCORE2, LAMBDA_RUNTIME_DOTNETCORE21, LAMBDA_RUNTIME_DOTNETCORE31]\n\n\ndef multi_value_dict_for_list(elements):\n temp_mv_dict = defaultdict(list)\n for key in elements:\n if isinstance(key, (list, tuple)):\n key, value = key\n else:\n value = elements[key]\n key = to_str(key)\n temp_mv_dict[key].append(value)\n\n return dict((k, tuple(v)) for k, v in temp_mv_dict.items())\n\n\ndef get_handler_file_from_name(handler_name, runtime=LAMBDA_DEFAULT_RUNTIME):\n if runtime.startswith(LAMBDA_RUNTIME_PROVIDED):\n return 'bootstrap'\n delimiter = '.'\n if runtime.startswith(LAMBDA_RUNTIME_NODEJS):\n file_ext = '.js'\n elif runtime.startswith(LAMBDA_RUNTIME_GOLANG):\n file_ext = ''\n elif runtime.startswith(tuple(DOTNET_LAMBDA_RUNTIMES)):\n file_ext = '.dll'\n delimiter = ':'\n elif runtime.startswith(LAMBDA_RUNTIME_RUBY):\n file_ext = '.rb'\n else:\n handler_name = handler_name.rpartition(delimiter)[0].replace(delimiter, os.path.sep)\n file_ext = '.py'\n return '%s%s' % (handler_name.split(delimiter)[0], file_ext)\n", "path": "localstack/services/awslambda/lambda_utils.py"}]}
| 1,179 | 139 |
gh_patches_debug_39863
|
rasdani/github-patches
|
git_diff
|
kserve__kserve-3031
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Introduce Priority for serving runtimes
/kind feature
**Problem**
- Currently, in auto-selection mode, KServe does not have a stable way to select a serving runtime if two or more serving runtimes support the same model format. 
- Currently, KServe [selects the first serving runtime](https://github.com/kserve/kserve/blob/f2f0717a9d6341b6ec9b939bdd324b2c8c507551/pkg/controller/v1beta1/inferenceservice/components/predictor.go#L152) from a list of serving runtimes [sorted by creation timestamp (descending) and name (ascending)](https://github.com/kserve/kserve/blob/f2f0717a9d6341b6ec9b939bdd324b2c8c507551/pkg/apis/serving/v1beta1/predictor_model.go#L162), so users have little to no control over the runtime selection in auto-select mode.
**Describe the solution you'd like**
[A clear and concise description of what you want to happen.]
- Introduce a new optional field called `Priority` in the model formats section. Through this field we can specify the priority of the serving runtimes for a specific model format.
- The higher the priority value, the higher the precedence
- The priority is valid only if auto select is enabled; otherwise the priority is not considered.
- If more than one serving runtime supports the model format and none of them provides a priority, then the old way is followed for selecting the runtime.
- A serving runtime with a priority specified takes precedence over a serving runtime whose priority is not specified.
- The same model format with the same model version cannot have the same priority.
**Example Yaml**
```yaml
apiVersion: serving.kserve.io/v1alpha1
kind: ClusterServingRuntime
metadata:
name: kserve-mlserver
spec:
annotations:
prometheus.kserve.io/path: /metrics
prometheus.kserve.io/port: '8080'
containers:
- env:
- name: MLSERVER_MODEL_IMPLEMENTATION
value: '{{.Labels.modelClass}}'
- name: MLSERVER_HTTP_PORT
value: '8080'
- name: MLSERVER_GRPC_PORT
value: '9000'
- name: MODELS_DIR
value: /mnt/models
image: docker.io/seldonio/mlserver:1.3.2
name: kserve-container
resources:
limits:
cpu: '1'
memory: 2Gi
requests:
cpu: '1'
memory: 2Gi
protocolVersions:
- v2
supportedModelFormats:
- autoSelect: true
name: sklearn
priority: 2
version: '0'
- autoSelect: true
name: xgboost
priority: 2
version: '1'
- autoSelect: true
name: lightgbm
priority: 2
version: '3'
- autoSelect: true
name: mlflow
priority: 1
version: '1'
```
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/kserve/kserve/models/v1alpha1_supported_model_format.py`
Content:
```
1 # Copyright 2023 The KServe Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # coding: utf-8
16
17 """
18 KServe
19
20 Python SDK for KServe # noqa: E501
21
22 The version of the OpenAPI document: v0.1
23 Generated by: https://openapi-generator.tech
24 """
25
26
27 import pprint
28 import re # noqa: F401
29
30 import six
31
32 from kserve.configuration import Configuration
33
34
35 class V1alpha1SupportedModelFormat(object):
36 """NOTE: This class is auto generated by OpenAPI Generator.
37 Ref: https://openapi-generator.tech
38
39 Do not edit the class manually.
40 """
41
42 """
43 Attributes:
44 openapi_types (dict): The key is attribute name
45 and the value is attribute type.
46 attribute_map (dict): The key is attribute name
47 and the value is json key in definition.
48 """
49 openapi_types = {
50 'auto_select': 'bool',
51 'name': 'str',
52 'version': 'str'
53 }
54
55 attribute_map = {
56 'auto_select': 'autoSelect',
57 'name': 'name',
58 'version': 'version'
59 }
60
61 def __init__(self, auto_select=None, name='', version=None, local_vars_configuration=None): # noqa: E501
62 """V1alpha1SupportedModelFormat - a model defined in OpenAPI""" # noqa: E501
63 if local_vars_configuration is None:
64 local_vars_configuration = Configuration()
65 self.local_vars_configuration = local_vars_configuration
66
67 self._auto_select = None
68 self._name = None
69 self._version = None
70 self.discriminator = None
71
72 if auto_select is not None:
73 self.auto_select = auto_select
74 if name is not None:
75 self.name = name
76 if version is not None:
77 self.version = version
78
79 @property
80 def auto_select(self):
81 """Gets the auto_select of this V1alpha1SupportedModelFormat. # noqa: E501
82
83 Set to true to allow the ServingRuntime to be used for automatic model placement if this model format is specified with no explicit runtime. # noqa: E501
84
85 :return: The auto_select of this V1alpha1SupportedModelFormat. # noqa: E501
86 :rtype: bool
87 """
88 return self._auto_select
89
90 @auto_select.setter
91 def auto_select(self, auto_select):
92 """Sets the auto_select of this V1alpha1SupportedModelFormat.
93
94 Set to true to allow the ServingRuntime to be used for automatic model placement if this model format is specified with no explicit runtime. # noqa: E501
95
96 :param auto_select: The auto_select of this V1alpha1SupportedModelFormat. # noqa: E501
97 :type: bool
98 """
99
100 self._auto_select = auto_select
101
102 @property
103 def name(self):
104 """Gets the name of this V1alpha1SupportedModelFormat. # noqa: E501
105
106 Name of the model format. # noqa: E501
107
108 :return: The name of this V1alpha1SupportedModelFormat. # noqa: E501
109 :rtype: str
110 """
111 return self._name
112
113 @name.setter
114 def name(self, name):
115 """Sets the name of this V1alpha1SupportedModelFormat.
116
117 Name of the model format. # noqa: E501
118
119 :param name: The name of this V1alpha1SupportedModelFormat. # noqa: E501
120 :type: str
121 """
122
123 self._name = name
124
125 @property
126 def version(self):
127 """Gets the version of this V1alpha1SupportedModelFormat. # noqa: E501
128
129 Version of the model format. Used in validating that a predictor is supported by a runtime. Can be \"major\", \"major.minor\" or \"major.minor.patch\". # noqa: E501
130
131 :return: The version of this V1alpha1SupportedModelFormat. # noqa: E501
132 :rtype: str
133 """
134 return self._version
135
136 @version.setter
137 def version(self, version):
138 """Sets the version of this V1alpha1SupportedModelFormat.
139
140 Version of the model format. Used in validating that a predictor is supported by a runtime. Can be \"major\", \"major.minor\" or \"major.minor.patch\". # noqa: E501
141
142 :param version: The version of this V1alpha1SupportedModelFormat. # noqa: E501
143 :type: str
144 """
145
146 self._version = version
147
148 def to_dict(self):
149 """Returns the model properties as a dict"""
150 result = {}
151
152 for attr, _ in six.iteritems(self.openapi_types):
153 value = getattr(self, attr)
154 if isinstance(value, list):
155 result[attr] = list(map(
156 lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
157 value
158 ))
159 elif hasattr(value, "to_dict"):
160 result[attr] = value.to_dict()
161 elif isinstance(value, dict):
162 result[attr] = dict(map(
163 lambda item: (item[0], item[1].to_dict())
164 if hasattr(item[1], "to_dict") else item,
165 value.items()
166 ))
167 else:
168 result[attr] = value
169
170 return result
171
172 def to_str(self):
173 """Returns the string representation of the model"""
174 return pprint.pformat(self.to_dict())
175
176 def __repr__(self):
177 """For `print` and `pprint`"""
178 return self.to_str()
179
180 def __eq__(self, other):
181 """Returns true if both objects are equal"""
182 if not isinstance(other, V1alpha1SupportedModelFormat):
183 return False
184
185 return self.to_dict() == other.to_dict()
186
187 def __ne__(self, other):
188 """Returns true if both objects are not equal"""
189 if not isinstance(other, V1alpha1SupportedModelFormat):
190 return True
191
192 return self.to_dict() != other.to_dict()
193
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/python/kserve/kserve/models/v1alpha1_supported_model_format.py b/python/kserve/kserve/models/v1alpha1_supported_model_format.py
--- a/python/kserve/kserve/models/v1alpha1_supported_model_format.py
+++ b/python/kserve/kserve/models/v1alpha1_supported_model_format.py
@@ -49,16 +49,18 @@
openapi_types = {
'auto_select': 'bool',
'name': 'str',
+ 'priority': 'int',
'version': 'str'
}
attribute_map = {
'auto_select': 'autoSelect',
'name': 'name',
+ 'priority': 'priority',
'version': 'version'
}
- def __init__(self, auto_select=None, name='', version=None, local_vars_configuration=None): # noqa: E501
+ def __init__(self, auto_select=None, name='', priority=None, version=None, local_vars_configuration=None): # noqa: E501
"""V1alpha1SupportedModelFormat - a model defined in OpenAPI""" # noqa: E501
if local_vars_configuration is None:
local_vars_configuration = Configuration()
@@ -66,6 +68,7 @@
self._auto_select = None
self._name = None
+ self._priority = None
self._version = None
self.discriminator = None
@@ -73,6 +76,8 @@
self.auto_select = auto_select
if name is not None:
self.name = name
+ if priority is not None:
+ self.priority = priority
if version is not None:
self.version = version
@@ -122,6 +127,29 @@
self._name = name
+ @property
+ def priority(self):
+ """Gets the priority of this V1alpha1SupportedModelFormat. # noqa: E501
+
+ Priority of this serving runtime for auto selection. This is used to select the serving runtime if more than one serving runtime supports the same model format. The value should be greater than zero. The higher the value, the higher the priority. Priority is not considered if AutoSelect is either false or not specified. Priority can be overridden by specifying the runtime in the InferenceService. # noqa: E501
+
+ :return: The priority of this V1alpha1SupportedModelFormat. # noqa: E501
+ :rtype: int
+ """
+ return self._priority
+
+ @priority.setter
+ def priority(self, priority):
+ """Sets the priority of this V1alpha1SupportedModelFormat.
+
+ Priority of this serving runtime for auto selection. This is used to select the serving runtime if more than one serving runtime supports the same model format. The value should be greater than zero. The higher the value, the higher the priority. Priority is not considered if AutoSelect is either false or not specified. Priority can be overridden by specifying the runtime in the InferenceService. # noqa: E501
+
+ :param priority: The priority of this V1alpha1SupportedModelFormat. # noqa: E501
+ :type: int
+ """
+
+ self._priority = priority
+
@property
def version(self):
"""Gets the version of this V1alpha1SupportedModelFormat. # noqa: E501
|
{"golden_diff": "diff --git a/python/kserve/kserve/models/v1alpha1_supported_model_format.py b/python/kserve/kserve/models/v1alpha1_supported_model_format.py\n--- a/python/kserve/kserve/models/v1alpha1_supported_model_format.py\n+++ b/python/kserve/kserve/models/v1alpha1_supported_model_format.py\n@@ -49,16 +49,18 @@\n openapi_types = {\n 'auto_select': 'bool',\n 'name': 'str',\n+ 'priority': 'int',\n 'version': 'str'\n }\n \n attribute_map = {\n 'auto_select': 'autoSelect',\n 'name': 'name',\n+ 'priority': 'priority',\n 'version': 'version'\n }\n \n- def __init__(self, auto_select=None, name='', version=None, local_vars_configuration=None): # noqa: E501\n+ def __init__(self, auto_select=None, name='', priority=None, version=None, local_vars_configuration=None): # noqa: E501\n \"\"\"V1alpha1SupportedModelFormat - a model defined in OpenAPI\"\"\" # noqa: E501\n if local_vars_configuration is None:\n local_vars_configuration = Configuration()\n@@ -66,6 +68,7 @@\n \n self._auto_select = None\n self._name = None\n+ self._priority = None\n self._version = None\n self.discriminator = None\n \n@@ -73,6 +76,8 @@\n self.auto_select = auto_select\n if name is not None:\n self.name = name\n+ if priority is not None:\n+ self.priority = priority\n if version is not None:\n self.version = version\n \n@@ -122,6 +127,29 @@\n \n self._name = name\n \n+ @property\n+ def priority(self):\n+ \"\"\"Gets the priority of this V1alpha1SupportedModelFormat. # noqa: E501\n+\n+ Priority of this serving runtime for auto selection. This is used to select the serving runtime if more than one serving runtime supports the same model format. The value should be greater than zero. The higher the value, the higher the priority. Priority is not considered if AutoSelect is either false or not specified. Priority can be overridden by specifying the runtime in the InferenceService. # noqa: E501\n+\n+ :return: The priority of this V1alpha1SupportedModelFormat. # noqa: E501\n+ :rtype: int\n+ \"\"\"\n+ return self._priority\n+\n+ @priority.setter\n+ def priority(self, priority):\n+ \"\"\"Sets the priority of this V1alpha1SupportedModelFormat.\n+\n+ Priority of this serving runtime for auto selection. This is used to select the serving runtime if more than one serving runtime supports the same model format. The value should be greater than zero. The higher the value, the higher the priority. Priority is not considered if AutoSelect is either false or not specified. Priority can be overridden by specifying the runtime in the InferenceService. # noqa: E501\n+\n+ :param priority: The priority of this V1alpha1SupportedModelFormat. # noqa: E501\n+ :type: int\n+ \"\"\"\n+\n+ self._priority = priority\n+\n @property\n def version(self):\n \"\"\"Gets the version of this V1alpha1SupportedModelFormat. # noqa: E501\n", "issue": "Introduce Priority for serving runtimes\n/kind feature\r\n\r\n**Problem**\r\n- Currently, In auto selection mode, kserve serving runtime does not have a stable way to select a serving runtime if two are more serving runtimes support the same model format. \r\n- Currently, Kserve [selects the first serving runtime](https://github.com/kserve/kserve/blob/f2f0717a9d6341b6ec9b939bdd324b2c8c507551/pkg/controller/v1beta1/inferenceservice/components/predictor.go#L152) from a list of serving runtimes [sorted by their respective creation timestamp descending and name ascending order](https://github.com/kserve/kserve/blob/f2f0717a9d6341b6ec9b939bdd324b2c8c507551/pkg/apis/serving/v1beta1/predictor_model.go#L162). 
So the users have little to no control over the runtime selection in auto select mode.\r\n\r\n\r\n**Describe the solution you'd like**\r\n[A clear and concise description of what you want to happen.]\r\n- Introduce a new optional field called `Priority` in the model formats section. Through this field we can specify the priority of the serving runtimes for a sepcifice model format.\r\n- The higher the priority value, the higher the precedence\r\n- The priority is valid only if auto select is enabled otherwise the priority is not considered.\r\n- If more than one serving runtime supports the model format and does not provided the priority, then the old way is followed for selecting the runtime.\r\n- The serving runtime with priority takes precedence over the serving runtime with priority not specified.\r\n- The same model formats with same model version cannot have the same priority.\r\n\r\n**Example Yaml**\r\n```yaml\r\napiVersion: serving.kserve.io/v1alpha1\r\nkind: ClusterServingRuntime\r\nmetadata:\r\n name: kserve-mlserver\r\nspec:\r\n annotations:\r\n prometheus.kserve.io/path: /metrics\r\n prometheus.kserve.io/port: '8080'\r\n containers:\r\n - env:\r\n - name: MLSERVER_MODEL_IMPLEMENTATION\r\n value: '{{.Labels.modelClass}}'\r\n - name: MLSERVER_HTTP_PORT\r\n value: '8080'\r\n - name: MLSERVER_GRPC_PORT\r\n value: '9000'\r\n - name: MODELS_DIR\r\n value: /mnt/models\r\n image: docker.io/seldonio/mlserver:1.3.2\r\n name: kserve-container\r\n resources:\r\n limits:\r\n cpu: '1'\r\n memory: 2Gi\r\n requests:\r\n cpu: '1'\r\n memory: 2Gi\r\n protocolVersions:\r\n - v2\r\n supportedModelFormats:\r\n - autoSelect: true\r\n name: sklearn\r\n priority: 2\r\n version: '0'\r\n - autoSelect: true\r\n name: xgboost\r\n priority: 2\r\n version: '1'\r\n - autoSelect: true\r\n name: lightgbm\r\n priority: 2\r\n version: '3'\r\n - autoSelect: true\r\n name: mlflow\r\n priority: 1\r\n version: '1'\r\n\r\n```\r\n\r\n\r\n**Anything else you would like to add:**\r\n[Miscellaneous information that will assist in solving the issue.]\r\n\n", "before_files": [{"content": "# Copyright 2023 The KServe Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# coding: utf-8\n\n\"\"\"\n KServe\n\n Python SDK for KServe # noqa: E501\n\n The version of the OpenAPI document: v0.1\n Generated by: https://openapi-generator.tech\n\"\"\"\n\n\nimport pprint\nimport re # noqa: F401\n\nimport six\n\nfrom kserve.configuration import Configuration\n\n\nclass V1alpha1SupportedModelFormat(object):\n \"\"\"NOTE: This class is auto generated by OpenAPI Generator.\n Ref: https://openapi-generator.tech\n\n Do not edit the class manually.\n \"\"\"\n\n \"\"\"\n Attributes:\n openapi_types (dict): The key is attribute name\n and the value is attribute type.\n attribute_map (dict): The key is attribute name\n and the value is json key in definition.\n \"\"\"\n openapi_types = {\n 'auto_select': 'bool',\n 'name': 'str',\n 'version': 'str'\n }\n\n attribute_map = {\n 'auto_select': 'autoSelect',\n 'name': 'name',\n 'version': 
'version'\n }\n\n def __init__(self, auto_select=None, name='', version=None, local_vars_configuration=None): # noqa: E501\n \"\"\"V1alpha1SupportedModelFormat - a model defined in OpenAPI\"\"\" # noqa: E501\n if local_vars_configuration is None:\n local_vars_configuration = Configuration()\n self.local_vars_configuration = local_vars_configuration\n\n self._auto_select = None\n self._name = None\n self._version = None\n self.discriminator = None\n\n if auto_select is not None:\n self.auto_select = auto_select\n if name is not None:\n self.name = name\n if version is not None:\n self.version = version\n\n @property\n def auto_select(self):\n \"\"\"Gets the auto_select of this V1alpha1SupportedModelFormat. # noqa: E501\n\n Set to true to allow the ServingRuntime to be used for automatic model placement if this model format is specified with no explicit runtime. # noqa: E501\n\n :return: The auto_select of this V1alpha1SupportedModelFormat. # noqa: E501\n :rtype: bool\n \"\"\"\n return self._auto_select\n\n @auto_select.setter\n def auto_select(self, auto_select):\n \"\"\"Sets the auto_select of this V1alpha1SupportedModelFormat.\n\n Set to true to allow the ServingRuntime to be used for automatic model placement if this model format is specified with no explicit runtime. # noqa: E501\n\n :param auto_select: The auto_select of this V1alpha1SupportedModelFormat. # noqa: E501\n :type: bool\n \"\"\"\n\n self._auto_select = auto_select\n\n @property\n def name(self):\n \"\"\"Gets the name of this V1alpha1SupportedModelFormat. # noqa: E501\n\n Name of the model format. # noqa: E501\n\n :return: The name of this V1alpha1SupportedModelFormat. # noqa: E501\n :rtype: str\n \"\"\"\n return self._name\n\n @name.setter\n def name(self, name):\n \"\"\"Sets the name of this V1alpha1SupportedModelFormat.\n\n Name of the model format. # noqa: E501\n\n :param name: The name of this V1alpha1SupportedModelFormat. # noqa: E501\n :type: str\n \"\"\"\n\n self._name = name\n\n @property\n def version(self):\n \"\"\"Gets the version of this V1alpha1SupportedModelFormat. # noqa: E501\n\n Version of the model format. Used in validating that a predictor is supported by a runtime. Can be \\\"major\\\", \\\"major.minor\\\" or \\\"major.minor.patch\\\". # noqa: E501\n\n :return: The version of this V1alpha1SupportedModelFormat. # noqa: E501\n :rtype: str\n \"\"\"\n return self._version\n\n @version.setter\n def version(self, version):\n \"\"\"Sets the version of this V1alpha1SupportedModelFormat.\n\n Version of the model format. Used in validating that a predictor is supported by a runtime. Can be \\\"major\\\", \\\"major.minor\\\" or \\\"major.minor.patch\\\". # noqa: E501\n\n :param version: The version of this V1alpha1SupportedModelFormat. 
# noqa: E501\n :type: str\n \"\"\"\n\n self._version = version\n\n def to_dict(self):\n \"\"\"Returns the model properties as a dict\"\"\"\n result = {}\n\n for attr, _ in six.iteritems(self.openapi_types):\n value = getattr(self, attr)\n if isinstance(value, list):\n result[attr] = list(map(\n lambda x: x.to_dict() if hasattr(x, \"to_dict\") else x,\n value\n ))\n elif hasattr(value, \"to_dict\"):\n result[attr] = value.to_dict()\n elif isinstance(value, dict):\n result[attr] = dict(map(\n lambda item: (item[0], item[1].to_dict())\n if hasattr(item[1], \"to_dict\") else item,\n value.items()\n ))\n else:\n result[attr] = value\n\n return result\n\n def to_str(self):\n \"\"\"Returns the string representation of the model\"\"\"\n return pprint.pformat(self.to_dict())\n\n def __repr__(self):\n \"\"\"For `print` and `pprint`\"\"\"\n return self.to_str()\n\n def __eq__(self, other):\n \"\"\"Returns true if both objects are equal\"\"\"\n if not isinstance(other, V1alpha1SupportedModelFormat):\n return False\n\n return self.to_dict() == other.to_dict()\n\n def __ne__(self, other):\n \"\"\"Returns true if both objects are not equal\"\"\"\n if not isinstance(other, V1alpha1SupportedModelFormat):\n return True\n\n return self.to_dict() != other.to_dict()\n", "path": "python/kserve/kserve/models/v1alpha1_supported_model_format.py"}], "after_files": [{"content": "# Copyright 2023 The KServe Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# coding: utf-8\n\n\"\"\"\n KServe\n\n Python SDK for KServe # noqa: E501\n\n The version of the OpenAPI document: v0.1\n Generated by: https://openapi-generator.tech\n\"\"\"\n\n\nimport pprint\nimport re # noqa: F401\n\nimport six\n\nfrom kserve.configuration import Configuration\n\n\nclass V1alpha1SupportedModelFormat(object):\n \"\"\"NOTE: This class is auto generated by OpenAPI Generator.\n Ref: https://openapi-generator.tech\n\n Do not edit the class manually.\n \"\"\"\n\n \"\"\"\n Attributes:\n openapi_types (dict): The key is attribute name\n and the value is attribute type.\n attribute_map (dict): The key is attribute name\n and the value is json key in definition.\n \"\"\"\n openapi_types = {\n 'auto_select': 'bool',\n 'name': 'str',\n 'priority': 'int',\n 'version': 'str'\n }\n\n attribute_map = {\n 'auto_select': 'autoSelect',\n 'name': 'name',\n 'priority': 'priority',\n 'version': 'version'\n }\n\n def __init__(self, auto_select=None, name='', priority=None, version=None, local_vars_configuration=None): # noqa: E501\n \"\"\"V1alpha1SupportedModelFormat - a model defined in OpenAPI\"\"\" # noqa: E501\n if local_vars_configuration is None:\n local_vars_configuration = Configuration()\n self.local_vars_configuration = local_vars_configuration\n\n self._auto_select = None\n self._name = None\n self._priority = None\n self._version = None\n self.discriminator = None\n\n if auto_select is not None:\n self.auto_select = auto_select\n if name is not None:\n self.name = name\n if priority is not None:\n self.priority = priority\n if version is not 
None:\n self.version = version\n\n @property\n def auto_select(self):\n \"\"\"Gets the auto_select of this V1alpha1SupportedModelFormat. # noqa: E501\n\n Set to true to allow the ServingRuntime to be used for automatic model placement if this model format is specified with no explicit runtime. # noqa: E501\n\n :return: The auto_select of this V1alpha1SupportedModelFormat. # noqa: E501\n :rtype: bool\n \"\"\"\n return self._auto_select\n\n @auto_select.setter\n def auto_select(self, auto_select):\n \"\"\"Sets the auto_select of this V1alpha1SupportedModelFormat.\n\n Set to true to allow the ServingRuntime to be used for automatic model placement if this model format is specified with no explicit runtime. # noqa: E501\n\n :param auto_select: The auto_select of this V1alpha1SupportedModelFormat. # noqa: E501\n :type: bool\n \"\"\"\n\n self._auto_select = auto_select\n\n @property\n def name(self):\n \"\"\"Gets the name of this V1alpha1SupportedModelFormat. # noqa: E501\n\n Name of the model format. # noqa: E501\n\n :return: The name of this V1alpha1SupportedModelFormat. # noqa: E501\n :rtype: str\n \"\"\"\n return self._name\n\n @name.setter\n def name(self, name):\n \"\"\"Sets the name of this V1alpha1SupportedModelFormat.\n\n Name of the model format. # noqa: E501\n\n :param name: The name of this V1alpha1SupportedModelFormat. # noqa: E501\n :type: str\n \"\"\"\n\n self._name = name\n\n @property\n def priority(self):\n \"\"\"Gets the priority of this V1alpha1SupportedModelFormat. # noqa: E501\n\n Priority of this serving runtime for auto selection. This is used to select the serving runtime if more than one serving runtime supports the same model format. The value should be greater than zero. The higher the value, the higher the priority. Priority is not considered if AutoSelect is either false or not specified. Priority can be overridden by specifying the runtime in the InferenceService. # noqa: E501\n\n :return: The priority of this V1alpha1SupportedModelFormat. # noqa: E501\n :rtype: int\n \"\"\"\n return self._priority\n\n @priority.setter\n def priority(self, priority):\n \"\"\"Sets the priority of this V1alpha1SupportedModelFormat.\n\n Priority of this serving runtime for auto selection. This is used to select the serving runtime if more than one serving runtime supports the same model format. The value should be greater than zero. The higher the value, the higher the priority. Priority is not considered if AutoSelect is either false or not specified. Priority can be overridden by specifying the runtime in the InferenceService. # noqa: E501\n\n :param priority: The priority of this V1alpha1SupportedModelFormat. # noqa: E501\n :type: int\n \"\"\"\n\n self._priority = priority\n\n @property\n def version(self):\n \"\"\"Gets the version of this V1alpha1SupportedModelFormat. # noqa: E501\n\n Version of the model format. Used in validating that a predictor is supported by a runtime. Can be \\\"major\\\", \\\"major.minor\\\" or \\\"major.minor.patch\\\". # noqa: E501\n\n :return: The version of this V1alpha1SupportedModelFormat. # noqa: E501\n :rtype: str\n \"\"\"\n return self._version\n\n @version.setter\n def version(self, version):\n \"\"\"Sets the version of this V1alpha1SupportedModelFormat.\n\n Version of the model format. Used in validating that a predictor is supported by a runtime. Can be \\\"major\\\", \\\"major.minor\\\" or \\\"major.minor.patch\\\". # noqa: E501\n\n :param version: The version of this V1alpha1SupportedModelFormat. 
# noqa: E501\n :type: str\n \"\"\"\n\n self._version = version\n\n def to_dict(self):\n \"\"\"Returns the model properties as a dict\"\"\"\n result = {}\n\n for attr, _ in six.iteritems(self.openapi_types):\n value = getattr(self, attr)\n if isinstance(value, list):\n result[attr] = list(map(\n lambda x: x.to_dict() if hasattr(x, \"to_dict\") else x,\n value\n ))\n elif hasattr(value, \"to_dict\"):\n result[attr] = value.to_dict()\n elif isinstance(value, dict):\n result[attr] = dict(map(\n lambda item: (item[0], item[1].to_dict())\n if hasattr(item[1], \"to_dict\") else item,\n value.items()\n ))\n else:\n result[attr] = value\n\n return result\n\n def to_str(self):\n \"\"\"Returns the string representation of the model\"\"\"\n return pprint.pformat(self.to_dict())\n\n def __repr__(self):\n \"\"\"For `print` and `pprint`\"\"\"\n return self.to_str()\n\n def __eq__(self, other):\n \"\"\"Returns true if both objects are equal\"\"\"\n if not isinstance(other, V1alpha1SupportedModelFormat):\n return False\n\n return self.to_dict() == other.to_dict()\n\n def __ne__(self, other):\n \"\"\"Returns true if both objects are not equal\"\"\"\n if not isinstance(other, V1alpha1SupportedModelFormat):\n return True\n\n return self.to_dict() != other.to_dict()\n", "path": "python/kserve/kserve/models/v1alpha1_supported_model_format.py"}]}
| 2,971 | 769 |
gh_patches_debug_13003
|
rasdani/github-patches
|
git_diff
|
replicate__cog-362
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support importing model from subdirectories
This works:
```yaml
model: "predict.py:MyModel"
```
But this doesn't:
```yaml
model: "mymodel/predict.py:MyModel"
```
It should.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/cog/predictor.py`
Content:
```
1 from abc import ABC, abstractmethod
2 import importlib
3 import os.path
4 from pathlib import Path
5 from typing import Dict, Any
6
7 import yaml
8
9 from .errors import ConfigDoesNotExist, PredictorNotSet
10
11
12 # TODO(andreas): handle directory input
13 # TODO(andreas): handle List[Dict[str, int]], etc.
14 # TODO(andreas): model-level documentation
15
16
17 class Predictor(ABC):
18 @abstractmethod
19 def setup(self):
20 pass
21
22 @abstractmethod
23 def predict(self, **kwargs):
24 pass
25
26 def get_type_signature(self):
27 """
28 Returns a dict describing the inputs of the model.
29 """
30 from .input import (
31 get_type_name,
32 UNSPECIFIED,
33 )
34
35 inputs = []
36 if hasattr(self.predict, "_inputs"):
37 input_specs = self.predict._inputs
38 for spec in input_specs:
39 arg: Dict[str, Any] = {
40 "name": spec.name,
41 "type": get_type_name(spec.type),
42 }
43 if spec.help:
44 arg["help"] = spec.help
45 if spec.default is not UNSPECIFIED:
46 arg["default"] = str(spec.default) # TODO: don't string this
47 if spec.min is not None:
48 arg["min"] = str(spec.min) # TODO: don't string this
49 if spec.max is not None:
50 arg["max"] = str(spec.max) # TODO: don't string this
51 if spec.options is not None:
52 arg["options"] = [str(o) for o in spec.options]
53 inputs.append(arg)
54 return {"inputs": inputs}
55
56
57 def run_prediction(predictor, inputs, cleanup_functions):
58 """
59 Run the predictor on the inputs, and append resulting paths
60 to cleanup functions for removal.
61 """
62 result = predictor.predict(**inputs)
63 if isinstance(result, Path):
64 cleanup_functions.append(result.unlink)
65 return result
66
67
68 def load_predictor():
69 # Assumes the working directory is /src
70 config_path = os.path.abspath("cog.yaml")
71 try:
72 with open(config_path) as fh:
73 config = yaml.safe_load(fh)
74 except FileNotFoundError:
75 raise ConfigDoesNotExist(
76 f"Could not find {config_path}",
77 )
78
79 if "predict" not in config:
80 raise PredictorNotSet(
81 "Can't run predictions: 'predict' option not found in cog.yaml"
82 )
83
84 # TODO: handle predict scripts in subdirectories
85 predict_string = config["predict"]
86 module_name, class_name = predict_string.split(".py:", 1)
87 module = importlib.import_module(module_name)
88 predictor_class = getattr(module, class_name)
89 return predictor_class()
90
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/python/cog/predictor.py b/python/cog/predictor.py
--- a/python/cog/predictor.py
+++ b/python/cog/predictor.py
@@ -81,9 +81,11 @@
"Can't run predictions: 'predict' option not found in cog.yaml"
)
- # TODO: handle predict scripts in subdirectories
predict_string = config["predict"]
- module_name, class_name = predict_string.split(".py:", 1)
- module = importlib.import_module(module_name)
+ module_path, class_name = predict_string.split(":", 1)
+ module_name = os.path.basename(module_path).split(".py", 1)[0]
+ spec = importlib.util.spec_from_file_location(module_name, module_path)
+ module = importlib.util.module_from_spec(spec)
+ spec.loader.exec_module(module)
predictor_class = getattr(module, class_name)
return predictor_class()
|
{"golden_diff": "diff --git a/python/cog/predictor.py b/python/cog/predictor.py\n--- a/python/cog/predictor.py\n+++ b/python/cog/predictor.py\n@@ -81,9 +81,11 @@\n \"Can't run predictions: 'predict' option not found in cog.yaml\"\n )\n \n- # TODO: handle predict scripts in subdirectories\n predict_string = config[\"predict\"]\n- module_name, class_name = predict_string.split(\".py:\", 1)\n- module = importlib.import_module(module_name)\n+ module_path, class_name = predict_string.split(\":\", 1)\n+ module_name = os.path.basename(module_path).split(\".py\", 1)[0]\n+ spec = importlib.util.spec_from_file_location(module_name, module_path)\n+ module = importlib.util.module_from_spec(spec)\n+ spec.loader.exec_module(module)\n predictor_class = getattr(module, class_name)\n return predictor_class()\n", "issue": "Support importing model from subdirectories\nThis works:\r\n\r\n```yaml\r\nmodel: \"predict.py:MyModel\"\r\n```\r\n\r\nBut this doesn't:\r\n\r\n```yaml:\r\nmodel: \"mymodel/predict.py:MyModel\"\r\n```\r\n\r\nIt should.\n", "before_files": [{"content": "from abc import ABC, abstractmethod\nimport importlib\nimport os.path\nfrom pathlib import Path\nfrom typing import Dict, Any\n\nimport yaml\n\nfrom .errors import ConfigDoesNotExist, PredictorNotSet\n\n\n# TODO(andreas): handle directory input\n# TODO(andreas): handle List[Dict[str, int]], etc.\n# TODO(andreas): model-level documentation\n\n\nclass Predictor(ABC):\n @abstractmethod\n def setup(self):\n pass\n\n @abstractmethod\n def predict(self, **kwargs):\n pass\n\n def get_type_signature(self):\n \"\"\"\n Returns a dict describing the inputs of the model.\n \"\"\"\n from .input import (\n get_type_name,\n UNSPECIFIED,\n )\n\n inputs = []\n if hasattr(self.predict, \"_inputs\"):\n input_specs = self.predict._inputs\n for spec in input_specs:\n arg: Dict[str, Any] = {\n \"name\": spec.name,\n \"type\": get_type_name(spec.type),\n }\n if spec.help:\n arg[\"help\"] = spec.help\n if spec.default is not UNSPECIFIED:\n arg[\"default\"] = str(spec.default) # TODO: don't string this\n if spec.min is not None:\n arg[\"min\"] = str(spec.min) # TODO: don't string this\n if spec.max is not None:\n arg[\"max\"] = str(spec.max) # TODO: don't string this\n if spec.options is not None:\n arg[\"options\"] = [str(o) for o in spec.options]\n inputs.append(arg)\n return {\"inputs\": inputs}\n\n\ndef run_prediction(predictor, inputs, cleanup_functions):\n \"\"\"\n Run the predictor on the inputs, and append resulting paths\n to cleanup functions for removal.\n \"\"\"\n result = predictor.predict(**inputs)\n if isinstance(result, Path):\n cleanup_functions.append(result.unlink)\n return result\n\n\ndef load_predictor():\n # Assumes the working directory is /src\n config_path = os.path.abspath(\"cog.yaml\")\n try:\n with open(config_path) as fh:\n config = yaml.safe_load(fh)\n except FileNotFoundError:\n raise ConfigDoesNotExist(\n f\"Could not find {config_path}\",\n )\n\n if \"predict\" not in config:\n raise PredictorNotSet(\n \"Can't run predictions: 'predict' option not found in cog.yaml\"\n )\n\n # TODO: handle predict scripts in subdirectories\n predict_string = config[\"predict\"]\n module_name, class_name = predict_string.split(\".py:\", 1)\n module = importlib.import_module(module_name)\n predictor_class = getattr(module, class_name)\n return predictor_class()\n", "path": "python/cog/predictor.py"}], "after_files": [{"content": "from abc import ABC, abstractmethod\nimport importlib\nimport os.path\nfrom pathlib import Path\nfrom typing import Dict, 
Any\n\nimport yaml\n\nfrom .errors import ConfigDoesNotExist, PredictorNotSet\n\n\n# TODO(andreas): handle directory input\n# TODO(andreas): handle List[Dict[str, int]], etc.\n# TODO(andreas): model-level documentation\n\n\nclass Predictor(ABC):\n @abstractmethod\n def setup(self):\n pass\n\n @abstractmethod\n def predict(self, **kwargs):\n pass\n\n def get_type_signature(self):\n \"\"\"\n Returns a dict describing the inputs of the model.\n \"\"\"\n from .input import (\n get_type_name,\n UNSPECIFIED,\n )\n\n inputs = []\n if hasattr(self.predict, \"_inputs\"):\n input_specs = self.predict._inputs\n for spec in input_specs:\n arg: Dict[str, Any] = {\n \"name\": spec.name,\n \"type\": get_type_name(spec.type),\n }\n if spec.help:\n arg[\"help\"] = spec.help\n if spec.default is not UNSPECIFIED:\n arg[\"default\"] = str(spec.default) # TODO: don't string this\n if spec.min is not None:\n arg[\"min\"] = str(spec.min) # TODO: don't string this\n if spec.max is not None:\n arg[\"max\"] = str(spec.max) # TODO: don't string this\n if spec.options is not None:\n arg[\"options\"] = [str(o) for o in spec.options]\n inputs.append(arg)\n return {\"inputs\": inputs}\n\n\ndef run_prediction(predictor, inputs, cleanup_functions):\n \"\"\"\n Run the predictor on the inputs, and append resulting paths\n to cleanup functions for removal.\n \"\"\"\n result = predictor.predict(**inputs)\n if isinstance(result, Path):\n cleanup_functions.append(result.unlink)\n return result\n\n\ndef load_predictor():\n # Assumes the working directory is /src\n config_path = os.path.abspath(\"cog.yaml\")\n try:\n with open(config_path) as fh:\n config = yaml.safe_load(fh)\n except FileNotFoundError:\n raise ConfigDoesNotExist(\n f\"Could not find {config_path}\",\n )\n\n if \"predict\" not in config:\n raise PredictorNotSet(\n \"Can't run predictions: 'predict' option not found in cog.yaml\"\n )\n\n predict_string = config[\"predict\"]\n module_path, class_name = predict_string.split(\":\", 1)\n module_name = os.path.basename(module_path).split(\".py\", 1)[0]\n spec = importlib.util.spec_from_file_location(module_name, module_path)\n module = importlib.util.module_from_spec(spec)\n spec.loader.exec_module(module)\n predictor_class = getattr(module, class_name)\n return predictor_class()\n", "path": "python/cog/predictor.py"}]}
| 1,060 | 209 |
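The row above fixes cog's predictor loading by importing the predict module from its file path instead of a dotted module name, so specs like `mymodel/predict.py:MyModel` resolve correctly. A standalone sketch of that `importlib.util` pattern (the helper name and the example spec are illustrative, not part of cog):

```python
import importlib.util
import os.path

def load_class_from_spec(spec_string):
    """Load a class from a "path/to/file.py:ClassName" spec string."""
    module_path, class_name = spec_string.split(":", 1)
    # Derive the module name from the file name so subdirectories work too.
    module_name = os.path.basename(module_path).split(".py", 1)[0]
    spec = importlib.util.spec_from_file_location(module_name, module_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return getattr(module, class_name)

# Hypothetical usage: MyModel = load_class_from_spec("mymodel/predict.py:MyModel")
```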
gh_patches_debug_35521 | rasdani/github-patches | git_diff | docker__docker-py-929 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Using a docker/config.json file causes "TypeError: string indices must be integers"
Using a ~/.docker/config.json file causes docker-compose to output a Python error. @dnephin in https://github.com/docker/compose/issues/2697#issuecomment-172936366 suggests that this is an issue to be raised with the docker-py project instead of the compose project.
So here's a simple config.json file, and a dummy docker-compose.yml file which demonstrate the issue:
```
$ cat ~/.docker/config.json
{
"detachKeys": "ctrl-q,ctrl-u,ctrl-i,ctrl-t"
}
$ cat docker-compose.yml
version: '2'
services:
s1:
image: ubuntu
$ docker-compose ps
Traceback (most recent call last):
File "<string>", line 3, in <module>
File "/compose/compose/cli/main.py", line 55, in main
File "/compose/compose/cli/docopt_command.py", line 23, in sys_dispatch
File "/compose/compose/cli/docopt_command.py", line 26, in dispatch
File "/compose/compose/cli/main.py", line 172, in perform_command
File "/compose/compose/cli/command.py", line 52, in project_from_options
File "/compose/compose/cli/command.py", line 85, in get_project
File "/compose/compose/cli/command.py", line 66, in get_client
File "/compose/compose/cli/docker_client.py", line 37, in docker_client
File "/compose/venv/lib/python2.7/site-packages/docker/client.py", line 56, in __init__
File "/compose/venv/lib/python2.7/site-packages/docker/auth/auth.py", line 182, in load_config
File "/compose/venv/lib/python2.7/site-packages/docker/auth/auth.py", line 118, in parse_auth
TypeError: string indices must be integers
docker-compose returned -1
```
And if we delete the config and try again:
```
$ rm ~/.docker/config.json
$ docker-compose ps
Name Command State Ports
------------------------------
```
I'm using Docker Toolbox 1.10.0 on OSX 10.11.3. The docker container is being run on a VirtualBox VM managed by docker-machine. Here are some more version numbers:
```
$ docker version
Client:
Version: 1.10.0
API version: 1.22
Go version: go1.5.3
Git commit: 590d5108
Built: Thu Feb 4 18:18:11 2016
OS/Arch: darwin/amd64
Server:
Version: 1.10.0
API version: 1.22
Go version: go1.5.3
Git commit: 590d5108
Built: Thu Feb 4 19:55:25 2016
OS/Arch: linux/amd64
$ docker-compose version
docker-compose version 1.6.0, build d99cad6
docker-py version: 1.7.0
CPython version: 2.7.9
OpenSSL version: OpenSSL 1.0.1j 15 Oct 2014
```
--- END ISSUE ---
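The traceback bottoms out in `parse_auth`, which loops over every top-level key of the config file and indexes `entry['auth']`; when an entry is a plain string such as the `detachKeys` value, indexing it with a string key raises exactly this error. A minimal illustration (editor's sketch, not part of the issue report):

```python
# Simplified view of what parse_auth does for each top-level config entry.
entries = {"detachKeys": "ctrl-q,ctrl-u,ctrl-i,ctrl-t"}
for registry, entry in entries.items():
    entry["auth"]  # TypeError: string indices must be integers
```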
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docker/auth/auth.py`
Content:
```
1 # Copyright 2013 dotCloud inc.
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import base64
16 import json
17 import logging
18 import os
19
20 import six
21
22 from .. import errors
23
24 INDEX_NAME = 'docker.io'
25 INDEX_URL = 'https://{0}/v1/'.format(INDEX_NAME)
26 DOCKER_CONFIG_FILENAME = os.path.join('.docker', 'config.json')
27 LEGACY_DOCKER_CONFIG_FILENAME = '.dockercfg'
28
29 log = logging.getLogger(__name__)
30
31
32 def resolve_repository_name(repo_name):
33 if '://' in repo_name:
34 raise errors.InvalidRepository(
35 'Repository name cannot contain a scheme ({0})'.format(repo_name)
36 )
37
38 index_name, remote_name = split_repo_name(repo_name)
39 if index_name[0] == '-' or index_name[-1] == '-':
40 raise errors.InvalidRepository(
41 'Invalid index name ({0}). Cannot begin or end with a'
42 ' hyphen.'.format(index_name)
43 )
44 return resolve_index_name(index_name), remote_name
45
46
47 def resolve_index_name(index_name):
48 index_name = convert_to_hostname(index_name)
49 if index_name == 'index.'+INDEX_NAME:
50 index_name = INDEX_NAME
51 return index_name
52
53
54 def split_repo_name(repo_name):
55 parts = repo_name.split('/', 1)
56 if len(parts) == 1 or (
57 '.' not in parts[0] and ':' not in parts[0] and parts[0] != 'localhost'
58 ):
59 # This is a docker index repo (ex: username/foobar or ubuntu)
60 return INDEX_NAME, repo_name
61 return tuple(parts)
62
63
64 def resolve_authconfig(authconfig, registry=None):
65 """
66 Returns the authentication data from the given auth configuration for a
67 specific registry. As with the Docker client, legacy entries in the config
68 with full URLs are stripped down to hostnames before checking for a match.
69 Returns None if no match was found.
70 """
71 # Default to the public index server
72 registry = resolve_index_name(registry) if registry else INDEX_NAME
73 log.debug("Looking for auth entry for {0}".format(repr(registry)))
74
75 if registry in authconfig:
76 log.debug("Found {0}".format(repr(registry)))
77 return authconfig[registry]
78
79 for key, config in six.iteritems(authconfig):
80 if resolve_index_name(key) == registry:
81 log.debug("Found {0}".format(repr(key)))
82 return config
83
84 log.debug("No entry found")
85 return None
86
87
88 def convert_to_hostname(url):
89 return url.replace('http://', '').replace('https://', '').split('/', 1)[0]
90
91
92 def decode_auth(auth):
93 if isinstance(auth, six.string_types):
94 auth = auth.encode('ascii')
95 s = base64.b64decode(auth)
96 login, pwd = s.split(b':', 1)
97 return login.decode('utf8'), pwd.decode('utf8')
98
99
100 def encode_header(auth):
101 auth_json = json.dumps(auth).encode('ascii')
102 return base64.urlsafe_b64encode(auth_json)
103
104
105 def parse_auth(entries):
106 """
107 Parses authentication entries
108
109 Args:
110 entries: Dict of authentication entries.
111
112 Returns:
113 Authentication registry.
114 """
115
116 conf = {}
117 for registry, entry in six.iteritems(entries):
118 username, password = decode_auth(entry['auth'])
119 log.debug(
120 'Found entry (registry={0}, username={1})'
121 .format(repr(registry), repr(username))
122 )
123 conf[registry] = {
124 'username': username,
125 'password': password,
126 'email': entry.get('email'),
127 'serveraddress': registry,
128 }
129 return conf
130
131
132 def find_config_file(config_path=None):
133 environment_path = os.path.join(
134 os.environ.get('DOCKER_CONFIG'),
135 os.path.basename(DOCKER_CONFIG_FILENAME)
136 ) if os.environ.get('DOCKER_CONFIG') else None
137
138 paths = [
139 config_path, # 1
140 environment_path, # 2
141 os.path.join(os.path.expanduser('~'), DOCKER_CONFIG_FILENAME), # 3
142 os.path.join(
143 os.path.expanduser('~'), LEGACY_DOCKER_CONFIG_FILENAME
144 ) # 4
145 ]
146
147 for path in paths:
148 if path and os.path.exists(path):
149 return path
150 return None
151
152
153 def load_config(config_path=None):
154 """
155 Loads authentication data from a Docker configuration file in the given
156 root directory or if config_path is passed use given path.
157 Lookup priority:
158 explicit config_path parameter > DOCKER_CONFIG environment variable >
159 ~/.docker/config.json > ~/.dockercfg
160 """
161 config_file = find_config_file(config_path)
162
163 if not config_file:
164 log.debug("File doesn't exist")
165 return {}
166
167 try:
168 with open(config_file) as f:
169 data = json.load(f)
170 res = {}
171 if data.get('auths'):
172 log.debug("Found 'auths' section")
173 res.update(parse_auth(data['auths']))
174 if data.get('HttpHeaders'):
175 log.debug("Found 'HttpHeaders' section")
176 res.update({'HttpHeaders': data['HttpHeaders']})
177 if res:
178 return res
179 else:
180 log.debug("Couldn't find 'auths' or 'HttpHeaders' sections")
181 f.seek(0)
182 return parse_auth(json.load(f))
183 except (IOError, KeyError, ValueError) as e:
184 # Likely missing new Docker config file or it's in an
185 # unknown format, continue to attempt to read old location
186 # and format.
187 log.debug(e)
188
189 log.debug("Attempting to parse legacy auth file format")
190 try:
191 data = []
192 with open(config_file) as f:
193 for line in f.readlines():
194 data.append(line.strip().split(' = ')[1])
195 if len(data) < 2:
196 # Not enough data
197 raise errors.InvalidConfigFile(
198 'Invalid or empty configuration file!'
199 )
200
201 username, password = decode_auth(data[0])
202 return {
203 INDEX_NAME: {
204 'username': username,
205 'password': password,
206 'email': data[1],
207 'serveraddress': INDEX_URL,
208 }
209 }
210 except Exception as e:
211 log.debug(e)
212 pass
213
214 log.debug("All parsing attempts failed - returning empty config")
215 return {}
216
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docker/auth/auth.py b/docker/auth/auth.py
--- a/docker/auth/auth.py
+++ b/docker/auth/auth.py
@@ -46,7 +46,7 @@
def resolve_index_name(index_name):
index_name = convert_to_hostname(index_name)
- if index_name == 'index.'+INDEX_NAME:
+ if index_name == 'index.' + INDEX_NAME:
index_name = INDEX_NAME
return index_name
@@ -102,12 +102,14 @@
return base64.urlsafe_b64encode(auth_json)
-def parse_auth(entries):
+def parse_auth(entries, raise_on_error=False):
"""
Parses authentication entries
Args:
- entries: Dict of authentication entries.
+ entries: Dict of authentication entries.
+ raise_on_error: If set to true, an invalid format will raise
+ InvalidConfigFile
Returns:
Authentication registry.
@@ -115,6 +117,19 @@
conf = {}
for registry, entry in six.iteritems(entries):
+ if not (isinstance(entry, dict) and 'auth' in entry):
+ log.debug(
+ 'Config entry for key {0} is not auth config'.format(registry)
+ )
+ # We sometimes fall back to parsing the whole config as if it was
+ # the auth config by itself, for legacy purposes. In that case, we
+ # fail silently and return an empty conf if any of the keys is not
+ # formatted properly.
+ if raise_on_error:
+ raise errors.InvalidConfigFile(
+ 'Invalid configuration for registry {0}'.format(registry)
+ )
+ return {}
username, password = decode_auth(entry['auth'])
log.debug(
'Found entry (registry={0}, username={1})'
@@ -170,7 +185,7 @@
res = {}
if data.get('auths'):
log.debug("Found 'auths' section")
- res.update(parse_auth(data['auths']))
+ res.update(parse_auth(data['auths'], raise_on_error=True))
if data.get('HttpHeaders'):
log.debug("Found 'HttpHeaders' section")
res.update({'HttpHeaders': data['HttpHeaders']})
|
{"golden_diff": "diff --git a/docker/auth/auth.py b/docker/auth/auth.py\n--- a/docker/auth/auth.py\n+++ b/docker/auth/auth.py\n@@ -46,7 +46,7 @@\n \n def resolve_index_name(index_name):\n index_name = convert_to_hostname(index_name)\n- if index_name == 'index.'+INDEX_NAME:\n+ if index_name == 'index.' + INDEX_NAME:\n index_name = INDEX_NAME\n return index_name\n \n@@ -102,12 +102,14 @@\n return base64.urlsafe_b64encode(auth_json)\n \n \n-def parse_auth(entries):\n+def parse_auth(entries, raise_on_error=False):\n \"\"\"\n Parses authentication entries\n \n Args:\n- entries: Dict of authentication entries.\n+ entries: Dict of authentication entries.\n+ raise_on_error: If set to true, an invalid format will raise\n+ InvalidConfigFile\n \n Returns:\n Authentication registry.\n@@ -115,6 +117,19 @@\n \n conf = {}\n for registry, entry in six.iteritems(entries):\n+ if not (isinstance(entry, dict) and 'auth' in entry):\n+ log.debug(\n+ 'Config entry for key {0} is not auth config'.format(registry)\n+ )\n+ # We sometimes fall back to parsing the whole config as if it was\n+ # the auth config by itself, for legacy purposes. In that case, we\n+ # fail silently and return an empty conf if any of the keys is not\n+ # formatted properly.\n+ if raise_on_error:\n+ raise errors.InvalidConfigFile(\n+ 'Invalid configuration for registry {0}'.format(registry)\n+ )\n+ return {}\n username, password = decode_auth(entry['auth'])\n log.debug(\n 'Found entry (registry={0}, username={1})'\n@@ -170,7 +185,7 @@\n res = {}\n if data.get('auths'):\n log.debug(\"Found 'auths' section\")\n- res.update(parse_auth(data['auths']))\n+ res.update(parse_auth(data['auths'], raise_on_error=True))\n if data.get('HttpHeaders'):\n log.debug(\"Found 'HttpHeaders' section\")\n res.update({'HttpHeaders': data['HttpHeaders']})\n", "issue": "Using a docker/config.json file causes \"TypeError: string indices must be integers\"\nUsing a ~/.docker/config.json file causes docker-compose to output a Python error. 
@dnephin in https://github.com/docker/compose/issues/2697#issuecomment-172936366 suggests that this is an issue to be raised with the docker-py project instead of the compose project.\n\nSo here's a simple config.json file, and a dummy docker-compose.yml file which demonstrate the issue:\n\n```\n$ cat ~/.docker/config.json \n{\n \"detachKeys\": \"ctrl-q,ctrl-u,ctrl-i,ctrl-t\"\n}\n$ cat docker-compose.yml \nversion: '2'\nservices:\n s1:\n image: ubuntu\n$ docker-compose ps\nTraceback (most recent call last):\n File \"<string>\", line 3, in <module>\n File \"/compose/compose/cli/main.py\", line 55, in main\n File \"/compose/compose/cli/docopt_command.py\", line 23, in sys_dispatch\n File \"/compose/compose/cli/docopt_command.py\", line 26, in dispatch\n File \"/compose/compose/cli/main.py\", line 172, in perform_command\n File \"/compose/compose/cli/command.py\", line 52, in project_from_options\n File \"/compose/compose/cli/command.py\", line 85, in get_project\n File \"/compose/compose/cli/command.py\", line 66, in get_client\n File \"/compose/compose/cli/docker_client.py\", line 37, in docker_client\n File \"/compose/venv/lib/python2.7/site-packages/docker/client.py\", line 56, in __init__\n File \"/compose/venv/lib/python2.7/site-packages/docker/auth/auth.py\", line 182, in load_config\n File \"/compose/venv/lib/python2.7/site-packages/docker/auth/auth.py\", line 118, in parse_auth\nTypeError: string indices must be integers\ndocker-compose returned -1\n```\n\nAnd if we delete the config and try again:\n\n```\n$ rm ~/.docker/config.json \n$ docker-compose ps\nName Command State Ports \n------------------------------\n```\n\nI'm using Docker Toolbox 1.10.0 on OSX 10.11.3. The docker container is being run on a VirtualBox VM managed by docker-machine. Here are some more version numbers:\n\n```\n$ docker version\nClient:\n Version: 1.10.0\n API version: 1.22\n Go version: go1.5.3\n Git commit: 590d5108\n Built: Thu Feb 4 18:18:11 2016\n OS/Arch: darwin/amd64\n\nServer:\n Version: 1.10.0\n API version: 1.22\n Go version: go1.5.3\n Git commit: 590d5108\n Built: Thu Feb 4 19:55:25 2016\n OS/Arch: linux/amd64\n$ docker-compose version\ndocker-compose version 1.6.0, build d99cad6\ndocker-py version: 1.7.0\nCPython version: 2.7.9\nOpenSSL version: OpenSSL 1.0.1j 15 Oct 2014\n```\n\n", "before_files": [{"content": "# Copyright 2013 dotCloud inc.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport base64\nimport json\nimport logging\nimport os\n\nimport six\n\nfrom .. 
import errors\n\nINDEX_NAME = 'docker.io'\nINDEX_URL = 'https://{0}/v1/'.format(INDEX_NAME)\nDOCKER_CONFIG_FILENAME = os.path.join('.docker', 'config.json')\nLEGACY_DOCKER_CONFIG_FILENAME = '.dockercfg'\n\nlog = logging.getLogger(__name__)\n\n\ndef resolve_repository_name(repo_name):\n if '://' in repo_name:\n raise errors.InvalidRepository(\n 'Repository name cannot contain a scheme ({0})'.format(repo_name)\n )\n\n index_name, remote_name = split_repo_name(repo_name)\n if index_name[0] == '-' or index_name[-1] == '-':\n raise errors.InvalidRepository(\n 'Invalid index name ({0}). Cannot begin or end with a'\n ' hyphen.'.format(index_name)\n )\n return resolve_index_name(index_name), remote_name\n\n\ndef resolve_index_name(index_name):\n index_name = convert_to_hostname(index_name)\n if index_name == 'index.'+INDEX_NAME:\n index_name = INDEX_NAME\n return index_name\n\n\ndef split_repo_name(repo_name):\n parts = repo_name.split('/', 1)\n if len(parts) == 1 or (\n '.' not in parts[0] and ':' not in parts[0] and parts[0] != 'localhost'\n ):\n # This is a docker index repo (ex: username/foobar or ubuntu)\n return INDEX_NAME, repo_name\n return tuple(parts)\n\n\ndef resolve_authconfig(authconfig, registry=None):\n \"\"\"\n Returns the authentication data from the given auth configuration for a\n specific registry. As with the Docker client, legacy entries in the config\n with full URLs are stripped down to hostnames before checking for a match.\n Returns None if no match was found.\n \"\"\"\n # Default to the public index server\n registry = resolve_index_name(registry) if registry else INDEX_NAME\n log.debug(\"Looking for auth entry for {0}\".format(repr(registry)))\n\n if registry in authconfig:\n log.debug(\"Found {0}\".format(repr(registry)))\n return authconfig[registry]\n\n for key, config in six.iteritems(authconfig):\n if resolve_index_name(key) == registry:\n log.debug(\"Found {0}\".format(repr(key)))\n return config\n\n log.debug(\"No entry found\")\n return None\n\n\ndef convert_to_hostname(url):\n return url.replace('http://', '').replace('https://', '').split('/', 1)[0]\n\n\ndef decode_auth(auth):\n if isinstance(auth, six.string_types):\n auth = auth.encode('ascii')\n s = base64.b64decode(auth)\n login, pwd = s.split(b':', 1)\n return login.decode('utf8'), pwd.decode('utf8')\n\n\ndef encode_header(auth):\n auth_json = json.dumps(auth).encode('ascii')\n return base64.urlsafe_b64encode(auth_json)\n\n\ndef parse_auth(entries):\n \"\"\"\n Parses authentication entries\n\n Args:\n entries: Dict of authentication entries.\n\n Returns:\n Authentication registry.\n \"\"\"\n\n conf = {}\n for registry, entry in six.iteritems(entries):\n username, password = decode_auth(entry['auth'])\n log.debug(\n 'Found entry (registry={0}, username={1})'\n .format(repr(registry), repr(username))\n )\n conf[registry] = {\n 'username': username,\n 'password': password,\n 'email': entry.get('email'),\n 'serveraddress': registry,\n }\n return conf\n\n\ndef find_config_file(config_path=None):\n environment_path = os.path.join(\n os.environ.get('DOCKER_CONFIG'),\n os.path.basename(DOCKER_CONFIG_FILENAME)\n ) if os.environ.get('DOCKER_CONFIG') else None\n\n paths = [\n config_path, # 1\n environment_path, # 2\n os.path.join(os.path.expanduser('~'), DOCKER_CONFIG_FILENAME), # 3\n os.path.join(\n os.path.expanduser('~'), LEGACY_DOCKER_CONFIG_FILENAME\n ) # 4\n ]\n\n for path in paths:\n if path and os.path.exists(path):\n return path\n return None\n\n\ndef load_config(config_path=None):\n \"\"\"\n Loads 
authentication data from a Docker configuration file in the given\n root directory or if config_path is passed use given path.\n Lookup priority:\n explicit config_path parameter > DOCKER_CONFIG environment variable >\n ~/.docker/config.json > ~/.dockercfg\n \"\"\"\n config_file = find_config_file(config_path)\n\n if not config_file:\n log.debug(\"File doesn't exist\")\n return {}\n\n try:\n with open(config_file) as f:\n data = json.load(f)\n res = {}\n if data.get('auths'):\n log.debug(\"Found 'auths' section\")\n res.update(parse_auth(data['auths']))\n if data.get('HttpHeaders'):\n log.debug(\"Found 'HttpHeaders' section\")\n res.update({'HttpHeaders': data['HttpHeaders']})\n if res:\n return res\n else:\n log.debug(\"Couldn't find 'auths' or 'HttpHeaders' sections\")\n f.seek(0)\n return parse_auth(json.load(f))\n except (IOError, KeyError, ValueError) as e:\n # Likely missing new Docker config file or it's in an\n # unknown format, continue to attempt to read old location\n # and format.\n log.debug(e)\n\n log.debug(\"Attempting to parse legacy auth file format\")\n try:\n data = []\n with open(config_file) as f:\n for line in f.readlines():\n data.append(line.strip().split(' = ')[1])\n if len(data) < 2:\n # Not enough data\n raise errors.InvalidConfigFile(\n 'Invalid or empty configuration file!'\n )\n\n username, password = decode_auth(data[0])\n return {\n INDEX_NAME: {\n 'username': username,\n 'password': password,\n 'email': data[1],\n 'serveraddress': INDEX_URL,\n }\n }\n except Exception as e:\n log.debug(e)\n pass\n\n log.debug(\"All parsing attempts failed - returning empty config\")\n return {}\n", "path": "docker/auth/auth.py"}], "after_files": [{"content": "# Copyright 2013 dotCloud inc.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport base64\nimport json\nimport logging\nimport os\n\nimport six\n\nfrom .. import errors\n\nINDEX_NAME = 'docker.io'\nINDEX_URL = 'https://{0}/v1/'.format(INDEX_NAME)\nDOCKER_CONFIG_FILENAME = os.path.join('.docker', 'config.json')\nLEGACY_DOCKER_CONFIG_FILENAME = '.dockercfg'\n\nlog = logging.getLogger(__name__)\n\n\ndef resolve_repository_name(repo_name):\n if '://' in repo_name:\n raise errors.InvalidRepository(\n 'Repository name cannot contain a scheme ({0})'.format(repo_name)\n )\n\n index_name, remote_name = split_repo_name(repo_name)\n if index_name[0] == '-' or index_name[-1] == '-':\n raise errors.InvalidRepository(\n 'Invalid index name ({0}). Cannot begin or end with a'\n ' hyphen.'.format(index_name)\n )\n return resolve_index_name(index_name), remote_name\n\n\ndef resolve_index_name(index_name):\n index_name = convert_to_hostname(index_name)\n if index_name == 'index.' + INDEX_NAME:\n index_name = INDEX_NAME\n return index_name\n\n\ndef split_repo_name(repo_name):\n parts = repo_name.split('/', 1)\n if len(parts) == 1 or (\n '.' 
not in parts[0] and ':' not in parts[0] and parts[0] != 'localhost'\n ):\n # This is a docker index repo (ex: username/foobar or ubuntu)\n return INDEX_NAME, repo_name\n return tuple(parts)\n\n\ndef resolve_authconfig(authconfig, registry=None):\n \"\"\"\n Returns the authentication data from the given auth configuration for a\n specific registry. As with the Docker client, legacy entries in the config\n with full URLs are stripped down to hostnames before checking for a match.\n Returns None if no match was found.\n \"\"\"\n # Default to the public index server\n registry = resolve_index_name(registry) if registry else INDEX_NAME\n log.debug(\"Looking for auth entry for {0}\".format(repr(registry)))\n\n if registry in authconfig:\n log.debug(\"Found {0}\".format(repr(registry)))\n return authconfig[registry]\n\n for key, config in six.iteritems(authconfig):\n if resolve_index_name(key) == registry:\n log.debug(\"Found {0}\".format(repr(key)))\n return config\n\n log.debug(\"No entry found\")\n return None\n\n\ndef convert_to_hostname(url):\n return url.replace('http://', '').replace('https://', '').split('/', 1)[0]\n\n\ndef decode_auth(auth):\n if isinstance(auth, six.string_types):\n auth = auth.encode('ascii')\n s = base64.b64decode(auth)\n login, pwd = s.split(b':', 1)\n return login.decode('utf8'), pwd.decode('utf8')\n\n\ndef encode_header(auth):\n auth_json = json.dumps(auth).encode('ascii')\n return base64.urlsafe_b64encode(auth_json)\n\n\ndef parse_auth(entries, raise_on_error=False):\n \"\"\"\n Parses authentication entries\n\n Args:\n entries: Dict of authentication entries.\n raise_on_error: If set to true, an invalid format will raise\n InvalidConfigFile\n\n Returns:\n Authentication registry.\n \"\"\"\n\n conf = {}\n for registry, entry in six.iteritems(entries):\n if not (isinstance(entry, dict) and 'auth' in entry):\n log.debug(\n 'Config entry for key {0} is not auth config'.format(registry)\n )\n # We sometimes fall back to parsing the whole config as if it was\n # the auth config by itself, for legacy purposes. 
In that case, we\n # fail silently and return an empty conf if any of the keys is not\n # formatted properly.\n if raise_on_error:\n raise errors.InvalidConfigFile(\n 'Invalid configuration for registry {0}'.format(registry)\n )\n return {}\n username, password = decode_auth(entry['auth'])\n log.debug(\n 'Found entry (registry={0}, username={1})'\n .format(repr(registry), repr(username))\n )\n conf[registry] = {\n 'username': username,\n 'password': password,\n 'email': entry.get('email'),\n 'serveraddress': registry,\n }\n return conf\n\n\ndef find_config_file(config_path=None):\n environment_path = os.path.join(\n os.environ.get('DOCKER_CONFIG'),\n os.path.basename(DOCKER_CONFIG_FILENAME)\n ) if os.environ.get('DOCKER_CONFIG') else None\n\n paths = [\n config_path, # 1\n environment_path, # 2\n os.path.join(os.path.expanduser('~'), DOCKER_CONFIG_FILENAME), # 3\n os.path.join(\n os.path.expanduser('~'), LEGACY_DOCKER_CONFIG_FILENAME\n ) # 4\n ]\n\n for path in paths:\n if path and os.path.exists(path):\n return path\n return None\n\n\ndef load_config(config_path=None):\n \"\"\"\n Loads authentication data from a Docker configuration file in the given\n root directory or if config_path is passed use given path.\n Lookup priority:\n explicit config_path parameter > DOCKER_CONFIG environment variable >\n ~/.docker/config.json > ~/.dockercfg\n \"\"\"\n config_file = find_config_file(config_path)\n\n if not config_file:\n log.debug(\"File doesn't exist\")\n return {}\n\n try:\n with open(config_file) as f:\n data = json.load(f)\n res = {}\n if data.get('auths'):\n log.debug(\"Found 'auths' section\")\n res.update(parse_auth(data['auths'], raise_on_error=True))\n if data.get('HttpHeaders'):\n log.debug(\"Found 'HttpHeaders' section\")\n res.update({'HttpHeaders': data['HttpHeaders']})\n if res:\n return res\n else:\n log.debug(\"Couldn't find 'auths' or 'HttpHeaders' sections\")\n f.seek(0)\n return parse_auth(json.load(f))\n except (IOError, KeyError, ValueError) as e:\n # Likely missing new Docker config file or it's in an\n # unknown format, continue to attempt to read old location\n # and format.\n log.debug(e)\n\n log.debug(\"Attempting to parse legacy auth file format\")\n try:\n data = []\n with open(config_file) as f:\n for line in f.readlines():\n data.append(line.strip().split(' = ')[1])\n if len(data) < 2:\n # Not enough data\n raise errors.InvalidConfigFile(\n 'Invalid or empty configuration file!'\n )\n\n username, password = decode_auth(data[0])\n return {\n INDEX_NAME: {\n 'username': username,\n 'password': password,\n 'email': data[1],\n 'serveraddress': INDEX_URL,\n }\n }\n except Exception as e:\n log.debug(e)\n pass\n\n log.debug(\"All parsing attempts failed - returning empty config\")\n return {}\n", "path": "docker/auth/auth.py"}]}
| 3,063 | 498 |
gh_patches_debug_31576 | rasdani/github-patches | git_diff | deepset-ai__haystack-7904 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`ContextRelevanceEvaluator`: statements extraction can be made more robust
Slightly different inputs produce very different outputs in the `ContextRelevanceEvaluator`.
**Example in the docs**:
```python
from haystack.components.evaluators import ContextRelevanceEvaluator
questions = ["Who created the Python language?"]
contexts = [
[
"Python, created by Guido van Rossum in the late 1980s, is a high-level general-purpose programming language. Its design philosophy emphasizes code readability, and its language constructs aim to help programmers write clear, logical code for both small and large-scale software projects."
],
]
evaluator = ContextRelevanceEvaluator()
result = evaluator.run(questions=questions, contexts=contexts)
print(result["score"])
# 1.0
print(result["individual_scores"])
# [1.0]
print(result["results"])
# [{'statements': ['Python, created by Guido van Rossum in the late 1980s.'], 'statement_scores': [1], 'score': 1.0}]
```
**A different example**:
```python
from haystack.components.evaluators import ContextRelevanceEvaluator
questions = ["Who created the Python language?"]
contexts = [
[
"Python, created by Guido van Rossum in the late 1980s, is a high-level general-purpose programming language. Its design philosophy emphasizes code readability, and its language constructs aim to help programmers write clear, logical code for both small and large-scale software projects.",
"Java is a high-level, class-based, object-oriented programming language. It allows you to write once, run anywhere, meaning that compiled Java code can run on all platforms that support Java without the need for recompilation.",
"Scala is a high-level, statically typed programming language.",
],
]
evaluator = ContextRelevanceEvaluator()
result = evaluator.run(questions=questions, contexts=contexts)
print(result)
# {'results': [{'statements': ['Python, created by Guido van Rossum in the late 1980s, is a high-level general-purpose programming language.', 'Its design philosophy emphasizes code readability, and its language constructs aim to help programmers write clear, logical code for both small and large-scale software projects.'], 'statement_scores': [1, 0], 'score': 0.5}], 'score': 0.5, 'individual_scores': [0.5]}
```
In this case, the first context is broken into 2 statements, while the 2nd and 3rd contexts are not considered.
**Expected behavior**
The extraction of statements should be more robust/consistent.
**Additional context**
From @julian-risch:
> I checked that the prompt sent to the LLM correctly contains all three contexts. The problem is that the LLM extracts statements only for the first context, at least in this example. It seems that the prompt instructions aren't good enough. You could try extending the _DEFAULT_EXAMPLES with an example that has multiple contexts for one question.
**System:**
- Haystack version (commit or version number): 2.2.0
--- END ISSUE ---
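A practical way to try the suggestion from the comment above without patching the library is to pass a custom few-shot example that spans several contexts through the evaluator's `examples` parameter, which accepts the same `inputs`/`outputs` structure as `_DEFAULT_EXAMPLES`. The example values below are made up for illustration:

```python
from haystack.components.evaluators import ContextRelevanceEvaluator

# One question paired with three contexts, so the LLM sees statements being
# extracted (and scored) from every context, not just the first one.
multi_context_example = {
    "inputs": {
        "questions": "Who created the Python language?",
        "contexts": [
            "Python was created by Guido van Rossum in the late 1980s.",
            "Java is a class-based, object-oriented programming language.",
            "Scala is a statically typed programming language.",
        ],
    },
    "outputs": {
        "statements": [
            "Python was created by Guido van Rossum in the late 1980s.",
            "Java is a class-based, object-oriented programming language.",
            "Scala is a statically typed programming language.",
        ],
        "statement_scores": [1, 0, 0],
    },
}

# Requires OPENAI_API_KEY to be set, as with the default constructor.
evaluator = ContextRelevanceEvaluator(examples=[multi_context_example])
```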
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `haystack/components/evaluators/context_relevance.py`
Content:
```
1 # SPDX-FileCopyrightText: 2022-present deepset GmbH <[email protected]>
2 #
3 # SPDX-License-Identifier: Apache-2.0
4
5 from typing import Any, Dict, List, Optional
6
7 from numpy import mean as np_mean
8
9 from haystack import component, default_from_dict, default_to_dict
10 from haystack.components.evaluators.llm_evaluator import LLMEvaluator
11 from haystack.utils import Secret, deserialize_secrets_inplace
12
13 # Private global variable for default examples to include in the prompt if the user does not provide any examples
14 _DEFAULT_EXAMPLES = [
15 {
16 "inputs": {
17 "questions": "What is the capital of Germany?",
18 "contexts": ["Berlin is the capital of Germany and was founded in 1244."],
19 },
20 "outputs": {
21 "statements": ["Berlin is the capital of Germany.", "Berlin was founded in 1244."],
22 "statement_scores": [1, 0],
23 },
24 },
25 {
26 "inputs": {"questions": "What is the capital of France?", "contexts": ["Berlin is the capital of Germany."]},
27 "outputs": {"statements": ["Berlin is the capital of Germany."], "statement_scores": [0]},
28 },
29 {
30 "inputs": {"questions": "What is the capital of Italy?", "contexts": ["Rome is the capital of Italy."]},
31 "outputs": {"statements": ["Rome is the capital of Italy."], "statement_scores": [1]},
32 },
33 ]
34
35
36 @component
37 class ContextRelevanceEvaluator(LLMEvaluator):
38 """
39 Evaluator that checks if a provided context is relevant to the question.
40
41 An LLM breaks up the context into multiple statements and checks whether each statement
42 is relevant for answering a question.
43 The final score for the context relevance is a number from 0.0 to 1.0. It represents the proportion of
44 statements that can be inferred from the provided contexts.
45
46 Usage example:
47 ```python
48 from haystack.components.evaluators import ContextRelevanceEvaluator
49
50 questions = ["Who created the Python language?"]
51 contexts = [
52 [(
53 "Python, created by Guido van Rossum in the late 1980s, is a high-level general-purpose programming "
54 "language. Its design philosophy emphasizes code readability, and its language constructs aim to help "
55 "programmers write clear, logical code for both small and large-scale software projects."
56 )],
57 ]
58
59 evaluator = ContextRelevanceEvaluator()
60 result = evaluator.run(questions=questions, contexts=contexts)
61 print(result["score"])
62 # 1.0
63 print(result["individual_scores"])
64 # [1.0]
65 print(result["results"])
66 # [{
67 # 'statements': ['Python, created by Guido van Rossum in the late 1980s.'],
68 # 'statement_scores': [1],
69 # 'score': 1.0
70 # }]
71 ```
72 """
73
74 def __init__(
75 self,
76 examples: Optional[List[Dict[str, Any]]] = None,
77 progress_bar: bool = True,
78 api: str = "openai",
79 api_key: Secret = Secret.from_env_var("OPENAI_API_KEY"),
80 raise_on_failure: bool = True,
81 ):
82 """
83 Creates an instance of ContextRelevanceEvaluator.
84
85 :param examples:
86 Optional few-shot examples conforming to the expected input and output format of ContextRelevanceEvaluator.
87 Default examples will be used if none are provided.
88 Each example must be a dictionary with keys "inputs" and "outputs".
89 "inputs" must be a dictionary with keys "questions" and "contexts".
90 "outputs" must be a dictionary with "statements" and "statement_scores".
91 Expected format:
92 [{
93 "inputs": {
94 "questions": "What is the capital of Italy?", "contexts": ["Rome is the capital of Italy."],
95 },
96 "outputs": {
97 "statements": ["Rome is the capital of Italy."],
98 "statement_scores": [1],
99 },
100 }]
101 :param progress_bar:
102 Whether to show a progress bar during the evaluation.
103 :param api:
104 The API to use for calling an LLM through a Generator.
105 Supported APIs: "openai".
106 :param api_key:
107 The API key.
108 :param raise_on_failure:
109 Whether to raise an exception if the API call fails.
110
111 """
112 self.instructions = (
113 "Your task is to judge how relevant the provided context is for answering a question. "
114 "First, please extract statements from the provided context. "
115 "Second, calculate a relevance score for each statement in the context. "
116 "The score is 1 if the statement is relevant to answer the question or 0 if it is not relevant."
117 )
118 self.inputs = [("questions", List[str]), ("contexts", List[List[str]])]
119 self.outputs = ["statements", "statement_scores"]
120 self.examples = examples or _DEFAULT_EXAMPLES
121 self.api = api
122 self.api_key = api_key
123
124 super(ContextRelevanceEvaluator, self).__init__(
125 instructions=self.instructions,
126 inputs=self.inputs,
127 outputs=self.outputs,
128 examples=self.examples,
129 api=self.api,
130 api_key=self.api_key,
131 raise_on_failure=raise_on_failure,
132 progress_bar=progress_bar,
133 )
134
135 @component.output_types(individual_scores=List[int], score=float, results=List[Dict[str, Any]])
136 def run(self, questions: List[str], contexts: List[List[str]]) -> Dict[str, Any]:
137 """
138 Run the LLM evaluator.
139
140 :param questions:
141 A list of questions.
142 :param contexts:
143 A list of lists of contexts. Each list of contexts corresponds to one question.
144 :returns:
145 A dictionary with the following outputs:
146 - `score`: Mean context relevance score over all the provided input questions.
147 - `individual_scores`: A list of context relevance scores for each input question.
148 - `results`: A list of dictionaries with `statements` and `statement_scores` for each input context.
149 """
150 result = super(ContextRelevanceEvaluator, self).run(questions=questions, contexts=contexts)
151
152 # calculate average statement relevance score per query
153 for idx, res in enumerate(result["results"]):
154 if res is None:
155 result["results"][idx] = {"statements": [], "statement_scores": [], "score": float("nan")}
156 continue
157 if not res["statements"]:
158 res["score"] = 0
159 else:
160 res["score"] = np_mean(res["statement_scores"])
161
162 # calculate average context relevance score over all queries
163 result["score"] = np_mean([res["score"] for res in result["results"]])
164 result["individual_scores"] = [res["score"] for res in result["results"]]
165
166 return result
167
168 def to_dict(self) -> Dict[str, Any]:
169 """
170 Serialize this component to a dictionary.
171
172 :returns:
173 A dictionary with serialized data.
174 """
175 return default_to_dict(
176 self,
177 api=self.api,
178 api_key=self.api_key.to_dict() if self.api_key else None,
179 examples=self.examples,
180 progress_bar=self.progress_bar,
181 raise_on_failure=self.raise_on_failure,
182 )
183
184 @classmethod
185 def from_dict(cls, data: Dict[str, Any]) -> "ContextRelevanceEvaluator":
186 """
187 Deserialize this component from a dictionary.
188
189 :param data:
190 The dictionary representation of this component.
191 :returns:
192 The deserialized component instance.
193 """
194 deserialize_secrets_inplace(data["init_parameters"], keys=["api_key"])
195 return default_from_dict(cls, data)
196
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/haystack/components/evaluators/context_relevance.py b/haystack/components/evaluators/context_relevance.py
--- a/haystack/components/evaluators/context_relevance.py
+++ b/haystack/components/evaluators/context_relevance.py
@@ -23,8 +23,23 @@
},
},
{
- "inputs": {"questions": "What is the capital of France?", "contexts": ["Berlin is the capital of Germany."]},
- "outputs": {"statements": ["Berlin is the capital of Germany."], "statement_scores": [0]},
+ "inputs": {
+ "questions": "What is the capital of France?",
+ "contexts": [
+ "Berlin is the capital of Germany and was founded in 1244.",
+ "Europe is a continent with 44 countries.",
+ "Madrid is the capital of Spain.",
+ ],
+ },
+ "outputs": {
+ "statements": [
+ "Berlin is the capital of Germany.",
+ "Berlin was founded in 1244.",
+ "Europe is a continent with 44 countries.",
+ "Madrid is the capital of Spain.",
+ ],
+ "statement_scores": [0, 0, 0, 0],
+ },
},
{
"inputs": {"questions": "What is the capital of Italy?", "contexts": ["Rome is the capital of Italy."]},
@@ -113,7 +128,8 @@
"Your task is to judge how relevant the provided context is for answering a question. "
"First, please extract statements from the provided context. "
"Second, calculate a relevance score for each statement in the context. "
- "The score is 1 if the statement is relevant to answer the question or 0 if it is not relevant."
+ "The score is 1 if the statement is relevant to answer the question or 0 if it is not relevant. "
+ "Each statement should be scored individually."
)
self.inputs = [("questions", List[str]), ("contexts", List[List[str]])]
self.outputs = ["statements", "statement_scores"]
|
{"golden_diff": "diff --git a/haystack/components/evaluators/context_relevance.py b/haystack/components/evaluators/context_relevance.py\n--- a/haystack/components/evaluators/context_relevance.py\n+++ b/haystack/components/evaluators/context_relevance.py\n@@ -23,8 +23,23 @@\n },\n },\n {\n- \"inputs\": {\"questions\": \"What is the capital of France?\", \"contexts\": [\"Berlin is the capital of Germany.\"]},\n- \"outputs\": {\"statements\": [\"Berlin is the capital of Germany.\"], \"statement_scores\": [0]},\n+ \"inputs\": {\n+ \"questions\": \"What is the capital of France?\",\n+ \"contexts\": [\n+ \"Berlin is the capital of Germany and was founded in 1244.\",\n+ \"Europe is a continent with 44 countries.\",\n+ \"Madrid is the capital of Spain.\",\n+ ],\n+ },\n+ \"outputs\": {\n+ \"statements\": [\n+ \"Berlin is the capital of Germany.\",\n+ \"Berlin was founded in 1244.\",\n+ \"Europe is a continent with 44 countries.\",\n+ \"Madrid is the capital of Spain.\",\n+ ],\n+ \"statement_scores\": [0, 0, 0, 0],\n+ },\n },\n {\n \"inputs\": {\"questions\": \"What is the capital of Italy?\", \"contexts\": [\"Rome is the capital of Italy.\"]},\n@@ -113,7 +128,8 @@\n \"Your task is to judge how relevant the provided context is for answering a question. \"\n \"First, please extract statements from the provided context. \"\n \"Second, calculate a relevance score for each statement in the context. \"\n- \"The score is 1 if the statement is relevant to answer the question or 0 if it is not relevant.\"\n+ \"The score is 1 if the statement is relevant to answer the question or 0 if it is not relevant. \"\n+ \"Each statement should be scored individually.\"\n )\n self.inputs = [(\"questions\", List[str]), (\"contexts\", List[List[str]])]\n self.outputs = [\"statements\", \"statement_scores\"]\n", "issue": "`ContextRelevanceEvaluator`: statements extraction can be made more robust\nSlightly different inputs produce very different outputs in the `ContextRelevanceEvaluator`.\r\n\r\n**Example in the docs**:\r\n```python\r\nfrom haystack.components.evaluators import ContextRelevanceEvaluator\r\n\r\nquestions = [\"Who created the Python language?\"]\r\ncontexts = [\r\n [\r\n \"Python, created by Guido van Rossum in the late 1980s, is a high-level general-purpose programming language. Its design philosophy emphasizes code readability, and its language constructs aim to help programmers write clear, logical code for both small and large-scale software projects.\"\r\n ],\r\n]\r\n\r\nevaluator = ContextRelevanceEvaluator()\r\nresult = evaluator.run(questions=questions, contexts=contexts)\r\nprint(result[\"score\"])\r\n# 1.0\r\nprint(result[\"individual_scores\"])\r\n# [1.0]\r\nprint(result[\"results\"])\r\n# [{'statements': ['Python, created by Guido van Rossum in the late 1980s.'], 'statement_scores': [1], 'score': 1.0}]\r\n```\r\n\r\n**A different example**:\r\n```python\r\nfrom haystack.components.evaluators import ContextRelevanceEvaluator\r\nquestions = [\"Who created the Python language?\"]\r\ncontexts = [\r\n [\r\n \"Python, created by Guido van Rossum in the late 1980s, is a high-level general-purpose programming language. Its design philosophy emphasizes code readability, and its language constructs aim to help programmers write clear, logical code for both small and large-scale software projects.\",\r\n \"Java is a high-level, class-based, object-oriented programming language. 
It allows you to write once, run anywhere, meaning that compiled Java code can run on all platforms that support Java without the need for recompilation.\",\r\n \"Scala is a high-level, statically typed programming language.\",\r\n ],\r\n]\r\nevaluator = ContextRelevanceEvaluator()\r\nresult = evaluator.run(questions=questions, contexts=contexts)\r\n\r\nprint(result)\r\n# {'results': [{'statements': ['Python, created by Guido van Rossum in the late 1980s, is a high-level general-purpose programming language.', 'Its design philosophy emphasizes code readability, and its language constructs aim to help programmers write clear, logical code for both small and large-scale software projects.'], 'statement_scores': [1, 0], 'score': 0.5}], 'score': 0.5, 'individual_scores': [0.5]}\r\n```\r\n\r\nIn this case, the first context is broken into 2 statements, while the 2nd and 3rd statements are not considered.\r\n\r\n**Expected behavior**\r\nThe extraction of statements should be more robust/consistent.\r\n\r\n**Additional context**\r\nFrom @julian-risch:\r\n> I checked that the prompt sent to the LLM correctly contains all three contexts. The problem is that the LLM extract statements only for the first context at least in this example. It seems that the prompt instructions aren't good enough. You could try extending the _DEFAULT_EXAMPLES with an example that has multiple contexts for one question.\r\n\r\n\r\n**System:**\r\n - Haystack version (commit or version number): 2.2.0\n", "before_files": [{"content": "# SPDX-FileCopyrightText: 2022-present deepset GmbH <[email protected]>\n#\n# SPDX-License-Identifier: Apache-2.0\n\nfrom typing import Any, Dict, List, Optional\n\nfrom numpy import mean as np_mean\n\nfrom haystack import component, default_from_dict, default_to_dict\nfrom haystack.components.evaluators.llm_evaluator import LLMEvaluator\nfrom haystack.utils import Secret, deserialize_secrets_inplace\n\n# Private global variable for default examples to include in the prompt if the user does not provide any examples\n_DEFAULT_EXAMPLES = [\n {\n \"inputs\": {\n \"questions\": \"What is the capital of Germany?\",\n \"contexts\": [\"Berlin is the capital of Germany and was founded in 1244.\"],\n },\n \"outputs\": {\n \"statements\": [\"Berlin is the capital of Germany.\", \"Berlin was founded in 1244.\"],\n \"statement_scores\": [1, 0],\n },\n },\n {\n \"inputs\": {\"questions\": \"What is the capital of France?\", \"contexts\": [\"Berlin is the capital of Germany.\"]},\n \"outputs\": {\"statements\": [\"Berlin is the capital of Germany.\"], \"statement_scores\": [0]},\n },\n {\n \"inputs\": {\"questions\": \"What is the capital of Italy?\", \"contexts\": [\"Rome is the capital of Italy.\"]},\n \"outputs\": {\"statements\": [\"Rome is the capital of Italy.\"], \"statement_scores\": [1]},\n },\n]\n\n\n@component\nclass ContextRelevanceEvaluator(LLMEvaluator):\n \"\"\"\n Evaluator that checks if a provided context is relevant to the question.\n\n An LLM breaks up the context into multiple statements and checks whether each statement\n is relevant for answering a question.\n The final score for the context relevance is a number from 0.0 to 1.0. 
It represents the proportion of\n statements that can be inferred from the provided contexts.\n\n Usage example:\n ```python\n from haystack.components.evaluators import ContextRelevanceEvaluator\n\n questions = [\"Who created the Python language?\"]\n contexts = [\n [(\n \"Python, created by Guido van Rossum in the late 1980s, is a high-level general-purpose programming \"\n \"language. Its design philosophy emphasizes code readability, and its language constructs aim to help \"\n \"programmers write clear, logical code for both small and large-scale software projects.\"\n )],\n ]\n\n evaluator = ContextRelevanceEvaluator()\n result = evaluator.run(questions=questions, contexts=contexts)\n print(result[\"score\"])\n # 1.0\n print(result[\"individual_scores\"])\n # [1.0]\n print(result[\"results\"])\n # [{\n # 'statements': ['Python, created by Guido van Rossum in the late 1980s.'],\n # 'statement_scores': [1],\n # 'score': 1.0\n # }]\n ```\n \"\"\"\n\n def __init__(\n self,\n examples: Optional[List[Dict[str, Any]]] = None,\n progress_bar: bool = True,\n api: str = \"openai\",\n api_key: Secret = Secret.from_env_var(\"OPENAI_API_KEY\"),\n raise_on_failure: bool = True,\n ):\n \"\"\"\n Creates an instance of ContextRelevanceEvaluator.\n\n :param examples:\n Optional few-shot examples conforming to the expected input and output format of ContextRelevanceEvaluator.\n Default examples will be used if none are provided.\n Each example must be a dictionary with keys \"inputs\" and \"outputs\".\n \"inputs\" must be a dictionary with keys \"questions\" and \"contexts\".\n \"outputs\" must be a dictionary with \"statements\" and \"statement_scores\".\n Expected format:\n [{\n \"inputs\": {\n \"questions\": \"What is the capital of Italy?\", \"contexts\": [\"Rome is the capital of Italy.\"],\n },\n \"outputs\": {\n \"statements\": [\"Rome is the capital of Italy.\"],\n \"statement_scores\": [1],\n },\n }]\n :param progress_bar:\n Whether to show a progress bar during the evaluation.\n :param api:\n The API to use for calling an LLM through a Generator.\n Supported APIs: \"openai\".\n :param api_key:\n The API key.\n :param raise_on_failure:\n Whether to raise an exception if the API call fails.\n\n \"\"\"\n self.instructions = (\n \"Your task is to judge how relevant the provided context is for answering a question. \"\n \"First, please extract statements from the provided context. \"\n \"Second, calculate a relevance score for each statement in the context. \"\n \"The score is 1 if the statement is relevant to answer the question or 0 if it is not relevant.\"\n )\n self.inputs = [(\"questions\", List[str]), (\"contexts\", List[List[str]])]\n self.outputs = [\"statements\", \"statement_scores\"]\n self.examples = examples or _DEFAULT_EXAMPLES\n self.api = api\n self.api_key = api_key\n\n super(ContextRelevanceEvaluator, self).__init__(\n instructions=self.instructions,\n inputs=self.inputs,\n outputs=self.outputs,\n examples=self.examples,\n api=self.api,\n api_key=self.api_key,\n raise_on_failure=raise_on_failure,\n progress_bar=progress_bar,\n )\n\n @component.output_types(individual_scores=List[int], score=float, results=List[Dict[str, Any]])\n def run(self, questions: List[str], contexts: List[List[str]]) -> Dict[str, Any]:\n \"\"\"\n Run the LLM evaluator.\n\n :param questions:\n A list of questions.\n :param contexts:\n A list of lists of contexts. 
Each list of contexts corresponds to one question.\n :returns:\n A dictionary with the following outputs:\n - `score`: Mean context relevance score over all the provided input questions.\n - `individual_scores`: A list of context relevance scores for each input question.\n - `results`: A list of dictionaries with `statements` and `statement_scores` for each input context.\n \"\"\"\n result = super(ContextRelevanceEvaluator, self).run(questions=questions, contexts=contexts)\n\n # calculate average statement relevance score per query\n for idx, res in enumerate(result[\"results\"]):\n if res is None:\n result[\"results\"][idx] = {\"statements\": [], \"statement_scores\": [], \"score\": float(\"nan\")}\n continue\n if not res[\"statements\"]:\n res[\"score\"] = 0\n else:\n res[\"score\"] = np_mean(res[\"statement_scores\"])\n\n # calculate average context relevance score over all queries\n result[\"score\"] = np_mean([res[\"score\"] for res in result[\"results\"]])\n result[\"individual_scores\"] = [res[\"score\"] for res in result[\"results\"]]\n\n return result\n\n def to_dict(self) -> Dict[str, Any]:\n \"\"\"\n Serialize this component to a dictionary.\n\n :returns:\n A dictionary with serialized data.\n \"\"\"\n return default_to_dict(\n self,\n api=self.api,\n api_key=self.api_key.to_dict() if self.api_key else None,\n examples=self.examples,\n progress_bar=self.progress_bar,\n raise_on_failure=self.raise_on_failure,\n )\n\n @classmethod\n def from_dict(cls, data: Dict[str, Any]) -> \"ContextRelevanceEvaluator\":\n \"\"\"\n Deserialize this component from a dictionary.\n\n :param data:\n The dictionary representation of this component.\n :returns:\n The deserialized component instance.\n \"\"\"\n deserialize_secrets_inplace(data[\"init_parameters\"], keys=[\"api_key\"])\n return default_from_dict(cls, data)\n", "path": "haystack/components/evaluators/context_relevance.py"}], "after_files": [{"content": "# SPDX-FileCopyrightText: 2022-present deepset GmbH <[email protected]>\n#\n# SPDX-License-Identifier: Apache-2.0\n\nfrom typing import Any, Dict, List, Optional\n\nfrom numpy import mean as np_mean\n\nfrom haystack import component, default_from_dict, default_to_dict\nfrom haystack.components.evaluators.llm_evaluator import LLMEvaluator\nfrom haystack.utils import Secret, deserialize_secrets_inplace\n\n# Private global variable for default examples to include in the prompt if the user does not provide any examples\n_DEFAULT_EXAMPLES = [\n {\n \"inputs\": {\n \"questions\": \"What is the capital of Germany?\",\n \"contexts\": [\"Berlin is the capital of Germany and was founded in 1244.\"],\n },\n \"outputs\": {\n \"statements\": [\"Berlin is the capital of Germany.\", \"Berlin was founded in 1244.\"],\n \"statement_scores\": [1, 0],\n },\n },\n {\n \"inputs\": {\n \"questions\": \"What is the capital of France?\",\n \"contexts\": [\n \"Berlin is the capital of Germany and was founded in 1244.\",\n \"Europe is a continent with 44 countries.\",\n \"Madrid is the capital of Spain.\",\n ],\n },\n \"outputs\": {\n \"statements\": [\n \"Berlin is the capital of Germany.\",\n \"Berlin was founded in 1244.\",\n \"Europe is a continent with 44 countries.\",\n \"Madrid is the capital of Spain.\",\n ],\n \"statement_scores\": [0, 0, 0, 0],\n },\n },\n {\n \"inputs\": {\"questions\": \"What is the capital of Italy?\", \"contexts\": [\"Rome is the capital of Italy.\"]},\n \"outputs\": {\"statements\": [\"Rome is the capital of Italy.\"], \"statement_scores\": [1]},\n },\n]\n\n\n@component\nclass 
ContextRelevanceEvaluator(LLMEvaluator):\n \"\"\"\n Evaluator that checks if a provided context is relevant to the question.\n\n An LLM breaks up the context into multiple statements and checks whether each statement\n is relevant for answering a question.\n The final score for the context relevance is a number from 0.0 to 1.0. It represents the proportion of\n statements that can be inferred from the provided contexts.\n\n Usage example:\n ```python\n from haystack.components.evaluators import ContextRelevanceEvaluator\n\n questions = [\"Who created the Python language?\"]\n contexts = [\n [(\n \"Python, created by Guido van Rossum in the late 1980s, is a high-level general-purpose programming \"\n \"language. Its design philosophy emphasizes code readability, and its language constructs aim to help \"\n \"programmers write clear, logical code for both small and large-scale software projects.\"\n )],\n ]\n\n evaluator = ContextRelevanceEvaluator()\n result = evaluator.run(questions=questions, contexts=contexts)\n print(result[\"score\"])\n # 1.0\n print(result[\"individual_scores\"])\n # [1.0]\n print(result[\"results\"])\n # [{\n # 'statements': ['Python, created by Guido van Rossum in the late 1980s.'],\n # 'statement_scores': [1],\n # 'score': 1.0\n # }]\n ```\n \"\"\"\n\n def __init__(\n self,\n examples: Optional[List[Dict[str, Any]]] = None,\n progress_bar: bool = True,\n api: str = \"openai\",\n api_key: Secret = Secret.from_env_var(\"OPENAI_API_KEY\"),\n raise_on_failure: bool = True,\n ):\n \"\"\"\n Creates an instance of ContextRelevanceEvaluator.\n\n :param examples:\n Optional few-shot examples conforming to the expected input and output format of ContextRelevanceEvaluator.\n Default examples will be used if none are provided.\n Each example must be a dictionary with keys \"inputs\" and \"outputs\".\n \"inputs\" must be a dictionary with keys \"questions\" and \"contexts\".\n \"outputs\" must be a dictionary with \"statements\" and \"statement_scores\".\n Expected format:\n [{\n \"inputs\": {\n \"questions\": \"What is the capital of Italy?\", \"contexts\": [\"Rome is the capital of Italy.\"],\n },\n \"outputs\": {\n \"statements\": [\"Rome is the capital of Italy.\"],\n \"statement_scores\": [1],\n },\n }]\n :param progress_bar:\n Whether to show a progress bar during the evaluation.\n :param api:\n The API to use for calling an LLM through a Generator.\n Supported APIs: \"openai\".\n :param api_key:\n The API key.\n :param raise_on_failure:\n Whether to raise an exception if the API call fails.\n\n \"\"\"\n self.instructions = (\n \"Your task is to judge how relevant the provided context is for answering a question. \"\n \"First, please extract statements from the provided context. \"\n \"Second, calculate a relevance score for each statement in the context. \"\n \"The score is 1 if the statement is relevant to answer the question or 0 if it is not relevant. 
\"\n \"Each statement should be scored individually.\"\n )\n self.inputs = [(\"questions\", List[str]), (\"contexts\", List[List[str]])]\n self.outputs = [\"statements\", \"statement_scores\"]\n self.examples = examples or _DEFAULT_EXAMPLES\n self.api = api\n self.api_key = api_key\n\n super(ContextRelevanceEvaluator, self).__init__(\n instructions=self.instructions,\n inputs=self.inputs,\n outputs=self.outputs,\n examples=self.examples,\n api=self.api,\n api_key=self.api_key,\n raise_on_failure=raise_on_failure,\n progress_bar=progress_bar,\n )\n\n @component.output_types(individual_scores=List[int], score=float, results=List[Dict[str, Any]])\n def run(self, questions: List[str], contexts: List[List[str]]) -> Dict[str, Any]:\n \"\"\"\n Run the LLM evaluator.\n\n :param questions:\n A list of questions.\n :param contexts:\n A list of lists of contexts. Each list of contexts corresponds to one question.\n :returns:\n A dictionary with the following outputs:\n - `score`: Mean context relevance score over all the provided input questions.\n - `individual_scores`: A list of context relevance scores for each input question.\n - `results`: A list of dictionaries with `statements` and `statement_scores` for each input context.\n \"\"\"\n result = super(ContextRelevanceEvaluator, self).run(questions=questions, contexts=contexts)\n\n # calculate average statement relevance score per query\n for idx, res in enumerate(result[\"results\"]):\n if res is None:\n result[\"results\"][idx] = {\"statements\": [], \"statement_scores\": [], \"score\": float(\"nan\")}\n continue\n if not res[\"statements\"]:\n res[\"score\"] = 0\n else:\n res[\"score\"] = np_mean(res[\"statement_scores\"])\n\n # calculate average context relevance score over all queries\n result[\"score\"] = np_mean([res[\"score\"] for res in result[\"results\"]])\n result[\"individual_scores\"] = [res[\"score\"] for res in result[\"results\"]]\n\n return result\n\n def to_dict(self) -> Dict[str, Any]:\n \"\"\"\n Serialize this component to a dictionary.\n\n :returns:\n A dictionary with serialized data.\n \"\"\"\n return default_to_dict(\n self,\n api=self.api,\n api_key=self.api_key.to_dict() if self.api_key else None,\n examples=self.examples,\n progress_bar=self.progress_bar,\n raise_on_failure=self.raise_on_failure,\n )\n\n @classmethod\n def from_dict(cls, data: Dict[str, Any]) -> \"ContextRelevanceEvaluator\":\n \"\"\"\n Deserialize this component from a dictionary.\n\n :param data:\n The dictionary representation of this component.\n :returns:\n The deserialized component instance.\n \"\"\"\n deserialize_secrets_inplace(data[\"init_parameters\"], keys=[\"api_key\"])\n return default_from_dict(cls, data)\n", "path": "haystack/components/evaluators/context_relevance.py"}]}
| 3,059 | 476 |
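A minimal sketch, in plain Python, of the score aggregation the evaluator record above describes (the sample statement scores are invented for illustration and are not from the dataset):

```python
# Sketch only: per-context score = mean of its statement scores (0 if empty),
# overall score = mean of the per-context scores.
from statistics import mean

results = [{"statement_scores": [1, 0]}, {"statement_scores": [1, 1, 0]}]
individual_scores = [
    mean(r["statement_scores"]) if r["statement_scores"] else 0 for r in results
]
score = mean(individual_scores)  # (0.5 + 0.666...) / 2 ≈ 0.58
```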
gh_patches_debug_25546
|
rasdani/github-patches
|
git_diff
|
OCHA-DAP__hdx-ckan-553
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
About menu: glitch on home page functionality
On **homepage only** and only in Chrome.
Click on about to open the submenu.
Click again to close the menu.
Move the cursor around the `About` text.
The menu text 'About' disappears and reappears unreliably as the cursor moves around it. This is on Chrome; it does not occur in Firefox.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckanext-hdx_theme/ckanext/hdx_theme/version.py`
Content:
```
1 hdx_version='v0.2.3'
```
Path: `ckanext-hdx_theme/ckanext/hdx_theme/splash_page.py`
Content:
```
1 import sys, re
2 import ckan.lib.base as base
3 from ckan.lib.base import request
4 from ckan.lib.base import c,g,h
5 from ckan.lib.base import model
6 from ckan.lib.base import render
7 from ckan.lib.base import _
8 import ckan.logic as logic
9 import ckan.plugins.toolkit as tk
10
11 from ckan.controllers.group import GroupController as gc
12 from ckan.controllers.home import HomeController
13
14 import ckanext.hdx_theme.caching as caching
15
16 NotAuthorized = logic.NotAuthorized
17 check_access = logic.check_access
18 get_action = logic.get_action
19
20
21 class SplashPageController(HomeController):
22
23 group_type = 'group'
24
25 def index(self):
26 group_type = None
27 context = {'model': model, 'session': model.Session,
28 'user': c.user or c.author, 'for_view': True,
29 'with_private': False}
30
31 q = c.q = request.params.get('q', '')
32 data_dict = {'all_fields': True, 'q': q}
33 sort_by = c.sort_by_selected = request.params.get('sort')
34 if sort_by:
35 data_dict['sort'] = sort_by
36 try:
37 self._check_access('site_read', context)
38 except NotAuthorized:
39 abort(401, _('Not authorized to see this page'))
40 if c.userobj:
41 context['user_id'] = c.userobj.id
42 context['user_is_admin'] = c.userobj.sysadmin
43
44
45
46 c.group_package_stuff = caching.cached_get_group_package_stuff()
47
48 ##Removing groups without geojson for the map
49 c.group_map = []
50 for gp in c.group_package_stuff:
51 for e in gp['extras']:
52 if e['key'] == 'geojson' and e['value']:
53 c.group_map.append(gp)
54
55 #print c.group_package_stuff
56
57 if c.userobj is not None:
58 msg = None
59 url = h.url_for(controller='user', action='edit')
60 is_google_id = \
61 c.userobj.name.startswith('https://www.google.com/accounts/o8/id')
62 if not c.userobj.email and (is_google_id and not c.userobj.fullname):
63 msg = _(u'Please <a href="{link}">update your profile</a>'
64 u' and add your email address and your full name. '
65 u'{site} uses your email address'
66 u' if you need to reset your password.'.format(
67 link=url, site=g.site_title))
68 elif not c.userobj.email:
69 msg = _('Please <a href="%s">update your profile</a>'
70 ' and add your email address. ') % url + \
71 _('%s uses your email address'
72 ' if you need to reset your password.') \
73 % g.site_title
74 elif is_google_id and not c.userobj.fullname:
75 msg = _('Please <a href="%s">update your profile</a>'
76 ' and add your full name.') % (url)
77 if msg:
78 h.flash_notice(msg, allow_html=True)
79
80 return base.render('home/index.html', cache_force=True)
81
82 def _check_access(self, action_name, *args, **kw):
83 ''' select the correct group/org check_access '''
84 return check_access(self._replace_group_org(action_name), *args, **kw)
85
86 def _replace_group_org(self, string):
87 ''' substitute organization for group if this is an org'''
88 if self.group_type == 'organization':
89 string = re.sub('^group', 'organization', string)
90 return string
91
92 def _action(self, action_name):
93 ''' select the correct group/org action '''
94 return get_action(self._replace_group_org(action_name))
95
96
```
Path: `ckanext-hdx_theme/ckanext/hdx_theme/plugin.py`
Content:
```
1 import ckanext.hdx_theme.licenses as hdx_licenses
2
3 import ckan.plugins as plugins
4 import ckan.plugins.toolkit as toolkit
5 import ckan.model.package as package
6 import ckan.model.license as license
7 import version
8
9 import ckanext.hdx_theme.caching as caching
10 import ckanext.hdx_theme.auth as auth
11
12
13 def run_on_startup():
14 _generate_license_list()
15
16 caching.cached_get_group_package_stuff()
17
18
19 def _generate_license_list():
20 package.Package._license_register = license.LicenseRegister()
21 package.Package._license_register.licenses = [
22 license.License(hdx_licenses.LicenseCreativeCommonsIntergovernmentalOrgs()),
23 license.License(license.LicenseCreativeCommonsAttribution()),
24 license.License(license.LicenseCreativeCommonsAttributionShareAlike()),
25 license.License(hdx_licenses.LicenseOtherPublicDomainNoRestrictions()),
26 license.License(hdx_licenses.LicenseHdxMultiple()),
27 license.License(hdx_licenses.LicenseHdxOther())
28 ]
29
30 class HDXThemePlugin(plugins.SingletonPlugin):
31 plugins.implements(plugins.IConfigurer)
32 plugins.implements(plugins.IRoutes, inherit=True)
33 plugins.implements(plugins.ITemplateHelpers)
34 plugins.implements(plugins.IActions)
35 plugins.implements(plugins.IAuthFunctions)
36 plugins.implements(plugins.IGroupController, inherit=True)
37 plugins.implements(plugins.IMiddleware, inherit=True)
38
39 def update_config(self, config):
40 toolkit.add_template_directory(config, 'templates')
41 toolkit.add_public_directory(config, 'public')
42 toolkit.add_resource('fanstatic', 'hdx_theme')
43
44
45 def before_map(self, map):
46 map.connect('home', '/', controller='ckanext.hdx_theme.splash_page:SplashPageController', action='index')
47 map.connect('/count/dataset', controller='ckanext.hdx_theme.count:CountController', action='dataset')
48 map.connect('/count/country', controller='ckanext.hdx_theme.count:CountController', action='country')
49 map.connect('/count/source', controller='ckanext.hdx_theme.count:CountController', action='source')
50 map.connect('/user/logged_in', controller='ckanext.hdx_theme.login:LoginController', action='logged_in')
51 map.connect('/contribute', controller='ckanext.hdx_theme.login:LoginController', action='contribute')
52
53 map.connect('/count/test', controller='ckanext.hdx_theme.count:CountController', action='test')
54
55 map.connect('/organization/{id}/request_membership', controller='ckanext.hdx_theme.org_controller:HDXOrgController', action='request_membership')
56 return map
57
58 def create(self, entity):
59 caching.invalidate_group_caches()
60
61 def edit(self, entity):
62 caching.invalidate_group_caches()
63
64 def get_helpers(self):
65 from ckanext.hdx_theme import helpers as hdx_helpers
66 return {
67 'is_downloadable': hdx_helpers.is_downloadable,
68 'get_facet_items_dict':hdx_helpers.get_facet_items_dict,
69 'get_last_modifier_user': hdx_helpers.get_last_modifier_user,
70 'get_filtered_params_list':hdx_helpers.get_filtered_params_list,
71 'get_last_revision_package':hdx_helpers.get_last_revision_package,
72 'get_last_modifier_user':hdx_helpers.get_last_modifier_user,
73 'get_last_revision_group':hdx_helpers.get_last_revision_group,
74 'get_group_followers':hdx_helpers.get_group_followers,
75 'get_group_members':hdx_helpers.get_group_members,
76 'markdown_extract_strip':hdx_helpers.markdown_extract_strip,
77 'render_date_from_concat_str':hdx_helpers.render_date_from_concat_str,
78 'hdx_version':hdx_helpers.hdx_version,
79 'hdx_build_nav_icon_with_message':hdx_helpers.hdx_build_nav_icon_with_message,
80 'hdx_num_of_new_related_items':hdx_helpers.hdx_num_of_new_related_items,
81 'hdx_get_extras_element':hdx_helpers.hdx_get_extras_element,
82 'hdx_get_user_info':hdx_helpers.hdx_get_user_info,
83 'hdx_linked_user':hdx_helpers.hdx_linked_user,
84 'hdx_show_singular_plural':hdx_helpers.hdx_show_singular_plural,
85 'hdx_member_roles_list':hdx_helpers.hdx_member_roles_list
86
87 }
88
89 def get_actions(self):
90 from ckanext.hdx_theme import actions as hdx_actions
91 return {
92 'organization_list_for_user':hdx_actions.organization_list_for_user,
93 'cached_group_list': hdx_actions.cached_group_list,
94 'hdx_basic_user_info': hdx_actions.hdx_basic_user_info,
95 'member_list': hdx_actions.member_list,
96 'hdx_get_sys_admins': hdx_actions.hdx_get_sys_admins
97
98 }
99 def get_auth_functions(self):
100 return {
101 'hdx_basic_user_info': auth.hdx_basic_user_info
102 }
103
104 def make_middleware(self, app, config):
105 run_on_startup()
106 return app
107
108
109
110
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/plugin.py b/ckanext-hdx_theme/ckanext/hdx_theme/plugin.py
--- a/ckanext-hdx_theme/ckanext/hdx_theme/plugin.py
+++ b/ckanext-hdx_theme/ckanext/hdx_theme/plugin.py
@@ -53,6 +53,8 @@
map.connect('/count/test', controller='ckanext.hdx_theme.count:CountController', action='test')
map.connect('/organization/{id}/request_membership', controller='ckanext.hdx_theme.org_controller:HDXOrgController', action='request_membership')
+
+ map.connect('/about/{page}', controller='ckanext.hdx_theme.splash_page:SplashPageController', action='about')
return map
def create(self, entity):
diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/splash_page.py b/ckanext-hdx_theme/ckanext/hdx_theme/splash_page.py
--- a/ckanext-hdx_theme/ckanext/hdx_theme/splash_page.py
+++ b/ckanext-hdx_theme/ckanext/hdx_theme/splash_page.py
@@ -92,4 +92,14 @@
def _action(self, action_name):
''' select the correct group/org action '''
return get_action(self._replace_group_org(action_name))
-
+
+ def about(self, page):
+ title = {'license': _('Data Licenses'),
+ 'terms': _('Terms of Service')}
+ html = {'license': 'home/snippets/hdx_licenses.html',
+ 'terms': 'home/snippets/hdx_terms_of_service.html'}
+
+ extraVars = {'title': title.get(page), 'html': html.get(page), 'page': page}
+ return base.render('home/about2.html', extra_vars = extraVars)
+
+
diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py
--- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py
+++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py
@@ -1 +1 @@
-hdx_version='v0.2.3'
\ No newline at end of file
+hdx_version='v0.2.4'
\ No newline at end of file
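
For illustration, a minimal sketch (not part of the patch) of the page-to-template lookup that the new `about` action performs; the i18n `_()` wrappers are omitted here:

```python
# Sketch only: mirrors the lookup added in SplashPageController.about().
title = {'license': 'Data Licenses', 'terms': 'Terms of Service'}
html = {'license': 'home/snippets/hdx_licenses.html',
        'terms': 'home/snippets/hdx_terms_of_service.html'}

page = 'license'  # value captured from the /about/{page} route
extra_vars = {'title': title.get(page), 'html': html.get(page), 'page': page}
# base.render('home/about2.html', extra_vars=extra_vars) then receives these values.
```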
|
{"golden_diff": "diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/plugin.py b/ckanext-hdx_theme/ckanext/hdx_theme/plugin.py\n--- a/ckanext-hdx_theme/ckanext/hdx_theme/plugin.py\n+++ b/ckanext-hdx_theme/ckanext/hdx_theme/plugin.py\n@@ -53,6 +53,8 @@\n map.connect('/count/test', controller='ckanext.hdx_theme.count:CountController', action='test')\n \n map.connect('/organization/{id}/request_membership', controller='ckanext.hdx_theme.org_controller:HDXOrgController', action='request_membership')\n+\n+ map.connect('/about/{page}', controller='ckanext.hdx_theme.splash_page:SplashPageController', action='about')\n return map\n \n def create(self, entity):\ndiff --git a/ckanext-hdx_theme/ckanext/hdx_theme/splash_page.py b/ckanext-hdx_theme/ckanext/hdx_theme/splash_page.py\n--- a/ckanext-hdx_theme/ckanext/hdx_theme/splash_page.py\n+++ b/ckanext-hdx_theme/ckanext/hdx_theme/splash_page.py\n@@ -92,4 +92,14 @@\n \tdef _action(self, action_name):\n \t\t''' select the correct group/org action '''\n \t\treturn get_action(self._replace_group_org(action_name))\n-\t\n+\n+\tdef about(self, page):\n+\t\ttitle = {'license': _('Data Licenses'),\n+\t\t\t\t 'terms': _('Terms of Service')}\n+\t\thtml = {'license': 'home/snippets/hdx_licenses.html',\n+\t\t\t\t 'terms': 'home/snippets/hdx_terms_of_service.html'}\n+\n+\t\textraVars = {'title': title.get(page), 'html': html.get(page), 'page': page}\n+\t\treturn base.render('home/about2.html', extra_vars = extraVars)\n+\n+\ndiff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n--- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n+++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n@@ -1 +1 @@\n-hdx_version='v0.2.3'\n\\ No newline at end of file\n+hdx_version='v0.2.4'\n\\ No newline at end of file\n", "issue": "About menu: glitch on home page functionality\nOn **homepage only** and only in Chrome.\nClick on about to open the submenu.\nClick again to close the menu.\nMove the cursor around the `About` text.\nThe menu text 'About' disappears and reappears unreliably as the cursor moves around it. This is on chrome. 
Does not occur in firefox.\n\n", "before_files": [{"content": "hdx_version='v0.2.3'", "path": "ckanext-hdx_theme/ckanext/hdx_theme/version.py"}, {"content": "import sys, re\nimport ckan.lib.base as base\nfrom ckan.lib.base import request\nfrom ckan.lib.base import c,g,h\nfrom ckan.lib.base import model\nfrom ckan.lib.base import render\nfrom ckan.lib.base import _\nimport ckan.logic as logic\nimport ckan.plugins.toolkit as tk\n\nfrom ckan.controllers.group import GroupController as gc\nfrom ckan.controllers.home import HomeController\n\nimport ckanext.hdx_theme.caching as caching\n\nNotAuthorized = logic.NotAuthorized\ncheck_access = logic.check_access\nget_action = logic.get_action\n\n\nclass SplashPageController(HomeController):\n\n\tgroup_type = 'group'\n\n\tdef index(self):\n\t\tgroup_type = None\n\t\tcontext = {'model': model, 'session': model.Session,\n\t\t\t\t 'user': c.user or c.author, 'for_view': True,\n\t\t\t\t 'with_private': False}\n\n\t\tq = c.q = request.params.get('q', '')\n\t\tdata_dict = {'all_fields': True, 'q': q}\n\t\tsort_by = c.sort_by_selected = request.params.get('sort')\n\t\tif sort_by:\n\t\t\tdata_dict['sort'] = sort_by\n\t\ttry:\n\t\t\tself._check_access('site_read', context)\n\t\texcept NotAuthorized:\n\t\t\tabort(401, _('Not authorized to see this page'))\n\t\tif c.userobj:\n\t\t\tcontext['user_id'] = c.userobj.id\n\t\t\tcontext['user_is_admin'] = c.userobj.sysadmin\n\t\t\n\t\t\n\t\t\n\t\tc.group_package_stuff = caching.cached_get_group_package_stuff()\n\n\t\t##Removing groups without geojson for the map\n\t\tc.group_map = []\n\t\tfor gp in c.group_package_stuff:\n\t\t\tfor e in gp['extras']:\n\t\t\t\tif e['key'] == 'geojson' and e['value']:\n\t\t\t\t\tc.group_map.append(gp)\n\n\t\t#print c.group_package_stuff\n\n\t\tif c.userobj is not None:\n\t\t\tmsg = None\n\t\t\turl = h.url_for(controller='user', action='edit')\n\t\t\tis_google_id = \\\n\t\t\t\tc.userobj.name.startswith('https://www.google.com/accounts/o8/id')\n\t\t\tif not c.userobj.email and (is_google_id and not c.userobj.fullname):\n\t\t\t\tmsg = _(u'Please <a href=\"{link}\">update your profile</a>'\n\t\t\t\t\t\tu' and add your email address and your full name. '\n\t\t\t\t\t\tu'{site} uses your email address'\n\t\t\t\t\t\tu' if you need to reset your password.'.format(\n\t\t\t\t\t\t\tlink=url, site=g.site_title))\n\t\t\telif not c.userobj.email:\n\t\t\t\tmsg = _('Please <a href=\"%s\">update your profile</a>'\n\t\t\t\t\t\t' and add your email address. 
') % url + \\\n\t\t\t\t\t_('%s uses your email address'\n\t\t\t\t\t\t' if you need to reset your password.') \\\n\t\t\t\t\t% g.site_title\n\t\t\telif is_google_id and not c.userobj.fullname:\n\t\t\t\tmsg = _('Please <a href=\"%s\">update your profile</a>'\n\t\t\t\t\t\t' and add your full name.') % (url)\n\t\t\tif msg:\n\t\t\t\th.flash_notice(msg, allow_html=True)\n\n\t\treturn base.render('home/index.html', cache_force=True)\n\n\tdef _check_access(self, action_name, *args, **kw):\n\t\t''' select the correct group/org check_access '''\n\t\treturn check_access(self._replace_group_org(action_name), *args, **kw)\n\n\tdef _replace_group_org(self, string):\n\t\t''' substitute organization for group if this is an org'''\n\t\tif self.group_type == 'organization':\n\t\t\tstring = re.sub('^group', 'organization', string)\n\t\treturn string\n\n\tdef _action(self, action_name):\n\t\t''' select the correct group/org action '''\n\t\treturn get_action(self._replace_group_org(action_name))\n\t\n", "path": "ckanext-hdx_theme/ckanext/hdx_theme/splash_page.py"}, {"content": "import ckanext.hdx_theme.licenses as hdx_licenses\n\nimport ckan.plugins as plugins\nimport ckan.plugins.toolkit as toolkit\nimport ckan.model.package as package\nimport ckan.model.license as license\nimport version\n\nimport ckanext.hdx_theme.caching as caching\nimport ckanext.hdx_theme.auth as auth\n\n\ndef run_on_startup():\n _generate_license_list()\n \n caching.cached_get_group_package_stuff()\n \n\ndef _generate_license_list():\n package.Package._license_register = license.LicenseRegister() \n package.Package._license_register.licenses = [\n license.License(hdx_licenses.LicenseCreativeCommonsIntergovernmentalOrgs()),\n license.License(license.LicenseCreativeCommonsAttribution()),\n license.License(license.LicenseCreativeCommonsAttributionShareAlike()),\n license.License(hdx_licenses.LicenseOtherPublicDomainNoRestrictions()),\n license.License(hdx_licenses.LicenseHdxMultiple()),\n license.License(hdx_licenses.LicenseHdxOther())\n ]\n\nclass HDXThemePlugin(plugins.SingletonPlugin):\n plugins.implements(plugins.IConfigurer)\n plugins.implements(plugins.IRoutes, inherit=True)\n plugins.implements(plugins.ITemplateHelpers)\n plugins.implements(plugins.IActions)\n plugins.implements(plugins.IAuthFunctions)\n plugins.implements(plugins.IGroupController, inherit=True)\n plugins.implements(plugins.IMiddleware, inherit=True)\n \n def update_config(self, config):\n toolkit.add_template_directory(config, 'templates')\n toolkit.add_public_directory(config, 'public')\n toolkit.add_resource('fanstatic', 'hdx_theme')\n \n\n def before_map(self, map):\n map.connect('home', '/', controller='ckanext.hdx_theme.splash_page:SplashPageController', action='index')\n map.connect('/count/dataset', controller='ckanext.hdx_theme.count:CountController', action='dataset')\n map.connect('/count/country', controller='ckanext.hdx_theme.count:CountController', action='country')\n map.connect('/count/source', controller='ckanext.hdx_theme.count:CountController', action='source')\n map.connect('/user/logged_in', controller='ckanext.hdx_theme.login:LoginController', action='logged_in')\n map.connect('/contribute', controller='ckanext.hdx_theme.login:LoginController', action='contribute')\n \n map.connect('/count/test', controller='ckanext.hdx_theme.count:CountController', action='test')\n \n map.connect('/organization/{id}/request_membership', controller='ckanext.hdx_theme.org_controller:HDXOrgController', action='request_membership')\n return map\n \n def 
create(self, entity):\n caching.invalidate_group_caches()\n\n def edit(self, entity):\n caching.invalidate_group_caches()\n\n def get_helpers(self):\n from ckanext.hdx_theme import helpers as hdx_helpers\n return {\n 'is_downloadable': hdx_helpers.is_downloadable,\n 'get_facet_items_dict':hdx_helpers.get_facet_items_dict,\n 'get_last_modifier_user': hdx_helpers.get_last_modifier_user,\n 'get_filtered_params_list':hdx_helpers.get_filtered_params_list,\n 'get_last_revision_package':hdx_helpers.get_last_revision_package,\n 'get_last_modifier_user':hdx_helpers.get_last_modifier_user,\n 'get_last_revision_group':hdx_helpers.get_last_revision_group,\n 'get_group_followers':hdx_helpers.get_group_followers,\n 'get_group_members':hdx_helpers.get_group_members,\n 'markdown_extract_strip':hdx_helpers.markdown_extract_strip,\n 'render_date_from_concat_str':hdx_helpers.render_date_from_concat_str,\n 'hdx_version':hdx_helpers.hdx_version,\n 'hdx_build_nav_icon_with_message':hdx_helpers.hdx_build_nav_icon_with_message,\n 'hdx_num_of_new_related_items':hdx_helpers.hdx_num_of_new_related_items,\n 'hdx_get_extras_element':hdx_helpers.hdx_get_extras_element,\n 'hdx_get_user_info':hdx_helpers.hdx_get_user_info,\n 'hdx_linked_user':hdx_helpers.hdx_linked_user,\n 'hdx_show_singular_plural':hdx_helpers.hdx_show_singular_plural,\n 'hdx_member_roles_list':hdx_helpers.hdx_member_roles_list\n \n }\n \n def get_actions(self):\n from ckanext.hdx_theme import actions as hdx_actions\n return {\n 'organization_list_for_user':hdx_actions.organization_list_for_user, \n 'cached_group_list': hdx_actions.cached_group_list,\n 'hdx_basic_user_info': hdx_actions.hdx_basic_user_info,\n 'member_list': hdx_actions.member_list,\n 'hdx_get_sys_admins': hdx_actions.hdx_get_sys_admins\n \n }\n def get_auth_functions(self):\n return {\n 'hdx_basic_user_info': auth.hdx_basic_user_info\n }\n \n def make_middleware(self, app, config):\n run_on_startup()\n return app\n\n \n \n\n", "path": "ckanext-hdx_theme/ckanext/hdx_theme/plugin.py"}], "after_files": [{"content": "hdx_version='v0.2.4'", "path": "ckanext-hdx_theme/ckanext/hdx_theme/version.py"}, {"content": "import sys, re\nimport ckan.lib.base as base\nfrom ckan.lib.base import request\nfrom ckan.lib.base import c,g,h\nfrom ckan.lib.base import model\nfrom ckan.lib.base import render\nfrom ckan.lib.base import _\nimport ckan.logic as logic\nimport ckan.plugins.toolkit as tk\n\nfrom ckan.controllers.group import GroupController as gc\nfrom ckan.controllers.home import HomeController\n\nimport ckanext.hdx_theme.caching as caching\n\nNotAuthorized = logic.NotAuthorized\ncheck_access = logic.check_access\nget_action = logic.get_action\n\n\nclass SplashPageController(HomeController):\n\n\tgroup_type = 'group'\n\n\tdef index(self):\n\t\tgroup_type = None\n\t\tcontext = {'model': model, 'session': model.Session,\n\t\t\t\t 'user': c.user or c.author, 'for_view': True,\n\t\t\t\t 'with_private': False}\n\n\t\tq = c.q = request.params.get('q', '')\n\t\tdata_dict = {'all_fields': True, 'q': q}\n\t\tsort_by = c.sort_by_selected = request.params.get('sort')\n\t\tif sort_by:\n\t\t\tdata_dict['sort'] = sort_by\n\t\ttry:\n\t\t\tself._check_access('site_read', context)\n\t\texcept NotAuthorized:\n\t\t\tabort(401, _('Not authorized to see this page'))\n\t\tif c.userobj:\n\t\t\tcontext['user_id'] = c.userobj.id\n\t\t\tcontext['user_is_admin'] = c.userobj.sysadmin\n\t\t\n\t\t\n\t\t\n\t\tc.group_package_stuff = caching.cached_get_group_package_stuff()\n\n\t\t##Removing groups without geojson for the 
map\n\t\tc.group_map = []\n\t\tfor gp in c.group_package_stuff:\n\t\t\tfor e in gp['extras']:\n\t\t\t\tif e['key'] == 'geojson' and e['value']:\n\t\t\t\t\tc.group_map.append(gp)\n\n\t\t#print c.group_package_stuff\n\n\t\tif c.userobj is not None:\n\t\t\tmsg = None\n\t\t\turl = h.url_for(controller='user', action='edit')\n\t\t\tis_google_id = \\\n\t\t\t\tc.userobj.name.startswith('https://www.google.com/accounts/o8/id')\n\t\t\tif not c.userobj.email and (is_google_id and not c.userobj.fullname):\n\t\t\t\tmsg = _(u'Please <a href=\"{link}\">update your profile</a>'\n\t\t\t\t\t\tu' and add your email address and your full name. '\n\t\t\t\t\t\tu'{site} uses your email address'\n\t\t\t\t\t\tu' if you need to reset your password.'.format(\n\t\t\t\t\t\t\tlink=url, site=g.site_title))\n\t\t\telif not c.userobj.email:\n\t\t\t\tmsg = _('Please <a href=\"%s\">update your profile</a>'\n\t\t\t\t\t\t' and add your email address. ') % url + \\\n\t\t\t\t\t_('%s uses your email address'\n\t\t\t\t\t\t' if you need to reset your password.') \\\n\t\t\t\t\t% g.site_title\n\t\t\telif is_google_id and not c.userobj.fullname:\n\t\t\t\tmsg = _('Please <a href=\"%s\">update your profile</a>'\n\t\t\t\t\t\t' and add your full name.') % (url)\n\t\t\tif msg:\n\t\t\t\th.flash_notice(msg, allow_html=True)\n\n\t\treturn base.render('home/index.html', cache_force=True)\n\n\tdef _check_access(self, action_name, *args, **kw):\n\t\t''' select the correct group/org check_access '''\n\t\treturn check_access(self._replace_group_org(action_name), *args, **kw)\n\n\tdef _replace_group_org(self, string):\n\t\t''' substitute organization for group if this is an org'''\n\t\tif self.group_type == 'organization':\n\t\t\tstring = re.sub('^group', 'organization', string)\n\t\treturn string\n\n\tdef _action(self, action_name):\n\t\t''' select the correct group/org action '''\n\t\treturn get_action(self._replace_group_org(action_name))\n\n\tdef about(self, page):\n\t\ttitle = {'license': _('Data Licenses'),\n\t\t\t\t 'terms': _('Terms of Service')}\n\t\thtml = {'license': 'home/snippets/hdx_licenses.html',\n\t\t\t\t 'terms': 'home/snippets/hdx_terms_of_service.html'}\n\n\t\textraVars = {'title': title.get(page), 'html': html.get(page), 'page': page}\n\t\treturn base.render('home/about2.html', extra_vars = extraVars)\n\n\n", "path": "ckanext-hdx_theme/ckanext/hdx_theme/splash_page.py"}, {"content": "import ckanext.hdx_theme.licenses as hdx_licenses\n\nimport ckan.plugins as plugins\nimport ckan.plugins.toolkit as toolkit\nimport ckan.model.package as package\nimport ckan.model.license as license\nimport version\n\nimport ckanext.hdx_theme.caching as caching\nimport ckanext.hdx_theme.auth as auth\n\n\ndef run_on_startup():\n _generate_license_list()\n \n caching.cached_get_group_package_stuff()\n \n\ndef _generate_license_list():\n package.Package._license_register = license.LicenseRegister() \n package.Package._license_register.licenses = [\n license.License(hdx_licenses.LicenseCreativeCommonsIntergovernmentalOrgs()),\n license.License(license.LicenseCreativeCommonsAttribution()),\n license.License(license.LicenseCreativeCommonsAttributionShareAlike()),\n license.License(hdx_licenses.LicenseOtherPublicDomainNoRestrictions()),\n license.License(hdx_licenses.LicenseHdxMultiple()),\n license.License(hdx_licenses.LicenseHdxOther())\n ]\n\nclass HDXThemePlugin(plugins.SingletonPlugin):\n plugins.implements(plugins.IConfigurer)\n plugins.implements(plugins.IRoutes, inherit=True)\n plugins.implements(plugins.ITemplateHelpers)\n 
plugins.implements(plugins.IActions)\n plugins.implements(plugins.IAuthFunctions)\n plugins.implements(plugins.IGroupController, inherit=True)\n plugins.implements(plugins.IMiddleware, inherit=True)\n \n def update_config(self, config):\n toolkit.add_template_directory(config, 'templates')\n toolkit.add_public_directory(config, 'public')\n toolkit.add_resource('fanstatic', 'hdx_theme')\n \n\n def before_map(self, map):\n map.connect('home', '/', controller='ckanext.hdx_theme.splash_page:SplashPageController', action='index')\n map.connect('/count/dataset', controller='ckanext.hdx_theme.count:CountController', action='dataset')\n map.connect('/count/country', controller='ckanext.hdx_theme.count:CountController', action='country')\n map.connect('/count/source', controller='ckanext.hdx_theme.count:CountController', action='source')\n map.connect('/user/logged_in', controller='ckanext.hdx_theme.login:LoginController', action='logged_in')\n map.connect('/contribute', controller='ckanext.hdx_theme.login:LoginController', action='contribute')\n \n map.connect('/count/test', controller='ckanext.hdx_theme.count:CountController', action='test')\n \n map.connect('/organization/{id}/request_membership', controller='ckanext.hdx_theme.org_controller:HDXOrgController', action='request_membership')\n\n map.connect('/about/{page}', controller='ckanext.hdx_theme.splash_page:SplashPageController', action='about')\n return map\n \n def create(self, entity):\n caching.invalidate_group_caches()\n\n def edit(self, entity):\n caching.invalidate_group_caches()\n\n def get_helpers(self):\n from ckanext.hdx_theme import helpers as hdx_helpers\n return {\n 'is_downloadable': hdx_helpers.is_downloadable,\n 'get_facet_items_dict':hdx_helpers.get_facet_items_dict,\n 'get_last_modifier_user': hdx_helpers.get_last_modifier_user,\n 'get_filtered_params_list':hdx_helpers.get_filtered_params_list,\n 'get_last_revision_package':hdx_helpers.get_last_revision_package,\n 'get_last_modifier_user':hdx_helpers.get_last_modifier_user,\n 'get_last_revision_group':hdx_helpers.get_last_revision_group,\n 'get_group_followers':hdx_helpers.get_group_followers,\n 'get_group_members':hdx_helpers.get_group_members,\n 'markdown_extract_strip':hdx_helpers.markdown_extract_strip,\n 'render_date_from_concat_str':hdx_helpers.render_date_from_concat_str,\n 'hdx_version':hdx_helpers.hdx_version,\n 'hdx_build_nav_icon_with_message':hdx_helpers.hdx_build_nav_icon_with_message,\n 'hdx_num_of_new_related_items':hdx_helpers.hdx_num_of_new_related_items,\n 'hdx_get_extras_element':hdx_helpers.hdx_get_extras_element,\n 'hdx_get_user_info':hdx_helpers.hdx_get_user_info,\n 'hdx_linked_user':hdx_helpers.hdx_linked_user,\n 'hdx_show_singular_plural':hdx_helpers.hdx_show_singular_plural,\n 'hdx_member_roles_list':hdx_helpers.hdx_member_roles_list\n \n }\n \n def get_actions(self):\n from ckanext.hdx_theme import actions as hdx_actions\n return {\n 'organization_list_for_user':hdx_actions.organization_list_for_user, \n 'cached_group_list': hdx_actions.cached_group_list,\n 'hdx_basic_user_info': hdx_actions.hdx_basic_user_info,\n 'member_list': hdx_actions.member_list,\n 'hdx_get_sys_admins': hdx_actions.hdx_get_sys_admins\n \n }\n def get_auth_functions(self):\n return {\n 'hdx_basic_user_info': auth.hdx_basic_user_info\n }\n \n def make_middleware(self, app, config):\n run_on_startup()\n return app\n\n \n \n\n", "path": "ckanext-hdx_theme/ckanext/hdx_theme/plugin.py"}]}
| 2,731 | 545 |
gh_patches_debug_10862
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-964
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add HTTP user-agent to WSGI instrumentation
As discussed for ASGI in [#716 (comment)](https://github.com/open-telemetry/opentelemetry-python/pull/716/files#r428364828), the WSGI instrumentation should also set an attribute from the user-agent in an HTTP request.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """
15 This library provides a WSGI middleware that can be used on any WSGI framework
16 (such as Django / Flask) to track requests timing through OpenTelemetry.
17
18 Usage (Flask)
19 -------------
20
21 .. code-block:: python
22
23 from flask import Flask
24 from opentelemetry.instrumentation.wsgi import OpenTelemetryMiddleware
25
26 app = Flask(__name__)
27 app.wsgi_app = OpenTelemetryMiddleware(app.wsgi_app)
28
29 @app.route("/")
30 def hello():
31 return "Hello!"
32
33 if __name__ == "__main__":
34 app.run(debug=True)
35
36
37 Usage (Django)
38 --------------
39
40 Modify the application's ``wsgi.py`` file as shown below.
41
42 .. code-block:: python
43
44 import os
45 from opentelemetry.instrumentation.wsgi import OpenTelemetryMiddleware
46 from django.core.wsgi import get_wsgi_application
47
48 os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'application.settings')
49
50 application = get_wsgi_application()
51 application = OpenTelemetryMiddleware(application)
52
53 API
54 ---
55 """
56
57 import functools
58 import typing
59 import wsgiref.util as wsgiref_util
60
61 from opentelemetry import context, propagators, trace
62 from opentelemetry.instrumentation.utils import http_status_to_canonical_code
63 from opentelemetry.instrumentation.wsgi.version import __version__
64 from opentelemetry.trace.status import Status, StatusCanonicalCode
65
66 _HTTP_VERSION_PREFIX = "HTTP/"
67
68
69 def get_header_from_environ(
70 environ: dict, header_name: str
71 ) -> typing.List[str]:
72 """Retrieve a HTTP header value from the PEP3333-conforming WSGI environ.
73
74 Returns:
75 A list with a single string with the header value if it exists, else an empty list.
76 """
77 environ_key = "HTTP_" + header_name.upper().replace("-", "_")
78 value = environ.get(environ_key)
79 if value is not None:
80 return [value]
81 return []
82
83
84 def setifnotnone(dic, key, value):
85 if value is not None:
86 dic[key] = value
87
88
89 def collect_request_attributes(environ):
90 """Collects HTTP request attributes from the PEP3333-conforming
91 WSGI environ and returns a dictionary to be used as span creation attributes."""
92
93 result = {
94 "component": "http",
95 "http.method": environ.get("REQUEST_METHOD"),
96 "http.server_name": environ.get("SERVER_NAME"),
97 "http.scheme": environ.get("wsgi.url_scheme"),
98 }
99
100 host_port = environ.get("SERVER_PORT")
101 if host_port is not None:
102 result.update({"host.port": int(host_port)})
103
104 setifnotnone(result, "http.host", environ.get("HTTP_HOST"))
105 target = environ.get("RAW_URI")
106 if target is None: # Note: `"" or None is None`
107 target = environ.get("REQUEST_URI")
108 if target is not None:
109 result["http.target"] = target
110 else:
111 result["http.url"] = wsgiref_util.request_uri(environ)
112
113 remote_addr = environ.get("REMOTE_ADDR")
114 if remote_addr:
115 result["net.peer.ip"] = remote_addr
116 remote_host = environ.get("REMOTE_HOST")
117 if remote_host and remote_host != remote_addr:
118 result["net.peer.name"] = remote_host
119
120 setifnotnone(result, "net.peer.port", environ.get("REMOTE_PORT"))
121 flavor = environ.get("SERVER_PROTOCOL", "")
122 if flavor.upper().startswith(_HTTP_VERSION_PREFIX):
123 flavor = flavor[len(_HTTP_VERSION_PREFIX) :]
124 if flavor:
125 result["http.flavor"] = flavor
126
127 return result
128
129
130 def add_response_attributes(
131 span, start_response_status, response_headers
132 ): # pylint: disable=unused-argument
133 """Adds HTTP response attributes to span using the arguments
134 passed to a PEP3333-conforming start_response callable."""
135
136 status_code, status_text = start_response_status.split(" ", 1)
137 span.set_attribute("http.status_text", status_text)
138
139 try:
140 status_code = int(status_code)
141 except ValueError:
142 span.set_status(
143 Status(
144 StatusCanonicalCode.UNKNOWN,
145 "Non-integer HTTP status: " + repr(status_code),
146 )
147 )
148 else:
149 span.set_attribute("http.status_code", status_code)
150 span.set_status(Status(http_status_to_canonical_code(status_code)))
151
152
153 def get_default_span_name(environ):
154 """Default implementation for name_callback, returns HTTP {METHOD_NAME}."""
155 return "HTTP {}".format(environ.get("REQUEST_METHOD", "")).strip()
156
157
158 class OpenTelemetryMiddleware:
159 """The WSGI application middleware.
160
161 This class is a PEP 3333 conforming WSGI middleware that starts and
162 annotates spans for any requests it is invoked with.
163
164 Args:
165 wsgi: The WSGI application callable to forward requests to.
166 name_callback: Callback which calculates a generic span name for an
167 incoming HTTP request based on the PEP3333 WSGI environ.
168 Optional: Defaults to get_default_span_name.
169 """
170
171 def __init__(self, wsgi, name_callback=get_default_span_name):
172 self.wsgi = wsgi
173 self.tracer = trace.get_tracer(__name__, __version__)
174 self.name_callback = name_callback
175
176 @staticmethod
177 def _create_start_response(span, start_response):
178 @functools.wraps(start_response)
179 def _start_response(status, response_headers, *args, **kwargs):
180 add_response_attributes(span, status, response_headers)
181 return start_response(status, response_headers, *args, **kwargs)
182
183 return _start_response
184
185 def __call__(self, environ, start_response):
186 """The WSGI application
187
188 Args:
189 environ: A WSGI environment.
190 start_response: The WSGI start_response callable.
191 """
192
193 token = context.attach(
194 propagators.extract(get_header_from_environ, environ)
195 )
196 span_name = self.name_callback(environ)
197
198 span = self.tracer.start_span(
199 span_name,
200 kind=trace.SpanKind.SERVER,
201 attributes=collect_request_attributes(environ),
202 )
203
204 try:
205 with self.tracer.use_span(span):
206 start_response = self._create_start_response(
207 span, start_response
208 )
209 iterable = self.wsgi(environ, start_response)
210 return _end_span_after_iterating(
211 iterable, span, self.tracer, token
212 )
213 except Exception as ex:
214 span.set_status(Status(StatusCanonicalCode.INTERNAL, str(ex)))
215 span.end()
216 context.detach(token)
217 raise
218
219
220 # Put this in a subfunction to not delay the call to the wrapped
221 # WSGI application (instrumentation should change the application
222 # behavior as little as possible).
223 def _end_span_after_iterating(iterable, span, tracer, token):
224 try:
225 with tracer.use_span(span):
226 for yielded in iterable:
227 yield yielded
228 finally:
229 close = getattr(iterable, "close", None)
230 if close:
231 close()
232 span.end()
233 context.detach(token)
234
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py b/instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py
--- a/instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py
+++ b/instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py
@@ -117,6 +117,10 @@
if remote_host and remote_host != remote_addr:
result["net.peer.name"] = remote_host
+ user_agent = environ.get("HTTP_USER_AGENT")
+ if user_agent is not None and len(user_agent) > 0:
+ result["http.user_agent"] = user_agent
+
setifnotnone(result, "net.peer.port", environ.get("REMOTE_PORT"))
flavor = environ.get("SERVER_PROTOCOL", "")
if flavor.upper().startswith(_HTTP_VERSION_PREFIX):
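
For illustration, a minimal sketch (not part of the patch) of what the added lines do with a PEP 3333 environ; the user-agent string is a made-up sample:

```python
# Sketch only: mirrors the two lines added to collect_request_attributes().
environ = {"HTTP_USER_AGENT": "Mozilla/5.0 (example)"}  # sample value

result = {}
user_agent = environ.get("HTTP_USER_AGENT")
if user_agent is not None and len(user_agent) > 0:
    result["http.user_agent"] = user_agent
# result == {"http.user_agent": "Mozilla/5.0 (example)"}
```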
|
{"golden_diff": "diff --git a/instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py b/instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py\n--- a/instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py\n+++ b/instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py\n@@ -117,6 +117,10 @@\n if remote_host and remote_host != remote_addr:\n result[\"net.peer.name\"] = remote_host\n \n+ user_agent = environ.get(\"HTTP_USER_AGENT\")\n+ if user_agent is not None and len(user_agent) > 0:\n+ result[\"http.user_agent\"] = user_agent\n+\n setifnotnone(result, \"net.peer.port\", environ.get(\"REMOTE_PORT\"))\n flavor = environ.get(\"SERVER_PROTOCOL\", \"\")\n if flavor.upper().startswith(_HTTP_VERSION_PREFIX):\n", "issue": "Add HTTP user-agent to WSGI instrumentation\nAs discussed for ASGI in [#716 (comment)](https://github.com/open-telemetry/opentelemetry-python/pull/716/files#r428364828), the WSGI instrumentation should also set an attribute from the user-agent in a HTTP request. \n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nThis library provides a WSGI middleware that can be used on any WSGI framework\n(such as Django / Flask) to track requests timing through OpenTelemetry.\n\nUsage (Flask)\n-------------\n\n.. code-block:: python\n\n from flask import Flask\n from opentelemetry.instrumentation.wsgi import OpenTelemetryMiddleware\n\n app = Flask(__name__)\n app.wsgi_app = OpenTelemetryMiddleware(app.wsgi_app)\n\n @app.route(\"/\")\n def hello():\n return \"Hello!\"\n\n if __name__ == \"__main__\":\n app.run(debug=True)\n\n\nUsage (Django)\n--------------\n\nModify the application's ``wsgi.py`` file as shown below.\n\n.. 
code-block:: python\n\n import os\n from opentelemetry.instrumentation.wsgi import OpenTelemetryMiddleware\n from django.core.wsgi import get_wsgi_application\n\n os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'application.settings')\n\n application = get_wsgi_application()\n application = OpenTelemetryMiddleware(application)\n\nAPI\n---\n\"\"\"\n\nimport functools\nimport typing\nimport wsgiref.util as wsgiref_util\n\nfrom opentelemetry import context, propagators, trace\nfrom opentelemetry.instrumentation.utils import http_status_to_canonical_code\nfrom opentelemetry.instrumentation.wsgi.version import __version__\nfrom opentelemetry.trace.status import Status, StatusCanonicalCode\n\n_HTTP_VERSION_PREFIX = \"HTTP/\"\n\n\ndef get_header_from_environ(\n environ: dict, header_name: str\n) -> typing.List[str]:\n \"\"\"Retrieve a HTTP header value from the PEP3333-conforming WSGI environ.\n\n Returns:\n A list with a single string with the header value if it exists, else an empty list.\n \"\"\"\n environ_key = \"HTTP_\" + header_name.upper().replace(\"-\", \"_\")\n value = environ.get(environ_key)\n if value is not None:\n return [value]\n return []\n\n\ndef setifnotnone(dic, key, value):\n if value is not None:\n dic[key] = value\n\n\ndef collect_request_attributes(environ):\n \"\"\"Collects HTTP request attributes from the PEP3333-conforming\n WSGI environ and returns a dictionary to be used as span creation attributes.\"\"\"\n\n result = {\n \"component\": \"http\",\n \"http.method\": environ.get(\"REQUEST_METHOD\"),\n \"http.server_name\": environ.get(\"SERVER_NAME\"),\n \"http.scheme\": environ.get(\"wsgi.url_scheme\"),\n }\n\n host_port = environ.get(\"SERVER_PORT\")\n if host_port is not None:\n result.update({\"host.port\": int(host_port)})\n\n setifnotnone(result, \"http.host\", environ.get(\"HTTP_HOST\"))\n target = environ.get(\"RAW_URI\")\n if target is None: # Note: `\"\" or None is None`\n target = environ.get(\"REQUEST_URI\")\n if target is not None:\n result[\"http.target\"] = target\n else:\n result[\"http.url\"] = wsgiref_util.request_uri(environ)\n\n remote_addr = environ.get(\"REMOTE_ADDR\")\n if remote_addr:\n result[\"net.peer.ip\"] = remote_addr\n remote_host = environ.get(\"REMOTE_HOST\")\n if remote_host and remote_host != remote_addr:\n result[\"net.peer.name\"] = remote_host\n\n setifnotnone(result, \"net.peer.port\", environ.get(\"REMOTE_PORT\"))\n flavor = environ.get(\"SERVER_PROTOCOL\", \"\")\n if flavor.upper().startswith(_HTTP_VERSION_PREFIX):\n flavor = flavor[len(_HTTP_VERSION_PREFIX) :]\n if flavor:\n result[\"http.flavor\"] = flavor\n\n return result\n\n\ndef add_response_attributes(\n span, start_response_status, response_headers\n): # pylint: disable=unused-argument\n \"\"\"Adds HTTP response attributes to span using the arguments\n passed to a PEP3333-conforming start_response callable.\"\"\"\n\n status_code, status_text = start_response_status.split(\" \", 1)\n span.set_attribute(\"http.status_text\", status_text)\n\n try:\n status_code = int(status_code)\n except ValueError:\n span.set_status(\n Status(\n StatusCanonicalCode.UNKNOWN,\n \"Non-integer HTTP status: \" + repr(status_code),\n )\n )\n else:\n span.set_attribute(\"http.status_code\", status_code)\n span.set_status(Status(http_status_to_canonical_code(status_code)))\n\n\ndef get_default_span_name(environ):\n \"\"\"Default implementation for name_callback, returns HTTP {METHOD_NAME}.\"\"\"\n return \"HTTP {}\".format(environ.get(\"REQUEST_METHOD\", \"\")).strip()\n\n\nclass 
OpenTelemetryMiddleware:\n \"\"\"The WSGI application middleware.\n\n This class is a PEP 3333 conforming WSGI middleware that starts and\n annotates spans for any requests it is invoked with.\n\n Args:\n wsgi: The WSGI application callable to forward requests to.\n name_callback: Callback which calculates a generic span name for an\n incoming HTTP request based on the PEP3333 WSGI environ.\n Optional: Defaults to get_default_span_name.\n \"\"\"\n\n def __init__(self, wsgi, name_callback=get_default_span_name):\n self.wsgi = wsgi\n self.tracer = trace.get_tracer(__name__, __version__)\n self.name_callback = name_callback\n\n @staticmethod\n def _create_start_response(span, start_response):\n @functools.wraps(start_response)\n def _start_response(status, response_headers, *args, **kwargs):\n add_response_attributes(span, status, response_headers)\n return start_response(status, response_headers, *args, **kwargs)\n\n return _start_response\n\n def __call__(self, environ, start_response):\n \"\"\"The WSGI application\n\n Args:\n environ: A WSGI environment.\n start_response: The WSGI start_response callable.\n \"\"\"\n\n token = context.attach(\n propagators.extract(get_header_from_environ, environ)\n )\n span_name = self.name_callback(environ)\n\n span = self.tracer.start_span(\n span_name,\n kind=trace.SpanKind.SERVER,\n attributes=collect_request_attributes(environ),\n )\n\n try:\n with self.tracer.use_span(span):\n start_response = self._create_start_response(\n span, start_response\n )\n iterable = self.wsgi(environ, start_response)\n return _end_span_after_iterating(\n iterable, span, self.tracer, token\n )\n except Exception as ex:\n span.set_status(Status(StatusCanonicalCode.INTERNAL, str(ex)))\n span.end()\n context.detach(token)\n raise\n\n\n# Put this in a subfunction to not delay the call to the wrapped\n# WSGI application (instrumentation should change the application\n# behavior as little as possible).\ndef _end_span_after_iterating(iterable, span, tracer, token):\n try:\n with tracer.use_span(span):\n for yielded in iterable:\n yield yielded\n finally:\n close = getattr(iterable, \"close\", None)\n if close:\n close()\n span.end()\n context.detach(token)\n", "path": "instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nThis library provides a WSGI middleware that can be used on any WSGI framework\n(such as Django / Flask) to track requests timing through OpenTelemetry.\n\nUsage (Flask)\n-------------\n\n.. 
code-block:: python\n\n from flask import Flask\n from opentelemetry.instrumentation.wsgi import OpenTelemetryMiddleware\n\n app = Flask(__name__)\n app.wsgi_app = OpenTelemetryMiddleware(app.wsgi_app)\n\n @app.route(\"/\")\n def hello():\n return \"Hello!\"\n\n if __name__ == \"__main__\":\n app.run(debug=True)\n\n\nUsage (Django)\n--------------\n\nModify the application's ``wsgi.py`` file as shown below.\n\n.. code-block:: python\n\n import os\n from opentelemetry.instrumentation.wsgi import OpenTelemetryMiddleware\n from django.core.wsgi import get_wsgi_application\n\n os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'application.settings')\n\n application = get_wsgi_application()\n application = OpenTelemetryMiddleware(application)\n\nAPI\n---\n\"\"\"\n\nimport functools\nimport typing\nimport wsgiref.util as wsgiref_util\n\nfrom opentelemetry import context, propagators, trace\nfrom opentelemetry.instrumentation.utils import http_status_to_canonical_code\nfrom opentelemetry.instrumentation.wsgi.version import __version__\nfrom opentelemetry.trace.status import Status, StatusCanonicalCode\n\n_HTTP_VERSION_PREFIX = \"HTTP/\"\n\n\ndef get_header_from_environ(\n environ: dict, header_name: str\n) -> typing.List[str]:\n \"\"\"Retrieve a HTTP header value from the PEP3333-conforming WSGI environ.\n\n Returns:\n A list with a single string with the header value if it exists, else an empty list.\n \"\"\"\n environ_key = \"HTTP_\" + header_name.upper().replace(\"-\", \"_\")\n value = environ.get(environ_key)\n if value is not None:\n return [value]\n return []\n\n\ndef setifnotnone(dic, key, value):\n if value is not None:\n dic[key] = value\n\n\ndef collect_request_attributes(environ):\n \"\"\"Collects HTTP request attributes from the PEP3333-conforming\n WSGI environ and returns a dictionary to be used as span creation attributes.\"\"\"\n\n result = {\n \"component\": \"http\",\n \"http.method\": environ.get(\"REQUEST_METHOD\"),\n \"http.server_name\": environ.get(\"SERVER_NAME\"),\n \"http.scheme\": environ.get(\"wsgi.url_scheme\"),\n }\n\n host_port = environ.get(\"SERVER_PORT\")\n if host_port is not None:\n result.update({\"host.port\": int(host_port)})\n\n setifnotnone(result, \"http.host\", environ.get(\"HTTP_HOST\"))\n target = environ.get(\"RAW_URI\")\n if target is None: # Note: `\"\" or None is None`\n target = environ.get(\"REQUEST_URI\")\n if target is not None:\n result[\"http.target\"] = target\n else:\n result[\"http.url\"] = wsgiref_util.request_uri(environ)\n\n remote_addr = environ.get(\"REMOTE_ADDR\")\n if remote_addr:\n result[\"net.peer.ip\"] = remote_addr\n remote_host = environ.get(\"REMOTE_HOST\")\n if remote_host and remote_host != remote_addr:\n result[\"net.peer.name\"] = remote_host\n\n user_agent = environ.get(\"HTTP_USER_AGENT\")\n if user_agent is not None and len(user_agent) > 0:\n result[\"http.user_agent\"] = user_agent\n\n setifnotnone(result, \"net.peer.port\", environ.get(\"REMOTE_PORT\"))\n flavor = environ.get(\"SERVER_PROTOCOL\", \"\")\n if flavor.upper().startswith(_HTTP_VERSION_PREFIX):\n flavor = flavor[len(_HTTP_VERSION_PREFIX) :]\n if flavor:\n result[\"http.flavor\"] = flavor\n\n return result\n\n\ndef add_response_attributes(\n span, start_response_status, response_headers\n): # pylint: disable=unused-argument\n \"\"\"Adds HTTP response attributes to span using the arguments\n passed to a PEP3333-conforming start_response callable.\"\"\"\n\n status_code, status_text = start_response_status.split(\" \", 1)\n 
span.set_attribute(\"http.status_text\", status_text)\n\n try:\n status_code = int(status_code)\n except ValueError:\n span.set_status(\n Status(\n StatusCanonicalCode.UNKNOWN,\n \"Non-integer HTTP status: \" + repr(status_code),\n )\n )\n else:\n span.set_attribute(\"http.status_code\", status_code)\n span.set_status(Status(http_status_to_canonical_code(status_code)))\n\n\ndef get_default_span_name(environ):\n \"\"\"Default implementation for name_callback, returns HTTP {METHOD_NAME}.\"\"\"\n return \"HTTP {}\".format(environ.get(\"REQUEST_METHOD\", \"\")).strip()\n\n\nclass OpenTelemetryMiddleware:\n \"\"\"The WSGI application middleware.\n\n This class is a PEP 3333 conforming WSGI middleware that starts and\n annotates spans for any requests it is invoked with.\n\n Args:\n wsgi: The WSGI application callable to forward requests to.\n name_callback: Callback which calculates a generic span name for an\n incoming HTTP request based on the PEP3333 WSGI environ.\n Optional: Defaults to get_default_span_name.\n \"\"\"\n\n def __init__(self, wsgi, name_callback=get_default_span_name):\n self.wsgi = wsgi\n self.tracer = trace.get_tracer(__name__, __version__)\n self.name_callback = name_callback\n\n @staticmethod\n def _create_start_response(span, start_response):\n @functools.wraps(start_response)\n def _start_response(status, response_headers, *args, **kwargs):\n add_response_attributes(span, status, response_headers)\n return start_response(status, response_headers, *args, **kwargs)\n\n return _start_response\n\n def __call__(self, environ, start_response):\n \"\"\"The WSGI application\n\n Args:\n environ: A WSGI environment.\n start_response: The WSGI start_response callable.\n \"\"\"\n\n token = context.attach(\n propagators.extract(get_header_from_environ, environ)\n )\n span_name = self.name_callback(environ)\n\n span = self.tracer.start_span(\n span_name,\n kind=trace.SpanKind.SERVER,\n attributes=collect_request_attributes(environ),\n )\n\n try:\n with self.tracer.use_span(span):\n start_response = self._create_start_response(\n span, start_response\n )\n iterable = self.wsgi(environ, start_response)\n return _end_span_after_iterating(\n iterable, span, self.tracer, token\n )\n except Exception as ex:\n span.set_status(Status(StatusCanonicalCode.INTERNAL, str(ex)))\n span.end()\n context.detach(token)\n raise\n\n\n# Put this in a subfunction to not delay the call to the wrapped\n# WSGI application (instrumentation should change the application\n# behavior as little as possible).\ndef _end_span_after_iterating(iterable, span, tracer, token):\n try:\n with tracer.use_span(span):\n for yielded in iterable:\n yield yielded\n finally:\n close = getattr(iterable, \"close\", None)\n if close:\n close()\n span.end()\n context.detach(token)\n", "path": "instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py"}]}
| 2,622 | 226 |
gh_patches_debug_11953
|
rasdani/github-patches
|
git_diff
|
ckan__ckan-6180
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Can't use `solr_user` and `solr_password` in Python 3
**CKAN version**
2.9
**Describe the bug**
When setting the `solr_user` and `solr_password` config options while running Python 3, you get a startup error:
```
File "/usr/lib/ckan/src/ckan/ckan/lib/search/__init__.py", line 309, in check_solr_schema_version
res = _get_schema_from_solr(SOLR_SCHEMA_FILE_OFFSET_MANAGED)
File "/usr/lib/ckan/src/ckan/ckan/lib/search/__init__.py", line 262, in _get_schema_from_solr
http_auth = 'Basic ' + http_auth.encode('base64').strip()
LookupError: 'base64' is not a text encoding; use codecs.encode() to handle arbitrary codecs
```
--- END ISSUE ---
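For context on the traceback above: in Python 2, strings accepted `'base64'` as a codec, so `some_string.encode('base64')` worked; Python 3 removed that codec and requires the `base64` module, which operates on bytes. A minimal illustrative sketch of the difference (the credential values are made up, and this is not the exact code used in the patch further below):

```python
import base64

solr_user, solr_password = "admin", "secret"  # illustrative values only

# Python 2 idiom that raises LookupError on Python 3:
#   ("%s:%s" % (solr_user, solr_password)).encode("base64")

# Python 3 equivalent: go through bytes explicitly.
credentials = "{}:{}".format(solr_user, solr_password).encode("utf-8")
http_auth = "Basic " + base64.b64encode(credentials).decode("ascii")
print(http_auth)  # Basic YWRtaW46c2VjcmV0
```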
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckan/lib/search/__init__.py`
Content:
```
1 # encoding: utf-8
2
3 from __future__ import print_function
4 import logging
5 import sys
6 import cgitb
7 import warnings
8 import xml.dom.minidom
9
10 import requests
11
12 from ckan.common import asbool, config
13 import ckan.model as model
14 import ckan.plugins as p
15 import ckan.logic as logic
16
17 from ckan.lib.search.common import (
18 SearchIndexError, SearchError, SearchQueryError,
19 make_connection, is_available, SolrSettings
20 )
21 from ckan.lib.search.index import PackageSearchIndex, NoopSearchIndex
22 from ckan.lib.search.query import (
23 TagSearchQuery, ResourceSearchQuery, PackageSearchQuery,
24 QueryOptions, convert_legacy_parameters_to_solr
25 )
26
27
28 log = logging.getLogger(__name__)
29
30
31 def text_traceback():
32 with warnings.catch_warnings():
33 warnings.simplefilter("ignore")
34 res = 'the original traceback:'.join(
35 cgitb.text(sys.exc_info()).split('the original traceback:')[1:]
36 ).strip()
37 return res
38
39
40 SUPPORTED_SCHEMA_VERSIONS = ['2.8', '2.9']
41
42 DEFAULT_OPTIONS = {
43 'limit': 20,
44 'offset': 0,
45 # about presenting the results
46 'order_by': 'rank',
47 'return_objects': False,
48 'ref_entity_with_attr': 'name',
49 'all_fields': False,
50 'search_tags': True,
51 'callback': None, # simply passed through
52 }
53
54 _INDICES = {
55 'package': PackageSearchIndex
56 }
57
58 _QUERIES = {
59 'tag': TagSearchQuery,
60 'resource': ResourceSearchQuery,
61 'package': PackageSearchQuery
62 }
63
64 SOLR_SCHEMA_FILE_OFFSET_MANAGED = '/schema?wt=schema.xml'
65 SOLR_SCHEMA_FILE_OFFSET_CLASSIC = '/admin/file/?file=schema.xml'
66
67
68 def _normalize_type(_type):
69 if isinstance(_type, model.domain_object.DomainObject):
70 _type = _type.__class__
71 if isinstance(_type, type):
72 _type = _type.__name__
73 return _type.strip().lower()
74
75
76 def index_for(_type):
77 """ Get a SearchIndex instance sub-class suitable for
78 the specified type. """
79 try:
80 _type_n = _normalize_type(_type)
81 return _INDICES[_type_n]()
82 except KeyError as ke:
83 log.warn("Unknown search type: %s" % _type)
84 return NoopSearchIndex()
85
86
87 def query_for(_type):
88 """ Get a SearchQuery instance sub-class suitable for the specified
89 type. """
90 try:
91 _type_n = _normalize_type(_type)
92 return _QUERIES[_type_n]()
93 except KeyError as ke:
94 raise SearchError("Unknown search type: %s" % _type)
95
96
97 def dispatch_by_operation(entity_type, entity, operation):
98 """Call the appropriate index method for a given notification."""
99 try:
100 index = index_for(entity_type)
101 if operation == model.domain_object.DomainObjectOperation.new:
102 index.insert_dict(entity)
103 elif operation == model.domain_object.DomainObjectOperation.changed:
104 index.update_dict(entity)
105 elif operation == model.domain_object.DomainObjectOperation.deleted:
106 index.remove_dict(entity)
107 else:
108 log.warn("Unknown operation: %s" % operation)
109 except Exception as ex:
110 log.exception(ex)
111 # we really need to know about any exceptions, so reraise
112 # (see #1172)
113 raise
114
115
116 class SynchronousSearchPlugin(p.SingletonPlugin):
117 """Update the search index automatically."""
118 p.implements(p.IDomainObjectModification, inherit=True)
119
120 def notify(self, entity, operation):
121 if (not isinstance(entity, model.Package) or
122 not asbool(config.get('ckan.search.automatic_indexing', True))):
123 return
124 if operation != model.domain_object.DomainObjectOperation.deleted:
125 dispatch_by_operation(
126 entity.__class__.__name__,
127 logic.get_action('package_show')(
128 {'model': model, 'ignore_auth': True, 'validate': False,
129 'use_cache': False},
130 {'id': entity.id}),
131 operation
132 )
133 elif operation == model.domain_object.DomainObjectOperation.deleted:
134 dispatch_by_operation(entity.__class__.__name__,
135 {'id': entity.id}, operation)
136 else:
137 log.warn("Discarded Sync. indexing for: %s" % entity)
138
139
140 def rebuild(package_id=None, only_missing=False, force=False, refresh=False,
141 defer_commit=False, package_ids=None, quiet=False):
142 '''
143 Rebuilds the search index.
144
145 If a dataset id is provided, only this dataset will be reindexed.
146 When reindexing all datasets, if only_missing is True, only the
147 datasets not already indexed will be processed. If force equals
148 True, if an exception is found, the exception will be logged, but
149 the process will carry on.
150 '''
151 log.info("Rebuilding search index...")
152
153 package_index = index_for(model.Package)
154 context = {'model': model, 'ignore_auth': True, 'validate': False,
155 'use_cache': False}
156
157 if package_id:
158 pkg_dict = logic.get_action('package_show')(context,
159 {'id': package_id})
160 log.info('Indexing just package %r...', pkg_dict['name'])
161 package_index.remove_dict(pkg_dict)
162 package_index.insert_dict(pkg_dict)
163 elif package_ids is not None:
164 for package_id in package_ids:
165 pkg_dict = logic.get_action('package_show')(context,
166 {'id': package_id})
167 log.info('Indexing just package %r...', pkg_dict['name'])
168 package_index.update_dict(pkg_dict, True)
169 else:
170 package_ids = [r[0] for r in model.Session.query(model.Package.id).
171 filter(model.Package.state != 'deleted').all()]
172 if only_missing:
173 log.info('Indexing only missing packages...')
174 package_query = query_for(model.Package)
175 indexed_pkg_ids = set(package_query.get_all_entity_ids(
176 max_results=len(package_ids)))
177 # Packages not indexed
178 package_ids = set(package_ids) - indexed_pkg_ids
179
180 if len(package_ids) == 0:
181 log.info('All datasets are already indexed')
182 return
183 else:
184 log.info('Rebuilding the whole index...')
185 # When refreshing, the index is not previously cleared
186 if not refresh:
187 package_index.clear()
188
189 total_packages = len(package_ids)
190 for counter, pkg_id in enumerate(package_ids):
191 if not quiet:
192 sys.stdout.write(
193 "\rIndexing dataset {0}/{1}".format(
194 counter +1, total_packages)
195 )
196 sys.stdout.flush()
197 try:
198 package_index.update_dict(
199 logic.get_action('package_show')(context,
200 {'id': pkg_id}
201 ),
202 defer_commit
203 )
204 except Exception as e:
205 log.error(u'Error while indexing dataset %s: %s' %
206 (pkg_id, repr(e)))
207 if force:
208 log.error(text_traceback())
209 continue
210 else:
211 raise
212
213 model.Session.commit()
214 log.info('Finished rebuilding search index.')
215
216
217 def commit():
218 package_index = index_for(model.Package)
219 package_index.commit()
220 log.info('Commited pending changes on the search index')
221
222
223 def check():
224 package_query = query_for(model.Package)
225
226 log.debug("Checking packages search index...")
227 pkgs_q = model.Session.query(model.Package).filter_by(
228 state=model.State.ACTIVE)
229 pkgs = {pkg.id for pkg in pkgs_q}
230 indexed_pkgs = set(package_query.get_all_entity_ids(max_results=len(pkgs)))
231 pkgs_not_indexed = pkgs - indexed_pkgs
232 print('Packages not indexed = %i out of %i' % (len(pkgs_not_indexed),
233 len(pkgs)))
234 for pkg_id in pkgs_not_indexed:
235 pkg = model.Session.query(model.Package).get(pkg_id)
236 print((pkg.metadata_modified.strftime('%Y-%m-%d'), pkg.name))
237
238
239 def show(package_reference):
240 package_query = query_for(model.Package)
241 return package_query.get_index(package_reference)
242
243
244 def clear(package_reference):
245 package_index = index_for(model.Package)
246 log.debug("Clearing search index for dataset %s..." %
247 package_reference)
248 package_index.delete_package({'id': package_reference})
249
250
251 def clear_all():
252 package_index = index_for(model.Package)
253 log.debug("Clearing search index...")
254 package_index.clear()
255
256 def _get_schema_from_solr(file_offset):
257 solr_url, solr_user, solr_password = SolrSettings.get()
258
259 http_auth = None
260 if solr_user is not None and solr_password is not None:
261 http_auth = solr_user + ':' + solr_password
262 http_auth = 'Basic ' + http_auth.encode('base64').strip()
263
264 url = solr_url.strip('/') + file_offset
265
266 if http_auth:
267 response = requests.get(
268 url, headers={'Authorization': http_auth})
269 else:
270 response = requests.get(url)
271
272 return response
273
274 def check_solr_schema_version(schema_file=None):
275 '''
276 Checks if the schema version of the SOLR server is compatible
277 with this CKAN version.
278
279 The schema will be retrieved from the SOLR server, using the
280 offset defined in SOLR_SCHEMA_FILE_OFFSET_MANAGED
281 ('/schema?wt=schema.xml'). If SOLR is set to use the manually
282 edited `schema.xml`, the schema will be retrieved from the SOLR
283 server using the offset defined in
284 SOLR_SCHEMA_FILE_OFFSET_CLASSIC ('/admin/file/?file=schema.xml').
285
286 The schema_file parameter allows to override this pointing to
287 different schema file, but it should only be used for testing
288 purposes.
289
290 If the CKAN instance is configured to not use SOLR or the SOLR
291 server is not available, the function will return False, as the
292 version check does not apply. If the SOLR server is available,
293 a SearchError exception will be thrown if the version could not
294 be extracted or it is not included in the supported versions list.
295
296 :schema_file: Absolute path to an alternative schema file. Should
297 be only used for testing purposes (Default is None)
298 '''
299
300 if not is_available():
301 # Something is wrong with the SOLR server
302 log.warn('Problems were found while connecting to the SOLR server')
303 return False
304
305 # Try to get the schema XML file to extract the version
306 if not schema_file:
307 try:
308 # Try Managed Schema
309 res = _get_schema_from_solr(SOLR_SCHEMA_FILE_OFFSET_MANAGED)
310 res.raise_for_status()
311 except requests.HTTPError:
312 # Fallback to Manually Edited schema.xml
313 res = _get_schema_from_solr(SOLR_SCHEMA_FILE_OFFSET_CLASSIC)
314 schema_content = res.text
315 else:
316 with open(schema_file, 'rb') as f:
317 schema_content = f.read()
318
319 tree = xml.dom.minidom.parseString(schema_content)
320
321 version = tree.documentElement.getAttribute('version')
322 if not len(version):
323 msg = 'Could not extract version info from the SOLR schema'
324 if schema_file:
325 msg += ', using file {}'.format(schema_file)
326 raise SearchError(msg)
327
328 if not version in SUPPORTED_SCHEMA_VERSIONS:
329 raise SearchError('SOLR schema version not supported: %s. Supported'
330 ' versions are [%s]'
331 % (version, ', '.join(SUPPORTED_SCHEMA_VERSIONS)))
332 return True
333
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ckan/lib/search/__init__.py b/ckan/lib/search/__init__.py
--- a/ckan/lib/search/__init__.py
+++ b/ckan/lib/search/__init__.py
@@ -5,6 +5,7 @@
import sys
import cgitb
import warnings
+import base64
import xml.dom.minidom
import requests
@@ -259,7 +260,7 @@
http_auth = None
if solr_user is not None and solr_password is not None:
http_auth = solr_user + ':' + solr_password
- http_auth = 'Basic ' + http_auth.encode('base64').strip()
+ http_auth = 'Basic {}'.format(base64.b64encode(http_auth.encode('utf8')).strip())
url = solr_url.strip('/') + file_offset
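
As an aside (not part of the patch above), `requests` can also build the Basic auth header itself through its `auth` parameter, which sidesteps the manual base64 handling entirely; a hypothetical equivalent of the authenticated branch would look like:

```python
import requests

solr_url = "http://localhost:8983/solr/ckan"   # illustrative
solr_user, solr_password = "admin", "secret"   # illustrative

# Sketch only -- the actual patch keeps the manually built Authorization header.
response = requests.get(solr_url + "/schema?wt=schema.xml",
                        auth=(solr_user, solr_password))
```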
|
{"golden_diff": "diff --git a/ckan/lib/search/__init__.py b/ckan/lib/search/__init__.py\n--- a/ckan/lib/search/__init__.py\n+++ b/ckan/lib/search/__init__.py\n@@ -5,6 +5,7 @@\n import sys\n import cgitb\n import warnings\n+import base64\n import xml.dom.minidom\n \n import requests\n@@ -259,7 +260,7 @@\n http_auth = None\n if solr_user is not None and solr_password is not None:\n http_auth = solr_user + ':' + solr_password\n- http_auth = 'Basic ' + http_auth.encode('base64').strip()\n+ http_auth = 'Basic {}'.format(base64.b64encode(http_auth.encode('utf8')).strip())\n \n url = solr_url.strip('/') + file_offset\n", "issue": "Can't use `solr_user` and `solr_password` in Python 3\n**CKAN version**\r\n2.9 \r\n\r\n**Describe the bug**\r\nWhen setting the `solr_user` and `solr_password` config options running Python 3 you get a startup error:\r\n\r\n```\r\n File \"/usr/lib/ckan/src/ckan/ckan/lib/search/__init__.py\", line 309, in check_solr_schema_version\r\n res = _get_schema_from_solr(SOLR_SCHEMA_FILE_OFFSET_MANAGED)\r\n File \"/usr/lib/ckan/src/ckan/ckan/lib/search/__init__.py\", line 262, in _get_schema_from_solr\r\n http_auth = 'Basic ' + http_auth.encode('base64').strip()\r\nLookupError: 'base64' is not a text encoding; use codecs.encode() to handle arbitrary codecs\r\n```\r\n\n", "before_files": [{"content": "# encoding: utf-8\n\nfrom __future__ import print_function\nimport logging\nimport sys\nimport cgitb\nimport warnings\nimport xml.dom.minidom\n\nimport requests\n\nfrom ckan.common import asbool, config\nimport ckan.model as model\nimport ckan.plugins as p\nimport ckan.logic as logic\n\nfrom ckan.lib.search.common import (\n SearchIndexError, SearchError, SearchQueryError,\n make_connection, is_available, SolrSettings\n )\nfrom ckan.lib.search.index import PackageSearchIndex, NoopSearchIndex\nfrom ckan.lib.search.query import (\n TagSearchQuery, ResourceSearchQuery, PackageSearchQuery,\n QueryOptions, convert_legacy_parameters_to_solr\n)\n\n\nlog = logging.getLogger(__name__)\n\n\ndef text_traceback():\n with warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n res = 'the original traceback:'.join(\n cgitb.text(sys.exc_info()).split('the original traceback:')[1:]\n ).strip()\n return res\n\n\nSUPPORTED_SCHEMA_VERSIONS = ['2.8', '2.9']\n\nDEFAULT_OPTIONS = {\n 'limit': 20,\n 'offset': 0,\n # about presenting the results\n 'order_by': 'rank',\n 'return_objects': False,\n 'ref_entity_with_attr': 'name',\n 'all_fields': False,\n 'search_tags': True,\n 'callback': None, # simply passed through\n}\n\n_INDICES = {\n 'package': PackageSearchIndex\n}\n\n_QUERIES = {\n 'tag': TagSearchQuery,\n 'resource': ResourceSearchQuery,\n 'package': PackageSearchQuery\n}\n\nSOLR_SCHEMA_FILE_OFFSET_MANAGED = '/schema?wt=schema.xml'\nSOLR_SCHEMA_FILE_OFFSET_CLASSIC = '/admin/file/?file=schema.xml'\n\n\ndef _normalize_type(_type):\n if isinstance(_type, model.domain_object.DomainObject):\n _type = _type.__class__\n if isinstance(_type, type):\n _type = _type.__name__\n return _type.strip().lower()\n\n\ndef index_for(_type):\n \"\"\" Get a SearchIndex instance sub-class suitable for\n the specified type. \"\"\"\n try:\n _type_n = _normalize_type(_type)\n return _INDICES[_type_n]()\n except KeyError as ke:\n log.warn(\"Unknown search type: %s\" % _type)\n return NoopSearchIndex()\n\n\ndef query_for(_type):\n \"\"\" Get a SearchQuery instance sub-class suitable for the specified\n type. 
\"\"\"\n try:\n _type_n = _normalize_type(_type)\n return _QUERIES[_type_n]()\n except KeyError as ke:\n raise SearchError(\"Unknown search type: %s\" % _type)\n\n\ndef dispatch_by_operation(entity_type, entity, operation):\n \"\"\"Call the appropriate index method for a given notification.\"\"\"\n try:\n index = index_for(entity_type)\n if operation == model.domain_object.DomainObjectOperation.new:\n index.insert_dict(entity)\n elif operation == model.domain_object.DomainObjectOperation.changed:\n index.update_dict(entity)\n elif operation == model.domain_object.DomainObjectOperation.deleted:\n index.remove_dict(entity)\n else:\n log.warn(\"Unknown operation: %s\" % operation)\n except Exception as ex:\n log.exception(ex)\n # we really need to know about any exceptions, so reraise\n # (see #1172)\n raise\n\n\nclass SynchronousSearchPlugin(p.SingletonPlugin):\n \"\"\"Update the search index automatically.\"\"\"\n p.implements(p.IDomainObjectModification, inherit=True)\n\n def notify(self, entity, operation):\n if (not isinstance(entity, model.Package) or\n not asbool(config.get('ckan.search.automatic_indexing', True))):\n return\n if operation != model.domain_object.DomainObjectOperation.deleted:\n dispatch_by_operation(\n entity.__class__.__name__,\n logic.get_action('package_show')(\n {'model': model, 'ignore_auth': True, 'validate': False,\n 'use_cache': False},\n {'id': entity.id}),\n operation\n )\n elif operation == model.domain_object.DomainObjectOperation.deleted:\n dispatch_by_operation(entity.__class__.__name__,\n {'id': entity.id}, operation)\n else:\n log.warn(\"Discarded Sync. indexing for: %s\" % entity)\n\n\ndef rebuild(package_id=None, only_missing=False, force=False, refresh=False,\n defer_commit=False, package_ids=None, quiet=False):\n '''\n Rebuilds the search index.\n\n If a dataset id is provided, only this dataset will be reindexed.\n When reindexing all datasets, if only_missing is True, only the\n datasets not already indexed will be processed. 
If force equals\n True, if an exception is found, the exception will be logged, but\n the process will carry on.\n '''\n log.info(\"Rebuilding search index...\")\n\n package_index = index_for(model.Package)\n context = {'model': model, 'ignore_auth': True, 'validate': False,\n 'use_cache': False}\n\n if package_id:\n pkg_dict = logic.get_action('package_show')(context,\n {'id': package_id})\n log.info('Indexing just package %r...', pkg_dict['name'])\n package_index.remove_dict(pkg_dict)\n package_index.insert_dict(pkg_dict)\n elif package_ids is not None:\n for package_id in package_ids:\n pkg_dict = logic.get_action('package_show')(context,\n {'id': package_id})\n log.info('Indexing just package %r...', pkg_dict['name'])\n package_index.update_dict(pkg_dict, True)\n else:\n package_ids = [r[0] for r in model.Session.query(model.Package.id).\n filter(model.Package.state != 'deleted').all()]\n if only_missing:\n log.info('Indexing only missing packages...')\n package_query = query_for(model.Package)\n indexed_pkg_ids = set(package_query.get_all_entity_ids(\n max_results=len(package_ids)))\n # Packages not indexed\n package_ids = set(package_ids) - indexed_pkg_ids\n\n if len(package_ids) == 0:\n log.info('All datasets are already indexed')\n return\n else:\n log.info('Rebuilding the whole index...')\n # When refreshing, the index is not previously cleared\n if not refresh:\n package_index.clear()\n\n total_packages = len(package_ids)\n for counter, pkg_id in enumerate(package_ids):\n if not quiet:\n sys.stdout.write(\n \"\\rIndexing dataset {0}/{1}\".format(\n counter +1, total_packages)\n )\n sys.stdout.flush()\n try:\n package_index.update_dict(\n logic.get_action('package_show')(context,\n {'id': pkg_id}\n ),\n defer_commit\n )\n except Exception as e:\n log.error(u'Error while indexing dataset %s: %s' %\n (pkg_id, repr(e)))\n if force:\n log.error(text_traceback())\n continue\n else:\n raise\n\n model.Session.commit()\n log.info('Finished rebuilding search index.')\n\n\ndef commit():\n package_index = index_for(model.Package)\n package_index.commit()\n log.info('Commited pending changes on the search index')\n\n\ndef check():\n package_query = query_for(model.Package)\n\n log.debug(\"Checking packages search index...\")\n pkgs_q = model.Session.query(model.Package).filter_by(\n state=model.State.ACTIVE)\n pkgs = {pkg.id for pkg in pkgs_q}\n indexed_pkgs = set(package_query.get_all_entity_ids(max_results=len(pkgs)))\n pkgs_not_indexed = pkgs - indexed_pkgs\n print('Packages not indexed = %i out of %i' % (len(pkgs_not_indexed),\n len(pkgs)))\n for pkg_id in pkgs_not_indexed:\n pkg = model.Session.query(model.Package).get(pkg_id)\n print((pkg.metadata_modified.strftime('%Y-%m-%d'), pkg.name))\n\n\ndef show(package_reference):\n package_query = query_for(model.Package)\n return package_query.get_index(package_reference)\n\n\ndef clear(package_reference):\n package_index = index_for(model.Package)\n log.debug(\"Clearing search index for dataset %s...\" %\n package_reference)\n package_index.delete_package({'id': package_reference})\n\n\ndef clear_all():\n package_index = index_for(model.Package)\n log.debug(\"Clearing search index...\")\n package_index.clear()\n\ndef _get_schema_from_solr(file_offset):\n solr_url, solr_user, solr_password = SolrSettings.get()\n\n http_auth = None\n if solr_user is not None and solr_password is not None:\n http_auth = solr_user + ':' + solr_password\n http_auth = 'Basic ' + http_auth.encode('base64').strip()\n\n url = solr_url.strip('/') + file_offset\n\n if 
http_auth:\n response = requests.get(\n url, headers={'Authorization': http_auth})\n else:\n response = requests.get(url)\n\n return response\n\ndef check_solr_schema_version(schema_file=None):\n '''\n Checks if the schema version of the SOLR server is compatible\n with this CKAN version.\n\n The schema will be retrieved from the SOLR server, using the\n offset defined in SOLR_SCHEMA_FILE_OFFSET_MANAGED\n ('/schema?wt=schema.xml'). If SOLR is set to use the manually\n edited `schema.xml`, the schema will be retrieved from the SOLR\n server using the offset defined in\n SOLR_SCHEMA_FILE_OFFSET_CLASSIC ('/admin/file/?file=schema.xml').\n\n The schema_file parameter allows to override this pointing to\n different schema file, but it should only be used for testing\n purposes.\n\n If the CKAN instance is configured to not use SOLR or the SOLR\n server is not available, the function will return False, as the\n version check does not apply. If the SOLR server is available,\n a SearchError exception will be thrown if the version could not\n be extracted or it is not included in the supported versions list.\n\n :schema_file: Absolute path to an alternative schema file. Should\n be only used for testing purposes (Default is None)\n '''\n\n if not is_available():\n # Something is wrong with the SOLR server\n log.warn('Problems were found while connecting to the SOLR server')\n return False\n\n # Try to get the schema XML file to extract the version\n if not schema_file:\n try:\n # Try Managed Schema\n res = _get_schema_from_solr(SOLR_SCHEMA_FILE_OFFSET_MANAGED)\n res.raise_for_status()\n except requests.HTTPError:\n # Fallback to Manually Edited schema.xml\n res = _get_schema_from_solr(SOLR_SCHEMA_FILE_OFFSET_CLASSIC)\n schema_content = res.text\n else:\n with open(schema_file, 'rb') as f:\n schema_content = f.read()\n\n tree = xml.dom.minidom.parseString(schema_content)\n\n version = tree.documentElement.getAttribute('version')\n if not len(version):\n msg = 'Could not extract version info from the SOLR schema'\n if schema_file:\n msg += ', using file {}'.format(schema_file)\n raise SearchError(msg)\n\n if not version in SUPPORTED_SCHEMA_VERSIONS:\n raise SearchError('SOLR schema version not supported: %s. 
Supported'\n ' versions are [%s]'\n % (version, ', '.join(SUPPORTED_SCHEMA_VERSIONS)))\n return True\n", "path": "ckan/lib/search/__init__.py"}], "after_files": [{"content": "# encoding: utf-8\n\nfrom __future__ import print_function\nimport logging\nimport sys\nimport cgitb\nimport warnings\nimport base64\nimport xml.dom.minidom\n\nimport requests\n\nfrom ckan.common import asbool, config\nimport ckan.model as model\nimport ckan.plugins as p\nimport ckan.logic as logic\n\nfrom ckan.lib.search.common import (\n SearchIndexError, SearchError, SearchQueryError,\n make_connection, is_available, SolrSettings\n )\nfrom ckan.lib.search.index import PackageSearchIndex, NoopSearchIndex\nfrom ckan.lib.search.query import (\n TagSearchQuery, ResourceSearchQuery, PackageSearchQuery,\n QueryOptions, convert_legacy_parameters_to_solr\n)\n\n\nlog = logging.getLogger(__name__)\n\n\ndef text_traceback():\n with warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n res = 'the original traceback:'.join(\n cgitb.text(sys.exc_info()).split('the original traceback:')[1:]\n ).strip()\n return res\n\n\nSUPPORTED_SCHEMA_VERSIONS = ['2.8', '2.9']\n\nDEFAULT_OPTIONS = {\n 'limit': 20,\n 'offset': 0,\n # about presenting the results\n 'order_by': 'rank',\n 'return_objects': False,\n 'ref_entity_with_attr': 'name',\n 'all_fields': False,\n 'search_tags': True,\n 'callback': None, # simply passed through\n}\n\n_INDICES = {\n 'package': PackageSearchIndex\n}\n\n_QUERIES = {\n 'tag': TagSearchQuery,\n 'resource': ResourceSearchQuery,\n 'package': PackageSearchQuery\n}\n\nSOLR_SCHEMA_FILE_OFFSET_MANAGED = '/schema?wt=schema.xml'\nSOLR_SCHEMA_FILE_OFFSET_CLASSIC = '/admin/file/?file=schema.xml'\n\n\ndef _normalize_type(_type):\n if isinstance(_type, model.domain_object.DomainObject):\n _type = _type.__class__\n if isinstance(_type, type):\n _type = _type.__name__\n return _type.strip().lower()\n\n\ndef index_for(_type):\n \"\"\" Get a SearchIndex instance sub-class suitable for\n the specified type. \"\"\"\n try:\n _type_n = _normalize_type(_type)\n return _INDICES[_type_n]()\n except KeyError as ke:\n log.warn(\"Unknown search type: %s\" % _type)\n return NoopSearchIndex()\n\n\ndef query_for(_type):\n \"\"\" Get a SearchQuery instance sub-class suitable for the specified\n type. 
\"\"\"\n try:\n _type_n = _normalize_type(_type)\n return _QUERIES[_type_n]()\n except KeyError as ke:\n raise SearchError(\"Unknown search type: %s\" % _type)\n\n\ndef dispatch_by_operation(entity_type, entity, operation):\n \"\"\"Call the appropriate index method for a given notification.\"\"\"\n try:\n index = index_for(entity_type)\n if operation == model.domain_object.DomainObjectOperation.new:\n index.insert_dict(entity)\n elif operation == model.domain_object.DomainObjectOperation.changed:\n index.update_dict(entity)\n elif operation == model.domain_object.DomainObjectOperation.deleted:\n index.remove_dict(entity)\n else:\n log.warn(\"Unknown operation: %s\" % operation)\n except Exception as ex:\n log.exception(ex)\n # we really need to know about any exceptions, so reraise\n # (see #1172)\n raise\n\n\nclass SynchronousSearchPlugin(p.SingletonPlugin):\n \"\"\"Update the search index automatically.\"\"\"\n p.implements(p.IDomainObjectModification, inherit=True)\n\n def notify(self, entity, operation):\n if (not isinstance(entity, model.Package) or\n not asbool(config.get('ckan.search.automatic_indexing', True))):\n return\n if operation != model.domain_object.DomainObjectOperation.deleted:\n dispatch_by_operation(\n entity.__class__.__name__,\n logic.get_action('package_show')(\n {'model': model, 'ignore_auth': True, 'validate': False,\n 'use_cache': False},\n {'id': entity.id}),\n operation\n )\n elif operation == model.domain_object.DomainObjectOperation.deleted:\n dispatch_by_operation(entity.__class__.__name__,\n {'id': entity.id}, operation)\n else:\n log.warn(\"Discarded Sync. indexing for: %s\" % entity)\n\n\ndef rebuild(package_id=None, only_missing=False, force=False, refresh=False,\n defer_commit=False, package_ids=None, quiet=False):\n '''\n Rebuilds the search index.\n\n If a dataset id is provided, only this dataset will be reindexed.\n When reindexing all datasets, if only_missing is True, only the\n datasets not already indexed will be processed. 
If force equals\n True, if an exception is found, the exception will be logged, but\n the process will carry on.\n '''\n log.info(\"Rebuilding search index...\")\n\n package_index = index_for(model.Package)\n context = {'model': model, 'ignore_auth': True, 'validate': False,\n 'use_cache': False}\n\n if package_id:\n pkg_dict = logic.get_action('package_show')(context,\n {'id': package_id})\n log.info('Indexing just package %r...', pkg_dict['name'])\n package_index.remove_dict(pkg_dict)\n package_index.insert_dict(pkg_dict)\n elif package_ids is not None:\n for package_id in package_ids:\n pkg_dict = logic.get_action('package_show')(context,\n {'id': package_id})\n log.info('Indexing just package %r...', pkg_dict['name'])\n package_index.update_dict(pkg_dict, True)\n else:\n package_ids = [r[0] for r in model.Session.query(model.Package.id).\n filter(model.Package.state != 'deleted').all()]\n if only_missing:\n log.info('Indexing only missing packages...')\n package_query = query_for(model.Package)\n indexed_pkg_ids = set(package_query.get_all_entity_ids(\n max_results=len(package_ids)))\n # Packages not indexed\n package_ids = set(package_ids) - indexed_pkg_ids\n\n if len(package_ids) == 0:\n log.info('All datasets are already indexed')\n return\n else:\n log.info('Rebuilding the whole index...')\n # When refreshing, the index is not previously cleared\n if not refresh:\n package_index.clear()\n\n total_packages = len(package_ids)\n for counter, pkg_id in enumerate(package_ids):\n if not quiet:\n sys.stdout.write(\n \"\\rIndexing dataset {0}/{1}\".format(\n counter +1, total_packages)\n )\n sys.stdout.flush()\n try:\n package_index.update_dict(\n logic.get_action('package_show')(context,\n {'id': pkg_id}\n ),\n defer_commit\n )\n except Exception as e:\n log.error(u'Error while indexing dataset %s: %s' %\n (pkg_id, repr(e)))\n if force:\n log.error(text_traceback())\n continue\n else:\n raise\n\n model.Session.commit()\n log.info('Finished rebuilding search index.')\n\n\ndef commit():\n package_index = index_for(model.Package)\n package_index.commit()\n log.info('Commited pending changes on the search index')\n\n\ndef check():\n package_query = query_for(model.Package)\n\n log.debug(\"Checking packages search index...\")\n pkgs_q = model.Session.query(model.Package).filter_by(\n state=model.State.ACTIVE)\n pkgs = {pkg.id for pkg in pkgs_q}\n indexed_pkgs = set(package_query.get_all_entity_ids(max_results=len(pkgs)))\n pkgs_not_indexed = pkgs - indexed_pkgs\n print('Packages not indexed = %i out of %i' % (len(pkgs_not_indexed),\n len(pkgs)))\n for pkg_id in pkgs_not_indexed:\n pkg = model.Session.query(model.Package).get(pkg_id)\n print((pkg.metadata_modified.strftime('%Y-%m-%d'), pkg.name))\n\n\ndef show(package_reference):\n package_query = query_for(model.Package)\n return package_query.get_index(package_reference)\n\n\ndef clear(package_reference):\n package_index = index_for(model.Package)\n log.debug(\"Clearing search index for dataset %s...\" %\n package_reference)\n package_index.delete_package({'id': package_reference})\n\n\ndef clear_all():\n package_index = index_for(model.Package)\n log.debug(\"Clearing search index...\")\n package_index.clear()\n\ndef _get_schema_from_solr(file_offset):\n solr_url, solr_user, solr_password = SolrSettings.get()\n\n http_auth = None\n if solr_user is not None and solr_password is not None:\n http_auth = solr_user + ':' + solr_password\n http_auth = 'Basic {}'.format(base64.b64encode(http_auth.encode('utf8')).strip())\n\n url = 
solr_url.strip('/') + file_offset\n\n if http_auth:\n response = requests.get(\n url, headers={'Authorization': http_auth})\n else:\n response = requests.get(url)\n\n return response\n\ndef check_solr_schema_version(schema_file=None):\n '''\n Checks if the schema version of the SOLR server is compatible\n with this CKAN version.\n\n The schema will be retrieved from the SOLR server, using the\n offset defined in SOLR_SCHEMA_FILE_OFFSET_MANAGED\n ('/schema?wt=schema.xml'). If SOLR is set to use the manually\n edited `schema.xml`, the schema will be retrieved from the SOLR\n server using the offset defined in\n SOLR_SCHEMA_FILE_OFFSET_CLASSIC ('/admin/file/?file=schema.xml').\n\n The schema_file parameter allows to override this pointing to\n different schema file, but it should only be used for testing\n purposes.\n\n If the CKAN instance is configured to not use SOLR or the SOLR\n server is not available, the function will return False, as the\n version check does not apply. If the SOLR server is available,\n a SearchError exception will be thrown if the version could not\n be extracted or it is not included in the supported versions list.\n\n :schema_file: Absolute path to an alternative schema file. Should\n be only used for testing purposes (Default is None)\n '''\n\n if not is_available():\n # Something is wrong with the SOLR server\n log.warn('Problems were found while connecting to the SOLR server')\n return False\n\n # Try to get the schema XML file to extract the version\n if not schema_file:\n try:\n # Try Managed Schema\n res = _get_schema_from_solr(SOLR_SCHEMA_FILE_OFFSET_MANAGED)\n res.raise_for_status()\n except requests.HTTPError:\n # Fallback to Manually Edited schema.xml\n res = _get_schema_from_solr(SOLR_SCHEMA_FILE_OFFSET_CLASSIC)\n schema_content = res.text\n else:\n with open(schema_file, 'rb') as f:\n schema_content = f.read()\n\n tree = xml.dom.minidom.parseString(schema_content)\n\n version = tree.documentElement.getAttribute('version')\n if not len(version):\n msg = 'Could not extract version info from the SOLR schema'\n if schema_file:\n msg += ', using file {}'.format(schema_file)\n raise SearchError(msg)\n\n if not version in SUPPORTED_SCHEMA_VERSIONS:\n raise SearchError('SOLR schema version not supported: %s. Supported'\n ' versions are [%s]'\n % (version, ', '.join(SUPPORTED_SCHEMA_VERSIONS)))\n return True\n", "path": "ckan/lib/search/__init__.py"}]}
| 3,845 | 192 |
gh_patches_debug_37344
|
rasdani/github-patches
|
git_diff
|
sbi-dev__sbi-421
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
fix progress bar and simulation batch size in simulate_in_batches
Two problems related to `simulate_in_batches`:
1) The progress bar fills up as jobs are dispatched to the workers, i.e., before they are actually done. E.g., when running 100 parameters with batch size 10 and 10 workers, the progress bar completes instantly, but the simulations may continue for minutes or hours. We should find a way to fill up the pbar only when the jobs are done.
2) The simulation batch size defaults to 1. This can make things very slow when one uses a simulator that cannot simulate multiple parameters at once (vectorised) and uses multiple workers. In this scenario each worker deals with a single simulation, creating a lot of overhead. It would be much better (for me it gave a 10x speed-up) to write a simple "batch" simulator with a for loop and set the simulation batch size to 10 or 100, so that each worker call runs at least 10 or 100 simulations before returning. Actually, it could make sense to just set
```
sim_batch_size = int(num_simulations / num_workers)
```
in order to minimise overhead.
The only downside would be that the pbar becomes less informative. So one could think about reducing the batch size a bit to obtain more "steps" in the pbar.
Any thoughts?
--- END ISSUE ---
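To make the second point concrete, here is a minimal sketch of the kind of wrapper the issue suggests -- a plain for loop over one chunk of parameters inside a single worker call, plus the proposed batch-size heuristic. The names (`single_simulation`, `batched_simulator`) are illustrative and not part of the sbi API:

```python
import torch

def single_simulation(theta: torch.Tensor) -> torch.Tensor:
    # Stand-in for a simulator that can only handle one parameter set at a time.
    return theta * 2.0

def batched_simulator(theta_batch: torch.Tensor) -> torch.Tensor:
    # Loop over the rows inside one worker call, so each dispatched job
    # amortises the multiprocessing overhead over many simulations.
    return torch.stack([single_simulation(theta) for theta in theta_batch])

num_simulations, num_workers = 1000, 10
# Heuristic from the issue: one large chunk per worker minimises overhead,
# at the cost of a coarser progress bar.
sim_batch_size = max(1, num_simulations // num_workers)
```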
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # This file is part of sbi, a toolkit for simulation-based inference. sbi is licensed
5 # under the Affero General Public License v3, see <https://www.gnu.org/licenses/>.
6 #
7 # Note: To use the 'upload' functionality of this file, you must:
8 # $ pipenv install twine --dev
9
10 import io
11 import os
12 import sys
13 from shutil import rmtree
14
15 from setuptools import find_packages, setup, Command
16
17 # Package meta-data.
18 NAME = "sbi"
19 DESCRIPTION = "Simulation-based inference."
20 KEYWORDS = "bayesian parameter inference system_identification simulator PyTorch"
21 URL = "https://github.com/mackelab/sbi"
22 EMAIL = "[email protected]"
23 AUTHOR = "Álvaro Tejero-Cantero, Jakob H. Macke, Jan-Matthis Lückmann, Conor M. Durkan, Michael Deistler, Jan Bölts"
24 REQUIRES_PYTHON = ">=3.6.0"
25
26 REQUIRED = [
27 "joblib",
28 "matplotlib",
29 "numpy",
30 "pillow",
31 "pyknos>=0.14.0",
32 "pyro-ppl>=1.3.1",
33 "scipy",
34 "tensorboard",
35 "torch>=1.6.0",
36 "tqdm",
37 ]
38
39 EXTRAS = {
40 "dev": [
41 "autoflake",
42 "black",
43 "deepdiff",
44 "flake8",
45 "isort",
46 "jupyter",
47 "mkdocs",
48 "mkdocs-material",
49 "markdown-include",
50 "mkdocs-redirects",
51 "mkdocstrings",
52 "nbconvert",
53 "pep517",
54 "pytest",
55 "pyyaml",
56 "scikit-learn",
57 "torchtestcase",
58 "twine",
59 ],
60 }
61
62 here = os.path.abspath(os.path.dirname(__file__))
63
64 # Import the README and use it as the long-description.
65 try:
66 with io.open(os.path.join(here, "README.md"), encoding="utf-8") as f:
67 long_description = "\n" + f.read()
68 except FileNotFoundError:
69 long_description = DESCRIPTION
70
71 # Load the package's __version__.py module as a dictionary.
72 about = {}
73 project_slug = NAME.lower().replace("-", "_").replace(" ", "_")
74 with open(os.path.join(here, project_slug, "__version__.py")) as f:
75 exec(f.read(), about)
76
77
78 class UploadCommand(Command):
79 """Support setup.py upload."""
80
81 description = "Build and publish the package."
82 user_options = []
83
84 @staticmethod
85 def status(s):
86 """Prints things in bold."""
87 print("\033[1m{0}\033[0m".format(s))
88
89 def initialize_options(self):
90 pass
91
92 def finalize_options(self):
93 pass
94
95 def run(self):
96 try:
97 self.status("Removing previous builds…")
98 rmtree(os.path.join(here, "dist"))
99 except OSError:
100 pass
101
102 self.status("Building Source and Wheel (universal) distribution…")
103 os.system("{0} setup.py sdist bdist_wheel --universal".format(sys.executable))
104
105 self.status("Uploading the package to PyPI via Twine…")
106 os.system("twine upload dist/*")
107
108 self.status("Pushing git tags…")
109 os.system("git tag v{0}".format(about["__version__"]))
110 os.system("git push --tags")
111
112 sys.exit()
113
114
115 setup(
116 name=NAME,
117 version=about["__version__"],
118 description=DESCRIPTION,
119 keywords=KEYWORDS,
120 long_description=long_description,
121 long_description_content_type="text/markdown",
122 author=AUTHOR,
123 author_email=EMAIL,
124 python_requires=REQUIRES_PYTHON,
125 url=URL,
126 packages=find_packages(exclude=["tests", "*.tests", "*.tests.*", "tests.*"]),
127 install_requires=REQUIRED,
128 extras_require=EXTRAS,
129 include_package_data=True,
130 license="AGPLv3",
131 classifiers=[
132 # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers
133 "Development Status :: 3 - Alpha",
134 "License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
135 "Intended Audience :: Developers",
136 "Intended Audience :: Education",
137 "Intended Audience :: Science/Research",
138 "Topic :: Adaptive Technologies",
139 "Topic :: Scientific/Engineering",
140 "Topic :: Scientific/Engineering :: Artificial Intelligence",
141 "Topic :: Scientific/Engineering :: Mathematics",
142 "Programming Language :: Python",
143 "Programming Language :: Python :: 3",
144 "Programming Language :: Python :: 3.6",
145 "Programming Language :: Python :: 3.7",
146 "Programming Language :: Python :: 3.8",
147 ],
148 # $ setup.py publish support.
149 cmdclass=dict(upload=UploadCommand),
150 )
151
```
Path: `sbi/simulators/simutils.py`
Content:
```
1 # This file is part of sbi, a toolkit for simulation-based inference. sbi is licensed
2 # under the Affero General Public License v3, see <https://www.gnu.org/licenses/>.
3
4
5 from typing import Callable
6
7 import torch
8 from torch import Tensor
9 from tqdm.auto import tqdm
10 from joblib import Parallel, delayed
11
12
13 def simulate_in_batches(
14 simulator: Callable,
15 theta: Tensor,
16 sim_batch_size: int = 1,
17 num_workers: int = 1,
18 show_progress_bars: bool = True,
19 ) -> Tensor:
20 r"""
21 Return simulations $x$ for parameters $\theta$ conducted batchwise.
22
23 Parameters are batched with size `sim_batch_size` (default whole theta at once).
24 Multiprocessing is used when `num_workers > 1`.
25
26 Args:
27 simulator: Simulator callable (a function or a class with `__call__`).
28 theta: All parameters $\theta$ sampled from prior or posterior.
29 sim_batch_size: Number of simulations per batch. Default is to simulate
30 the entire theta in a single batch.
31 num_workers: Number of workers for multiprocessing.
32 show_progress_bars: Whether to show a progress bar during simulation.
33
34 Returns:
35 Parameters theta and simulations $x$.
36 """
37
38 num_sims, *_ = theta.shape
39
40 if num_sims == 0:
41 x = torch.tensor([])
42 elif sim_batch_size is not None and sim_batch_size < num_sims:
43 # Dev note: pyright complains of torch.split lacking a type stub
44 # as of PyTorch 1.4.0, see https://github.com/microsoft/pyright/issues/291
45 batches = torch.split(theta, sim_batch_size, dim=0)
46
47 if num_workers > 1:
48 # Parallelize the sequence of batches across workers.
49 # TODO: This usage of tqdm tracks the dispatching of jobs instead of the
50 # moment when they are done, resulting in waiting time at 100% in case the
51 # last jobs takes long. A potential solution can be found here: https://
52 # stackoverflow.com/a/61689175
53 simulation_outputs = Parallel(n_jobs=num_workers)(
54 delayed(simulator)(batch)
55 for batch in tqdm(
56 batches,
57 disable=not show_progress_bars,
58 desc=f"Running {num_sims} simulations in {len(batches)} batches.",
59 total=len(batches),
60 )
61 )
62 else:
63 pbar = tqdm(
64 total=num_sims,
65 disable=not show_progress_bars,
66 desc=f"Running {num_sims} simulations.",
67 )
68
69 with pbar:
70 simulation_outputs = []
71 for batch in batches:
72 simulation_outputs.append(simulator(batch))
73 pbar.update(sim_batch_size)
74
75 x = torch.cat(simulation_outputs, dim=0)
76 else:
77 x = simulator(theta)
78
79 return x
80
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sbi/simulators/simutils.py b/sbi/simulators/simutils.py
--- a/sbi/simulators/simutils.py
+++ b/sbi/simulators/simutils.py
@@ -8,6 +8,8 @@
from torch import Tensor
from tqdm.auto import tqdm
from joblib import Parallel, delayed
+import contextlib
+import joblib
def simulate_in_batches(
@@ -46,19 +48,19 @@
if num_workers > 1:
# Parallelize the sequence of batches across workers.
- # TODO: This usage of tqdm tracks the dispatching of jobs instead of the
- # moment when they are done, resulting in waiting time at 100% in case the
- # last jobs takes long. A potential solution can be found here: https://
- # stackoverflow.com/a/61689175
- simulation_outputs = Parallel(n_jobs=num_workers)(
- delayed(simulator)(batch)
- for batch in tqdm(
+ # We use the solution proposed here: https://stackoverflow.com/a/61689175
+ # to update the pbar only after the workers finished a task.
+ with tqdm_joblib(
+ tqdm(
batches,
disable=not show_progress_bars,
desc=f"Running {num_sims} simulations in {len(batches)} batches.",
total=len(batches),
)
- )
+ ) as progress_bar:
+ simulation_outputs = Parallel(n_jobs=num_workers)(
+ delayed(simulator)(batch) for batch in batches
+ )
else:
pbar = tqdm(
total=num_sims,
@@ -77,3 +79,28 @@
x = simulator(theta)
return x
+
+
[email protected]
+def tqdm_joblib(tqdm_object):
+ """Context manager to patch joblib to report into tqdm progress bar given as
+ argument
+
+ This wrapped context manager obtains the number of finished tasks from the tqdm
+ print function and uses it to update the pbar, as suggested in
+ https://stackoverflow.com/a/61689175. See #419, #421
+ """
+
+ def tqdm_print_progress(self):
+ if self.n_completed_tasks > tqdm_object.n:
+ n_completed = self.n_completed_tasks - tqdm_object.n
+ tqdm_object.update(n=n_completed)
+
+ original_print_progress = joblib.parallel.Parallel.print_progress
+ joblib.parallel.Parallel.print_progress = tqdm_print_progress
+
+ try:
+ yield tqdm_object
+ finally:
+ joblib.parallel.Parallel.print_progress = original_print_progress
+ tqdm_object.close()
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -24,7 +24,7 @@
REQUIRES_PYTHON = ">=3.6.0"
REQUIRED = [
- "joblib",
+ "joblib>=1.0.0",
"matplotlib",
"numpy",
"pillow",
|
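For reference, the `tqdm_joblib` context manager introduced in the patch can be exercised on its own; a small sketch (assuming the context manager defined above is importable or pasted into the same module) where the bar only advances as tasks actually finish:

```python
import time
from joblib import Parallel, delayed
from tqdm.auto import tqdm

def slow_task(i):
    time.sleep(0.1)   # stand-in for a real simulation
    return i * i

with tqdm_joblib(tqdm(total=20, desc="tasks")):
    results = Parallel(n_jobs=4)(delayed(slow_task)(i) for i in range(20))
```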
{"golden_diff": "diff --git a/sbi/simulators/simutils.py b/sbi/simulators/simutils.py\n--- a/sbi/simulators/simutils.py\n+++ b/sbi/simulators/simutils.py\n@@ -8,6 +8,8 @@\n from torch import Tensor\n from tqdm.auto import tqdm\n from joblib import Parallel, delayed\n+import contextlib\n+import joblib\n \n \n def simulate_in_batches(\n@@ -46,19 +48,19 @@\n \n if num_workers > 1:\n # Parallelize the sequence of batches across workers.\n- # TODO: This usage of tqdm tracks the dispatching of jobs instead of the\n- # moment when they are done, resulting in waiting time at 100% in case the\n- # last jobs takes long. A potential solution can be found here: https://\n- # stackoverflow.com/a/61689175\n- simulation_outputs = Parallel(n_jobs=num_workers)(\n- delayed(simulator)(batch)\n- for batch in tqdm(\n+ # We use the solution proposed here: https://stackoverflow.com/a/61689175\n+ # to update the pbar only after the workers finished a task.\n+ with tqdm_joblib(\n+ tqdm(\n batches,\n disable=not show_progress_bars,\n desc=f\"Running {num_sims} simulations in {len(batches)} batches.\",\n total=len(batches),\n )\n- )\n+ ) as progress_bar:\n+ simulation_outputs = Parallel(n_jobs=num_workers)(\n+ delayed(simulator)(batch) for batch in batches\n+ )\n else:\n pbar = tqdm(\n total=num_sims,\n@@ -77,3 +79,28 @@\n x = simulator(theta)\n \n return x\n+\n+\[email protected]\n+def tqdm_joblib(tqdm_object):\n+ \"\"\"Context manager to patch joblib to report into tqdm progress bar given as\n+ argument\n+\n+ This wrapped context manager obtains the number of finished tasks from the tqdm\n+ print function and uses it to update the pbar, as suggested in\n+ https://stackoverflow.com/a/61689175. See #419, #421\n+ \"\"\"\n+\n+ def tqdm_print_progress(self):\n+ if self.n_completed_tasks > tqdm_object.n:\n+ n_completed = self.n_completed_tasks - tqdm_object.n\n+ tqdm_object.update(n=n_completed)\n+\n+ original_print_progress = joblib.parallel.Parallel.print_progress\n+ joblib.parallel.Parallel.print_progress = tqdm_print_progress\n+\n+ try:\n+ yield tqdm_object\n+ finally:\n+ joblib.parallel.Parallel.print_progress = original_print_progress\n+ tqdm_object.close()\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -24,7 +24,7 @@\n REQUIRES_PYTHON = \">=3.6.0\"\n \n REQUIRED = [\n- \"joblib\",\n+ \"joblib>=1.0.0\",\n \"matplotlib\",\n \"numpy\",\n \"pillow\",\n", "issue": "fix progress bar and simulation batch size in simulate_in_batches\nTwo problems related to `simulate_in_batches`: \r\n\r\n1) The progress bar fills up by sending jobs to the workers, i.e., before these are actually done. E.g., when running 100 parameters with batch size 10 and 10 workers, the progress bar is done instantly, but simulations may continue for minutes or hours. We should find a way to fill up the pbar only when the jobs are done. \r\n\r\n2) The simulation batch size defaults to 1. This can make things very slow when one uses a simulator that cannot simulate multiple parameters at once (vectorised) and one use multiple workers. In this scenario each worker deals with a single simulation -- creating a lot of overhead. It would be much better (for me it gave a 10x speed up) to write a simple \"batch\"-simulator with a for loop and set the simulation batch size to 10 or 100 so that each worker call will run at least 10 or 100 simulations before going back. Actually, it could make sense to just set \r\n```\r\nsim_batch_size = int(num_simulations / num_workers)\r\n``` \r\nin order to minimise overhead. 
\r\nThe only downside would be that the pbar becomes less informative. So one could think about reducing the batch size a bit to obtain more \"steps\" in the pbar. \r\n\r\nAny thoughts? \n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# This file is part of sbi, a toolkit for simulation-based inference. sbi is licensed\n# under the Affero General Public License v3, see <https://www.gnu.org/licenses/>.\n#\n# Note: To use the 'upload' functionality of this file, you must:\n# $ pipenv install twine --dev\n\nimport io\nimport os\nimport sys\nfrom shutil import rmtree\n\nfrom setuptools import find_packages, setup, Command\n\n# Package meta-data.\nNAME = \"sbi\"\nDESCRIPTION = \"Simulation-based inference.\"\nKEYWORDS = \"bayesian parameter inference system_identification simulator PyTorch\"\nURL = \"https://github.com/mackelab/sbi\"\nEMAIL = \"[email protected]\"\nAUTHOR = \"\u00c1lvaro Tejero-Cantero, Jakob H. Macke, Jan-Matthis L\u00fcckmann, Conor M. Durkan, Michael Deistler, Jan B\u00f6lts\"\nREQUIRES_PYTHON = \">=3.6.0\"\n\nREQUIRED = [\n \"joblib\",\n \"matplotlib\",\n \"numpy\",\n \"pillow\",\n \"pyknos>=0.14.0\",\n \"pyro-ppl>=1.3.1\",\n \"scipy\",\n \"tensorboard\",\n \"torch>=1.6.0\",\n \"tqdm\",\n]\n\nEXTRAS = {\n \"dev\": [\n \"autoflake\",\n \"black\",\n \"deepdiff\",\n \"flake8\",\n \"isort\",\n \"jupyter\",\n \"mkdocs\",\n \"mkdocs-material\",\n \"markdown-include\",\n \"mkdocs-redirects\",\n \"mkdocstrings\",\n \"nbconvert\",\n \"pep517\",\n \"pytest\",\n \"pyyaml\",\n \"scikit-learn\",\n \"torchtestcase\",\n \"twine\",\n ],\n}\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n# Import the README and use it as the long-description.\ntry:\n with io.open(os.path.join(here, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = \"\\n\" + f.read()\nexcept FileNotFoundError:\n long_description = DESCRIPTION\n\n# Load the package's __version__.py module as a dictionary.\nabout = {}\nproject_slug = NAME.lower().replace(\"-\", \"_\").replace(\" \", \"_\")\nwith open(os.path.join(here, project_slug, \"__version__.py\")) as f:\n exec(f.read(), about)\n\n\nclass UploadCommand(Command):\n \"\"\"Support setup.py upload.\"\"\"\n\n description = \"Build and publish the package.\"\n user_options = []\n\n @staticmethod\n def status(s):\n \"\"\"Prints things in bold.\"\"\"\n print(\"\\033[1m{0}\\033[0m\".format(s))\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n def run(self):\n try:\n self.status(\"Removing previous builds\u2026\")\n rmtree(os.path.join(here, \"dist\"))\n except OSError:\n pass\n\n self.status(\"Building Source and Wheel (universal) distribution\u2026\")\n os.system(\"{0} setup.py sdist bdist_wheel --universal\".format(sys.executable))\n\n self.status(\"Uploading the package to PyPI via Twine\u2026\")\n os.system(\"twine upload dist/*\")\n\n self.status(\"Pushing git tags\u2026\")\n os.system(\"git tag v{0}\".format(about[\"__version__\"]))\n os.system(\"git push --tags\")\n\n sys.exit()\n\n\nsetup(\n name=NAME,\n version=about[\"__version__\"],\n description=DESCRIPTION,\n keywords=KEYWORDS,\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n author=AUTHOR,\n author_email=EMAIL,\n python_requires=REQUIRES_PYTHON,\n url=URL,\n packages=find_packages(exclude=[\"tests\", \"*.tests\", \"*.tests.*\", \"tests.*\"]),\n install_requires=REQUIRED,\n extras_require=EXTRAS,\n include_package_data=True,\n license=\"AGPLv3\",\n classifiers=[\n # Full list: 
https://pypi.python.org/pypi?%3Aaction=list_classifiers\n \"Development Status :: 3 - Alpha\",\n \"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Science/Research\",\n \"Topic :: Adaptive Technologies\",\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Topic :: Scientific/Engineering :: Mathematics\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n ],\n # $ setup.py publish support.\n cmdclass=dict(upload=UploadCommand),\n)\n", "path": "setup.py"}, {"content": "# This file is part of sbi, a toolkit for simulation-based inference. sbi is licensed\n# under the Affero General Public License v3, see <https://www.gnu.org/licenses/>.\n\n\nfrom typing import Callable\n\nimport torch\nfrom torch import Tensor\nfrom tqdm.auto import tqdm\nfrom joblib import Parallel, delayed\n\n\ndef simulate_in_batches(\n simulator: Callable,\n theta: Tensor,\n sim_batch_size: int = 1,\n num_workers: int = 1,\n show_progress_bars: bool = True,\n) -> Tensor:\n r\"\"\"\n Return simulations $x$ for parameters $\\theta$ conducted batchwise.\n\n Parameters are batched with size `sim_batch_size` (default whole theta at once).\n Multiprocessing is used when `num_workers > 1`.\n\n Args:\n simulator: Simulator callable (a function or a class with `__call__`).\n theta: All parameters $\\theta$ sampled from prior or posterior.\n sim_batch_size: Number of simulations per batch. Default is to simulate\n the entire theta in a single batch.\n num_workers: Number of workers for multiprocessing.\n show_progress_bars: Whether to show a progress bar during simulation.\n\n Returns:\n Parameters theta and simulations $x$.\n \"\"\"\n\n num_sims, *_ = theta.shape\n\n if num_sims == 0:\n x = torch.tensor([])\n elif sim_batch_size is not None and sim_batch_size < num_sims:\n # Dev note: pyright complains of torch.split lacking a type stub\n # as of PyTorch 1.4.0, see https://github.com/microsoft/pyright/issues/291\n batches = torch.split(theta, sim_batch_size, dim=0)\n\n if num_workers > 1:\n # Parallelize the sequence of batches across workers.\n # TODO: This usage of tqdm tracks the dispatching of jobs instead of the\n # moment when they are done, resulting in waiting time at 100% in case the\n # last jobs takes long. A potential solution can be found here: https://\n # stackoverflow.com/a/61689175\n simulation_outputs = Parallel(n_jobs=num_workers)(\n delayed(simulator)(batch)\n for batch in tqdm(\n batches,\n disable=not show_progress_bars,\n desc=f\"Running {num_sims} simulations in {len(batches)} batches.\",\n total=len(batches),\n )\n )\n else:\n pbar = tqdm(\n total=num_sims,\n disable=not show_progress_bars,\n desc=f\"Running {num_sims} simulations.\",\n )\n\n with pbar:\n simulation_outputs = []\n for batch in batches:\n simulation_outputs.append(simulator(batch))\n pbar.update(sim_batch_size)\n\n x = torch.cat(simulation_outputs, dim=0)\n else:\n x = simulator(theta)\n\n return x\n", "path": "sbi/simulators/simutils.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# This file is part of sbi, a toolkit for simulation-based inference. 
sbi is licensed\n# under the Affero General Public License v3, see <https://www.gnu.org/licenses/>.\n#\n# Note: To use the 'upload' functionality of this file, you must:\n# $ pipenv install twine --dev\n\nimport io\nimport os\nimport sys\nfrom shutil import rmtree\n\nfrom setuptools import find_packages, setup, Command\n\n# Package meta-data.\nNAME = \"sbi\"\nDESCRIPTION = \"Simulation-based inference.\"\nKEYWORDS = \"bayesian parameter inference system_identification simulator PyTorch\"\nURL = \"https://github.com/mackelab/sbi\"\nEMAIL = \"[email protected]\"\nAUTHOR = \"\u00c1lvaro Tejero-Cantero, Jakob H. Macke, Jan-Matthis L\u00fcckmann, Conor M. Durkan, Michael Deistler, Jan B\u00f6lts\"\nREQUIRES_PYTHON = \">=3.6.0\"\n\nREQUIRED = [\n \"joblib>=1.0.0\",\n \"matplotlib\",\n \"numpy\",\n \"pillow\",\n \"pyknos>=0.14.0\",\n \"pyro-ppl>=1.3.1\",\n \"scipy\",\n \"tensorboard\",\n \"torch>=1.6.0\",\n \"tqdm\",\n]\n\nEXTRAS = {\n \"dev\": [\n \"autoflake\",\n \"black\",\n \"deepdiff\",\n \"flake8\",\n \"isort\",\n \"jupyter\",\n \"mkdocs\",\n \"mkdocs-material\",\n \"markdown-include\",\n \"mkdocs-redirects\",\n \"mkdocstrings\",\n \"nbconvert\",\n \"pep517\",\n \"pytest\",\n \"pyyaml\",\n \"scikit-learn\",\n \"torchtestcase\",\n \"twine\",\n ],\n}\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n# Import the README and use it as the long-description.\ntry:\n with io.open(os.path.join(here, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = \"\\n\" + f.read()\nexcept FileNotFoundError:\n long_description = DESCRIPTION\n\n# Load the package's __version__.py module as a dictionary.\nabout = {}\nproject_slug = NAME.lower().replace(\"-\", \"_\").replace(\" \", \"_\")\nwith open(os.path.join(here, project_slug, \"__version__.py\")) as f:\n exec(f.read(), about)\n\n\nclass UploadCommand(Command):\n \"\"\"Support setup.py upload.\"\"\"\n\n description = \"Build and publish the package.\"\n user_options = []\n\n @staticmethod\n def status(s):\n \"\"\"Prints things in bold.\"\"\"\n print(\"\\033[1m{0}\\033[0m\".format(s))\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n def run(self):\n try:\n self.status(\"Removing previous builds\u2026\")\n rmtree(os.path.join(here, \"dist\"))\n except OSError:\n pass\n\n self.status(\"Building Source and Wheel (universal) distribution\u2026\")\n os.system(\"{0} setup.py sdist bdist_wheel --universal\".format(sys.executable))\n\n self.status(\"Uploading the package to PyPI via Twine\u2026\")\n os.system(\"twine upload dist/*\")\n\n self.status(\"Pushing git tags\u2026\")\n os.system(\"git tag v{0}\".format(about[\"__version__\"]))\n os.system(\"git push --tags\")\n\n sys.exit()\n\n\nsetup(\n name=NAME,\n version=about[\"__version__\"],\n description=DESCRIPTION,\n keywords=KEYWORDS,\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n author=AUTHOR,\n author_email=EMAIL,\n python_requires=REQUIRES_PYTHON,\n url=URL,\n packages=find_packages(exclude=[\"tests\", \"*.tests\", \"*.tests.*\", \"tests.*\"]),\n install_requires=REQUIRED,\n extras_require=EXTRAS,\n include_package_data=True,\n license=\"AGPLv3\",\n classifiers=[\n # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers\n \"Development Status :: 3 - Alpha\",\n \"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Science/Research\",\n \"Topic :: Adaptive 
Technologies\",\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Topic :: Scientific/Engineering :: Mathematics\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n ],\n # $ setup.py publish support.\n cmdclass=dict(upload=UploadCommand),\n)\n", "path": "setup.py"}, {"content": "# This file is part of sbi, a toolkit for simulation-based inference. sbi is licensed\n# under the Affero General Public License v3, see <https://www.gnu.org/licenses/>.\n\n\nfrom typing import Callable\n\nimport torch\nfrom torch import Tensor\nfrom tqdm.auto import tqdm\nfrom joblib import Parallel, delayed\nimport contextlib\nimport joblib\n\n\ndef simulate_in_batches(\n simulator: Callable,\n theta: Tensor,\n sim_batch_size: int = 1,\n num_workers: int = 1,\n show_progress_bars: bool = True,\n) -> Tensor:\n r\"\"\"\n Return simulations $x$ for parameters $\\theta$ conducted batchwise.\n\n Parameters are batched with size `sim_batch_size` (default whole theta at once).\n Multiprocessing is used when `num_workers > 1`.\n\n Args:\n simulator: Simulator callable (a function or a class with `__call__`).\n theta: All parameters $\\theta$ sampled from prior or posterior.\n sim_batch_size: Number of simulations per batch. Default is to simulate\n the entire theta in a single batch.\n num_workers: Number of workers for multiprocessing.\n show_progress_bars: Whether to show a progress bar during simulation.\n\n Returns:\n Parameters theta and simulations $x$.\n \"\"\"\n\n num_sims, *_ = theta.shape\n\n if num_sims == 0:\n x = torch.tensor([])\n elif sim_batch_size is not None and sim_batch_size < num_sims:\n # Dev note: pyright complains of torch.split lacking a type stub\n # as of PyTorch 1.4.0, see https://github.com/microsoft/pyright/issues/291\n batches = torch.split(theta, sim_batch_size, dim=0)\n\n if num_workers > 1:\n # Parallelize the sequence of batches across workers.\n # We use the solution proposed here: https://stackoverflow.com/a/61689175\n # to update the pbar only after the workers finished a task.\n with tqdm_joblib(\n tqdm(\n batches,\n disable=not show_progress_bars,\n desc=f\"Running {num_sims} simulations in {len(batches)} batches.\",\n total=len(batches),\n )\n ) as progress_bar:\n simulation_outputs = Parallel(n_jobs=num_workers)(\n delayed(simulator)(batch) for batch in batches\n )\n else:\n pbar = tqdm(\n total=num_sims,\n disable=not show_progress_bars,\n desc=f\"Running {num_sims} simulations.\",\n )\n\n with pbar:\n simulation_outputs = []\n for batch in batches:\n simulation_outputs.append(simulator(batch))\n pbar.update(sim_batch_size)\n\n x = torch.cat(simulation_outputs, dim=0)\n else:\n x = simulator(theta)\n\n return x\n\n\[email protected]\ndef tqdm_joblib(tqdm_object):\n \"\"\"Context manager to patch joblib to report into tqdm progress bar given as\n argument\n\n This wrapped context manager obtains the number of finished tasks from the tqdm\n print function and uses it to update the pbar, as suggested in\n https://stackoverflow.com/a/61689175. 
See #419, #421\n \"\"\"\n\n def tqdm_print_progress(self):\n if self.n_completed_tasks > tqdm_object.n:\n n_completed = self.n_completed_tasks - tqdm_object.n\n tqdm_object.update(n=n_completed)\n\n original_print_progress = joblib.parallel.Parallel.print_progress\n joblib.parallel.Parallel.print_progress = tqdm_print_progress\n\n try:\n yield tqdm_object\n finally:\n joblib.parallel.Parallel.print_progress = original_print_progress\n tqdm_object.close()\n", "path": "sbi/simulators/simutils.py"}]}
| 2,798 | 692 |
| gh_patches_debug_12895 | rasdani/github-patches | git_diff | great-expectations__great_expectations-4066 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use cleaner solution for non-truncating division in python 2
Prefer `from __future__ import division` to `1.*x/y`
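For illustration, a minimal sketch of the two styles the issue contrasts (the variable names below are made up for the example, not taken from the codebase):
```python
from __future__ import division  # must be the first statement in the module

numerator, denominator = 3, 2

# Old workaround: multiply by a float literal so Python 2's / does not truncate.
legacy_ratio = 1. * numerator / denominator   # 1.5

# With the __future__ import active, plain division already gives a float.
ratio = numerator / denominator               # 1.5 on Python 2 and 3
print(legacy_ratio, ratio)
```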
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/build_glossary_page.py`
Content:
```
1 import json
2
3
4 def build_glossary_tuples(source_json_path):
5 # Open the source_json_path
6 with open(source_json_path) as json_file:
7 # Read the contents as a json
8 data = json.load(json_file)
9 # Convert to list of tuples containing ("term", "definition", "url")
10 data_list = [(x['term'], x['definition']) for x in data.values()]
11 # Order list alphabetically by "term"
12 data_list.sort(key=lambda y: y[0])
13 # return the ordered list.
14 return data_list
15
16
17 def build_glossary_page(orderd_list_of_terms_tuples, glossary_file_path):
18 # Open the glossary page for editing
19 with open(glossary_file_path, "w") as glossary_file:
20 # Write the glossary page header
21 glossary_file.write("---\nid: glossary\ntitle: \"Glossary of Terms\"\n---\n")
22 # iterate the glossary list of tuples and write glossary entries.
23 for term, definition in orderd_list_of_terms_tuples:
24 glossary_file.write(f"**{term}:** {definition}\n\n")
25
26
27 def all_together_now(source_json_path, glossary_file_path):
28 list_of_terms_tuples = build_glossary_tuples(source_json_path)
29 build_glossary_page(list_of_terms_tuples, glossary_file_path)
30
31
32 if __name__ == '__main__':
33 all_together_now(
34 source_json_path="../docs/term_tags/terms.json",
35 glossary_file_path="../docs/glossary.md"
36 )
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scripts/build_glossary_page.py b/scripts/build_glossary_page.py
--- a/scripts/build_glossary_page.py
+++ b/scripts/build_glossary_page.py
@@ -18,7 +18,7 @@
# Open the glossary page for editing
with open(glossary_file_path, "w") as glossary_file:
# Write the glossary page header
- glossary_file.write("---\nid: glossary\ntitle: \"Glossary of Terms\"\n---\n")
+ glossary_file.write("---\nid: glossary\ntitle: \"Glossary of Terms\"\n---\n\n")
# iterate the glossary list of tuples and write glossary entries.
for term, definition in orderd_list_of_terms_tuples:
glossary_file.write(f"**{term}:** {definition}\n\n")
|
{"golden_diff": "diff --git a/scripts/build_glossary_page.py b/scripts/build_glossary_page.py\n--- a/scripts/build_glossary_page.py\n+++ b/scripts/build_glossary_page.py\n@@ -18,7 +18,7 @@\n # Open the glossary page for editing\n with open(glossary_file_path, \"w\") as glossary_file:\n # Write the glossary page header\n- glossary_file.write(\"---\\nid: glossary\\ntitle: \\\"Glossary of Terms\\\"\\n---\\n\")\n+ glossary_file.write(\"---\\nid: glossary\\ntitle: \\\"Glossary of Terms\\\"\\n---\\n\\n\")\n # iterate the glossary list of tuples and write glossary entries.\n for term, definition in orderd_list_of_terms_tuples:\n glossary_file.write(f\"**{term}:** {definition}\\n\\n\")\n", "issue": "Use cleaner solution for non-truncating division in python 2\nPrefer `from __future__ import division` to `1.*x/y`\n", "before_files": [{"content": "import json\n\n\ndef build_glossary_tuples(source_json_path):\n # Open the source_json_path\n with open(source_json_path) as json_file:\n # Read the contents as a json\n data = json.load(json_file)\n # Convert to list of tuples containing (\"term\", \"definition\", \"url\")\n data_list = [(x['term'], x['definition']) for x in data.values()]\n # Order list alphabetically by \"term\"\n data_list.sort(key=lambda y: y[0])\n # return the ordered list.\n return data_list\n\n\ndef build_glossary_page(orderd_list_of_terms_tuples, glossary_file_path):\n # Open the glossary page for editing\n with open(glossary_file_path, \"w\") as glossary_file:\n # Write the glossary page header\n glossary_file.write(\"---\\nid: glossary\\ntitle: \\\"Glossary of Terms\\\"\\n---\\n\")\n # iterate the glossary list of tuples and write glossary entries.\n for term, definition in orderd_list_of_terms_tuples:\n glossary_file.write(f\"**{term}:** {definition}\\n\\n\")\n\n\ndef all_together_now(source_json_path, glossary_file_path):\n list_of_terms_tuples = build_glossary_tuples(source_json_path)\n build_glossary_page(list_of_terms_tuples, glossary_file_path)\n\n\nif __name__ == '__main__':\n all_together_now(\n source_json_path=\"../docs/term_tags/terms.json\",\n glossary_file_path=\"../docs/glossary.md\"\n )\n", "path": "scripts/build_glossary_page.py"}], "after_files": [{"content": "import json\n\n\ndef build_glossary_tuples(source_json_path):\n # Open the source_json_path\n with open(source_json_path) as json_file:\n # Read the contents as a json\n data = json.load(json_file)\n # Convert to list of tuples containing (\"term\", \"definition\", \"url\")\n data_list = [(x['term'], x['definition']) for x in data.values()]\n # Order list alphabetically by \"term\"\n data_list.sort(key=lambda y: y[0])\n # return the ordered list.\n return data_list\n\n\ndef build_glossary_page(orderd_list_of_terms_tuples, glossary_file_path):\n # Open the glossary page for editing\n with open(glossary_file_path, \"w\") as glossary_file:\n # Write the glossary page header\n glossary_file.write(\"---\\nid: glossary\\ntitle: \\\"Glossary of Terms\\\"\\n---\\n\\n\")\n # iterate the glossary list of tuples and write glossary entries.\n for term, definition in orderd_list_of_terms_tuples:\n glossary_file.write(f\"**{term}:** {definition}\\n\\n\")\n\n\ndef all_together_now(source_json_path, glossary_file_path):\n list_of_terms_tuples = build_glossary_tuples(source_json_path)\n build_glossary_page(list_of_terms_tuples, glossary_file_path)\n\n\nif __name__ == '__main__':\n all_together_now(\n source_json_path=\"../docs/term_tags/terms.json\",\n glossary_file_path=\"../docs/glossary.md\"\n )\n", "path": 
"scripts/build_glossary_page.py"}]}
| 696 | 191 |
| gh_patches_debug_35189 | rasdani/github-patches | git_diff | StackStorm__st2-2506 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
API returns 400 for some requests due to "bad" Content-type header
It seems that _some_ API endpoints will return an HTTP 400 when presented with a header such as:
```
Content-type: application/json; charset=utf-8
```
but works fine with:
```
Content-type: application/json
```
I ran into this with a POST to the URL `/webhooks/st2`. I'm currently running version `1.4dev-52`.
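To make the failure mode concrete: the only difference between the two headers is the parameter after the semicolon, so stripping the value down to its bare media type is enough for the controller's comparison. A minimal, hypothetical helper (not the actual st2 function) might look like:
```python
def split_content_type(header_value):
    """'application/json; charset=utf-8' -> ('application/json', {'charset': 'utf-8'})"""
    parts = [part.strip() for part in header_value.split(';')]
    media_type, params = parts[0], {}
    for pair in parts[1:]:
        if '=' in pair:
            key, value = pair.split('=', 1)
            params[key.strip()] = value.strip()
    return media_type, params


print(split_content_type('application/json; charset=utf-8'))
# -> ('application/json', {'charset': 'utf-8'})
```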
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `st2common/st2common/util/http.py`
Content:
```
1 # Licensed to the StackStorm, Inc ('StackStorm') under one or more
2 # contributor license agreements. See the NOTICE file distributed with
3 # this work for additional information regarding copyright ownership.
4 # The ASF licenses this file to You under the Apache License, Version 2.0
5 # (the "License"); you may not use this file except in compliance with
6 # the License. You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 import six
17
18 http_client = six.moves.http_client
19
20 HTTP_SUCCESS = [http_client.OK, http_client.CREATED, http_client.ACCEPTED,
21 http_client.NON_AUTHORITATIVE_INFORMATION, http_client.NO_CONTENT,
22 http_client.RESET_CONTENT, http_client.PARTIAL_CONTENT,
23 http_client.MULTI_STATUS, http_client.IM_USED,
24 ]
25
```
Path: `st2api/st2api/controllers/v1/webhooks.py`
Content:
```
1 # Licensed to the StackStorm, Inc ('StackStorm') under one or more
2 # contributor license agreements. See the NOTICE file distributed with
3 # this work for additional information regarding copyright ownership.
4 # The ASF licenses this file to You under the Apache License, Version 2.0
5 # (the "License"); you may not use this file except in compliance with
6 # the License. You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 try:
17 import simplejson as json
18 except ImportError:
19 import json
20
21 import six
22 import pecan
23 import uuid
24 from pecan import abort
25 from pecan.rest import RestController
26 from six.moves.urllib import parse as urlparse
27 urljoin = urlparse.urljoin
28
29 from st2common import log as logging
30 from st2common.constants.triggers import WEBHOOK_TRIGGER_TYPES
31 from st2common.models.api.base import jsexpose
32 from st2common.models.api.trace import TraceContext
33 import st2common.services.triggers as trigger_service
34 from st2common.services.triggerwatcher import TriggerWatcher
35 from st2common.transport.reactor import TriggerDispatcher
36 from st2common.rbac.types import PermissionType
37 from st2common.rbac.decorators import request_user_has_webhook_permission
38
39 http_client = six.moves.http_client
40
41 LOG = logging.getLogger(__name__)
42
43 TRACE_TAG_HEADER = 'St2-Trace-Tag'
44
45
46 class WebhooksController(RestController):
47 def __init__(self, *args, **kwargs):
48 super(WebhooksController, self).__init__(*args, **kwargs)
49 self._hooks = {}
50 self._base_url = '/webhooks/'
51 self._trigger_types = WEBHOOK_TRIGGER_TYPES.keys()
52
53 self._trigger_dispatcher = TriggerDispatcher(LOG)
54 queue_suffix = self.__class__.__name__
55 self._trigger_watcher = TriggerWatcher(create_handler=self._handle_create_trigger,
56 update_handler=self._handle_update_trigger,
57 delete_handler=self._handle_delete_trigger,
58 trigger_types=self._trigger_types,
59 queue_suffix=queue_suffix,
60 exclusive=True)
61 self._trigger_watcher.start()
62 self._register_webhook_trigger_types()
63
64 @jsexpose()
65 def get_all(self):
66 # Return only the hooks known by this controller.
67 return [trigger for trigger in six.itervalues(self._hooks)]
68
69 @jsexpose()
70 def get_one(self, name):
71 hook = self._hooks.get(name, None)
72
73 if not hook:
74 abort(http_client.NOT_FOUND)
75 return
76
77 return hook
78
79 @request_user_has_webhook_permission(permission_type=PermissionType.WEBHOOK_SEND)
80 @jsexpose(arg_types=[str], status_code=http_client.ACCEPTED)
81 def post(self, *args, **kwargs):
82 hook = '/'.join(args) # TODO: There must be a better way to do this.
83
84 # Note: For backward compatibility reasons we default to application/json if content
85 # type is not explicitly provided
86 content_type = pecan.request.headers.get('Content-Type', 'application/json')
87 body = pecan.request.body
88
89 try:
90 body = self._parse_request_body(content_type=content_type, body=body)
91 except Exception as e:
92 self._log_request('Failed to parse request body: %s.' % (str(e)), pecan.request)
93 msg = 'Failed to parse request body "%s": %s' % (body, str(e))
94 return pecan.abort(http_client.BAD_REQUEST, msg)
95
96 headers = self._get_headers_as_dict(pecan.request.headers)
97 # If webhook contains a trace-tag use that else create create a unique trace-tag.
98 trace_context = self._create_trace_context(trace_tag=headers.pop(TRACE_TAG_HEADER, None),
99 hook=hook)
100
101 if hook == 'st2' or hook == 'st2/':
102 return self._handle_st2_webhook(body, trace_context=trace_context)
103
104 if not self._is_valid_hook(hook):
105 self._log_request('Invalid hook.', pecan.request)
106 msg = 'Webhook %s not registered with st2' % hook
107 return pecan.abort(http_client.NOT_FOUND, msg)
108
109 trigger = self._get_trigger_for_hook(hook)
110 payload = {}
111
112 payload['headers'] = headers
113 payload['body'] = body
114 self._trigger_dispatcher.dispatch(trigger, payload=payload, trace_context=trace_context)
115
116 return body
117
118 def _parse_request_body(self, content_type, body):
119 if content_type == 'application/json':
120 self._log_request('Parsing request body as JSON', request=pecan.request)
121 body = json.loads(body)
122 elif content_type in ['application/x-www-form-urlencoded', 'multipart/form-data']:
123 self._log_request('Parsing request body as form encoded data', request=pecan.request)
124 body = urlparse.parse_qs(body)
125 else:
126 raise ValueError('Unsupported Content-Type: "%s"' % (content_type))
127
128 return body
129
130 def _handle_st2_webhook(self, body, trace_context):
131 trigger = body.get('trigger', None)
132 payload = body.get('payload', None)
133 if not trigger:
134 msg = 'Trigger not specified.'
135 return pecan.abort(http_client.BAD_REQUEST, msg)
136 self._trigger_dispatcher.dispatch(trigger, payload=payload, trace_context=trace_context)
137
138 return body
139
140 def _is_valid_hook(self, hook):
141 # TODO: Validate hook payload with payload_schema.
142 return hook in self._hooks
143
144 def _get_trigger_for_hook(self, hook):
145 return self._hooks[hook]
146
147 def _register_webhook_trigger_types(self):
148 for trigger_type in WEBHOOK_TRIGGER_TYPES.values():
149 trigger_service.create_trigger_type_db(trigger_type)
150
151 def _create_trace_context(self, trace_tag, hook):
152 # if no trace_tag then create a unique one
153 if not trace_tag:
154 trace_tag = 'webhook-%s-%s' % (hook, uuid.uuid4().hex)
155 return TraceContext(trace_tag=trace_tag)
156
157 def add_trigger(self, trigger):
158 # Note: Permission checking for creating and deleting a webhook is done during rule
159 # creation
160 url = trigger['parameters']['url']
161 LOG.info('Listening to endpoint: %s', urljoin(self._base_url, url))
162 self._hooks[url] = trigger
163
164 def update_trigger(self, trigger):
165 pass
166
167 def remove_trigger(self, trigger):
168 # Note: Permission checking for creating and deleting a webhook is done during rule
169 # creation
170 url = trigger['parameters']['url']
171
172 if url in self._hooks:
173 LOG.info('Stop listening to endpoint: %s', urljoin(self._base_url, url))
174 del self._hooks[url]
175
176 def _get_headers_as_dict(self, headers):
177 headers_dict = {}
178 for key, value in headers.items():
179 headers_dict[key] = value
180 return headers_dict
181
182 def _log_request(self, msg, request, log_method=LOG.debug):
183 headers = self._get_headers_as_dict(request.headers)
184 body = str(request.body)
185 log_method('%s\n\trequest.header: %s.\n\trequest.body: %s.', msg, headers, body)
186
187 ##############################################
188 # Event handler methods for the trigger events
189 ##############################################
190
191 def _handle_create_trigger(self, trigger):
192 LOG.debug('Calling "add_trigger" method (trigger.type=%s)' % (trigger.type))
193 trigger = self._sanitize_trigger(trigger=trigger)
194 self.add_trigger(trigger=trigger)
195
196 def _handle_update_trigger(self, trigger):
197 LOG.debug('Calling "update_trigger" method (trigger.type=%s)' % (trigger.type))
198 trigger = self._sanitize_trigger(trigger=trigger)
199 self.update_trigger(trigger=trigger)
200
201 def _handle_delete_trigger(self, trigger):
202 LOG.debug('Calling "remove_trigger" method (trigger.type=%s)' % (trigger.type))
203 trigger = self._sanitize_trigger(trigger=trigger)
204 self.remove_trigger(trigger=trigger)
205
206 def _sanitize_trigger(self, trigger):
207 sanitized = trigger._data
208 if 'id' in sanitized:
209 # Friendly objectid rather than the MongoEngine representation.
210 sanitized['id'] = str(sanitized['id'])
211 return sanitized
212
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/st2api/st2api/controllers/v1/webhooks.py b/st2api/st2api/controllers/v1/webhooks.py
--- a/st2api/st2api/controllers/v1/webhooks.py
+++ b/st2api/st2api/controllers/v1/webhooks.py
@@ -33,6 +33,7 @@
import st2common.services.triggers as trigger_service
from st2common.services.triggerwatcher import TriggerWatcher
from st2common.transport.reactor import TriggerDispatcher
+from st2common.util.http import parse_content_type_header
from st2common.rbac.types import PermissionType
from st2common.rbac.decorators import request_user_has_webhook_permission
@@ -84,6 +85,7 @@
# Note: For backward compatibility reasons we default to application/json if content
# type is not explicitly provided
content_type = pecan.request.headers.get('Content-Type', 'application/json')
+ content_type = parse_content_type_header(content_type=content_type)[0]
body = pecan.request.body
try:
diff --git a/st2common/st2common/util/http.py b/st2common/st2common/util/http.py
--- a/st2common/st2common/util/http.py
+++ b/st2common/st2common/util/http.py
@@ -17,8 +17,43 @@
http_client = six.moves.http_client
+__all__ = [
+ 'HTTP_SUCCESS',
+ 'parse_content_type_header'
+]
+
HTTP_SUCCESS = [http_client.OK, http_client.CREATED, http_client.ACCEPTED,
http_client.NON_AUTHORITATIVE_INFORMATION, http_client.NO_CONTENT,
http_client.RESET_CONTENT, http_client.PARTIAL_CONTENT,
http_client.MULTI_STATUS, http_client.IM_USED,
]
+
+
+def parse_content_type_header(content_type):
+ """
+ Parse and normalize request content type and return a tuple with the content type and the
+ options.
+
+ :rype: ``tuple``
+ """
+ if ';' in content_type:
+ split = content_type.split(';')
+ media = split[0]
+ options = {}
+
+ for pair in split[1:]:
+ split_pair = pair.split('=', 1)
+
+ if len(split_pair) != 2:
+ continue
+
+ key = split_pair[0].strip()
+ value = split_pair[1].strip()
+
+ options[key] = value
+ else:
+ media = content_type
+ options = {}
+
+ result = (media, options)
+ return result
|
{"golden_diff": "diff --git a/st2api/st2api/controllers/v1/webhooks.py b/st2api/st2api/controllers/v1/webhooks.py\n--- a/st2api/st2api/controllers/v1/webhooks.py\n+++ b/st2api/st2api/controllers/v1/webhooks.py\n@@ -33,6 +33,7 @@\n import st2common.services.triggers as trigger_service\n from st2common.services.triggerwatcher import TriggerWatcher\n from st2common.transport.reactor import TriggerDispatcher\n+from st2common.util.http import parse_content_type_header\n from st2common.rbac.types import PermissionType\n from st2common.rbac.decorators import request_user_has_webhook_permission\n \n@@ -84,6 +85,7 @@\n # Note: For backward compatibility reasons we default to application/json if content\n # type is not explicitly provided\n content_type = pecan.request.headers.get('Content-Type', 'application/json')\n+ content_type = parse_content_type_header(content_type=content_type)[0]\n body = pecan.request.body\n \n try:\ndiff --git a/st2common/st2common/util/http.py b/st2common/st2common/util/http.py\n--- a/st2common/st2common/util/http.py\n+++ b/st2common/st2common/util/http.py\n@@ -17,8 +17,43 @@\n \n http_client = six.moves.http_client\n \n+__all__ = [\n+ 'HTTP_SUCCESS',\n+ 'parse_content_type_header'\n+]\n+\n HTTP_SUCCESS = [http_client.OK, http_client.CREATED, http_client.ACCEPTED,\n http_client.NON_AUTHORITATIVE_INFORMATION, http_client.NO_CONTENT,\n http_client.RESET_CONTENT, http_client.PARTIAL_CONTENT,\n http_client.MULTI_STATUS, http_client.IM_USED,\n ]\n+\n+\n+def parse_content_type_header(content_type):\n+ \"\"\"\n+ Parse and normalize request content type and return a tuple with the content type and the\n+ options.\n+\n+ :rype: ``tuple``\n+ \"\"\"\n+ if ';' in content_type:\n+ split = content_type.split(';')\n+ media = split[0]\n+ options = {}\n+\n+ for pair in split[1:]:\n+ split_pair = pair.split('=', 1)\n+\n+ if len(split_pair) != 2:\n+ continue\n+\n+ key = split_pair[0].strip()\n+ value = split_pair[1].strip()\n+\n+ options[key] = value\n+ else:\n+ media = content_type\n+ options = {}\n+\n+ result = (media, options)\n+ return result\n", "issue": "API returns 400 for some requests due to \"bad\" Content-type header\nIt seems that _some_ API endpoints will return a HTTP 400 when presented with a header such as:\n\n```\nContent-type: application/json; charset=utf-8\n```\n\nbut works fine with:\n\n```\nContent-type: application/json\n```\n\nI ran into this with a POST to the URL `/webhooks/st2`. I'm currently running version `1.4dev-52`\n\n", "before_files": [{"content": "# Licensed to the StackStorm, Inc ('StackStorm') under one or more\n# contributor license agreements. See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License. 
You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport six\n\nhttp_client = six.moves.http_client\n\nHTTP_SUCCESS = [http_client.OK, http_client.CREATED, http_client.ACCEPTED,\n http_client.NON_AUTHORITATIVE_INFORMATION, http_client.NO_CONTENT,\n http_client.RESET_CONTENT, http_client.PARTIAL_CONTENT,\n http_client.MULTI_STATUS, http_client.IM_USED,\n ]\n", "path": "st2common/st2common/util/http.py"}, {"content": "# Licensed to the StackStorm, Inc ('StackStorm') under one or more\n# contributor license agreements. See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\ntry:\n import simplejson as json\nexcept ImportError:\n import json\n\nimport six\nimport pecan\nimport uuid\nfrom pecan import abort\nfrom pecan.rest import RestController\nfrom six.moves.urllib import parse as urlparse\nurljoin = urlparse.urljoin\n\nfrom st2common import log as logging\nfrom st2common.constants.triggers import WEBHOOK_TRIGGER_TYPES\nfrom st2common.models.api.base import jsexpose\nfrom st2common.models.api.trace import TraceContext\nimport st2common.services.triggers as trigger_service\nfrom st2common.services.triggerwatcher import TriggerWatcher\nfrom st2common.transport.reactor import TriggerDispatcher\nfrom st2common.rbac.types import PermissionType\nfrom st2common.rbac.decorators import request_user_has_webhook_permission\n\nhttp_client = six.moves.http_client\n\nLOG = logging.getLogger(__name__)\n\nTRACE_TAG_HEADER = 'St2-Trace-Tag'\n\n\nclass WebhooksController(RestController):\n def __init__(self, *args, **kwargs):\n super(WebhooksController, self).__init__(*args, **kwargs)\n self._hooks = {}\n self._base_url = '/webhooks/'\n self._trigger_types = WEBHOOK_TRIGGER_TYPES.keys()\n\n self._trigger_dispatcher = TriggerDispatcher(LOG)\n queue_suffix = self.__class__.__name__\n self._trigger_watcher = TriggerWatcher(create_handler=self._handle_create_trigger,\n update_handler=self._handle_update_trigger,\n delete_handler=self._handle_delete_trigger,\n trigger_types=self._trigger_types,\n queue_suffix=queue_suffix,\n exclusive=True)\n self._trigger_watcher.start()\n self._register_webhook_trigger_types()\n\n @jsexpose()\n def get_all(self):\n # Return only the hooks known by this controller.\n return [trigger for trigger in six.itervalues(self._hooks)]\n\n @jsexpose()\n def get_one(self, name):\n hook = self._hooks.get(name, None)\n\n if not hook:\n abort(http_client.NOT_FOUND)\n return\n\n return hook\n\n @request_user_has_webhook_permission(permission_type=PermissionType.WEBHOOK_SEND)\n @jsexpose(arg_types=[str], 
status_code=http_client.ACCEPTED)\n def post(self, *args, **kwargs):\n hook = '/'.join(args) # TODO: There must be a better way to do this.\n\n # Note: For backward compatibility reasons we default to application/json if content\n # type is not explicitly provided\n content_type = pecan.request.headers.get('Content-Type', 'application/json')\n body = pecan.request.body\n\n try:\n body = self._parse_request_body(content_type=content_type, body=body)\n except Exception as e:\n self._log_request('Failed to parse request body: %s.' % (str(e)), pecan.request)\n msg = 'Failed to parse request body \"%s\": %s' % (body, str(e))\n return pecan.abort(http_client.BAD_REQUEST, msg)\n\n headers = self._get_headers_as_dict(pecan.request.headers)\n # If webhook contains a trace-tag use that else create create a unique trace-tag.\n trace_context = self._create_trace_context(trace_tag=headers.pop(TRACE_TAG_HEADER, None),\n hook=hook)\n\n if hook == 'st2' or hook == 'st2/':\n return self._handle_st2_webhook(body, trace_context=trace_context)\n\n if not self._is_valid_hook(hook):\n self._log_request('Invalid hook.', pecan.request)\n msg = 'Webhook %s not registered with st2' % hook\n return pecan.abort(http_client.NOT_FOUND, msg)\n\n trigger = self._get_trigger_for_hook(hook)\n payload = {}\n\n payload['headers'] = headers\n payload['body'] = body\n self._trigger_dispatcher.dispatch(trigger, payload=payload, trace_context=trace_context)\n\n return body\n\n def _parse_request_body(self, content_type, body):\n if content_type == 'application/json':\n self._log_request('Parsing request body as JSON', request=pecan.request)\n body = json.loads(body)\n elif content_type in ['application/x-www-form-urlencoded', 'multipart/form-data']:\n self._log_request('Parsing request body as form encoded data', request=pecan.request)\n body = urlparse.parse_qs(body)\n else:\n raise ValueError('Unsupported Content-Type: \"%s\"' % (content_type))\n\n return body\n\n def _handle_st2_webhook(self, body, trace_context):\n trigger = body.get('trigger', None)\n payload = body.get('payload', None)\n if not trigger:\n msg = 'Trigger not specified.'\n return pecan.abort(http_client.BAD_REQUEST, msg)\n self._trigger_dispatcher.dispatch(trigger, payload=payload, trace_context=trace_context)\n\n return body\n\n def _is_valid_hook(self, hook):\n # TODO: Validate hook payload with payload_schema.\n return hook in self._hooks\n\n def _get_trigger_for_hook(self, hook):\n return self._hooks[hook]\n\n def _register_webhook_trigger_types(self):\n for trigger_type in WEBHOOK_TRIGGER_TYPES.values():\n trigger_service.create_trigger_type_db(trigger_type)\n\n def _create_trace_context(self, trace_tag, hook):\n # if no trace_tag then create a unique one\n if not trace_tag:\n trace_tag = 'webhook-%s-%s' % (hook, uuid.uuid4().hex)\n return TraceContext(trace_tag=trace_tag)\n\n def add_trigger(self, trigger):\n # Note: Permission checking for creating and deleting a webhook is done during rule\n # creation\n url = trigger['parameters']['url']\n LOG.info('Listening to endpoint: %s', urljoin(self._base_url, url))\n self._hooks[url] = trigger\n\n def update_trigger(self, trigger):\n pass\n\n def remove_trigger(self, trigger):\n # Note: Permission checking for creating and deleting a webhook is done during rule\n # creation\n url = trigger['parameters']['url']\n\n if url in self._hooks:\n LOG.info('Stop listening to endpoint: %s', urljoin(self._base_url, url))\n del self._hooks[url]\n\n def _get_headers_as_dict(self, headers):\n headers_dict = {}\n for key, 
value in headers.items():\n headers_dict[key] = value\n return headers_dict\n\n def _log_request(self, msg, request, log_method=LOG.debug):\n headers = self._get_headers_as_dict(request.headers)\n body = str(request.body)\n log_method('%s\\n\\trequest.header: %s.\\n\\trequest.body: %s.', msg, headers, body)\n\n ##############################################\n # Event handler methods for the trigger events\n ##############################################\n\n def _handle_create_trigger(self, trigger):\n LOG.debug('Calling \"add_trigger\" method (trigger.type=%s)' % (trigger.type))\n trigger = self._sanitize_trigger(trigger=trigger)\n self.add_trigger(trigger=trigger)\n\n def _handle_update_trigger(self, trigger):\n LOG.debug('Calling \"update_trigger\" method (trigger.type=%s)' % (trigger.type))\n trigger = self._sanitize_trigger(trigger=trigger)\n self.update_trigger(trigger=trigger)\n\n def _handle_delete_trigger(self, trigger):\n LOG.debug('Calling \"remove_trigger\" method (trigger.type=%s)' % (trigger.type))\n trigger = self._sanitize_trigger(trigger=trigger)\n self.remove_trigger(trigger=trigger)\n\n def _sanitize_trigger(self, trigger):\n sanitized = trigger._data\n if 'id' in sanitized:\n # Friendly objectid rather than the MongoEngine representation.\n sanitized['id'] = str(sanitized['id'])\n return sanitized\n", "path": "st2api/st2api/controllers/v1/webhooks.py"}], "after_files": [{"content": "# Licensed to the StackStorm, Inc ('StackStorm') under one or more\n# contributor license agreements. See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport six\n\nhttp_client = six.moves.http_client\n\n__all__ = [\n 'HTTP_SUCCESS',\n 'parse_content_type_header'\n]\n\nHTTP_SUCCESS = [http_client.OK, http_client.CREATED, http_client.ACCEPTED,\n http_client.NON_AUTHORITATIVE_INFORMATION, http_client.NO_CONTENT,\n http_client.RESET_CONTENT, http_client.PARTIAL_CONTENT,\n http_client.MULTI_STATUS, http_client.IM_USED,\n ]\n\n\ndef parse_content_type_header(content_type):\n \"\"\"\n Parse and normalize request content type and return a tuple with the content type and the\n options.\n\n :rype: ``tuple``\n \"\"\"\n if ';' in content_type:\n split = content_type.split(';')\n media = split[0]\n options = {}\n\n for pair in split[1:]:\n split_pair = pair.split('=', 1)\n\n if len(split_pair) != 2:\n continue\n\n key = split_pair[0].strip()\n value = split_pair[1].strip()\n\n options[key] = value\n else:\n media = content_type\n options = {}\n\n result = (media, options)\n return result\n", "path": "st2common/st2common/util/http.py"}, {"content": "# Licensed to the StackStorm, Inc ('StackStorm') under one or more\n# contributor license agreements. 
See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\ntry:\n import simplejson as json\nexcept ImportError:\n import json\n\nimport six\nimport pecan\nimport uuid\nfrom pecan import abort\nfrom pecan.rest import RestController\nfrom six.moves.urllib import parse as urlparse\nurljoin = urlparse.urljoin\n\nfrom st2common import log as logging\nfrom st2common.constants.triggers import WEBHOOK_TRIGGER_TYPES\nfrom st2common.models.api.base import jsexpose\nfrom st2common.models.api.trace import TraceContext\nimport st2common.services.triggers as trigger_service\nfrom st2common.services.triggerwatcher import TriggerWatcher\nfrom st2common.transport.reactor import TriggerDispatcher\nfrom st2common.util.http import parse_content_type_header\nfrom st2common.rbac.types import PermissionType\nfrom st2common.rbac.decorators import request_user_has_webhook_permission\n\nhttp_client = six.moves.http_client\n\nLOG = logging.getLogger(__name__)\n\nTRACE_TAG_HEADER = 'St2-Trace-Tag'\n\n\nclass WebhooksController(RestController):\n def __init__(self, *args, **kwargs):\n super(WebhooksController, self).__init__(*args, **kwargs)\n self._hooks = {}\n self._base_url = '/webhooks/'\n self._trigger_types = WEBHOOK_TRIGGER_TYPES.keys()\n\n self._trigger_dispatcher = TriggerDispatcher(LOG)\n queue_suffix = self.__class__.__name__\n self._trigger_watcher = TriggerWatcher(create_handler=self._handle_create_trigger,\n update_handler=self._handle_update_trigger,\n delete_handler=self._handle_delete_trigger,\n trigger_types=self._trigger_types,\n queue_suffix=queue_suffix,\n exclusive=True)\n self._trigger_watcher.start()\n self._register_webhook_trigger_types()\n\n @jsexpose()\n def get_all(self):\n # Return only the hooks known by this controller.\n return [trigger for trigger in six.itervalues(self._hooks)]\n\n @jsexpose()\n def get_one(self, name):\n hook = self._hooks.get(name, None)\n\n if not hook:\n abort(http_client.NOT_FOUND)\n return\n\n return hook\n\n @request_user_has_webhook_permission(permission_type=PermissionType.WEBHOOK_SEND)\n @jsexpose(arg_types=[str], status_code=http_client.ACCEPTED)\n def post(self, *args, **kwargs):\n hook = '/'.join(args) # TODO: There must be a better way to do this.\n\n # Note: For backward compatibility reasons we default to application/json if content\n # type is not explicitly provided\n content_type = pecan.request.headers.get('Content-Type', 'application/json')\n content_type = parse_content_type_header(content_type=content_type)[0]\n body = pecan.request.body\n\n try:\n body = self._parse_request_body(content_type=content_type, body=body)\n except Exception as e:\n self._log_request('Failed to parse request body: %s.' 
% (str(e)), pecan.request)\n msg = 'Failed to parse request body \"%s\": %s' % (body, str(e))\n return pecan.abort(http_client.BAD_REQUEST, msg)\n\n headers = self._get_headers_as_dict(pecan.request.headers)\n # If webhook contains a trace-tag use that else create create a unique trace-tag.\n trace_context = self._create_trace_context(trace_tag=headers.pop(TRACE_TAG_HEADER, None),\n hook=hook)\n\n if hook == 'st2' or hook == 'st2/':\n return self._handle_st2_webhook(body, trace_context=trace_context)\n\n if not self._is_valid_hook(hook):\n self._log_request('Invalid hook.', pecan.request)\n msg = 'Webhook %s not registered with st2' % hook\n return pecan.abort(http_client.NOT_FOUND, msg)\n\n trigger = self._get_trigger_for_hook(hook)\n payload = {}\n\n payload['headers'] = headers\n payload['body'] = body\n self._trigger_dispatcher.dispatch(trigger, payload=payload, trace_context=trace_context)\n\n return body\n\n def _parse_request_body(self, content_type, body):\n if content_type == 'application/json':\n self._log_request('Parsing request body as JSON', request=pecan.request)\n body = json.loads(body)\n elif content_type in ['application/x-www-form-urlencoded', 'multipart/form-data']:\n self._log_request('Parsing request body as form encoded data', request=pecan.request)\n body = urlparse.parse_qs(body)\n else:\n raise ValueError('Unsupported Content-Type: \"%s\"' % (content_type))\n\n return body\n\n def _handle_st2_webhook(self, body, trace_context):\n trigger = body.get('trigger', None)\n payload = body.get('payload', None)\n if not trigger:\n msg = 'Trigger not specified.'\n return pecan.abort(http_client.BAD_REQUEST, msg)\n self._trigger_dispatcher.dispatch(trigger, payload=payload, trace_context=trace_context)\n\n return body\n\n def _is_valid_hook(self, hook):\n # TODO: Validate hook payload with payload_schema.\n return hook in self._hooks\n\n def _get_trigger_for_hook(self, hook):\n return self._hooks[hook]\n\n def _register_webhook_trigger_types(self):\n for trigger_type in WEBHOOK_TRIGGER_TYPES.values():\n trigger_service.create_trigger_type_db(trigger_type)\n\n def _create_trace_context(self, trace_tag, hook):\n # if no trace_tag then create a unique one\n if not trace_tag:\n trace_tag = 'webhook-%s-%s' % (hook, uuid.uuid4().hex)\n return TraceContext(trace_tag=trace_tag)\n\n def add_trigger(self, trigger):\n # Note: Permission checking for creating and deleting a webhook is done during rule\n # creation\n url = trigger['parameters']['url']\n LOG.info('Listening to endpoint: %s', urljoin(self._base_url, url))\n self._hooks[url] = trigger\n\n def update_trigger(self, trigger):\n pass\n\n def remove_trigger(self, trigger):\n # Note: Permission checking for creating and deleting a webhook is done during rule\n # creation\n url = trigger['parameters']['url']\n\n if url in self._hooks:\n LOG.info('Stop listening to endpoint: %s', urljoin(self._base_url, url))\n del self._hooks[url]\n\n def _get_headers_as_dict(self, headers):\n headers_dict = {}\n for key, value in headers.items():\n headers_dict[key] = value\n return headers_dict\n\n def _log_request(self, msg, request, log_method=LOG.debug):\n headers = self._get_headers_as_dict(request.headers)\n body = str(request.body)\n log_method('%s\\n\\trequest.header: %s.\\n\\trequest.body: %s.', msg, headers, body)\n\n ##############################################\n # Event handler methods for the trigger events\n ##############################################\n\n def _handle_create_trigger(self, trigger):\n LOG.debug('Calling 
\"add_trigger\" method (trigger.type=%s)' % (trigger.type))\n trigger = self._sanitize_trigger(trigger=trigger)\n self.add_trigger(trigger=trigger)\n\n def _handle_update_trigger(self, trigger):\n LOG.debug('Calling \"update_trigger\" method (trigger.type=%s)' % (trigger.type))\n trigger = self._sanitize_trigger(trigger=trigger)\n self.update_trigger(trigger=trigger)\n\n def _handle_delete_trigger(self, trigger):\n LOG.debug('Calling \"remove_trigger\" method (trigger.type=%s)' % (trigger.type))\n trigger = self._sanitize_trigger(trigger=trigger)\n self.remove_trigger(trigger=trigger)\n\n def _sanitize_trigger(self, trigger):\n sanitized = trigger._data\n if 'id' in sanitized:\n # Friendly objectid rather than the MongoEngine representation.\n sanitized['id'] = str(sanitized['id'])\n return sanitized\n", "path": "st2api/st2api/controllers/v1/webhooks.py"}]}
| 3,049 | 560 |
| gh_patches_debug_15247 | rasdani/github-patches | git_diff | ycm-core__ycmd-397 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
LookupError: unknown encoding on Mac with Python 2
With latest `ycmd` I get the following error on Mac with Py2:
``` python
Traceback (most recent call last):
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/Users/peter/src/ycmd/ycmd/__main__.py", line 181, in <module>
Main()
File "/Users/peter/src/ycmd/ycmd/__main__.py", line 151, in Main
options, hmac_secret = SetupOptions( args.options_file )
File "/Users/peter/src/ycmd/ycmd/__main__.py", line 127, in SetupOptions
options = user_options_store.DefaultOptions()
File "/Users/peter/src/ycmd/ycmd/../ycmd/user_options_store.py", line 52, in DefaultOptions
with open( settings_path ) as f:
LookupError: unknown encoding:
Process ycmd-server exited abnormally with code 1
```
I have a Python 2 installation from Homebrew.
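The traceback points at the plain `open()` call picking up an encoding from the environment. As a rough sketch of the idea behind a fix (the helper below is illustrative, not ycmd's actual utility), reading the file with an explicit encoding removes the dependency on the shell's locale settings:
```python
import io
import json


def load_settings(path):
    # An explicit encoding avoids io.open() falling back to the locale's
    # preferred encoding, which can be empty when the shell's locale
    # variables are unset or invalid (a plausible cause of this error on macOS).
    with io.open(path, encoding='utf-8') as f:
        return json.loads(f.read())
```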
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ycmd/user_options_store.py`
Content:
```
1 # Copyright (C) 2013 Google Inc.
2 #
3 # This file is part of ycmd.
4 #
5 # ycmd is free software: you can redistribute it and/or modify
6 # it under the terms of the GNU General Public License as published by
7 # the Free Software Foundation, either version 3 of the License, or
8 # (at your option) any later version.
9 #
10 # ycmd is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License
16 # along with ycmd. If not, see <http://www.gnu.org/licenses/>.
17
18 from __future__ import unicode_literals
19 from __future__ import print_function
20 from __future__ import division
21 from __future__ import absolute_import
22 from future import standard_library
23 standard_library.install_aliases()
24 from builtins import * # noqa
25
26 import json
27 import os
28 from frozendict import frozendict
29
30 _USER_OPTIONS = {}
31
32 def SetAll( new_options ):
33 global _USER_OPTIONS
34 _USER_OPTIONS = frozendict( new_options )
35
36
37 def GetAll():
38 return _USER_OPTIONS
39
40
41 def Value( key ):
42 return _USER_OPTIONS[ key ]
43
44
45 def LoadDefaults():
46 SetAll( DefaultOptions() )
47
48
49 def DefaultOptions():
50 settings_path = os.path.join(
51 os.path.dirname( os.path.abspath( __file__ ) ), 'default_settings.json' )
52 with open( settings_path ) as f:
53 options = json.loads( f.read() )
54 options.pop( 'hmac_secret', None )
55 return options
56
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ycmd/user_options_store.py b/ycmd/user_options_store.py
--- a/ycmd/user_options_store.py
+++ b/ycmd/user_options_store.py
@@ -27,8 +27,11 @@
import os
from frozendict import frozendict
+from ycmd.utils import ReadFile
+
_USER_OPTIONS = {}
+
def SetAll( new_options ):
global _USER_OPTIONS
_USER_OPTIONS = frozendict( new_options )
@@ -49,8 +52,6 @@
def DefaultOptions():
settings_path = os.path.join(
os.path.dirname( os.path.abspath( __file__ ) ), 'default_settings.json' )
- with open( settings_path ) as f:
- options = json.loads( f.read() )
- options.pop( 'hmac_secret', None )
- return options
-
+ options = json.loads( ReadFile( settings_path ) )
+ options.pop( 'hmac_secret', None )
+ return options
|
{"golden_diff": "diff --git a/ycmd/user_options_store.py b/ycmd/user_options_store.py\n--- a/ycmd/user_options_store.py\n+++ b/ycmd/user_options_store.py\n@@ -27,8 +27,11 @@\n import os\n from frozendict import frozendict\n \n+from ycmd.utils import ReadFile\n+\n _USER_OPTIONS = {}\n \n+\n def SetAll( new_options ):\n global _USER_OPTIONS\n _USER_OPTIONS = frozendict( new_options )\n@@ -49,8 +52,6 @@\n def DefaultOptions():\n settings_path = os.path.join(\n os.path.dirname( os.path.abspath( __file__ ) ), 'default_settings.json' )\n- with open( settings_path ) as f:\n- options = json.loads( f.read() )\n- options.pop( 'hmac_secret', None )\n- return options\n-\n+ options = json.loads( ReadFile( settings_path ) )\n+ options.pop( 'hmac_secret', None )\n+ return options\n", "issue": "LookupError: unknown encoding on Mac with Python 2\nWith latest `ycmd` I get the following error on Mac with Py2:\n\n``` python\nTraceback (most recent call last):\n File \"/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py\", line 162, in _run_module_as_main\n \"__main__\", fname, loader, pkg_name)\n File \"/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py\", line 72, in _run_code\n exec code in run_globals\n File \"/Users/peter/src/ycmd/ycmd/__main__.py\", line 181, in <module>\n Main()\n File \"/Users/peter/src/ycmd/ycmd/__main__.py\", line 151, in Main\n options, hmac_secret = SetupOptions( args.options_file )\n File \"/Users/peter/src/ycmd/ycmd/__main__.py\", line 127, in SetupOptions\n options = user_options_store.DefaultOptions()\n File \"/Users/peter/src/ycmd/ycmd/../ycmd/user_options_store.py\", line 52, in DefaultOptions\n with open( settings_path ) as f:\nLookupError: unknown encoding: \n\nProcess ycmd-server exited abnormally with code 1\n```\n\nI have a python 2 installation from homebrew\n\n", "before_files": [{"content": "# Copyright (C) 2013 Google Inc.\n#\n# This file is part of ycmd.\n#\n# ycmd is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# ycmd is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with ycmd. 
If not, see <http://www.gnu.org/licenses/>.\n\nfrom __future__ import unicode_literals\nfrom __future__ import print_function\nfrom __future__ import division\nfrom __future__ import absolute_import\nfrom future import standard_library\nstandard_library.install_aliases()\nfrom builtins import * # noqa\n\nimport json\nimport os\nfrom frozendict import frozendict\n\n_USER_OPTIONS = {}\n\ndef SetAll( new_options ):\n global _USER_OPTIONS\n _USER_OPTIONS = frozendict( new_options )\n\n\ndef GetAll():\n return _USER_OPTIONS\n\n\ndef Value( key ):\n return _USER_OPTIONS[ key ]\n\n\ndef LoadDefaults():\n SetAll( DefaultOptions() )\n\n\ndef DefaultOptions():\n settings_path = os.path.join(\n os.path.dirname( os.path.abspath( __file__ ) ), 'default_settings.json' )\n with open( settings_path ) as f:\n options = json.loads( f.read() )\n options.pop( 'hmac_secret', None )\n return options\n\n", "path": "ycmd/user_options_store.py"}], "after_files": [{"content": "# Copyright (C) 2013 Google Inc.\n#\n# This file is part of ycmd.\n#\n# ycmd is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# ycmd is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with ycmd. If not, see <http://www.gnu.org/licenses/>.\n\nfrom __future__ import unicode_literals\nfrom __future__ import print_function\nfrom __future__ import division\nfrom __future__ import absolute_import\nfrom future import standard_library\nstandard_library.install_aliases()\nfrom builtins import * # noqa\n\nimport json\nimport os\nfrom frozendict import frozendict\n\nfrom ycmd.utils import ReadFile\n\n_USER_OPTIONS = {}\n\n\ndef SetAll( new_options ):\n global _USER_OPTIONS\n _USER_OPTIONS = frozendict( new_options )\n\n\ndef GetAll():\n return _USER_OPTIONS\n\n\ndef Value( key ):\n return _USER_OPTIONS[ key ]\n\n\ndef LoadDefaults():\n SetAll( DefaultOptions() )\n\n\ndef DefaultOptions():\n settings_path = os.path.join(\n os.path.dirname( os.path.abspath( __file__ ) ), 'default_settings.json' )\n options = json.loads( ReadFile( settings_path ) )\n options.pop( 'hmac_secret', None )\n return options\n", "path": "ycmd/user_options_store.py"}]}
| 1,073 | 222 |
| gh_patches_debug_17035 | rasdani/github-patches | git_diff | pytorch__vision-3131 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
DatasetFolder incorrectly specifies required folder structure
## 📚 Documentation
<!-- A clear and concise description of what content in https://pytorch.org/docs is an issue. If this has to do with the general https://pytorch.org website, please file an issue at https://github.com/pytorch/pytorch.github.io/issues/new/choose instead. If this has to do with https://pytorch.org/tutorials, please file an issue at https://github.com/pytorch/tutorials/issues/new -->
The documentation for [DatasetFolder](https://pytorch.org/docs/stable/torchvision/datasets.html#datasetfolder) specifies the following required folder organization:
```
root/class_x/xxx.ext
root/class_x/xxy.ext
root/class_x/xxz.ext
root/class_y/123.ext
root/class_y/nsdf3.ext
root/class_y/asd932_.ext
```
However, the [source code for DatasetFolder](https://pytorch.org/docs/stable/_modules/torchvision/datasets/folder.html#DatasetFolder) shows that `make_dataset()` uses `os.walk()` to discover the files in each class' folder. This means that any file within the class folder **or its subfolders** that passes `is_valid_file()` will be included. To me, this behavior was unexpected.
A more complete specification of the format could be:
```
root/class_x/xxx.ext
root/class_x/xxy.ext
root/class_x/[...]/xxz.ext
root/class_y/123.ext
root/class_y/nsdf3.ext
root/class_y/[...]/asd932_.ext
```
cc @pmeier
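For illustration of the recursive behaviour described above (the function below is a stripped-down stand-in, not torchvision code):
```python
import os


def collect_class_files(class_dir, is_valid_file):
    # os.walk() yields class_dir itself and every subdirectory below it, so a
    # file such as class_x/nested/xxz.ext is picked up exactly like
    # class_x/xxx.ext -- the behaviour the issue asks the docs to spell out.
    files = []
    for root, _, fnames in sorted(os.walk(class_dir, followlinks=True)):
        for fname in sorted(fnames):
            path = os.path.join(root, fname)
            if is_valid_file(path):
                files.append(path)
    return files
```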
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchvision/datasets/folder.py`
Content:
```
1 from .vision import VisionDataset
2
3 from PIL import Image
4
5 import os
6 import os.path
7 from typing import Any, Callable, cast, Dict, List, Optional, Tuple
8
9
10 def has_file_allowed_extension(filename: str, extensions: Tuple[str, ...]) -> bool:
11 """Checks if a file is an allowed extension.
12
13 Args:
14 filename (string): path to a file
15 extensions (tuple of strings): extensions to consider (lowercase)
16
17 Returns:
18 bool: True if the filename ends with one of given extensions
19 """
20 return filename.lower().endswith(extensions)
21
22
23 def is_image_file(filename: str) -> bool:
24 """Checks if a file is an allowed image extension.
25
26 Args:
27 filename (string): path to a file
28
29 Returns:
30 bool: True if the filename ends with a known image extension
31 """
32 return has_file_allowed_extension(filename, IMG_EXTENSIONS)
33
34
35 def make_dataset(
36 directory: str,
37 class_to_idx: Dict[str, int],
38 extensions: Optional[Tuple[str, ...]] = None,
39 is_valid_file: Optional[Callable[[str], bool]] = None,
40 ) -> List[Tuple[str, int]]:
41 """Generates a list of samples of a form (path_to_sample, class).
42
43 Args:
44 directory (str): root dataset directory
45 class_to_idx (Dict[str, int]): dictionary mapping class name to class index
46 extensions (optional): A list of allowed extensions.
47 Either extensions or is_valid_file should be passed. Defaults to None.
48 is_valid_file (optional): A function that takes path of a file
49 and checks if the file is a valid file
50 (used to check of corrupt files) both extensions and
51 is_valid_file should not be passed. Defaults to None.
52
53 Raises:
54 ValueError: In case ``extensions`` and ``is_valid_file`` are None or both are not None.
55
56 Returns:
57 List[Tuple[str, int]]: samples of a form (path_to_sample, class)
58 """
59 instances = []
60 directory = os.path.expanduser(directory)
61 both_none = extensions is None and is_valid_file is None
62 both_something = extensions is not None and is_valid_file is not None
63 if both_none or both_something:
64 raise ValueError("Both extensions and is_valid_file cannot be None or not None at the same time")
65 if extensions is not None:
66 def is_valid_file(x: str) -> bool:
67 return has_file_allowed_extension(x, cast(Tuple[str, ...], extensions))
68 is_valid_file = cast(Callable[[str], bool], is_valid_file)
69 for target_class in sorted(class_to_idx.keys()):
70 class_index = class_to_idx[target_class]
71 target_dir = os.path.join(directory, target_class)
72 if not os.path.isdir(target_dir):
73 continue
74 for root, _, fnames in sorted(os.walk(target_dir, followlinks=True)):
75 for fname in sorted(fnames):
76 path = os.path.join(root, fname)
77 if is_valid_file(path):
78 item = path, class_index
79 instances.append(item)
80 return instances
81
82
83 class DatasetFolder(VisionDataset):
84 """A generic data loader where the samples are arranged in this way: ::
85
86 root/class_x/xxx.ext
87 root/class_x/xxy.ext
88 root/class_x/xxz.ext
89
90 root/class_y/123.ext
91 root/class_y/nsdf3.ext
92 root/class_y/asd932_.ext
93
94 Args:
95 root (string): Root directory path.
96 loader (callable): A function to load a sample given its path.
97 extensions (tuple[string]): A list of allowed extensions.
98 both extensions and is_valid_file should not be passed.
99 transform (callable, optional): A function/transform that takes in
100 a sample and returns a transformed version.
101 E.g, ``transforms.RandomCrop`` for images.
102 target_transform (callable, optional): A function/transform that takes
103 in the target and transforms it.
104 is_valid_file (callable, optional): A function that takes path of a file
105 and check if the file is a valid file (used to check of corrupt files)
106 both extensions and is_valid_file should not be passed.
107
108 Attributes:
109 classes (list): List of the class names sorted alphabetically.
110 class_to_idx (dict): Dict with items (class_name, class_index).
111 samples (list): List of (sample path, class_index) tuples
112 targets (list): The class_index value for each image in the dataset
113 """
114
115 def __init__(
116 self,
117 root: str,
118 loader: Callable[[str], Any],
119 extensions: Optional[Tuple[str, ...]] = None,
120 transform: Optional[Callable] = None,
121 target_transform: Optional[Callable] = None,
122 is_valid_file: Optional[Callable[[str], bool]] = None,
123 ) -> None:
124 super(DatasetFolder, self).__init__(root, transform=transform,
125 target_transform=target_transform)
126 classes, class_to_idx = self._find_classes(self.root)
127 samples = make_dataset(self.root, class_to_idx, extensions, is_valid_file)
128 if len(samples) == 0:
129 msg = "Found 0 files in subfolders of: {}\n".format(self.root)
130 if extensions is not None:
131 msg += "Supported extensions are: {}".format(",".join(extensions))
132 raise RuntimeError(msg)
133
134 self.loader = loader
135 self.extensions = extensions
136
137 self.classes = classes
138 self.class_to_idx = class_to_idx
139 self.samples = samples
140 self.targets = [s[1] for s in samples]
141
142 def _find_classes(self, dir: str) -> Tuple[List[str], Dict[str, int]]:
143 """
144 Finds the class folders in a dataset.
145
146 Args:
147 dir (string): Root directory path.
148
149 Returns:
150 tuple: (classes, class_to_idx) where classes are relative to (dir), and class_to_idx is a dictionary.
151
152 Ensures:
153 No class is a subdirectory of another.
154 """
155 classes = [d.name for d in os.scandir(dir) if d.is_dir()]
156 classes.sort()
157 class_to_idx = {cls_name: i for i, cls_name in enumerate(classes)}
158 return classes, class_to_idx
159
160 def __getitem__(self, index: int) -> Tuple[Any, Any]:
161 """
162 Args:
163 index (int): Index
164
165 Returns:
166 tuple: (sample, target) where target is class_index of the target class.
167 """
168 path, target = self.samples[index]
169 sample = self.loader(path)
170 if self.transform is not None:
171 sample = self.transform(sample)
172 if self.target_transform is not None:
173 target = self.target_transform(target)
174
175 return sample, target
176
177 def __len__(self) -> int:
178 return len(self.samples)
179
180
181 IMG_EXTENSIONS = ('.jpg', '.jpeg', '.png', '.ppm', '.bmp', '.pgm', '.tif', '.tiff', '.webp')
182
183
184 def pil_loader(path: str) -> Image.Image:
185 # open path as file to avoid ResourceWarning (https://github.com/python-pillow/Pillow/issues/835)
186 with open(path, 'rb') as f:
187 img = Image.open(f)
188 return img.convert('RGB')
189
190
191 # TODO: specify the return type
192 def accimage_loader(path: str) -> Any:
193 import accimage
194 try:
195 return accimage.Image(path)
196 except IOError:
197 # Potentially a decoding problem, fall back to PIL.Image
198 return pil_loader(path)
199
200
201 def default_loader(path: str) -> Any:
202 from torchvision import get_image_backend
203 if get_image_backend() == 'accimage':
204 return accimage_loader(path)
205 else:
206 return pil_loader(path)
207
208
209 class ImageFolder(DatasetFolder):
210 """A generic data loader where the images are arranged in this way: ::
211
212 root/dog/xxx.png
213 root/dog/xxy.png
214 root/dog/xxz.png
215
216 root/cat/123.png
217 root/cat/nsdf3.png
218 root/cat/asd932_.png
219
220 Args:
221 root (string): Root directory path.
222 transform (callable, optional): A function/transform that takes in an PIL image
223 and returns a transformed version. E.g, ``transforms.RandomCrop``
224 target_transform (callable, optional): A function/transform that takes in the
225 target and transforms it.
226 loader (callable, optional): A function to load an image given its path.
227 is_valid_file (callable, optional): A function that takes path of an Image file
228 and check if the file is a valid file (used to check of corrupt files)
229
230 Attributes:
231 classes (list): List of the class names sorted alphabetically.
232 class_to_idx (dict): Dict with items (class_name, class_index).
233 imgs (list): List of (image path, class_index) tuples
234 """
235
236 def __init__(
237 self,
238 root: str,
239 transform: Optional[Callable] = None,
240 target_transform: Optional[Callable] = None,
241 loader: Callable[[str], Any] = default_loader,
242 is_valid_file: Optional[Callable[[str], bool]] = None,
243 ):
244 super(ImageFolder, self).__init__(root, loader, IMG_EXTENSIONS if is_valid_file is None else None,
245 transform=transform,
246 target_transform=target_transform,
247 is_valid_file=is_valid_file)
248 self.imgs = self.samples
249
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/torchvision/datasets/folder.py b/torchvision/datasets/folder.py
--- a/torchvision/datasets/folder.py
+++ b/torchvision/datasets/folder.py
@@ -85,11 +85,11 @@
root/class_x/xxx.ext
root/class_x/xxy.ext
- root/class_x/xxz.ext
+ root/class_x/[...]/xxz.ext
root/class_y/123.ext
root/class_y/nsdf3.ext
- root/class_y/asd932_.ext
+ root/class_y/[...]/asd932_.ext
Args:
root (string): Root directory path.
@@ -211,11 +211,11 @@
root/dog/xxx.png
root/dog/xxy.png
- root/dog/xxz.png
+ root/dog/[...]/xxz.png
root/cat/123.png
root/cat/nsdf3.png
- root/cat/asd932_.png
+ root/cat/[...]/asd932_.png
Args:
root (string): Root directory path.
|
{"golden_diff": "diff --git a/torchvision/datasets/folder.py b/torchvision/datasets/folder.py\n--- a/torchvision/datasets/folder.py\n+++ b/torchvision/datasets/folder.py\n@@ -85,11 +85,11 @@\n \n root/class_x/xxx.ext\n root/class_x/xxy.ext\n- root/class_x/xxz.ext\n+ root/class_x/[...]/xxz.ext\n \n root/class_y/123.ext\n root/class_y/nsdf3.ext\n- root/class_y/asd932_.ext\n+ root/class_y/[...]/asd932_.ext\n \n Args:\n root (string): Root directory path.\n@@ -211,11 +211,11 @@\n \n root/dog/xxx.png\n root/dog/xxy.png\n- root/dog/xxz.png\n+ root/dog/[...]/xxz.png\n \n root/cat/123.png\n root/cat/nsdf3.png\n- root/cat/asd932_.png\n+ root/cat/[...]/asd932_.png\n \n Args:\n root (string): Root directory path.\n", "issue": "DatasetFolder incorrectly specifies required folder structure\n## \ud83d\udcda Documentation\r\n\r\n<!-- A clear and concise description of what content in https://pytorch.org/docs is an issue. If this has to do with the general https://pytorch.org website, please file an issue at https://github.com/pytorch/pytorch.github.io/issues/new/choose instead. If this has to do with https://pytorch.org/tutorials, please file an issue at https://github.com/pytorch/tutorials/issues/new -->\r\n\r\nThe documentation for [DatasetFolder](https://pytorch.org/docs/stable/torchvision/datasets.html#datasetfolder) specifies the following required folder organization:\r\n\r\n```\r\nroot/class_x/xxx.ext\r\nroot/class_x/xxy.ext\r\nroot/class_x/xxz.ext\r\n\r\nroot/class_y/123.ext\r\nroot/class_y/nsdf3.ext\r\nroot/class_y/asd932_.ext\r\n```\r\n\r\nHowever, the [source code for DatasetFolder](https://pytorch.org/docs/stable/_modules/torchvision/datasets/folder.html#DatasetFolder) shows that `make_dataset()` uses `os.walk()` to discover the files in each class' folder. This means that any file within the class folder **or its subfolders** that passes `is_valid_file()` will be included. To me, this behavior was unexpected.\r\n\r\nA more complete specification of the format could be:\r\n\r\n```\r\nroot/class_x/xxx.ext\r\nroot/class_x/xxy.ext\r\nroot/class_x/[...]/xxz.ext\r\n\r\nroot/class_y/123.ext\r\nroot/class_y/nsdf3.ext\r\nroot/class_y/[...]/asd932_.ext\r\n```\n\ncc @pmeier\nDatasetFolder incorrectly specifies required folder structure\n## \ud83d\udcda Documentation\r\n\r\n<!-- A clear and concise description of what content in https://pytorch.org/docs is an issue. If this has to do with the general https://pytorch.org website, please file an issue at https://github.com/pytorch/pytorch.github.io/issues/new/choose instead. If this has to do with https://pytorch.org/tutorials, please file an issue at https://github.com/pytorch/tutorials/issues/new -->\r\n\r\nThe documentation for [DatasetFolder](https://pytorch.org/docs/stable/torchvision/datasets.html#datasetfolder) specifies the following required folder organization:\r\n\r\n```\r\nroot/class_x/xxx.ext\r\nroot/class_x/xxy.ext\r\nroot/class_x/xxz.ext\r\n\r\nroot/class_y/123.ext\r\nroot/class_y/nsdf3.ext\r\nroot/class_y/asd932_.ext\r\n```\r\n\r\nHowever, the [source code for DatasetFolder](https://pytorch.org/docs/stable/_modules/torchvision/datasets/folder.html#DatasetFolder) shows that `make_dataset()` uses `os.walk()` to discover the files in each class' folder. This means that any file within the class folder **or its subfolders** that passes `is_valid_file()` will be included. 
To me, this behavior was unexpected.\r\n\r\nA more complete specification of the format could be:\r\n\r\n```\r\nroot/class_x/xxx.ext\r\nroot/class_x/xxy.ext\r\nroot/class_x/[...]/xxz.ext\r\n\r\nroot/class_y/123.ext\r\nroot/class_y/nsdf3.ext\r\nroot/class_y/[...]/asd932_.ext\r\n```\n\ncc @pmeier\n", "before_files": [{"content": "from .vision import VisionDataset\n\nfrom PIL import Image\n\nimport os\nimport os.path\nfrom typing import Any, Callable, cast, Dict, List, Optional, Tuple\n\n\ndef has_file_allowed_extension(filename: str, extensions: Tuple[str, ...]) -> bool:\n \"\"\"Checks if a file is an allowed extension.\n\n Args:\n filename (string): path to a file\n extensions (tuple of strings): extensions to consider (lowercase)\n\n Returns:\n bool: True if the filename ends with one of given extensions\n \"\"\"\n return filename.lower().endswith(extensions)\n\n\ndef is_image_file(filename: str) -> bool:\n \"\"\"Checks if a file is an allowed image extension.\n\n Args:\n filename (string): path to a file\n\n Returns:\n bool: True if the filename ends with a known image extension\n \"\"\"\n return has_file_allowed_extension(filename, IMG_EXTENSIONS)\n\n\ndef make_dataset(\n directory: str,\n class_to_idx: Dict[str, int],\n extensions: Optional[Tuple[str, ...]] = None,\n is_valid_file: Optional[Callable[[str], bool]] = None,\n) -> List[Tuple[str, int]]:\n \"\"\"Generates a list of samples of a form (path_to_sample, class).\n\n Args:\n directory (str): root dataset directory\n class_to_idx (Dict[str, int]): dictionary mapping class name to class index\n extensions (optional): A list of allowed extensions.\n Either extensions or is_valid_file should be passed. Defaults to None.\n is_valid_file (optional): A function that takes path of a file\n and checks if the file is a valid file\n (used to check of corrupt files) both extensions and\n is_valid_file should not be passed. 
Defaults to None.\n\n Raises:\n ValueError: In case ``extensions`` and ``is_valid_file`` are None or both are not None.\n\n Returns:\n List[Tuple[str, int]]: samples of a form (path_to_sample, class)\n \"\"\"\n instances = []\n directory = os.path.expanduser(directory)\n both_none = extensions is None and is_valid_file is None\n both_something = extensions is not None and is_valid_file is not None\n if both_none or both_something:\n raise ValueError(\"Both extensions and is_valid_file cannot be None or not None at the same time\")\n if extensions is not None:\n def is_valid_file(x: str) -> bool:\n return has_file_allowed_extension(x, cast(Tuple[str, ...], extensions))\n is_valid_file = cast(Callable[[str], bool], is_valid_file)\n for target_class in sorted(class_to_idx.keys()):\n class_index = class_to_idx[target_class]\n target_dir = os.path.join(directory, target_class)\n if not os.path.isdir(target_dir):\n continue\n for root, _, fnames in sorted(os.walk(target_dir, followlinks=True)):\n for fname in sorted(fnames):\n path = os.path.join(root, fname)\n if is_valid_file(path):\n item = path, class_index\n instances.append(item)\n return instances\n\n\nclass DatasetFolder(VisionDataset):\n \"\"\"A generic data loader where the samples are arranged in this way: ::\n\n root/class_x/xxx.ext\n root/class_x/xxy.ext\n root/class_x/xxz.ext\n\n root/class_y/123.ext\n root/class_y/nsdf3.ext\n root/class_y/asd932_.ext\n\n Args:\n root (string): Root directory path.\n loader (callable): A function to load a sample given its path.\n extensions (tuple[string]): A list of allowed extensions.\n both extensions and is_valid_file should not be passed.\n transform (callable, optional): A function/transform that takes in\n a sample and returns a transformed version.\n E.g, ``transforms.RandomCrop`` for images.\n target_transform (callable, optional): A function/transform that takes\n in the target and transforms it.\n is_valid_file (callable, optional): A function that takes path of a file\n and check if the file is a valid file (used to check of corrupt files)\n both extensions and is_valid_file should not be passed.\n\n Attributes:\n classes (list): List of the class names sorted alphabetically.\n class_to_idx (dict): Dict with items (class_name, class_index).\n samples (list): List of (sample path, class_index) tuples\n targets (list): The class_index value for each image in the dataset\n \"\"\"\n\n def __init__(\n self,\n root: str,\n loader: Callable[[str], Any],\n extensions: Optional[Tuple[str, ...]] = None,\n transform: Optional[Callable] = None,\n target_transform: Optional[Callable] = None,\n is_valid_file: Optional[Callable[[str], bool]] = None,\n ) -> None:\n super(DatasetFolder, self).__init__(root, transform=transform,\n target_transform=target_transform)\n classes, class_to_idx = self._find_classes(self.root)\n samples = make_dataset(self.root, class_to_idx, extensions, is_valid_file)\n if len(samples) == 0:\n msg = \"Found 0 files in subfolders of: {}\\n\".format(self.root)\n if extensions is not None:\n msg += \"Supported extensions are: {}\".format(\",\".join(extensions))\n raise RuntimeError(msg)\n\n self.loader = loader\n self.extensions = extensions\n\n self.classes = classes\n self.class_to_idx = class_to_idx\n self.samples = samples\n self.targets = [s[1] for s in samples]\n\n def _find_classes(self, dir: str) -> Tuple[List[str], Dict[str, int]]:\n \"\"\"\n Finds the class folders in a dataset.\n\n Args:\n dir (string): Root directory path.\n\n Returns:\n tuple: (classes, 
class_to_idx) where classes are relative to (dir), and class_to_idx is a dictionary.\n\n Ensures:\n No class is a subdirectory of another.\n \"\"\"\n classes = [d.name for d in os.scandir(dir) if d.is_dir()]\n classes.sort()\n class_to_idx = {cls_name: i for i, cls_name in enumerate(classes)}\n return classes, class_to_idx\n\n def __getitem__(self, index: int) -> Tuple[Any, Any]:\n \"\"\"\n Args:\n index (int): Index\n\n Returns:\n tuple: (sample, target) where target is class_index of the target class.\n \"\"\"\n path, target = self.samples[index]\n sample = self.loader(path)\n if self.transform is not None:\n sample = self.transform(sample)\n if self.target_transform is not None:\n target = self.target_transform(target)\n\n return sample, target\n\n def __len__(self) -> int:\n return len(self.samples)\n\n\nIMG_EXTENSIONS = ('.jpg', '.jpeg', '.png', '.ppm', '.bmp', '.pgm', '.tif', '.tiff', '.webp')\n\n\ndef pil_loader(path: str) -> Image.Image:\n # open path as file to avoid ResourceWarning (https://github.com/python-pillow/Pillow/issues/835)\n with open(path, 'rb') as f:\n img = Image.open(f)\n return img.convert('RGB')\n\n\n# TODO: specify the return type\ndef accimage_loader(path: str) -> Any:\n import accimage\n try:\n return accimage.Image(path)\n except IOError:\n # Potentially a decoding problem, fall back to PIL.Image\n return pil_loader(path)\n\n\ndef default_loader(path: str) -> Any:\n from torchvision import get_image_backend\n if get_image_backend() == 'accimage':\n return accimage_loader(path)\n else:\n return pil_loader(path)\n\n\nclass ImageFolder(DatasetFolder):\n \"\"\"A generic data loader where the images are arranged in this way: ::\n\n root/dog/xxx.png\n root/dog/xxy.png\n root/dog/xxz.png\n\n root/cat/123.png\n root/cat/nsdf3.png\n root/cat/asd932_.png\n\n Args:\n root (string): Root directory path.\n transform (callable, optional): A function/transform that takes in an PIL image\n and returns a transformed version. 
E.g, ``transforms.RandomCrop``\n target_transform (callable, optional): A function/transform that takes in the\n target and transforms it.\n loader (callable, optional): A function to load an image given its path.\n is_valid_file (callable, optional): A function that takes path of an Image file\n and check if the file is a valid file (used to check of corrupt files)\n\n Attributes:\n classes (list): List of the class names sorted alphabetically.\n class_to_idx (dict): Dict with items (class_name, class_index).\n imgs (list): List of (image path, class_index) tuples\n \"\"\"\n\n def __init__(\n self,\n root: str,\n transform: Optional[Callable] = None,\n target_transform: Optional[Callable] = None,\n loader: Callable[[str], Any] = default_loader,\n is_valid_file: Optional[Callable[[str], bool]] = None,\n ):\n super(ImageFolder, self).__init__(root, loader, IMG_EXTENSIONS if is_valid_file is None else None,\n transform=transform,\n target_transform=target_transform,\n is_valid_file=is_valid_file)\n self.imgs = self.samples\n", "path": "torchvision/datasets/folder.py"}], "after_files": [{"content": "from .vision import VisionDataset\n\nfrom PIL import Image\n\nimport os\nimport os.path\nfrom typing import Any, Callable, cast, Dict, List, Optional, Tuple\n\n\ndef has_file_allowed_extension(filename: str, extensions: Tuple[str, ...]) -> bool:\n \"\"\"Checks if a file is an allowed extension.\n\n Args:\n filename (string): path to a file\n extensions (tuple of strings): extensions to consider (lowercase)\n\n Returns:\n bool: True if the filename ends with one of given extensions\n \"\"\"\n return filename.lower().endswith(extensions)\n\n\ndef is_image_file(filename: str) -> bool:\n \"\"\"Checks if a file is an allowed image extension.\n\n Args:\n filename (string): path to a file\n\n Returns:\n bool: True if the filename ends with a known image extension\n \"\"\"\n return has_file_allowed_extension(filename, IMG_EXTENSIONS)\n\n\ndef make_dataset(\n directory: str,\n class_to_idx: Dict[str, int],\n extensions: Optional[Tuple[str, ...]] = None,\n is_valid_file: Optional[Callable[[str], bool]] = None,\n) -> List[Tuple[str, int]]:\n \"\"\"Generates a list of samples of a form (path_to_sample, class).\n\n Args:\n directory (str): root dataset directory\n class_to_idx (Dict[str, int]): dictionary mapping class name to class index\n extensions (optional): A list of allowed extensions.\n Either extensions or is_valid_file should be passed. Defaults to None.\n is_valid_file (optional): A function that takes path of a file\n and checks if the file is a valid file\n (used to check of corrupt files) both extensions and\n is_valid_file should not be passed. 
Defaults to None.\n\n Raises:\n ValueError: In case ``extensions`` and ``is_valid_file`` are None or both are not None.\n\n Returns:\n List[Tuple[str, int]]: samples of a form (path_to_sample, class)\n \"\"\"\n instances = []\n directory = os.path.expanduser(directory)\n both_none = extensions is None and is_valid_file is None\n both_something = extensions is not None and is_valid_file is not None\n if both_none or both_something:\n raise ValueError(\"Both extensions and is_valid_file cannot be None or not None at the same time\")\n if extensions is not None:\n def is_valid_file(x: str) -> bool:\n return has_file_allowed_extension(x, cast(Tuple[str, ...], extensions))\n is_valid_file = cast(Callable[[str], bool], is_valid_file)\n for target_class in sorted(class_to_idx.keys()):\n class_index = class_to_idx[target_class]\n target_dir = os.path.join(directory, target_class)\n if not os.path.isdir(target_dir):\n continue\n for root, _, fnames in sorted(os.walk(target_dir, followlinks=True)):\n for fname in sorted(fnames):\n path = os.path.join(root, fname)\n if is_valid_file(path):\n item = path, class_index\n instances.append(item)\n return instances\n\n\nclass DatasetFolder(VisionDataset):\n \"\"\"A generic data loader where the samples are arranged in this way: ::\n\n root/class_x/xxx.ext\n root/class_x/xxy.ext\n root/class_x/[...]/xxz.ext\n\n root/class_y/123.ext\n root/class_y/nsdf3.ext\n root/class_y/[...]/asd932_.ext\n\n Args:\n root (string): Root directory path.\n loader (callable): A function to load a sample given its path.\n extensions (tuple[string]): A list of allowed extensions.\n both extensions and is_valid_file should not be passed.\n transform (callable, optional): A function/transform that takes in\n a sample and returns a transformed version.\n E.g, ``transforms.RandomCrop`` for images.\n target_transform (callable, optional): A function/transform that takes\n in the target and transforms it.\n is_valid_file (callable, optional): A function that takes path of a file\n and check if the file is a valid file (used to check of corrupt files)\n both extensions and is_valid_file should not be passed.\n\n Attributes:\n classes (list): List of the class names sorted alphabetically.\n class_to_idx (dict): Dict with items (class_name, class_index).\n samples (list): List of (sample path, class_index) tuples\n targets (list): The class_index value for each image in the dataset\n \"\"\"\n\n def __init__(\n self,\n root: str,\n loader: Callable[[str], Any],\n extensions: Optional[Tuple[str, ...]] = None,\n transform: Optional[Callable] = None,\n target_transform: Optional[Callable] = None,\n is_valid_file: Optional[Callable[[str], bool]] = None,\n ) -> None:\n super(DatasetFolder, self).__init__(root, transform=transform,\n target_transform=target_transform)\n classes, class_to_idx = self._find_classes(self.root)\n samples = make_dataset(self.root, class_to_idx, extensions, is_valid_file)\n if len(samples) == 0:\n msg = \"Found 0 files in subfolders of: {}\\n\".format(self.root)\n if extensions is not None:\n msg += \"Supported extensions are: {}\".format(\",\".join(extensions))\n raise RuntimeError(msg)\n\n self.loader = loader\n self.extensions = extensions\n\n self.classes = classes\n self.class_to_idx = class_to_idx\n self.samples = samples\n self.targets = [s[1] for s in samples]\n\n def _find_classes(self, dir: str) -> Tuple[List[str], Dict[str, int]]:\n \"\"\"\n Finds the class folders in a dataset.\n\n Args:\n dir (string): Root directory path.\n\n Returns:\n tuple: (classes, 
class_to_idx) where classes are relative to (dir), and class_to_idx is a dictionary.\n\n Ensures:\n No class is a subdirectory of another.\n \"\"\"\n classes = [d.name for d in os.scandir(dir) if d.is_dir()]\n classes.sort()\n class_to_idx = {cls_name: i for i, cls_name in enumerate(classes)}\n return classes, class_to_idx\n\n def __getitem__(self, index: int) -> Tuple[Any, Any]:\n \"\"\"\n Args:\n index (int): Index\n\n Returns:\n tuple: (sample, target) where target is class_index of the target class.\n \"\"\"\n path, target = self.samples[index]\n sample = self.loader(path)\n if self.transform is not None:\n sample = self.transform(sample)\n if self.target_transform is not None:\n target = self.target_transform(target)\n\n return sample, target\n\n def __len__(self) -> int:\n return len(self.samples)\n\n\nIMG_EXTENSIONS = ('.jpg', '.jpeg', '.png', '.ppm', '.bmp', '.pgm', '.tif', '.tiff', '.webp')\n\n\ndef pil_loader(path: str) -> Image.Image:\n # open path as file to avoid ResourceWarning (https://github.com/python-pillow/Pillow/issues/835)\n with open(path, 'rb') as f:\n img = Image.open(f)\n return img.convert('RGB')\n\n\n# TODO: specify the return type\ndef accimage_loader(path: str) -> Any:\n import accimage\n try:\n return accimage.Image(path)\n except IOError:\n # Potentially a decoding problem, fall back to PIL.Image\n return pil_loader(path)\n\n\ndef default_loader(path: str) -> Any:\n from torchvision import get_image_backend\n if get_image_backend() == 'accimage':\n return accimage_loader(path)\n else:\n return pil_loader(path)\n\n\nclass ImageFolder(DatasetFolder):\n \"\"\"A generic data loader where the images are arranged in this way: ::\n\n root/dog/xxx.png\n root/dog/xxy.png\n root/dog/[...]/xxz.png\n\n root/cat/123.png\n root/cat/nsdf3.png\n root/cat/[...]/asd932_.png\n\n Args:\n root (string): Root directory path.\n transform (callable, optional): A function/transform that takes in an PIL image\n and returns a transformed version. E.g, ``transforms.RandomCrop``\n target_transform (callable, optional): A function/transform that takes in the\n target and transforms it.\n loader (callable, optional): A function to load an image given its path.\n is_valid_file (callable, optional): A function that takes path of an Image file\n and check if the file is a valid file (used to check of corrupt files)\n\n Attributes:\n classes (list): List of the class names sorted alphabetically.\n class_to_idx (dict): Dict with items (class_name, class_index).\n imgs (list): List of (image path, class_index) tuples\n \"\"\"\n\n def __init__(\n self,\n root: str,\n transform: Optional[Callable] = None,\n target_transform: Optional[Callable] = None,\n loader: Callable[[str], Any] = default_loader,\n is_valid_file: Optional[Callable[[str], bool]] = None,\n ):\n super(ImageFolder, self).__init__(root, loader, IMG_EXTENSIONS if is_valid_file is None else None,\n transform=transform,\n target_transform=target_transform,\n is_valid_file=is_valid_file)\n self.imgs = self.samples\n", "path": "torchvision/datasets/folder.py"}]}
| 3,665 | 266 |
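The DatasetFolder report above hinges on `os.walk()` descending into subdirectories, so `make_dataset()` collects files nested arbitrarily deep below each class folder; that is exactly why the docstrings gain the `[...]` path segment in the golden diff. A minimal sketch of that behaviour on a throwaway directory tree (the paths are illustrative only):

```python
import os
import tempfile

# Build root/class_x/ with one file at the top level and one in a nested folder.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "class_x", "nested"))
for rel in ("class_x/xxx.png", "class_x/nested/xxz.png"):
    open(os.path.join(root, rel), "w").close()

# Mirrors the walk in make_dataset(): both files are picked up, including the
# nested one, so the accepted layout really is root/class_x/[...]/xxz.ext.
found = []
for dirpath, _, fnames in sorted(os.walk(os.path.join(root, "class_x"), followlinks=True)):
    for fname in sorted(fnames):
        if fname.lower().endswith(".png"):
            found.append(os.path.join(dirpath, fname))
print(found)  # lists both xxx.png and nested/xxz.png
```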
gh_patches_debug_39887
|
rasdani/github-patches
|
git_diff
|
translate__pootle-6699
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pootle FS file system backends list isn't sorted
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pootle/apps/pootle_fs/forms.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 import uuid
10 from collections import Counter, OrderedDict
11
12 from django import forms
13 from django.utils.functional import cached_property
14
15 from pootle.core.delegate import revision
16 from pootle.i18n.gettext import ugettext_lazy as _
17 from pootle_language.models import Language
18
19 from .delegate import (
20 fs_plugins, fs_translation_mapping_validator, fs_url_validator)
21
22
23 FS_CHOICES = (
24 ("gnu", _("GNU-style"), "/po/<language_code>.<ext>"),
25 ("non-gnu",
26 _("non GNU-style"),
27 "/<language_code>/<dir_path>/<filename>.<ext>"),
28 ("django",
29 _("Django-style"),
30 "/locale/<language_code>/LC_MESSAGES/<filename>.<ext>"),
31 ("custom", _("Custom"), ""))
32
33
34 class ProjectFSAdminForm(forms.Form):
35
36 fs_type = forms.ChoiceField(
37 label=_("Filesystem backend"),
38 help_text=_("Select a filesystem backend"),
39 choices=(),
40 widget=forms.Select(
41 attrs={'class': 'js-select2'}))
42 fs_url = forms.CharField(
43 label=_("Backend URL or path"),
44 help_text=_(
45 "The URL or path to your translation files"))
46 translation_mapping_presets = forms.ChoiceField(
47 label=_("Translation mapping presets"),
48 required=False,
49 choices=(
50 [("", "-----"), ]
51 + [(x[0], x[1]) for x in FS_CHOICES]),
52 widget=forms.Select(
53 attrs={'class': 'js-select2 js-select-fs-mapping'}))
54 translation_mapping = forms.CharField(
55 label=_("Translation path mapping"),
56 help_text=_("Translation path mapping that maps the localisation "
57 "files on the filesystem to files on Pootle."),
58 widget=forms.TextInput(
59 attrs={'class': 'js-select-fs-mapping-target'}))
60
61 def should_save(self):
62 return self.is_valid()
63
64 @property
65 def fs_type_choices(self):
66 return (
67 (plugin_type, plugin.name or plugin.fs_type)
68 for plugin_type, plugin
69 in fs_plugins.gather().items())
70
71 def __init__(self, *args, **kwargs):
72 self.project = kwargs.pop("project")
73 super(ProjectFSAdminForm, self).__init__(*args, **kwargs)
74 self.fields["fs_type"].choices = self.fs_type_choices
75
76 self.fields["fs_url"].initial = self.project.config.get("pootle_fs.fs_url")
77 self.fields["fs_type"].initial = (
78 self.project.config.get("pootle_fs.fs_type"))
79 translation_mapping = (
80 self.project.config.get("pootle_fs.translation_mappings"))
81 if translation_mapping:
82 self.fields["translation_mapping"].initial = (
83 translation_mapping.get("default"))
84
85 @property
86 def fs_path_validator(self):
87 return fs_translation_mapping_validator.get()
88
89 @cached_property
90 def fs_plugin(self):
91 if self.cleaned_data.get("fs_type"):
92 return fs_plugins.gather()[self.cleaned_data["fs_type"]]
93
94 @cached_property
95 def fs_url_validator(self):
96 validator = fs_url_validator.get(self.fs_plugin)
97 return validator and validator()
98
99 def clean(self):
100 if not hasattr(self, "cleaned_data") or not self.cleaned_data:
101 return
102 if self.cleaned_data.get("translation_mapping"):
103 try:
104 self.fs_path_validator(
105 self.cleaned_data["translation_mapping"]).validate()
106 except ValueError as e:
107 self.add_error("translation_mapping", e)
108 if not self.fs_url_validator or not self.cleaned_data.get("fs_url"):
109 return
110 try:
111 self.fs_url_validator.validate(self.cleaned_data["fs_url"])
112 except forms.ValidationError as e:
113 self.add_error(
114 "fs_url",
115 forms.ValidationError(
116 "Incorrect URL or path ('%s') for plugin type '%s': %s"
117 % (self.cleaned_data.get("fs_url"),
118 self.cleaned_data.get("fs_type"),
119 e)))
120
121 def save(self):
122 self.project.config["pootle_fs.fs_type"] = self.cleaned_data["fs_type"]
123 self.project.config["pootle_fs.fs_url"] = self.cleaned_data["fs_url"]
124 self.project.config["pootle_fs.translation_mappings"] = dict(
125 default=self.cleaned_data["translation_mapping"])
126
127
128 class LangMappingForm(forms.Form):
129 remove = forms.BooleanField(required=False)
130 pootle_code = forms.ModelChoiceField(
131 Language.objects.all(),
132 to_field_name="code",
133 widget=forms.Select(attrs={'class': 'js-select2'}))
134 fs_code = forms.CharField(
135 max_length=32)
136
137 def __init__(self, *args, **kwargs):
138 self.project = kwargs.pop("project")
139 existing_codes = kwargs.pop("existing_codes")
140 super(LangMappingForm, self).__init__(*args, **kwargs)
141 if existing_codes:
142 excluded_codes = (
143 [c for c in existing_codes if c != self.initial["pootle_code"]]
144 if self.initial and self.initial.get("pootle_code")
145 else existing_codes)
146 self.fields["pootle_code"].queryset = (
147 self.fields["pootle_code"].queryset.exclude(
148 code__in=excluded_codes))
149
150
151 class BaseLangMappingFormSet(forms.BaseFormSet):
152
153 def __init__(self, *args, **kwargs):
154 self.project = kwargs.pop("project")
155 mappings = self.project.config.get("pootle.core.lang_mapping", {})
156 if mappings:
157 kwargs["initial"] = [
158 dict(pootle_code=v, fs_code=k)
159 for k, v in mappings.items()]
160 super(BaseLangMappingFormSet, self).__init__(*args, **kwargs)
161
162 @property
163 def cleaned_mapping(self):
164 mapping = OrderedDict()
165 for mapped in self.cleaned_data:
166 if not mapped or mapped["remove"]:
167 continue
168 mapping[mapped["fs_code"]] = mapped["pootle_code"].code
169 return mapping
170
171 def save(self):
172 self.project.config["pootle.core.lang_mapping"] = self.cleaned_mapping
173 revision.get(self.project.__class__)(self.project).set(
174 keys=["pootle.fs.sync"], value=uuid.uuid4().hex)
175
176 def clean(self):
177 if any(self.errors):
178 return
179 fs_counter = Counter([v["fs_code"] for v in self.cleaned_data if v])
180 if set(fs_counter.values()) != set([1]):
181 raise forms.ValidationError(
182 _("Filesystem language codes must be unique"))
183 pootle_counter = Counter([v["pootle_code"] for v in self.cleaned_data if v])
184 if set(pootle_counter.values()) != set([1]):
185 raise forms.ValidationError(
186 _("Pootle language mappings must be unique"))
187
188 def get_form_kwargs(self, index):
189 kwargs = super(BaseLangMappingFormSet, self).get_form_kwargs(index)
190 kwargs["project"] = self.project
191 kwargs["existing_codes"] = (
192 [i["pootle_code"] for i in self.initial]
193 if self.initial
194 else [])
195 return kwargs
196
197
198 LangMappingFormSet = forms.formset_factory(
199 LangMappingForm,
200 formset=BaseLangMappingFormSet)
201
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pootle/apps/pootle_fs/forms.py b/pootle/apps/pootle_fs/forms.py
--- a/pootle/apps/pootle_fs/forms.py
+++ b/pootle/apps/pootle_fs/forms.py
@@ -33,12 +33,6 @@
class ProjectFSAdminForm(forms.Form):
- fs_type = forms.ChoiceField(
- label=_("Filesystem backend"),
- help_text=_("Select a filesystem backend"),
- choices=(),
- widget=forms.Select(
- attrs={'class': 'js-select2'}))
fs_url = forms.CharField(
label=_("Backend URL or path"),
help_text=_(
@@ -61,36 +55,26 @@
def should_save(self):
return self.is_valid()
- @property
- def fs_type_choices(self):
- return (
- (plugin_type, plugin.name or plugin.fs_type)
- for plugin_type, plugin
- in fs_plugins.gather().items())
-
def __init__(self, *args, **kwargs):
self.project = kwargs.pop("project")
super(ProjectFSAdminForm, self).__init__(*args, **kwargs)
- self.fields["fs_type"].choices = self.fs_type_choices
self.fields["fs_url"].initial = self.project.config.get("pootle_fs.fs_url")
- self.fields["fs_type"].initial = (
- self.project.config.get("pootle_fs.fs_type"))
translation_mapping = (
self.project.config.get("pootle_fs.translation_mappings"))
if translation_mapping:
self.fields["translation_mapping"].initial = (
translation_mapping.get("default"))
+ @cached_property
+ def fs_plugin(self):
+ return fs_plugins.gather()[
+ self.project.config["pootle_fs.fs_type"]]
+
@property
def fs_path_validator(self):
return fs_translation_mapping_validator.get()
- @cached_property
- def fs_plugin(self):
- if self.cleaned_data.get("fs_type"):
- return fs_plugins.gather()[self.cleaned_data["fs_type"]]
-
@cached_property
def fs_url_validator(self):
validator = fs_url_validator.get(self.fs_plugin)
@@ -115,11 +99,10 @@
forms.ValidationError(
"Incorrect URL or path ('%s') for plugin type '%s': %s"
% (self.cleaned_data.get("fs_url"),
- self.cleaned_data.get("fs_type"),
+ self.fs_plugin.fs_type,
e)))
def save(self):
- self.project.config["pootle_fs.fs_type"] = self.cleaned_data["fs_type"]
self.project.config["pootle_fs.fs_url"] = self.cleaned_data["fs_url"]
self.project.config["pootle_fs.translation_mappings"] = dict(
default=self.cleaned_data["translation_mapping"])
|
{"golden_diff": "diff --git a/pootle/apps/pootle_fs/forms.py b/pootle/apps/pootle_fs/forms.py\n--- a/pootle/apps/pootle_fs/forms.py\n+++ b/pootle/apps/pootle_fs/forms.py\n@@ -33,12 +33,6 @@\n \n class ProjectFSAdminForm(forms.Form):\n \n- fs_type = forms.ChoiceField(\n- label=_(\"Filesystem backend\"),\n- help_text=_(\"Select a filesystem backend\"),\n- choices=(),\n- widget=forms.Select(\n- attrs={'class': 'js-select2'}))\n fs_url = forms.CharField(\n label=_(\"Backend URL or path\"),\n help_text=_(\n@@ -61,36 +55,26 @@\n def should_save(self):\n return self.is_valid()\n \n- @property\n- def fs_type_choices(self):\n- return (\n- (plugin_type, plugin.name or plugin.fs_type)\n- for plugin_type, plugin\n- in fs_plugins.gather().items())\n-\n def __init__(self, *args, **kwargs):\n self.project = kwargs.pop(\"project\")\n super(ProjectFSAdminForm, self).__init__(*args, **kwargs)\n- self.fields[\"fs_type\"].choices = self.fs_type_choices\n \n self.fields[\"fs_url\"].initial = self.project.config.get(\"pootle_fs.fs_url\")\n- self.fields[\"fs_type\"].initial = (\n- self.project.config.get(\"pootle_fs.fs_type\"))\n translation_mapping = (\n self.project.config.get(\"pootle_fs.translation_mappings\"))\n if translation_mapping:\n self.fields[\"translation_mapping\"].initial = (\n translation_mapping.get(\"default\"))\n \n+ @cached_property\n+ def fs_plugin(self):\n+ return fs_plugins.gather()[\n+ self.project.config[\"pootle_fs.fs_type\"]]\n+\n @property\n def fs_path_validator(self):\n return fs_translation_mapping_validator.get()\n \n- @cached_property\n- def fs_plugin(self):\n- if self.cleaned_data.get(\"fs_type\"):\n- return fs_plugins.gather()[self.cleaned_data[\"fs_type\"]]\n-\n @cached_property\n def fs_url_validator(self):\n validator = fs_url_validator.get(self.fs_plugin)\n@@ -115,11 +99,10 @@\n forms.ValidationError(\n \"Incorrect URL or path ('%s') for plugin type '%s': %s\"\n % (self.cleaned_data.get(\"fs_url\"),\n- self.cleaned_data.get(\"fs_type\"),\n+ self.fs_plugin.fs_type,\n e)))\n \n def save(self):\n- self.project.config[\"pootle_fs.fs_type\"] = self.cleaned_data[\"fs_type\"]\n self.project.config[\"pootle_fs.fs_url\"] = self.cleaned_data[\"fs_url\"]\n self.project.config[\"pootle_fs.translation_mappings\"] = dict(\n default=self.cleaned_data[\"translation_mapping\"])\n", "issue": "Pootle FS file system backends list isn't sorted\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. 
See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport uuid\nfrom collections import Counter, OrderedDict\n\nfrom django import forms\nfrom django.utils.functional import cached_property\n\nfrom pootle.core.delegate import revision\nfrom pootle.i18n.gettext import ugettext_lazy as _\nfrom pootle_language.models import Language\n\nfrom .delegate import (\n fs_plugins, fs_translation_mapping_validator, fs_url_validator)\n\n\nFS_CHOICES = (\n (\"gnu\", _(\"GNU-style\"), \"/po/<language_code>.<ext>\"),\n (\"non-gnu\",\n _(\"non GNU-style\"),\n \"/<language_code>/<dir_path>/<filename>.<ext>\"),\n (\"django\",\n _(\"Django-style\"),\n \"/locale/<language_code>/LC_MESSAGES/<filename>.<ext>\"),\n (\"custom\", _(\"Custom\"), \"\"))\n\n\nclass ProjectFSAdminForm(forms.Form):\n\n fs_type = forms.ChoiceField(\n label=_(\"Filesystem backend\"),\n help_text=_(\"Select a filesystem backend\"),\n choices=(),\n widget=forms.Select(\n attrs={'class': 'js-select2'}))\n fs_url = forms.CharField(\n label=_(\"Backend URL or path\"),\n help_text=_(\n \"The URL or path to your translation files\"))\n translation_mapping_presets = forms.ChoiceField(\n label=_(\"Translation mapping presets\"),\n required=False,\n choices=(\n [(\"\", \"-----\"), ]\n + [(x[0], x[1]) for x in FS_CHOICES]),\n widget=forms.Select(\n attrs={'class': 'js-select2 js-select-fs-mapping'}))\n translation_mapping = forms.CharField(\n label=_(\"Translation path mapping\"),\n help_text=_(\"Translation path mapping that maps the localisation \"\n \"files on the filesystem to files on Pootle.\"),\n widget=forms.TextInput(\n attrs={'class': 'js-select-fs-mapping-target'}))\n\n def should_save(self):\n return self.is_valid()\n\n @property\n def fs_type_choices(self):\n return (\n (plugin_type, plugin.name or plugin.fs_type)\n for plugin_type, plugin\n in fs_plugins.gather().items())\n\n def __init__(self, *args, **kwargs):\n self.project = kwargs.pop(\"project\")\n super(ProjectFSAdminForm, self).__init__(*args, **kwargs)\n self.fields[\"fs_type\"].choices = self.fs_type_choices\n\n self.fields[\"fs_url\"].initial = self.project.config.get(\"pootle_fs.fs_url\")\n self.fields[\"fs_type\"].initial = (\n self.project.config.get(\"pootle_fs.fs_type\"))\n translation_mapping = (\n self.project.config.get(\"pootle_fs.translation_mappings\"))\n if translation_mapping:\n self.fields[\"translation_mapping\"].initial = (\n translation_mapping.get(\"default\"))\n\n @property\n def fs_path_validator(self):\n return fs_translation_mapping_validator.get()\n\n @cached_property\n def fs_plugin(self):\n if self.cleaned_data.get(\"fs_type\"):\n return fs_plugins.gather()[self.cleaned_data[\"fs_type\"]]\n\n @cached_property\n def fs_url_validator(self):\n validator = fs_url_validator.get(self.fs_plugin)\n return validator and validator()\n\n def clean(self):\n if not hasattr(self, \"cleaned_data\") or not self.cleaned_data:\n return\n if self.cleaned_data.get(\"translation_mapping\"):\n try:\n self.fs_path_validator(\n self.cleaned_data[\"translation_mapping\"]).validate()\n except ValueError as e:\n self.add_error(\"translation_mapping\", e)\n if not self.fs_url_validator or not self.cleaned_data.get(\"fs_url\"):\n return\n try:\n self.fs_url_validator.validate(self.cleaned_data[\"fs_url\"])\n except forms.ValidationError as e:\n self.add_error(\n \"fs_url\",\n forms.ValidationError(\n \"Incorrect URL or path ('%s') for plugin type '%s': %s\"\n % (self.cleaned_data.get(\"fs_url\"),\n 
self.cleaned_data.get(\"fs_type\"),\n e)))\n\n def save(self):\n self.project.config[\"pootle_fs.fs_type\"] = self.cleaned_data[\"fs_type\"]\n self.project.config[\"pootle_fs.fs_url\"] = self.cleaned_data[\"fs_url\"]\n self.project.config[\"pootle_fs.translation_mappings\"] = dict(\n default=self.cleaned_data[\"translation_mapping\"])\n\n\nclass LangMappingForm(forms.Form):\n remove = forms.BooleanField(required=False)\n pootle_code = forms.ModelChoiceField(\n Language.objects.all(),\n to_field_name=\"code\",\n widget=forms.Select(attrs={'class': 'js-select2'}))\n fs_code = forms.CharField(\n max_length=32)\n\n def __init__(self, *args, **kwargs):\n self.project = kwargs.pop(\"project\")\n existing_codes = kwargs.pop(\"existing_codes\")\n super(LangMappingForm, self).__init__(*args, **kwargs)\n if existing_codes:\n excluded_codes = (\n [c for c in existing_codes if c != self.initial[\"pootle_code\"]]\n if self.initial and self.initial.get(\"pootle_code\")\n else existing_codes)\n self.fields[\"pootle_code\"].queryset = (\n self.fields[\"pootle_code\"].queryset.exclude(\n code__in=excluded_codes))\n\n\nclass BaseLangMappingFormSet(forms.BaseFormSet):\n\n def __init__(self, *args, **kwargs):\n self.project = kwargs.pop(\"project\")\n mappings = self.project.config.get(\"pootle.core.lang_mapping\", {})\n if mappings:\n kwargs[\"initial\"] = [\n dict(pootle_code=v, fs_code=k)\n for k, v in mappings.items()]\n super(BaseLangMappingFormSet, self).__init__(*args, **kwargs)\n\n @property\n def cleaned_mapping(self):\n mapping = OrderedDict()\n for mapped in self.cleaned_data:\n if not mapped or mapped[\"remove\"]:\n continue\n mapping[mapped[\"fs_code\"]] = mapped[\"pootle_code\"].code\n return mapping\n\n def save(self):\n self.project.config[\"pootle.core.lang_mapping\"] = self.cleaned_mapping\n revision.get(self.project.__class__)(self.project).set(\n keys=[\"pootle.fs.sync\"], value=uuid.uuid4().hex)\n\n def clean(self):\n if any(self.errors):\n return\n fs_counter = Counter([v[\"fs_code\"] for v in self.cleaned_data if v])\n if set(fs_counter.values()) != set([1]):\n raise forms.ValidationError(\n _(\"Filesystem language codes must be unique\"))\n pootle_counter = Counter([v[\"pootle_code\"] for v in self.cleaned_data if v])\n if set(pootle_counter.values()) != set([1]):\n raise forms.ValidationError(\n _(\"Pootle language mappings must be unique\"))\n\n def get_form_kwargs(self, index):\n kwargs = super(BaseLangMappingFormSet, self).get_form_kwargs(index)\n kwargs[\"project\"] = self.project\n kwargs[\"existing_codes\"] = (\n [i[\"pootle_code\"] for i in self.initial]\n if self.initial\n else [])\n return kwargs\n\n\nLangMappingFormSet = forms.formset_factory(\n LangMappingForm,\n formset=BaseLangMappingFormSet)\n", "path": "pootle/apps/pootle_fs/forms.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. 
See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport uuid\nfrom collections import Counter, OrderedDict\n\nfrom django import forms\nfrom django.utils.functional import cached_property\n\nfrom pootle.core.delegate import revision\nfrom pootle.i18n.gettext import ugettext_lazy as _\nfrom pootle_language.models import Language\n\nfrom .delegate import (\n fs_plugins, fs_translation_mapping_validator, fs_url_validator)\n\n\nFS_CHOICES = (\n (\"gnu\", _(\"GNU-style\"), \"/po/<language_code>.<ext>\"),\n (\"non-gnu\",\n _(\"non GNU-style\"),\n \"/<language_code>/<dir_path>/<filename>.<ext>\"),\n (\"django\",\n _(\"Django-style\"),\n \"/locale/<language_code>/LC_MESSAGES/<filename>.<ext>\"),\n (\"custom\", _(\"Custom\"), \"\"))\n\n\nclass ProjectFSAdminForm(forms.Form):\n\n fs_url = forms.CharField(\n label=_(\"Backend URL or path\"),\n help_text=_(\n \"The URL or path to your translation files\"))\n translation_mapping_presets = forms.ChoiceField(\n label=_(\"Translation mapping presets\"),\n required=False,\n choices=(\n [(\"\", \"-----\"), ]\n + [(x[0], x[1]) for x in FS_CHOICES]),\n widget=forms.Select(\n attrs={'class': 'js-select2 js-select-fs-mapping'}))\n translation_mapping = forms.CharField(\n label=_(\"Translation path mapping\"),\n help_text=_(\"Translation path mapping that maps the localisation \"\n \"files on the filesystem to files on Pootle.\"),\n widget=forms.TextInput(\n attrs={'class': 'js-select-fs-mapping-target'}))\n\n def should_save(self):\n return self.is_valid()\n\n def __init__(self, *args, **kwargs):\n self.project = kwargs.pop(\"project\")\n super(ProjectFSAdminForm, self).__init__(*args, **kwargs)\n\n self.fields[\"fs_url\"].initial = self.project.config.get(\"pootle_fs.fs_url\")\n translation_mapping = (\n self.project.config.get(\"pootle_fs.translation_mappings\"))\n if translation_mapping:\n self.fields[\"translation_mapping\"].initial = (\n translation_mapping.get(\"default\"))\n\n @cached_property\n def fs_plugin(self):\n return fs_plugins.gather()[\n self.project.config[\"pootle_fs.fs_type\"]]\n\n @property\n def fs_path_validator(self):\n return fs_translation_mapping_validator.get()\n\n @cached_property\n def fs_url_validator(self):\n validator = fs_url_validator.get(self.fs_plugin)\n return validator and validator()\n\n def clean(self):\n if not hasattr(self, \"cleaned_data\") or not self.cleaned_data:\n return\n if self.cleaned_data.get(\"translation_mapping\"):\n try:\n self.fs_path_validator(\n self.cleaned_data[\"translation_mapping\"]).validate()\n except ValueError as e:\n self.add_error(\"translation_mapping\", e)\n if not self.fs_url_validator or not self.cleaned_data.get(\"fs_url\"):\n return\n try:\n self.fs_url_validator.validate(self.cleaned_data[\"fs_url\"])\n except forms.ValidationError as e:\n self.add_error(\n \"fs_url\",\n forms.ValidationError(\n \"Incorrect URL or path ('%s') for plugin type '%s': %s\"\n % (self.cleaned_data.get(\"fs_url\"),\n self.fs_plugin.fs_type,\n e)))\n\n def save(self):\n self.project.config[\"pootle_fs.fs_url\"] = self.cleaned_data[\"fs_url\"]\n self.project.config[\"pootle_fs.translation_mappings\"] = dict(\n default=self.cleaned_data[\"translation_mapping\"])\n\n\nclass LangMappingForm(forms.Form):\n remove = forms.BooleanField(required=False)\n pootle_code = forms.ModelChoiceField(\n Language.objects.all(),\n to_field_name=\"code\",\n widget=forms.Select(attrs={'class': 'js-select2'}))\n fs_code = forms.CharField(\n max_length=32)\n\n def 
__init__(self, *args, **kwargs):\n self.project = kwargs.pop(\"project\")\n existing_codes = kwargs.pop(\"existing_codes\")\n super(LangMappingForm, self).__init__(*args, **kwargs)\n if existing_codes:\n excluded_codes = (\n [c for c in existing_codes if c != self.initial[\"pootle_code\"]]\n if self.initial and self.initial.get(\"pootle_code\")\n else existing_codes)\n self.fields[\"pootle_code\"].queryset = (\n self.fields[\"pootle_code\"].queryset.exclude(\n code__in=excluded_codes))\n\n\nclass BaseLangMappingFormSet(forms.BaseFormSet):\n\n def __init__(self, *args, **kwargs):\n self.project = kwargs.pop(\"project\")\n mappings = self.project.config.get(\"pootle.core.lang_mapping\", {})\n if mappings:\n kwargs[\"initial\"] = [\n dict(pootle_code=v, fs_code=k)\n for k, v in mappings.items()]\n super(BaseLangMappingFormSet, self).__init__(*args, **kwargs)\n\n @property\n def cleaned_mapping(self):\n mapping = OrderedDict()\n for mapped in self.cleaned_data:\n if not mapped or mapped[\"remove\"]:\n continue\n mapping[mapped[\"fs_code\"]] = mapped[\"pootle_code\"].code\n return mapping\n\n def save(self):\n self.project.config[\"pootle.core.lang_mapping\"] = self.cleaned_mapping\n revision.get(self.project.__class__)(self.project).set(\n keys=[\"pootle.fs.sync\"], value=uuid.uuid4().hex)\n\n def clean(self):\n if any(self.errors):\n return\n fs_counter = Counter([v[\"fs_code\"] for v in self.cleaned_data if v])\n if set(fs_counter.values()) != set([1]):\n raise forms.ValidationError(\n _(\"Filesystem language codes must be unique\"))\n pootle_counter = Counter([v[\"pootle_code\"] for v in self.cleaned_data if v])\n if set(pootle_counter.values()) != set([1]):\n raise forms.ValidationError(\n _(\"Pootle language mappings must be unique\"))\n\n def get_form_kwargs(self, index):\n kwargs = super(BaseLangMappingFormSet, self).get_form_kwargs(index)\n kwargs[\"project\"] = self.project\n kwargs[\"existing_codes\"] = (\n [i[\"pootle_code\"] for i in self.initial]\n if self.initial\n else [])\n return kwargs\n\n\nLangMappingFormSet = forms.formset_factory(\n LangMappingForm,\n formset=BaseLangMappingFormSet)\n", "path": "pootle/apps/pootle_fs/forms.py"}]}
| 2,369 | 622 |
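The golden diff above settles the unsorted-backends report by removing the `fs_type` dropdown from the admin form altogether and reading the backend from `project.config["pootle_fs.fs_type"]` instead. The unsorted list itself came from iterating the plain dict returned by `fs_plugins.gather()`, so a narrower fix, had the field been kept, would simply have ordered the choices; a sketch (the helper name is hypothetical):

```python
from pootle_fs.delegate import fs_plugins

def sorted_fs_type_choices():
    # Order backends by display name rather than dict insertion order, so the
    # filesystem backend dropdown renders alphabetically.
    return sorted(
        ((plugin_type, plugin.name or plugin.fs_type)
         for plugin_type, plugin in fs_plugins.gather().items()),
        key=lambda choice: choice[1].lower(),
    )
```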
gh_patches_debug_46628
|
rasdani/github-patches
|
git_diff
|
nautobot__nautobot-1028
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
1.2 Homepage object counts for DCIM all point to Rack model
<!--
NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.
This form is only for reporting reproducible bugs. If you need assistance
with Nautobot installation, or if you have a general question, please start a
discussion instead: https://github.com/nautobot/nautobot/discussions
Please describe the environment in which you are running Nautobot. Be sure
that you are running an unmodified instance of the latest stable release
before submitting a bug report, and that any plugins have been disabled.
-->
### Environment
* Python version: 3.9
* Nautobot version: 1.2.0-alpha1
<!--
Describe in detail the exact steps that someone else can take to reproduce
this bug using the current stable release of Nautobot. Begin with the
creation of any necessary database objects and call out every operation
being performed explicitly. If reporting a bug in the REST API, be sure to
reconstruct the raw HTTP request(s) being made: Don't rely on a client
library such as pynautobot.
-->
### Steps to Reproduce
1. Create a set of each DCIM object which is listed on the homepage, like devices, interfaces, etc., and exactly 1 Rack
2. Navigate to the home page
<!-- What did you expect to happen? -->
### Expected Behavior
The home page widgets for object counts for DCIM should list correct counts for each object type.
<!-- What happened instead? -->
### Observed Behavior
Other than sites, all other DCIM object counts show the number of racks (in this case, exactly 1).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nautobot/dcim/homepage.py`
Content:
```
1 from nautobot.dcim.models.power import PowerFeed, PowerPanel
2 from nautobot.dcim.models import Cable, ConsolePort, Interface, PowerOutlet, Rack, Site
3 from nautobot.core.apps import HomePageGroup, HomePageItem, HomePagePanel
4
5
6 layout = (
7 HomePagePanel(
8 name="Organization",
9 weight=100,
10 items=(
11 HomePageItem(
12 name="Sites",
13 link="dcim:site_list",
14 model=Site,
15 description="Geographic location",
16 permissions=["dcim.view_site"],
17 weight=100,
18 ),
19 ),
20 ),
21 HomePagePanel(
22 name="DCIM",
23 weight=200,
24 items=(
25 HomePageItem(
26 name="Racks",
27 link="dcim:rack_list",
28 model=Rack,
29 description="Equipment racks, optionally organized by group",
30 permissions=["dcim.view_rack"],
31 weight=100,
32 ),
33 HomePageItem(
34 name="Device Types",
35 link="dcim:devicetype_list",
36 model=Rack,
37 description="Physical hardware models by manufacturer",
38 permissions=["dcim.view_devicetype"],
39 weight=200,
40 ),
41 HomePageItem(
42 name="Devices",
43 link="dcim:device_list",
44 model=Rack,
45 description="Rack-mounted network equipment, servers, and other devices",
46 permissions=["dcim.view_device"],
47 weight=300,
48 ),
49 HomePageItem(
50 name="Virtual Chassis",
51 link="dcim:virtualchassis_list",
52 model=Rack,
53 permissions=["dcim.view_virtualchassis"],
54 description="Represents a set of devices which share a common control plane",
55 weight=400,
56 ),
57 HomePageGroup(
58 name="Connections",
59 weight=500,
60 items=(
61 HomePageItem(
62 name="Cables",
63 link="dcim:cable_list",
64 model=Cable,
65 permissions=["dcim.view_cable"],
66 weight=100,
67 ),
68 HomePageItem(
69 name="Interfaces",
70 link="dcim:interface_connections_list",
71 model=Interface,
72 permissions=["dcim.view_interface"],
73 weight=200,
74 ),
75 HomePageItem(
76 name="Console",
77 link="dcim:console_connections_list",
78 model=ConsolePort,
79 permissions=["dcim.view_consoleport", "dcim.view_consoleserverport"],
80 weight=300,
81 ),
82 HomePageItem(
83 name="Power",
84 link="dcim:power_connections_list",
85 model=PowerOutlet,
86 permissions=["dcim.view_powerport", "dcim.view_poweroutlet"],
87 weight=400,
88 ),
89 ),
90 ),
91 ),
92 ),
93 HomePagePanel(
94 name="Power",
95 weight=300,
96 items=(
97 HomePageItem(
98 name="Power Feeds",
99 link="dcim:powerfeed_list",
100 model=PowerFeed,
101 description="Electrical circuits delivering power from panels",
102 permissions=["dcim.view_powerfeed"],
103 weight=100,
104 ),
105 HomePageItem(
106 name="Power Panels",
107 link="dcim:powerpanel_list",
108 model=PowerPanel,
109 description="Electrical panels receiving utility power",
110 permissions=["dcim.view_powerpanel"],
111 weight=200,
112 ),
113 ),
114 ),
115 )
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/nautobot/dcim/homepage.py b/nautobot/dcim/homepage.py
--- a/nautobot/dcim/homepage.py
+++ b/nautobot/dcim/homepage.py
@@ -1,6 +1,5 @@
-from nautobot.dcim.models.power import PowerFeed, PowerPanel
-from nautobot.dcim.models import Cable, ConsolePort, Interface, PowerOutlet, Rack, Site
from nautobot.core.apps import HomePageGroup, HomePageItem, HomePagePanel
+from nautobot.dcim import models
layout = (
@@ -11,7 +10,7 @@
HomePageItem(
name="Sites",
link="dcim:site_list",
- model=Site,
+ model=models.Site,
description="Geographic location",
permissions=["dcim.view_site"],
weight=100,
@@ -25,7 +24,7 @@
HomePageItem(
name="Racks",
link="dcim:rack_list",
- model=Rack,
+ model=models.Rack,
description="Equipment racks, optionally organized by group",
permissions=["dcim.view_rack"],
weight=100,
@@ -33,7 +32,7 @@
HomePageItem(
name="Device Types",
link="dcim:devicetype_list",
- model=Rack,
+ model=models.DeviceType,
description="Physical hardware models by manufacturer",
permissions=["dcim.view_devicetype"],
weight=200,
@@ -41,7 +40,7 @@
HomePageItem(
name="Devices",
link="dcim:device_list",
- model=Rack,
+ model=models.Device,
description="Rack-mounted network equipment, servers, and other devices",
permissions=["dcim.view_device"],
weight=300,
@@ -49,7 +48,7 @@
HomePageItem(
name="Virtual Chassis",
link="dcim:virtualchassis_list",
- model=Rack,
+ model=models.VirtualChassis,
permissions=["dcim.view_virtualchassis"],
description="Represents a set of devices which share a common control plane",
weight=400,
@@ -61,28 +60,28 @@
HomePageItem(
name="Cables",
link="dcim:cable_list",
- model=Cable,
+ model=models.Cable,
permissions=["dcim.view_cable"],
weight=100,
),
HomePageItem(
name="Interfaces",
link="dcim:interface_connections_list",
- model=Interface,
+ model=models.Interface,
permissions=["dcim.view_interface"],
weight=200,
),
HomePageItem(
name="Console",
link="dcim:console_connections_list",
- model=ConsolePort,
+ model=models.ConsolePort,
permissions=["dcim.view_consoleport", "dcim.view_consoleserverport"],
weight=300,
),
HomePageItem(
name="Power",
link="dcim:power_connections_list",
- model=PowerOutlet,
+ model=models.PowerOutlet,
permissions=["dcim.view_powerport", "dcim.view_poweroutlet"],
weight=400,
),
@@ -97,7 +96,7 @@
HomePageItem(
name="Power Feeds",
link="dcim:powerfeed_list",
- model=PowerFeed,
+ model=models.PowerFeed,
description="Electrical circuits delivering power from panels",
permissions=["dcim.view_powerfeed"],
weight=100,
@@ -105,7 +104,7 @@
HomePageItem(
name="Power Panels",
link="dcim:powerpanel_list",
- model=PowerPanel,
+ model=models.PowerPanel,
description="Electrical panels receiving utility power",
permissions=["dcim.view_powerpanel"],
weight=200,
|
{"golden_diff": "diff --git a/nautobot/dcim/homepage.py b/nautobot/dcim/homepage.py\n--- a/nautobot/dcim/homepage.py\n+++ b/nautobot/dcim/homepage.py\n@@ -1,6 +1,5 @@\n-from nautobot.dcim.models.power import PowerFeed, PowerPanel\n-from nautobot.dcim.models import Cable, ConsolePort, Interface, PowerOutlet, Rack, Site\n from nautobot.core.apps import HomePageGroup, HomePageItem, HomePagePanel\n+from nautobot.dcim import models\n \n \n layout = (\n@@ -11,7 +10,7 @@\n HomePageItem(\n name=\"Sites\",\n link=\"dcim:site_list\",\n- model=Site,\n+ model=models.Site,\n description=\"Geographic location\",\n permissions=[\"dcim.view_site\"],\n weight=100,\n@@ -25,7 +24,7 @@\n HomePageItem(\n name=\"Racks\",\n link=\"dcim:rack_list\",\n- model=Rack,\n+ model=models.Rack,\n description=\"Equipment racks, optionally organized by group\",\n permissions=[\"dcim.view_rack\"],\n weight=100,\n@@ -33,7 +32,7 @@\n HomePageItem(\n name=\"Device Types\",\n link=\"dcim:devicetype_list\",\n- model=Rack,\n+ model=models.DeviceType,\n description=\"Physical hardware models by manufacturer\",\n permissions=[\"dcim.view_devicetype\"],\n weight=200,\n@@ -41,7 +40,7 @@\n HomePageItem(\n name=\"Devices\",\n link=\"dcim:device_list\",\n- model=Rack,\n+ model=models.Device,\n description=\"Rack-mounted network equipment, servers, and other devices\",\n permissions=[\"dcim.view_device\"],\n weight=300,\n@@ -49,7 +48,7 @@\n HomePageItem(\n name=\"Virtual Chassis\",\n link=\"dcim:virtualchassis_list\",\n- model=Rack,\n+ model=models.VirtualChassis,\n permissions=[\"dcim.view_virtualchassis\"],\n description=\"Represents a set of devices which share a common control plane\",\n weight=400,\n@@ -61,28 +60,28 @@\n HomePageItem(\n name=\"Cables\",\n link=\"dcim:cable_list\",\n- model=Cable,\n+ model=models.Cable,\n permissions=[\"dcim.view_cable\"],\n weight=100,\n ),\n HomePageItem(\n name=\"Interfaces\",\n link=\"dcim:interface_connections_list\",\n- model=Interface,\n+ model=models.Interface,\n permissions=[\"dcim.view_interface\"],\n weight=200,\n ),\n HomePageItem(\n name=\"Console\",\n link=\"dcim:console_connections_list\",\n- model=ConsolePort,\n+ model=models.ConsolePort,\n permissions=[\"dcim.view_consoleport\", \"dcim.view_consoleserverport\"],\n weight=300,\n ),\n HomePageItem(\n name=\"Power\",\n link=\"dcim:power_connections_list\",\n- model=PowerOutlet,\n+ model=models.PowerOutlet,\n permissions=[\"dcim.view_powerport\", \"dcim.view_poweroutlet\"],\n weight=400,\n ),\n@@ -97,7 +96,7 @@\n HomePageItem(\n name=\"Power Feeds\",\n link=\"dcim:powerfeed_list\",\n- model=PowerFeed,\n+ model=models.PowerFeed,\n description=\"Electrical circuits delivering power from panels\",\n permissions=[\"dcim.view_powerfeed\"],\n weight=100,\n@@ -105,7 +104,7 @@\n HomePageItem(\n name=\"Power Panels\",\n link=\"dcim:powerpanel_list\",\n- model=PowerPanel,\n+ model=models.PowerPanel,\n description=\"Electrical panels receiving utility power\",\n permissions=[\"dcim.view_powerpanel\"],\n weight=200,\n", "issue": "1.2 Homepage object counts for DCIM all point to Rack model\n<!--\r\n NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.\r\n\r\n This form is only for reporting reproducible bugs. If you need assistance\r\n with Nautobot installation, or if you have a general question, please start a\r\n discussion instead: https://github.com/nautobot/nautobot/discussions\r\n\r\n Please describe the environment in which you are running Nautobot. 
Be sure\r\n that you are running an unmodified instance of the latest stable release\r\n before submitting a bug report, and that any plugins have been disabled.\r\n-->\r\n### Environment\r\n* Python version: 3.9\r\n* Nautobot version: 1.2.0-alpha1\r\n\r\n<!--\r\n Describe in detail the exact steps that someone else can take to reproduce\r\n this bug using the current stable release of Nautobot. Begin with the\r\n creation of any necessary database objects and call out every operation\r\n being performed explicitly. If reporting a bug in the REST API, be sure to\r\n reconstruct the raw HTTP request(s) being made: Don't rely on a client\r\n library such as pynautobot.\r\n-->\r\n### Steps to Reproduce\r\n1. Create a set of each DCIM object which is listed on the homepage, like devices, interfaces, etc., and exactly 1 Rack\r\n2. Navigate to the home page\r\n\r\n<!-- What did you expect to happen? -->\r\n### Expected Behavior\r\nThe home page widgets for object counts for DCIM should list correct counts for each object type.\r\n\r\n<!-- What happened instead? -->\r\n### Observed Behavior\r\nOther than sites, all other DCIM object counts show the number of racks (in this case, exactly 1).\n", "before_files": [{"content": "from nautobot.dcim.models.power import PowerFeed, PowerPanel\nfrom nautobot.dcim.models import Cable, ConsolePort, Interface, PowerOutlet, Rack, Site\nfrom nautobot.core.apps import HomePageGroup, HomePageItem, HomePagePanel\n\n\nlayout = (\n HomePagePanel(\n name=\"Organization\",\n weight=100,\n items=(\n HomePageItem(\n name=\"Sites\",\n link=\"dcim:site_list\",\n model=Site,\n description=\"Geographic location\",\n permissions=[\"dcim.view_site\"],\n weight=100,\n ),\n ),\n ),\n HomePagePanel(\n name=\"DCIM\",\n weight=200,\n items=(\n HomePageItem(\n name=\"Racks\",\n link=\"dcim:rack_list\",\n model=Rack,\n description=\"Equipment racks, optionally organized by group\",\n permissions=[\"dcim.view_rack\"],\n weight=100,\n ),\n HomePageItem(\n name=\"Device Types\",\n link=\"dcim:devicetype_list\",\n model=Rack,\n description=\"Physical hardware models by manufacturer\",\n permissions=[\"dcim.view_devicetype\"],\n weight=200,\n ),\n HomePageItem(\n name=\"Devices\",\n link=\"dcim:device_list\",\n model=Rack,\n description=\"Rack-mounted network equipment, servers, and other devices\",\n permissions=[\"dcim.view_device\"],\n weight=300,\n ),\n HomePageItem(\n name=\"Virtual Chassis\",\n link=\"dcim:virtualchassis_list\",\n model=Rack,\n permissions=[\"dcim.view_virtualchassis\"],\n description=\"Represents a set of devices which share a common control plane\",\n weight=400,\n ),\n HomePageGroup(\n name=\"Connections\",\n weight=500,\n items=(\n HomePageItem(\n name=\"Cables\",\n link=\"dcim:cable_list\",\n model=Cable,\n permissions=[\"dcim.view_cable\"],\n weight=100,\n ),\n HomePageItem(\n name=\"Interfaces\",\n link=\"dcim:interface_connections_list\",\n model=Interface,\n permissions=[\"dcim.view_interface\"],\n weight=200,\n ),\n HomePageItem(\n name=\"Console\",\n link=\"dcim:console_connections_list\",\n model=ConsolePort,\n permissions=[\"dcim.view_consoleport\", \"dcim.view_consoleserverport\"],\n weight=300,\n ),\n HomePageItem(\n name=\"Power\",\n link=\"dcim:power_connections_list\",\n model=PowerOutlet,\n permissions=[\"dcim.view_powerport\", \"dcim.view_poweroutlet\"],\n weight=400,\n ),\n ),\n ),\n ),\n ),\n HomePagePanel(\n name=\"Power\",\n weight=300,\n items=(\n HomePageItem(\n name=\"Power Feeds\",\n link=\"dcim:powerfeed_list\",\n model=PowerFeed,\n 
description=\"Electrical circuits delivering power from panels\",\n permissions=[\"dcim.view_powerfeed\"],\n weight=100,\n ),\n HomePageItem(\n name=\"Power Panels\",\n link=\"dcim:powerpanel_list\",\n model=PowerPanel,\n description=\"Electrical panels receiving utility power\",\n permissions=[\"dcim.view_powerpanel\"],\n weight=200,\n ),\n ),\n ),\n)\n", "path": "nautobot/dcim/homepage.py"}], "after_files": [{"content": "from nautobot.core.apps import HomePageGroup, HomePageItem, HomePagePanel\nfrom nautobot.dcim import models\n\n\nlayout = (\n HomePagePanel(\n name=\"Organization\",\n weight=100,\n items=(\n HomePageItem(\n name=\"Sites\",\n link=\"dcim:site_list\",\n model=models.Site,\n description=\"Geographic location\",\n permissions=[\"dcim.view_site\"],\n weight=100,\n ),\n ),\n ),\n HomePagePanel(\n name=\"DCIM\",\n weight=200,\n items=(\n HomePageItem(\n name=\"Racks\",\n link=\"dcim:rack_list\",\n model=models.Rack,\n description=\"Equipment racks, optionally organized by group\",\n permissions=[\"dcim.view_rack\"],\n weight=100,\n ),\n HomePageItem(\n name=\"Device Types\",\n link=\"dcim:devicetype_list\",\n model=models.DeviceType,\n description=\"Physical hardware models by manufacturer\",\n permissions=[\"dcim.view_devicetype\"],\n weight=200,\n ),\n HomePageItem(\n name=\"Devices\",\n link=\"dcim:device_list\",\n model=models.Device,\n description=\"Rack-mounted network equipment, servers, and other devices\",\n permissions=[\"dcim.view_device\"],\n weight=300,\n ),\n HomePageItem(\n name=\"Virtual Chassis\",\n link=\"dcim:virtualchassis_list\",\n model=models.VirtualChassis,\n permissions=[\"dcim.view_virtualchassis\"],\n description=\"Represents a set of devices which share a common control plane\",\n weight=400,\n ),\n HomePageGroup(\n name=\"Connections\",\n weight=500,\n items=(\n HomePageItem(\n name=\"Cables\",\n link=\"dcim:cable_list\",\n model=models.Cable,\n permissions=[\"dcim.view_cable\"],\n weight=100,\n ),\n HomePageItem(\n name=\"Interfaces\",\n link=\"dcim:interface_connections_list\",\n model=models.Interface,\n permissions=[\"dcim.view_interface\"],\n weight=200,\n ),\n HomePageItem(\n name=\"Console\",\n link=\"dcim:console_connections_list\",\n model=models.ConsolePort,\n permissions=[\"dcim.view_consoleport\", \"dcim.view_consoleserverport\"],\n weight=300,\n ),\n HomePageItem(\n name=\"Power\",\n link=\"dcim:power_connections_list\",\n model=models.PowerOutlet,\n permissions=[\"dcim.view_powerport\", \"dcim.view_poweroutlet\"],\n weight=400,\n ),\n ),\n ),\n ),\n ),\n HomePagePanel(\n name=\"Power\",\n weight=300,\n items=(\n HomePageItem(\n name=\"Power Feeds\",\n link=\"dcim:powerfeed_list\",\n model=models.PowerFeed,\n description=\"Electrical circuits delivering power from panels\",\n permissions=[\"dcim.view_powerfeed\"],\n weight=100,\n ),\n HomePageItem(\n name=\"Power Panels\",\n link=\"dcim:powerpanel_list\",\n model=models.PowerPanel,\n description=\"Electrical panels receiving utility power\",\n permissions=[\"dcim.view_powerpanel\"],\n weight=200,\n ),\n ),\n ),\n)\n", "path": "nautobot/dcim/homepage.py"}]}
| 1,586 | 860 |
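The nautobot__nautobot-1028 fix above boils down to one pattern: the homepage items were all bound to the `Rack` class by copy-paste, so every widget rendered Rack's count, and the golden diff rebinds each item to its own model through a single `models` module import. The sketch below reproduces that failure mode in plain Python; `HomePageItem` and the `_FakeManager` counters are stand-ins invented for illustration, not Nautobot APIs.

```python
# Minimal sketch (assumed stand-ins, not Nautobot code) of why every DCIM
# widget reported the Rack total: an item counts whatever class it is bound
# to, so binding them all to Rack makes them all render Rack's count.

class _FakeManager:
    """Stand-in for a Django model manager exposing .count()."""

    def __init__(self, n: int):
        self._n = n

    def count(self) -> int:
        return self._n


class Rack:
    objects = _FakeManager(1)    # exactly one rack, as in the bug report


class Device:
    objects = _FakeManager(42)   # many devices


class HomePageItem:
    """Stand-in for the homepage widget item: it counts its bound model."""

    def __init__(self, name: str, model: type):
        self.name = name
        self.model = model

    def render_count(self) -> str:
        return f"{self.name}: {self.model.objects.count()}"


# Buggy wiring (pre-patch): both items bound to Rack, both print a count of 1.
buggy = [HomePageItem("Racks", Rack), HomePageItem("Devices", Rack)]
# Fixed wiring (post-patch): each item bound to its own model.
fixed = [HomePageItem("Racks", Rack), HomePageItem("Devices", Device)]

for item in buggy:
    print("buggy ->", item.render_count())
for item in fixed:
    print("fixed ->", item.render_count())
```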
gh_patches_debug_19669
|
rasdani/github-patches
|
git_diff
|
PrefectHQ__prefect-10059
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
2.10.17 failure starting prefect agent on a windows machine
### First check
- [X] I added a descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the Prefect documentation for this issue.
- [X] I checked that this issue is related to Prefect and not one of its dependencies.
### Bug summary
prefect agent won't start in 2.10.17 but it does in 2.10.16
### Reproduction
```python3
(venv) C:\Users\jkang1\PycharmProjects\site_analytics\venv\Scripts>prefect agent start --pool default-agent-pool
```
### Error
```python3
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\jkang1\PycharmProjects\site_analytics\venv\Scripts\prefect.exe\__main__.py", line 4, in <module>
File "C:\Users\jkang1\PycharmProjects\site_analytics\venv\Lib\site-packages\prefect\__init__.py", line 37, in <module>
from prefect.states import State
File "C:\Users\jkang1\PycharmProjects\site_analytics\venv\Lib\site-packages\prefect\states.py", line 14, in <module>
from prefect.client.schemas import State as State
File "C:\Users\jkang1\PycharmProjects\site_analytics\venv\Lib\site-packages\prefect\client\schemas\__init__.py", line 4, in <module>
from .actions import BlockTypeUpdate, StateCreate
File "C:\Users\jkang1\PycharmProjects\site_analytics\venv\Lib\site-packages\prefect\client\schemas\actions.py", line 9, in <module>
import prefect.client.schemas.objects as objects
File "C:\Users\jkang1\PycharmProjects\site_analytics\venv\Lib\site-packages\prefect\client\schemas\objects.py", line 20, in <module>
from prefect._internal.schemas.bases import ObjectBaseModel, PrefectBaseModel
File "C:\Users\jkang1\PycharmProjects\site_analytics\venv\Lib\site-packages\prefect\_internal\schemas\bases.py", line 18, in <module>
from prefect._internal.compatibility.experimental import experiment_enabled
File "C:\Users\jkang1\PycharmProjects\site_analytics\venv\Lib\site-packages\prefect\_internal\compatibility\experimental.py", line 19, in <module>
from prefect.settings import PREFECT_EXPERIMENTAL_WARN, SETTING_VARIABLES, Setting
File "C:\Users\jkang1\PycharmProjects\site_analytics\venv\Lib\site-packages\prefect\settings.py", line 70, in <module>
from prefect._internal.compatibility.deprecated import generate_deprecation_message
File "C:\Users\jkang1\PycharmProjects\site_analytics\venv\Lib\site-packages\prefect\_internal\compatibility\deprecated.py", line 21, in <module>
from prefect.utilities.importtools import (
File "C:\Users\jkang1\PycharmProjects\site_analytics\venv\Lib\site-packages\prefect\utilities\importtools.py", line 17, in <module>
from prefect.utilities.filesystem import filename, is_local_path, tmpchdir
File "C:\Users\jkang1\PycharmProjects\site_analytics\venv\Lib\site-packages\prefect\utilities\filesystem.py", line 5, in <module>
import resource
ModuleNotFoundError: No module named 'resource'
```
### Versions
```Text
If I try to call it I receive the same error:
(venv) C:\Users\jkang1\PycharmProjects\site_analytics\venv\Scripts>prefect version
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\jkang1\PycharmProjects\site_analytics\venv\Scripts\prefect.exe\__main__.py", line 4, in <module>
File "C:\Users\jkang1\PycharmProjects\site_analytics\venv\Lib\site-packages\prefect\__init__.py", line 37, in <module>
from prefect.states import State
File "C:\Users\jkang1\PycharmProjects\site_analytics\venv\Lib\site-packages\prefect\states.py", line 14, in <module>
from prefect.client.schemas import State as State
File "C:\Users\jkang1\PycharmProjects\site_analytics\venv\Lib\site-packages\prefect\client\schemas\__init__.py", line 4, in <module>
from .actions import BlockTypeUpdate, StateCreate
File "C:\Users\jkang1\PycharmProjects\site_analytics\venv\Lib\site-packages\prefect\client\schemas\actions.py", line 9, in <module>
import prefect.client.schemas.objects as objects
File "C:\Users\jkang1\PycharmProjects\site_analytics\venv\Lib\site-packages\prefect\client\schemas\objects.py", line 20, in <module>
from prefect._internal.schemas.bases import ObjectBaseModel, PrefectBaseModel
File "C:\Users\jkang1\PycharmProjects\site_analytics\venv\Lib\site-packages\prefect\_internal\schemas\bases.py", line 18, in <module>
from prefect._internal.compatibility.experimental import experiment_enabled
File "C:\Users\jkang1\PycharmProjects\site_analytics\venv\Lib\site-packages\prefect\_internal\compatibility\experimental.py", line 19, in <module>
from prefect.settings import PREFECT_EXPERIMENTAL_WARN, SETTING_VARIABLES, Setting
File "C:\Users\jkang1\PycharmProjects\site_analytics\venv\Lib\site-packages\prefect\settings.py", line 70, in <module>
from prefect._internal.compatibility.deprecated import generate_deprecation_message
File "C:\Users\jkang1\PycharmProjects\site_analytics\venv\Lib\site-packages\prefect\_internal\compatibility\deprecated.py", line 21, in <module>
from prefect.utilities.importtools import (
File "C:\Users\jkang1\PycharmProjects\site_analytics\venv\Lib\site-packages\prefect\utilities\importtools.py", line 17, in <module>
from prefect.utilities.filesystem import filename, is_local_path, tmpchdir
File "C:\Users\jkang1\PycharmProjects\site_analytics\venv\Lib\site-packages\prefect\utilities\filesystem.py", line 5, in <module>
import resource
ModuleNotFoundError: No module named 'resource'
When I downgrade to 2.10.16 and run it:
(venv) C:\Users\jkang1\PycharmProjects\site_analytics\venv\Scripts>prefect version
Version: 2.10.16
API version: 0.8.4
Python version: 3.11.3
Git commit: 6cd7c3ee
Built: Tue, Jun 20, 2023 2:59 PM
OS/Arch: win32/AMD64
Profile: remote-desktop
Server type: cloud
```
### Additional context
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/prefect/utilities/filesystem.py`
Content:
```
1 """
2 Utilities for working with file systems
3 """
4 import os
5 import resource
6 import pathlib
7 from contextlib import contextmanager
8 from pathlib import Path, PureWindowsPath
9 from typing import Union
10
11 import fsspec
12 import pathspec
13 import prefect
14 from fsspec.core import OpenFile
15 from fsspec.implementations.local import LocalFileSystem
16
17
18 def create_default_ignore_file(path: str) -> bool:
19 """
20 Creates default ignore file in the provided path if one does not already exist; returns boolean specifying
21 whether a file was created.
22 """
23 path = pathlib.Path(path)
24 ignore_file = path / ".prefectignore"
25 if ignore_file.exists():
26 return False
27 default_file = pathlib.Path(prefect.__module_path__) / ".prefectignore"
28 with ignore_file.open(mode="w") as f:
29 f.write(default_file.read_text())
30 return True
31
32
33 def filter_files(
34 root: str = ".", ignore_patterns: list = None, include_dirs: bool = True
35 ) -> set:
36 """
37 This function accepts a root directory path and a list of file patterns to ignore, and returns
38 a list of files that excludes those that should be ignored.
39
40 The specification matches that of [.gitignore files](https://git-scm.com/docs/gitignore).
41 """
42 if ignore_patterns is None:
43 ignore_patterns = []
44 spec = pathspec.PathSpec.from_lines("gitwildmatch", ignore_patterns)
45 ignored_files = {p.path for p in spec.match_tree_entries(root)}
46 if include_dirs:
47 all_files = {p.path for p in pathspec.util.iter_tree_entries(root)}
48 else:
49 all_files = set(pathspec.util.iter_tree_files(root))
50 included_files = all_files - ignored_files
51 return included_files
52
53
54 @contextmanager
55 def tmpchdir(path: str):
56 """
57 Change current-working directories for the duration of the context
58 """
59 path = os.path.abspath(path)
60 if os.path.isfile(path) or (not os.path.exists(path) and not path.endswith("/")):
61 path = os.path.dirname(path)
62
63 owd = os.getcwd()
64
65 try:
66 os.chdir(path)
67 yield path
68 finally:
69 os.chdir(owd)
70
71
72 def filename(path: str) -> str:
73 """Extract the file name from a path with remote file system support"""
74 try:
75 of: OpenFile = fsspec.open(path)
76 sep = of.fs.sep
77 except (ImportError, AttributeError):
78 sep = "\\" if "\\" in path else "/"
79 return path.split(sep)[-1]
80
81
82 def is_local_path(path: Union[str, pathlib.Path, OpenFile]):
83 """Check if the given path points to a local or remote file system"""
84 if isinstance(path, str):
85 try:
86 of = fsspec.open(path)
87 except ImportError:
88 # The path is a remote file system that uses a lib that is not installed
89 return False
90 elif isinstance(path, pathlib.Path):
91 return True
92 elif isinstance(path, OpenFile):
93 of = path
94 else:
95 raise TypeError(f"Invalid path of type {type(path).__name__!r}")
96
97 return type(of.fs) == LocalFileSystem
98
99
100 def to_display_path(
101 path: Union[pathlib.Path, str], relative_to: Union[pathlib.Path, str] = None
102 ) -> str:
103 """
104 Convert a path to a displayable path. The absolute path or relative path to the
105 current (or given) directory will be returned, whichever is shorter.
106 """
107 path, relative_to = (
108 pathlib.Path(path).resolve(),
109 pathlib.Path(relative_to or ".").resolve(),
110 )
111 relative_path = str(path.relative_to(relative_to))
112 absolute_path = str(path)
113 return relative_path if len(relative_path) < len(absolute_path) else absolute_path
114
115
116 def relative_path_to_current_platform(path_str: str) -> Path:
117 """
118 Converts a relative path generated on any platform to a relative path for the
119 current platform.
120 """
121
122 return Path(PureWindowsPath(path_str).as_posix())
123
124
125 def get_open_file_limit():
126 soft_limit, _ = resource.getrlimit(resource.RLIMIT_NOFILE)
127 return soft_limit
128
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/prefect/utilities/filesystem.py b/src/prefect/utilities/filesystem.py
--- a/src/prefect/utilities/filesystem.py
+++ b/src/prefect/utilities/filesystem.py
@@ -2,7 +2,6 @@
Utilities for working with file systems
"""
import os
-import resource
import pathlib
from contextlib import contextmanager
from pathlib import Path, PureWindowsPath
@@ -122,6 +121,21 @@
return Path(PureWindowsPath(path_str).as_posix())
-def get_open_file_limit():
- soft_limit, _ = resource.getrlimit(resource.RLIMIT_NOFILE)
- return soft_limit
+def get_open_file_limit() -> int:
+ """Get the maximum number of open files allowed for the current process"""
+
+ try:
+ if os.name == "nt":
+ import ctypes
+
+ return ctypes.cdll.ucrtbase._getmaxstdio()
+ else:
+ import resource
+
+ soft_limit, _ = resource.getrlimit(resource.RLIMIT_NOFILE)
+ return soft_limit
+ except Exception:
+ # Catch all exceptions, as ctypes can raise several errors
+ # depending on what went wrong. Return a safe default if we
+ # can't get the limit from the OS.
+ return 200
|
{"golden_diff": "diff --git a/src/prefect/utilities/filesystem.py b/src/prefect/utilities/filesystem.py\n--- a/src/prefect/utilities/filesystem.py\n+++ b/src/prefect/utilities/filesystem.py\n@@ -2,7 +2,6 @@\n Utilities for working with file systems\n \"\"\"\n import os\n-import resource\n import pathlib\n from contextlib import contextmanager\n from pathlib import Path, PureWindowsPath\n@@ -122,6 +121,21 @@\n return Path(PureWindowsPath(path_str).as_posix())\n \n \n-def get_open_file_limit():\n- soft_limit, _ = resource.getrlimit(resource.RLIMIT_NOFILE)\n- return soft_limit\n+def get_open_file_limit() -> int:\n+ \"\"\"Get the maximum number of open files allowed for the current process\"\"\"\n+\n+ try:\n+ if os.name == \"nt\":\n+ import ctypes\n+\n+ return ctypes.cdll.ucrtbase._getmaxstdio()\n+ else:\n+ import resource\n+\n+ soft_limit, _ = resource.getrlimit(resource.RLIMIT_NOFILE)\n+ return soft_limit\n+ except Exception:\n+ # Catch all exceptions, as ctypes can raise several errors\n+ # depending on what went wrong. Return a safe default if we\n+ # can't get the limit from the OS.\n+ return 200\n", "issue": "2.10.17 failure starting prefect agent on a windows machine\n### First check\n\n- [X] I added a descriptive title to this issue.\n- [X] I used the GitHub search to find a similar issue and didn't find it.\n- [X] I searched the Prefect documentation for this issue.\n- [X] I checked that this issue is related to Prefect and not one of its dependencies.\n\n### Bug summary\n\nprefect agent won't start in 2.10.17 but it does in 2.10.16\n\n### Reproduction\n\n```python3\n(venv) C:\\Users\\jkang1\\PycharmProjects\\site_analytics\\venv\\Scripts>prefect agent start --pool default-agent-pool\n```\n\n\n### Error\n\n```python3\nTraceback (most recent call last):\r\n File \"<frozen runpy>\", line 198, in _run_module_as_main\r\n File \"<frozen runpy>\", line 88, in _run_code\r\n File \"C:\\Users\\jkang1\\PycharmProjects\\site_analytics\\venv\\Scripts\\prefect.exe\\__main__.py\", line 4, in <module>\r\n File \"C:\\Users\\jkang1\\PycharmProjects\\site_analytics\\venv\\Lib\\site-packages\\prefect\\__init__.py\", line 37, in <module>\r\n from prefect.states import State\r\n File \"C:\\Users\\jkang1\\PycharmProjects\\site_analytics\\venv\\Lib\\site-packages\\prefect\\states.py\", line 14, in <module>\r\n from prefect.client.schemas import State as State\r\n File \"C:\\Users\\jkang1\\PycharmProjects\\site_analytics\\venv\\Lib\\site-packages\\prefect\\client\\schemas\\__init__.py\", line 4, in <module>\r\n from .actions import BlockTypeUpdate, StateCreate\r\n File \"C:\\Users\\jkang1\\PycharmProjects\\site_analytics\\venv\\Lib\\site-packages\\prefect\\client\\schemas\\actions.py\", line 9, in <module>\r\n import prefect.client.schemas.objects as objects\r\n File \"C:\\Users\\jkang1\\PycharmProjects\\site_analytics\\venv\\Lib\\site-packages\\prefect\\client\\schemas\\objects.py\", line 20, in <module>\r\n from prefect._internal.schemas.bases import ObjectBaseModel, PrefectBaseModel\r\n File \"C:\\Users\\jkang1\\PycharmProjects\\site_analytics\\venv\\Lib\\site-packages\\prefect\\_internal\\schemas\\bases.py\", line 18, in <module>\r\n from prefect._internal.compatibility.experimental import experiment_enabled\r\n File \"C:\\Users\\jkang1\\PycharmProjects\\site_analytics\\venv\\Lib\\site-packages\\prefect\\_internal\\compatibility\\experimental.py\", line 19, in <module>\r\n from prefect.settings import PREFECT_EXPERIMENTAL_WARN, SETTING_VARIABLES, Setting\r\n File 
\"C:\\Users\\jkang1\\PycharmProjects\\site_analytics\\venv\\Lib\\site-packages\\prefect\\settings.py\", line 70, in <module>\r\n from prefect._internal.compatibility.deprecated import generate_deprecation_message\r\n File \"C:\\Users\\jkang1\\PycharmProjects\\site_analytics\\venv\\Lib\\site-packages\\prefect\\_internal\\compatibility\\deprecated.py\", line 21, in <module>\r\n from prefect.utilities.importtools import (\r\n File \"C:\\Users\\jkang1\\PycharmProjects\\site_analytics\\venv\\Lib\\site-packages\\prefect\\utilities\\importtools.py\", line 17, in <module>\r\n from prefect.utilities.filesystem import filename, is_local_path, tmpchdir\r\n File \"C:\\Users\\jkang1\\PycharmProjects\\site_analytics\\venv\\Lib\\site-packages\\prefect\\utilities\\filesystem.py\", line 5, in <module>\r\n import resource\r\nModuleNotFoundError: No module named 'resource'\n```\n\n\n### Versions\n\n```Text\nIf I try to call it I receive the same error:\r\n\r\n(venv) C:\\Users\\jkang1\\PycharmProjects\\site_analytics\\venv\\Scripts>prefect version\r\nTraceback (most recent call last):\r\n File \"<frozen runpy>\", line 198, in _run_module_as_main\r\n File \"<frozen runpy>\", line 88, in _run_code\r\n File \"C:\\Users\\jkang1\\PycharmProjects\\site_analytics\\venv\\Scripts\\prefect.exe\\__main__.py\", line 4, in <module>\r\n File \"C:\\Users\\jkang1\\PycharmProjects\\site_analytics\\venv\\Lib\\site-packages\\prefect\\__init__.py\", line 37, in <module>\r\n from prefect.states import State\r\n File \"C:\\Users\\jkang1\\PycharmProjects\\site_analytics\\venv\\Lib\\site-packages\\prefect\\states.py\", line 14, in <module>\r\n from prefect.client.schemas import State as State\r\n File \"C:\\Users\\jkang1\\PycharmProjects\\site_analytics\\venv\\Lib\\site-packages\\prefect\\client\\schemas\\__init__.py\", line 4, in <module>\r\n from .actions import BlockTypeUpdate, StateCreate\r\n File \"C:\\Users\\jkang1\\PycharmProjects\\site_analytics\\venv\\Lib\\site-packages\\prefect\\client\\schemas\\actions.py\", line 9, in <module>\r\n import prefect.client.schemas.objects as objects\r\n File \"C:\\Users\\jkang1\\PycharmProjects\\site_analytics\\venv\\Lib\\site-packages\\prefect\\client\\schemas\\objects.py\", line 20, in <module>\r\n from prefect._internal.schemas.bases import ObjectBaseModel, PrefectBaseModel\r\n File \"C:\\Users\\jkang1\\PycharmProjects\\site_analytics\\venv\\Lib\\site-packages\\prefect\\_internal\\schemas\\bases.py\", line 18, in <module>\r\n from prefect._internal.compatibility.experimental import experiment_enabled\r\n File \"C:\\Users\\jkang1\\PycharmProjects\\site_analytics\\venv\\Lib\\site-packages\\prefect\\_internal\\compatibility\\experimental.py\", line 19, in <module>\r\n from prefect.settings import PREFECT_EXPERIMENTAL_WARN, SETTING_VARIABLES, Setting\r\n File \"C:\\Users\\jkang1\\PycharmProjects\\site_analytics\\venv\\Lib\\site-packages\\prefect\\settings.py\", line 70, in <module>\r\n from prefect._internal.compatibility.deprecated import generate_deprecation_message\r\n File \"C:\\Users\\jkang1\\PycharmProjects\\site_analytics\\venv\\Lib\\site-packages\\prefect\\_internal\\compatibility\\deprecated.py\", line 21, in <module>\r\n from prefect.utilities.importtools import (\r\n File \"C:\\Users\\jkang1\\PycharmProjects\\site_analytics\\venv\\Lib\\site-packages\\prefect\\utilities\\importtools.py\", line 17, in <module>\r\n from prefect.utilities.filesystem import filename, is_local_path, tmpchdir\r\n File 
\"C:\\Users\\jkang1\\PycharmProjects\\site_analytics\\venv\\Lib\\site-packages\\prefect\\utilities\\filesystem.py\", line 5, in <module>\r\n import resource\r\nModuleNotFoundError: No module named 'resource'\r\n\r\nWhen I downgrade to 2.10.16 and run it:\r\n(venv) C:\\Users\\jkang1\\PycharmProjects\\site_analytics\\venv\\Scripts>prefect version\r\nVersion: 2.10.16\r\nAPI version: 0.8.4\r\nPython version: 3.11.3\r\nGit commit: 6cd7c3ee\r\nBuilt: Tue, Jun 20, 2023 2:59 PM\r\nOS/Arch: win32/AMD64\r\nProfile: remote-desktop\r\nServer type: cloud\n```\n\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "\"\"\"\nUtilities for working with file systems\n\"\"\"\nimport os\nimport resource\nimport pathlib\nfrom contextlib import contextmanager\nfrom pathlib import Path, PureWindowsPath\nfrom typing import Union\n\nimport fsspec\nimport pathspec\nimport prefect\nfrom fsspec.core import OpenFile\nfrom fsspec.implementations.local import LocalFileSystem\n\n\ndef create_default_ignore_file(path: str) -> bool:\n \"\"\"\n Creates default ignore file in the provided path if one does not already exist; returns boolean specifying\n whether a file was created.\n \"\"\"\n path = pathlib.Path(path)\n ignore_file = path / \".prefectignore\"\n if ignore_file.exists():\n return False\n default_file = pathlib.Path(prefect.__module_path__) / \".prefectignore\"\n with ignore_file.open(mode=\"w\") as f:\n f.write(default_file.read_text())\n return True\n\n\ndef filter_files(\n root: str = \".\", ignore_patterns: list = None, include_dirs: bool = True\n) -> set:\n \"\"\"\n This function accepts a root directory path and a list of file patterns to ignore, and returns\n a list of files that excludes those that should be ignored.\n\n The specification matches that of [.gitignore files](https://git-scm.com/docs/gitignore).\n \"\"\"\n if ignore_patterns is None:\n ignore_patterns = []\n spec = pathspec.PathSpec.from_lines(\"gitwildmatch\", ignore_patterns)\n ignored_files = {p.path for p in spec.match_tree_entries(root)}\n if include_dirs:\n all_files = {p.path for p in pathspec.util.iter_tree_entries(root)}\n else:\n all_files = set(pathspec.util.iter_tree_files(root))\n included_files = all_files - ignored_files\n return included_files\n\n\n@contextmanager\ndef tmpchdir(path: str):\n \"\"\"\n Change current-working directories for the duration of the context\n \"\"\"\n path = os.path.abspath(path)\n if os.path.isfile(path) or (not os.path.exists(path) and not path.endswith(\"/\")):\n path = os.path.dirname(path)\n\n owd = os.getcwd()\n\n try:\n os.chdir(path)\n yield path\n finally:\n os.chdir(owd)\n\n\ndef filename(path: str) -> str:\n \"\"\"Extract the file name from a path with remote file system support\"\"\"\n try:\n of: OpenFile = fsspec.open(path)\n sep = of.fs.sep\n except (ImportError, AttributeError):\n sep = \"\\\\\" if \"\\\\\" in path else \"/\"\n return path.split(sep)[-1]\n\n\ndef is_local_path(path: Union[str, pathlib.Path, OpenFile]):\n \"\"\"Check if the given path points to a local or remote file system\"\"\"\n if isinstance(path, str):\n try:\n of = fsspec.open(path)\n except ImportError:\n # The path is a remote file system that uses a lib that is not installed\n return False\n elif isinstance(path, pathlib.Path):\n return True\n elif isinstance(path, OpenFile):\n of = path\n else:\n raise TypeError(f\"Invalid path of type {type(path).__name__!r}\")\n\n return type(of.fs) == LocalFileSystem\n\n\ndef to_display_path(\n path: Union[pathlib.Path, str], relative_to: 
Union[pathlib.Path, str] = None\n) -> str:\n \"\"\"\n Convert a path to a displayable path. The absolute path or relative path to the\n current (or given) directory will be returned, whichever is shorter.\n \"\"\"\n path, relative_to = (\n pathlib.Path(path).resolve(),\n pathlib.Path(relative_to or \".\").resolve(),\n )\n relative_path = str(path.relative_to(relative_to))\n absolute_path = str(path)\n return relative_path if len(relative_path) < len(absolute_path) else absolute_path\n\n\ndef relative_path_to_current_platform(path_str: str) -> Path:\n \"\"\"\n Converts a relative path generated on any platform to a relative path for the\n current platform.\n \"\"\"\n\n return Path(PureWindowsPath(path_str).as_posix())\n\n\ndef get_open_file_limit():\n soft_limit, _ = resource.getrlimit(resource.RLIMIT_NOFILE)\n return soft_limit\n", "path": "src/prefect/utilities/filesystem.py"}], "after_files": [{"content": "\"\"\"\nUtilities for working with file systems\n\"\"\"\nimport os\nimport pathlib\nfrom contextlib import contextmanager\nfrom pathlib import Path, PureWindowsPath\nfrom typing import Union\n\nimport fsspec\nimport pathspec\nimport prefect\nfrom fsspec.core import OpenFile\nfrom fsspec.implementations.local import LocalFileSystem\n\n\ndef create_default_ignore_file(path: str) -> bool:\n \"\"\"\n Creates default ignore file in the provided path if one does not already exist; returns boolean specifying\n whether a file was created.\n \"\"\"\n path = pathlib.Path(path)\n ignore_file = path / \".prefectignore\"\n if ignore_file.exists():\n return False\n default_file = pathlib.Path(prefect.__module_path__) / \".prefectignore\"\n with ignore_file.open(mode=\"w\") as f:\n f.write(default_file.read_text())\n return True\n\n\ndef filter_files(\n root: str = \".\", ignore_patterns: list = None, include_dirs: bool = True\n) -> set:\n \"\"\"\n This function accepts a root directory path and a list of file patterns to ignore, and returns\n a list of files that excludes those that should be ignored.\n\n The specification matches that of [.gitignore files](https://git-scm.com/docs/gitignore).\n \"\"\"\n if ignore_patterns is None:\n ignore_patterns = []\n spec = pathspec.PathSpec.from_lines(\"gitwildmatch\", ignore_patterns)\n ignored_files = {p.path for p in spec.match_tree_entries(root)}\n if include_dirs:\n all_files = {p.path for p in pathspec.util.iter_tree_entries(root)}\n else:\n all_files = set(pathspec.util.iter_tree_files(root))\n included_files = all_files - ignored_files\n return included_files\n\n\n@contextmanager\ndef tmpchdir(path: str):\n \"\"\"\n Change current-working directories for the duration of the context\n \"\"\"\n path = os.path.abspath(path)\n if os.path.isfile(path) or (not os.path.exists(path) and not path.endswith(\"/\")):\n path = os.path.dirname(path)\n\n owd = os.getcwd()\n\n try:\n os.chdir(path)\n yield path\n finally:\n os.chdir(owd)\n\n\ndef filename(path: str) -> str:\n \"\"\"Extract the file name from a path with remote file system support\"\"\"\n try:\n of: OpenFile = fsspec.open(path)\n sep = of.fs.sep\n except (ImportError, AttributeError):\n sep = \"\\\\\" if \"\\\\\" in path else \"/\"\n return path.split(sep)[-1]\n\n\ndef is_local_path(path: Union[str, pathlib.Path, OpenFile]):\n \"\"\"Check if the given path points to a local or remote file system\"\"\"\n if isinstance(path, str):\n try:\n of = fsspec.open(path)\n except ImportError:\n # The path is a remote file system that uses a lib that is not installed\n return False\n elif isinstance(path, 
pathlib.Path):\n return True\n elif isinstance(path, OpenFile):\n of = path\n else:\n raise TypeError(f\"Invalid path of type {type(path).__name__!r}\")\n\n return type(of.fs) == LocalFileSystem\n\n\ndef to_display_path(\n path: Union[pathlib.Path, str], relative_to: Union[pathlib.Path, str] = None\n) -> str:\n \"\"\"\n Convert a path to a displayable path. The absolute path or relative path to the\n current (or given) directory will be returned, whichever is shorter.\n \"\"\"\n path, relative_to = (\n pathlib.Path(path).resolve(),\n pathlib.Path(relative_to or \".\").resolve(),\n )\n relative_path = str(path.relative_to(relative_to))\n absolute_path = str(path)\n return relative_path if len(relative_path) < len(absolute_path) else absolute_path\n\n\ndef relative_path_to_current_platform(path_str: str) -> Path:\n \"\"\"\n Converts a relative path generated on any platform to a relative path for the\n current platform.\n \"\"\"\n\n return Path(PureWindowsPath(path_str).as_posix())\n\n\ndef get_open_file_limit() -> int:\n \"\"\"Get the maximum number of open files allowed for the current process\"\"\"\n\n try:\n if os.name == \"nt\":\n import ctypes\n\n return ctypes.cdll.ucrtbase._getmaxstdio()\n else:\n import resource\n\n soft_limit, _ = resource.getrlimit(resource.RLIMIT_NOFILE)\n return soft_limit\n except Exception:\n # Catch all exceptions, as ctypes can raise several errors\n # depending on what went wrong. Return a safe default if we\n # can't get the limit from the OS.\n return 200\n", "path": "src/prefect/utilities/filesystem.py"}]}
| 3,243 | 298 |
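The PrefectHQ__prefect-10059 patch works because `resource` is a POSIX-only stdlib module: importing it at module scope makes every Prefect entry point crash on Windows before any CLI code runs. The helper below is a standalone version of the function as it appears in the golden diff; the lazy imports, the `ctypes.cdll.ucrtbase._getmaxstdio()` call for Windows, and the conservative fallback of 200 are taken from that patch rather than invented here.

```python
import os


def get_open_file_limit() -> int:
    """Best-effort soft limit on open files for the current process.

    Mirrors the patched prefect.utilities.filesystem helper: `resource`
    is imported lazily because it does not exist on Windows, where the
    CRT's _getmaxstdio() is used instead via ctypes.
    """
    try:
        if os.name == "nt":
            import ctypes

            # Maximum number of simultaneously open stdio streams on Windows.
            return ctypes.cdll.ucrtbase._getmaxstdio()
        else:
            import resource  # POSIX-only; safe to import inside this branch

            soft_limit, _hard_limit = resource.getrlimit(resource.RLIMIT_NOFILE)
            return soft_limit
    except Exception:
        # ctypes and resource can fail in unusual environments; fall back
        # to the same safe default the patch uses.
        return 200


if __name__ == "__main__":
    print(get_open_file_limit())
```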
gh_patches_debug_15188
|
rasdani/github-patches
|
git_diff
|
celery__celery-4870
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unable to save pickled objects with couchbase as result backend
Hi, it seems that when I attempt to process groups of chords, the couchbase result backend consistently fails to unlock the chord when reading from the db:
`celery.chord_unlock[e3139ae5-a67d-4f0c-8c54-73b1e19433d2] retry: Retry in 1s: ValueFormatError()`
This behavior does not occur with the redis result backend; I can switch between them and see that the unlocking error only occurs on couchbase.
## Steps to reproduce
Attempt to process a chord with couchbase backend using pickle serialization.
## Expected behavior
Chords process correctly, and resulting data is fed to the next task
## Actual behavior
Celery is unable to unlock the chord from the result backend
## Celery project info:
```
celery -A ipaassteprunner report
software -> celery:4.1.0 (latentcall) kombu:4.1.0 py:2.7.10
billiard:3.5.0.3 py-amqp:2.2.2
platform -> system:Darwin arch:64bit imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:pyamqp results:couchbase://isadmin:**@localhost:8091/tasks
task_serializer: 'pickle'
result_serializer: 'pickle'
dbconfig: <ipaascommon.ipaas_config.DatabaseConfig object at 0x10fbbfe10>
db_pass: u'********'
IpaasConfig: <class 'ipaascommon.ipaas_config.IpaasConfig'>
imports:
('ipaassteprunner.tasks',)
worker_redirect_stdouts: False
DatabaseConfig: u'********'
db_port: '8091'
ipaas_constants: <module 'ipaascommon.ipaas_constants' from '/Library/Python/2.7/site-packages/ipaascommon/ipaas_constants.pyc'>
enable_utc: True
db_user: 'isadmin'
db_host: 'localhost'
result_backend: u'couchbase://isadmin:********@localhost:8091/tasks'
result_expires: 3600
iconfig: <ipaascommon.ipaas_config.IpaasConfig object at 0x10fbbfd90>
broker_url: u'amqp://guest:********@localhost:5672//'
task_bucket: 'tasks'
accept_content: ['pickle']
```
### Additional Debug output
```
[2017-12-13 15:39:57,860: INFO/MainProcess] Received task: celery.chord_unlock[e3139ae5-a67d-4f0c-8c54-73b1e19433d2] ETA:[2017-12-13 20:39:58.853535+00:00]
[2017-12-13 15:39:57,861: DEBUG/MainProcess] basic.qos: prefetch_count->27
[2017-12-13 15:39:58,859: DEBUG/MainProcess] TaskPool: Apply <function _fast_trace_task at 0x10b410b90> (args:('celery.chord_unlock', 'e3139ae5-a67d-4f0c-8c54-73b1e19433d2', {'origin': 'gen53678@silo2460', 'lang': 'py', 'task': 'celery.chord_unlock', 'group': None, 'root_id': '0acd3e0d-7532-445c-8916-b5fc8a6395ab', u'delivery_info': {u'priority': None, u'redelivered': False, u'routing_key': u'celery', u'exchange': u''}, 'expires': None, u'correlation_id': 'e3139ae5-a67d-4f0c-8c54-73b1e19433d2', 'retries': 311, 'timelimit': [None, None], 'argsrepr': "('90c64bef-21ba-42f9-be75-fdd724375a7a', {'chord_size': 2, 'task': 'ipaassteprunner.tasks.transfer_data', 'subtask_type': None, 'kwargs': {}, 'args': (), 'options': {'chord_size': None, 'chain': [...], 'task_id': '9c6b5e1c-2089-4db7-9590-117aeaf782c7', 'root_id': '0acd3e0d-7532-445c-8916-b5fc8a6395ab', 'parent_id': 'c27c9565-19a6-4683-8180-60f0c25007e9', 'reply_to': '0a58093c-6fdd-3458-9a34-7d5e094ac6a8'}, 'immutable': False})", 'eta': '2017-12-13T20:39:58.853535+00:00', 'parent_id': 'c27c9565-19a6-4683-8180-60f0c25007e9', u'reply_to':... kwargs:{})
[2017-12-13 15:40:00,061: DEBUG/MainProcess] basic.qos: prefetch_count->26
[2017-12-13 15:40:00,065: DEBUG/MainProcess] Task accepted: celery.chord_unlock[e3139ae5-a67d-4f0c-8c54-73b1e19433d2] pid:53679
[2017-12-13 15:40:00,076: INFO/ForkPoolWorker-6] Task celery.chord_unlock[e3139ae5-a67d-4f0c-8c54-73b1e19433d2] retry: Retry in 1s: ValueFormatError()
```
### Stack trace from chord unlocking failure
```python
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/celery/app/trace.py", line 374, in trace_task
R = retval = fun(*args, **kwargs)
File "/Library/Python/2.7/site-packages/celery/app/trace.py", line 629, in __protected_call__
return self.run(*args, **kwargs)
File "/Library/Python/2.7/site-packages/celery/app/builtins.py", line 75, in unlock_chord
raise self.retry(countdown=interval, max_retries=max_retries)
File "/Library/Python/2.7/site-packages/celery/app/task.py", line 689, in retry
raise ret
Retry: Retry in 1s
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `celery/backends/couchbase.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """Couchbase result store backend."""
3 from __future__ import absolute_import, unicode_literals
4
5 import logging
6
7 from kombu.utils.encoding import str_t
8 from kombu.utils.url import _parse_url
9
10 from celery.exceptions import ImproperlyConfigured
11
12 from .base import KeyValueStoreBackend
13
14 try:
15 import couchbase_ffi # noqa
16 except ImportError:
17 pass # noqa
18 try:
19 from couchbase import Couchbase
20 from couchbase.connection import Connection
21 from couchbase.exceptions import NotFoundError
22 except ImportError:
23 Couchbase = Connection = NotFoundError = None # noqa
24
25 __all__ = ('CouchbaseBackend',)
26
27
28 class CouchbaseBackend(KeyValueStoreBackend):
29 """Couchbase backend.
30
31 Raises:
32 celery.exceptions.ImproperlyConfigured:
33 if module :pypi:`couchbase` is not available.
34 """
35
36 bucket = 'default'
37 host = 'localhost'
38 port = 8091
39 username = None
40 password = None
41 quiet = False
42 supports_autoexpire = True
43
44 timeout = 2.5
45
46 # Use str as couchbase key not bytes
47 key_t = str_t
48
49 def __init__(self, url=None, *args, **kwargs):
50 kwargs.setdefault('expires_type', int)
51 super(CouchbaseBackend, self).__init__(*args, **kwargs)
52 self.url = url
53
54 if Couchbase is None:
55 raise ImproperlyConfigured(
56 'You need to install the couchbase library to use the '
57 'Couchbase backend.',
58 )
59
60 uhost = uport = uname = upass = ubucket = None
61 if url:
62 _, uhost, uport, uname, upass, ubucket, _ = _parse_url(url)
63 ubucket = ubucket.strip('/') if ubucket else None
64
65 config = self.app.conf.get('couchbase_backend_settings', None)
66 if config is not None:
67 if not isinstance(config, dict):
68 raise ImproperlyConfigured(
69 'Couchbase backend settings should be grouped in a dict',
70 )
71 else:
72 config = {}
73
74 self.host = uhost or config.get('host', self.host)
75 self.port = int(uport or config.get('port', self.port))
76 self.bucket = ubucket or config.get('bucket', self.bucket)
77 self.username = uname or config.get('username', self.username)
78 self.password = upass or config.get('password', self.password)
79
80 self._connection = None
81
82 def _get_connection(self):
83 """Connect to the Couchbase server."""
84 if self._connection is None:
85 kwargs = {'bucket': self.bucket, 'host': self.host}
86
87 if self.port:
88 kwargs.update({'port': self.port})
89 if self.username:
90 kwargs.update({'username': self.username})
91 if self.password:
92 kwargs.update({'password': self.password})
93
94 logging.debug('couchbase settings %r', kwargs)
95 self._connection = Connection(**kwargs)
96 return self._connection
97
98 @property
99 def connection(self):
100 return self._get_connection()
101
102 def get(self, key):
103 try:
104 return self.connection.get(key).value
105 except NotFoundError:
106 return None
107
108 def set(self, key, value):
109 self.connection.set(key, value, ttl=self.expires)
110
111 def mget(self, keys):
112 return [self.get(key) for key in keys]
113
114 def delete(self, key):
115 self.connection.delete(key)
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/celery/backends/couchbase.py b/celery/backends/couchbase.py
--- a/celery/backends/couchbase.py
+++ b/celery/backends/couchbase.py
@@ -19,6 +19,7 @@
from couchbase import Couchbase
from couchbase.connection import Connection
from couchbase.exceptions import NotFoundError
+ from couchbase import FMT_AUTO
except ImportError:
Couchbase = Connection = NotFoundError = None # noqa
@@ -106,7 +107,7 @@
return None
def set(self, key, value):
- self.connection.set(key, value, ttl=self.expires)
+ self.connection.set(key, value, ttl=self.expires, format=FMT_AUTO)
def mget(self, keys):
return [self.get(key) for key in keys]
|
{"golden_diff": "diff --git a/celery/backends/couchbase.py b/celery/backends/couchbase.py\n--- a/celery/backends/couchbase.py\n+++ b/celery/backends/couchbase.py\n@@ -19,6 +19,7 @@\n from couchbase import Couchbase\n from couchbase.connection import Connection\n from couchbase.exceptions import NotFoundError\n+ from couchbase import FMT_AUTO\n except ImportError:\n Couchbase = Connection = NotFoundError = None # noqa\n \n@@ -106,7 +107,7 @@\n return None\n \n def set(self, key, value):\n- self.connection.set(key, value, ttl=self.expires)\n+ self.connection.set(key, value, ttl=self.expires, format=FMT_AUTO)\n \n def mget(self, keys):\n return [self.get(key) for key in keys]\n", "issue": "Unable to save pickled objects with couchbase as result backend\nHi it seems like when I attempt to process groups of chords, the couchbase result backend is consistently failing to unlock the chord when reading from the db:\r\n\r\n`celery.chord_unlock[e3139ae5-a67d-4f0c-8c54-73b1e19433d2] retry: Retry in 1s: ValueFormatError()`\r\n\r\nThis behavior does not occur with the redis result backend, i can switch between them and see that the error unlocking only occurs on couchbase.\r\n\r\n## Steps to reproduce\r\nAttempt to process a chord with couchbase backend using pickle serialization.\r\n\r\n## Expected behavior\r\nChords process correctly, and resulting data is fed to the next task\r\n\r\n## Actual behavior\r\nCelery is unable to unlock the chord from the result backend\r\n\r\n## Celery project info: \r\n```\r\ncelery -A ipaassteprunner report\r\n\r\nsoftware -> celery:4.1.0 (latentcall) kombu:4.1.0 py:2.7.10\r\n billiard:3.5.0.3 py-amqp:2.2.2\r\nplatform -> system:Darwin arch:64bit imp:CPython\r\nloader -> celery.loaders.app.AppLoader\r\nsettings -> transport:pyamqp results:couchbase://isadmin:**@localhost:8091/tasks\r\n\r\ntask_serializer: 'pickle'\r\nresult_serializer: 'pickle'\r\ndbconfig: <ipaascommon.ipaas_config.DatabaseConfig object at 0x10fbbfe10>\r\ndb_pass: u'********'\r\nIpaasConfig: <class 'ipaascommon.ipaas_config.IpaasConfig'>\r\nimports:\r\n ('ipaassteprunner.tasks',)\r\nworker_redirect_stdouts: False\r\nDatabaseConfig: u'********'\r\ndb_port: '8091'\r\nipaas_constants: <module 'ipaascommon.ipaas_constants' from '/Library/Python/2.7/site-packages/ipaascommon/ipaas_constants.pyc'>\r\nenable_utc: True\r\ndb_user: 'isadmin'\r\ndb_host: 'localhost'\r\nresult_backend: u'couchbase://isadmin:********@localhost:8091/tasks'\r\nresult_expires: 3600\r\niconfig: <ipaascommon.ipaas_config.IpaasConfig object at 0x10fbbfd90>\r\nbroker_url: u'amqp://guest:********@localhost:5672//'\r\ntask_bucket: 'tasks'\r\naccept_content: ['pickle']\r\n```\r\n### Additional Debug output\r\n```\r\n[2017-12-13 15:39:57,860: INFO/MainProcess] Received task: celery.chord_unlock[e3139ae5-a67d-4f0c-8c54-73b1e19433d2] ETA:[2017-12-13 20:39:58.853535+00:00] \r\n[2017-12-13 15:39:57,861: DEBUG/MainProcess] basic.qos: prefetch_count->27\r\n[2017-12-13 15:39:58,859: DEBUG/MainProcess] TaskPool: Apply <function _fast_trace_task at 0x10b410b90> (args:('celery.chord_unlock', 'e3139ae5-a67d-4f0c-8c54-73b1e19433d2', {'origin': 'gen53678@silo2460', 'lang': 'py', 'task': 'celery.chord_unlock', 'group': None, 'root_id': '0acd3e0d-7532-445c-8916-b5fc8a6395ab', u'delivery_info': {u'priority': None, u'redelivered': False, u'routing_key': u'celery', u'exchange': u''}, 'expires': None, u'correlation_id': 'e3139ae5-a67d-4f0c-8c54-73b1e19433d2', 'retries': 311, 'timelimit': [None, None], 'argsrepr': \"('90c64bef-21ba-42f9-be75-fdd724375a7a', 
{'chord_size': 2, 'task': 'ipaassteprunner.tasks.transfer_data', 'subtask_type': None, 'kwargs': {}, 'args': (), 'options': {'chord_size': None, 'chain': [...], 'task_id': '9c6b5e1c-2089-4db7-9590-117aeaf782c7', 'root_id': '0acd3e0d-7532-445c-8916-b5fc8a6395ab', 'parent_id': 'c27c9565-19a6-4683-8180-60f0c25007e9', 'reply_to': '0a58093c-6fdd-3458-9a34-7d5e094ac6a8'}, 'immutable': False})\", 'eta': '2017-12-13T20:39:58.853535+00:00', 'parent_id': 'c27c9565-19a6-4683-8180-60f0c25007e9', u'reply_to':... kwargs:{})\r\n[2017-12-13 15:40:00,061: DEBUG/MainProcess] basic.qos: prefetch_count->26\r\n[2017-12-13 15:40:00,065: DEBUG/MainProcess] Task accepted: celery.chord_unlock[e3139ae5-a67d-4f0c-8c54-73b1e19433d2] pid:53679\r\n[2017-12-13 15:40:00,076: INFO/ForkPoolWorker-6] Task celery.chord_unlock[e3139ae5-a67d-4f0c-8c54-73b1e19433d2] retry: Retry in 1s: ValueFormatError()\r\n```\r\n\r\n### Stack trace from chord unlocking failure\r\n```python\r\nTraceback (most recent call last):\r\n File \"/Library/Python/2.7/site-packages/celery/app/trace.py\", line 374, in trace_task\r\n R = retval = fun(*args, **kwargs)\r\n File \"/Library/Python/2.7/site-packages/celery/app/trace.py\", line 629, in __protected_call__\r\n return self.run(*args, **kwargs)\r\n File \"/Library/Python/2.7/site-packages/celery/app/builtins.py\", line 75, in unlock_chord\r\n raise self.retry(countdown=interval, max_retries=max_retries)\r\n File \"/Library/Python/2.7/site-packages/celery/app/task.py\", line 689, in retry\r\n raise ret\r\nRetry: Retry in 1s\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Couchbase result store backend.\"\"\"\nfrom __future__ import absolute_import, unicode_literals\n\nimport logging\n\nfrom kombu.utils.encoding import str_t\nfrom kombu.utils.url import _parse_url\n\nfrom celery.exceptions import ImproperlyConfigured\n\nfrom .base import KeyValueStoreBackend\n\ntry:\n import couchbase_ffi # noqa\nexcept ImportError:\n pass # noqa\ntry:\n from couchbase import Couchbase\n from couchbase.connection import Connection\n from couchbase.exceptions import NotFoundError\nexcept ImportError:\n Couchbase = Connection = NotFoundError = None # noqa\n\n__all__ = ('CouchbaseBackend',)\n\n\nclass CouchbaseBackend(KeyValueStoreBackend):\n \"\"\"Couchbase backend.\n\n Raises:\n celery.exceptions.ImproperlyConfigured:\n if module :pypi:`couchbase` is not available.\n \"\"\"\n\n bucket = 'default'\n host = 'localhost'\n port = 8091\n username = None\n password = None\n quiet = False\n supports_autoexpire = True\n\n timeout = 2.5\n\n # Use str as couchbase key not bytes\n key_t = str_t\n\n def __init__(self, url=None, *args, **kwargs):\n kwargs.setdefault('expires_type', int)\n super(CouchbaseBackend, self).__init__(*args, **kwargs)\n self.url = url\n\n if Couchbase is None:\n raise ImproperlyConfigured(\n 'You need to install the couchbase library to use the '\n 'Couchbase backend.',\n )\n\n uhost = uport = uname = upass = ubucket = None\n if url:\n _, uhost, uport, uname, upass, ubucket, _ = _parse_url(url)\n ubucket = ubucket.strip('/') if ubucket else None\n\n config = self.app.conf.get('couchbase_backend_settings', None)\n if config is not None:\n if not isinstance(config, dict):\n raise ImproperlyConfigured(\n 'Couchbase backend settings should be grouped in a dict',\n )\n else:\n config = {}\n\n self.host = uhost or config.get('host', self.host)\n self.port = int(uport or config.get('port', self.port))\n self.bucket = ubucket or config.get('bucket', self.bucket)\n self.username = uname or 
config.get('username', self.username)\n self.password = upass or config.get('password', self.password)\n\n self._connection = None\n\n def _get_connection(self):\n \"\"\"Connect to the Couchbase server.\"\"\"\n if self._connection is None:\n kwargs = {'bucket': self.bucket, 'host': self.host}\n\n if self.port:\n kwargs.update({'port': self.port})\n if self.username:\n kwargs.update({'username': self.username})\n if self.password:\n kwargs.update({'password': self.password})\n\n logging.debug('couchbase settings %r', kwargs)\n self._connection = Connection(**kwargs)\n return self._connection\n\n @property\n def connection(self):\n return self._get_connection()\n\n def get(self, key):\n try:\n return self.connection.get(key).value\n except NotFoundError:\n return None\n\n def set(self, key, value):\n self.connection.set(key, value, ttl=self.expires)\n\n def mget(self, keys):\n return [self.get(key) for key in keys]\n\n def delete(self, key):\n self.connection.delete(key)\n", "path": "celery/backends/couchbase.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Couchbase result store backend.\"\"\"\nfrom __future__ import absolute_import, unicode_literals\n\nimport logging\n\nfrom kombu.utils.encoding import str_t\nfrom kombu.utils.url import _parse_url\n\nfrom celery.exceptions import ImproperlyConfigured\n\nfrom .base import KeyValueStoreBackend\n\ntry:\n import couchbase_ffi # noqa\nexcept ImportError:\n pass # noqa\ntry:\n from couchbase import Couchbase\n from couchbase.connection import Connection\n from couchbase.exceptions import NotFoundError\n from couchbase import FMT_AUTO\nexcept ImportError:\n Couchbase = Connection = NotFoundError = None # noqa\n\n__all__ = ('CouchbaseBackend',)\n\n\nclass CouchbaseBackend(KeyValueStoreBackend):\n \"\"\"Couchbase backend.\n\n Raises:\n celery.exceptions.ImproperlyConfigured:\n if module :pypi:`couchbase` is not available.\n \"\"\"\n\n bucket = 'default'\n host = 'localhost'\n port = 8091\n username = None\n password = None\n quiet = False\n supports_autoexpire = True\n\n timeout = 2.5\n\n # Use str as couchbase key not bytes\n key_t = str_t\n\n def __init__(self, url=None, *args, **kwargs):\n kwargs.setdefault('expires_type', int)\n super(CouchbaseBackend, self).__init__(*args, **kwargs)\n self.url = url\n\n if Couchbase is None:\n raise ImproperlyConfigured(\n 'You need to install the couchbase library to use the '\n 'Couchbase backend.',\n )\n\n uhost = uport = uname = upass = ubucket = None\n if url:\n _, uhost, uport, uname, upass, ubucket, _ = _parse_url(url)\n ubucket = ubucket.strip('/') if ubucket else None\n\n config = self.app.conf.get('couchbase_backend_settings', None)\n if config is not None:\n if not isinstance(config, dict):\n raise ImproperlyConfigured(\n 'Couchbase backend settings should be grouped in a dict',\n )\n else:\n config = {}\n\n self.host = uhost or config.get('host', self.host)\n self.port = int(uport or config.get('port', self.port))\n self.bucket = ubucket or config.get('bucket', self.bucket)\n self.username = uname or config.get('username', self.username)\n self.password = upass or config.get('password', self.password)\n\n self._connection = None\n\n def _get_connection(self):\n \"\"\"Connect to the Couchbase server.\"\"\"\n if self._connection is None:\n kwargs = {'bucket': self.bucket, 'host': self.host}\n\n if self.port:\n kwargs.update({'port': self.port})\n if self.username:\n kwargs.update({'username': self.username})\n if self.password:\n kwargs.update({'password': self.password})\n\n 
logging.debug('couchbase settings %r', kwargs)\n self._connection = Connection(**kwargs)\n return self._connection\n\n @property\n def connection(self):\n return self._get_connection()\n\n def get(self, key):\n try:\n return self.connection.get(key).value\n except NotFoundError:\n return None\n\n def set(self, key, value):\n self.connection.set(key, value, ttl=self.expires, format=FMT_AUTO)\n\n def mget(self, keys):\n return [self.get(key) for key in keys]\n\n def delete(self, key):\n self.connection.delete(key)\n", "path": "celery/backends/couchbase.py"}]}
| 2,991 | 191 |
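The record ending above patches Celery's Couchbase result backend so values are written with `FMT_AUTO`, which is what stops the `ValueFormatError` retries shown in the chord-unlock log. A minimal sketch of the same write path, assuming the legacy `couchbase` 2.x Python SDK that the record itself imports (`Connection`, `FMT_AUTO`); the `ResultStore` wrapper name is made up for illustration and is not part of Celery:

```python
# Sketch of the fixed write path: format=FMT_AUTO lets the SDK pick a
# serialization format per value (JSON, UTF-8 or pickle) instead of failing
# with ValueFormatError on results that are not plain JSON.
from couchbase import FMT_AUTO
from couchbase.connection import Connection


class ResultStore(object):
    """Tiny key-value wrapper mirroring the backend in the record above."""

    def __init__(self, bucket="default", host="localhost", ttl=0):
        self.ttl = ttl
        self._connection = Connection(bucket=bucket, host=host)

    def set(self, key, value):
        # The one-line change from the golden diff: pass format=FMT_AUTO.
        self._connection.set(key, value, ttl=self.ttl, format=FMT_AUTO)

    def get(self, key):
        return self._connection.get(key).value
```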
gh_patches_debug_33144
|
rasdani/github-patches
|
git_diff
|
scoutapp__scout_apm_python-358
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Instrument Starlette authentication
Starlette has a basic authentication framework: https://www.starlette.io/authentication/
We should probably copy what we do for Django here, tagging user ID's/ display names on traces.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/scout_apm/async_/starlette.py`
Content:
```
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import wrapt
5 from starlette.background import BackgroundTask
6 from starlette.requests import Request
7
8 import scout_apm.core
9 from scout_apm.core.tracked_request import TrackedRequest
10 from scout_apm.core.web_requests import (
11 create_filtered_path,
12 ignore_path,
13 track_amazon_request_queue_time,
14 track_request_queue_time,
15 )
16
17
18 class ScoutMiddleware:
19 def __init__(self, app):
20 self.app = app
21 installed = scout_apm.core.install()
22 self._do_nothing = not installed
23 if installed:
24 install_background_instrumentation()
25
26 async def __call__(self, scope, receive, send):
27 if self._do_nothing or scope["type"] != "http":
28 await self.app(scope, receive, send)
29 return
30
31 request = Request(scope)
32 tracked_request = TrackedRequest.instance()
33 # Can't name controller until post-routing - see final clause
34 controller_span = tracked_request.start_span(operation="Controller/Unknown")
35
36 tracked_request.tag(
37 "path",
38 create_filtered_path(request.url.path, request.query_params.multi_items()),
39 )
40 if ignore_path(request.url.path):
41 tracked_request.tag("ignore_transaction", True)
42
43 user_ip = (
44 request.headers.get("x-forwarded-for", default="").split(",")[0]
45 or request.headers.get("client-ip", default="").split(",")[0]
46 or request.client.host
47 )
48 tracked_request.tag("user_ip", user_ip)
49
50 queue_time = request.headers.get(
51 "x-queue-start", default=""
52 ) or request.headers.get("x-request-start", default="")
53 tracked_queue_time = track_request_queue_time(queue_time, tracked_request)
54 if not tracked_queue_time:
55 amazon_queue_time = request.headers.get("x-amzn-trace-id", default="")
56 track_amazon_request_queue_time(amazon_queue_time, tracked_request)
57
58 def rename_controller_span_from_endpoint():
59 if "endpoint" in scope:
60 # Rename top span
61 endpoint = scope["endpoint"]
62 controller_span.operation = "Controller/{}.{}".format(
63 endpoint.__module__, endpoint.__qualname__
64 )
65 tracked_request.is_real_request = True
66
67 async def wrapped_send(data):
68 # Finish HTTP span when body finishes sending, not later (e.g.
69 # after background tasks)
70 if data.get("type", None) == "http.response.body" and not data.get(
71 "more_body", False
72 ):
73 rename_controller_span_from_endpoint()
74 tracked_request.stop_span()
75 return await send(data)
76
77 try:
78 await self.app(scope, receive, wrapped_send)
79 except Exception as exc:
80 tracked_request.tag("error", "true")
81 raise exc
82 finally:
83 if tracked_request.end_time is None:
84 rename_controller_span_from_endpoint()
85 tracked_request.stop_span()
86
87
88 background_instrumentation_installed = False
89
90
91 def install_background_instrumentation():
92 global background_instrumentation_installed
93 if background_instrumentation_installed:
94 return
95 background_instrumentation_installed = True
96
97 @wrapt.decorator
98 async def wrapped_background_call(wrapped, instance, args, kwargs):
99 tracked_request = TrackedRequest.instance()
100 tracked_request.is_real_request = True
101 tracked_request.start_span(
102 operation="Job/{}.{}".format(
103 instance.func.__module__, instance.func.__qualname__
104 )
105 )
106 try:
107 return await wrapped(*args, **kwargs)
108 finally:
109 tracked_request.stop_span()
110
111 BackgroundTask.__call__ = wrapped_background_call(BackgroundTask.__call__)
112
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/scout_apm/async_/starlette.py b/src/scout_apm/async_/starlette.py
--- a/src/scout_apm/async_/starlette.py
+++ b/src/scout_apm/async_/starlette.py
@@ -55,7 +55,7 @@
amazon_queue_time = request.headers.get("x-amzn-trace-id", default="")
track_amazon_request_queue_time(amazon_queue_time, tracked_request)
- def rename_controller_span_from_endpoint():
+ def grab_extra_data():
if "endpoint" in scope:
# Rename top span
endpoint = scope["endpoint"]
@@ -64,13 +64,23 @@
)
tracked_request.is_real_request = True
+ # From AuthenticationMiddleware - bypass request.user because it
+ # throws AssertionError if 'user' is not in Scope, and we need a
+ # try/except already
+ try:
+ username = scope["user"].display_name
+ except (KeyError, AttributeError):
+ pass
+ else:
+ tracked_request.tag("username", username)
+
async def wrapped_send(data):
# Finish HTTP span when body finishes sending, not later (e.g.
# after background tasks)
if data.get("type", None) == "http.response.body" and not data.get(
"more_body", False
):
- rename_controller_span_from_endpoint()
+ grab_extra_data()
tracked_request.stop_span()
return await send(data)
@@ -81,7 +91,7 @@
raise exc
finally:
if tracked_request.end_time is None:
- rename_controller_span_from_endpoint()
+ grab_extra_data()
tracked_request.stop_span()
|
{"golden_diff": "diff --git a/src/scout_apm/async_/starlette.py b/src/scout_apm/async_/starlette.py\n--- a/src/scout_apm/async_/starlette.py\n+++ b/src/scout_apm/async_/starlette.py\n@@ -55,7 +55,7 @@\n amazon_queue_time = request.headers.get(\"x-amzn-trace-id\", default=\"\")\n track_amazon_request_queue_time(amazon_queue_time, tracked_request)\n \n- def rename_controller_span_from_endpoint():\n+ def grab_extra_data():\n if \"endpoint\" in scope:\n # Rename top span\n endpoint = scope[\"endpoint\"]\n@@ -64,13 +64,23 @@\n )\n tracked_request.is_real_request = True\n \n+ # From AuthenticationMiddleware - bypass request.user because it\n+ # throws AssertionError if 'user' is not in Scope, and we need a\n+ # try/except already\n+ try:\n+ username = scope[\"user\"].display_name\n+ except (KeyError, AttributeError):\n+ pass\n+ else:\n+ tracked_request.tag(\"username\", username)\n+\n async def wrapped_send(data):\n # Finish HTTP span when body finishes sending, not later (e.g.\n # after background tasks)\n if data.get(\"type\", None) == \"http.response.body\" and not data.get(\n \"more_body\", False\n ):\n- rename_controller_span_from_endpoint()\n+ grab_extra_data()\n tracked_request.stop_span()\n return await send(data)\n \n@@ -81,7 +91,7 @@\n raise exc\n finally:\n if tracked_request.end_time is None:\n- rename_controller_span_from_endpoint()\n+ grab_extra_data()\n tracked_request.stop_span()\n", "issue": "Instrument Starlette authentication\nStarlette has a basic authentication framework: https://www.starlette.io/authentication/\r\n\r\nWe should probably copy what we do for Django here, tagging user ID's/ display names on traces.\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport wrapt\nfrom starlette.background import BackgroundTask\nfrom starlette.requests import Request\n\nimport scout_apm.core\nfrom scout_apm.core.tracked_request import TrackedRequest\nfrom scout_apm.core.web_requests import (\n create_filtered_path,\n ignore_path,\n track_amazon_request_queue_time,\n track_request_queue_time,\n)\n\n\nclass ScoutMiddleware:\n def __init__(self, app):\n self.app = app\n installed = scout_apm.core.install()\n self._do_nothing = not installed\n if installed:\n install_background_instrumentation()\n\n async def __call__(self, scope, receive, send):\n if self._do_nothing or scope[\"type\"] != \"http\":\n await self.app(scope, receive, send)\n return\n\n request = Request(scope)\n tracked_request = TrackedRequest.instance()\n # Can't name controller until post-routing - see final clause\n controller_span = tracked_request.start_span(operation=\"Controller/Unknown\")\n\n tracked_request.tag(\n \"path\",\n create_filtered_path(request.url.path, request.query_params.multi_items()),\n )\n if ignore_path(request.url.path):\n tracked_request.tag(\"ignore_transaction\", True)\n\n user_ip = (\n request.headers.get(\"x-forwarded-for\", default=\"\").split(\",\")[0]\n or request.headers.get(\"client-ip\", default=\"\").split(\",\")[0]\n or request.client.host\n )\n tracked_request.tag(\"user_ip\", user_ip)\n\n queue_time = request.headers.get(\n \"x-queue-start\", default=\"\"\n ) or request.headers.get(\"x-request-start\", default=\"\")\n tracked_queue_time = track_request_queue_time(queue_time, tracked_request)\n if not tracked_queue_time:\n amazon_queue_time = request.headers.get(\"x-amzn-trace-id\", default=\"\")\n track_amazon_request_queue_time(amazon_queue_time, tracked_request)\n\n def 
rename_controller_span_from_endpoint():\n if \"endpoint\" in scope:\n # Rename top span\n endpoint = scope[\"endpoint\"]\n controller_span.operation = \"Controller/{}.{}\".format(\n endpoint.__module__, endpoint.__qualname__\n )\n tracked_request.is_real_request = True\n\n async def wrapped_send(data):\n # Finish HTTP span when body finishes sending, not later (e.g.\n # after background tasks)\n if data.get(\"type\", None) == \"http.response.body\" and not data.get(\n \"more_body\", False\n ):\n rename_controller_span_from_endpoint()\n tracked_request.stop_span()\n return await send(data)\n\n try:\n await self.app(scope, receive, wrapped_send)\n except Exception as exc:\n tracked_request.tag(\"error\", \"true\")\n raise exc\n finally:\n if tracked_request.end_time is None:\n rename_controller_span_from_endpoint()\n tracked_request.stop_span()\n\n\nbackground_instrumentation_installed = False\n\n\ndef install_background_instrumentation():\n global background_instrumentation_installed\n if background_instrumentation_installed:\n return\n background_instrumentation_installed = True\n\n @wrapt.decorator\n async def wrapped_background_call(wrapped, instance, args, kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.is_real_request = True\n tracked_request.start_span(\n operation=\"Job/{}.{}\".format(\n instance.func.__module__, instance.func.__qualname__\n )\n )\n try:\n return await wrapped(*args, **kwargs)\n finally:\n tracked_request.stop_span()\n\n BackgroundTask.__call__ = wrapped_background_call(BackgroundTask.__call__)\n", "path": "src/scout_apm/async_/starlette.py"}], "after_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport wrapt\nfrom starlette.background import BackgroundTask\nfrom starlette.requests import Request\n\nimport scout_apm.core\nfrom scout_apm.core.tracked_request import TrackedRequest\nfrom scout_apm.core.web_requests import (\n create_filtered_path,\n ignore_path,\n track_amazon_request_queue_time,\n track_request_queue_time,\n)\n\n\nclass ScoutMiddleware:\n def __init__(self, app):\n self.app = app\n installed = scout_apm.core.install()\n self._do_nothing = not installed\n if installed:\n install_background_instrumentation()\n\n async def __call__(self, scope, receive, send):\n if self._do_nothing or scope[\"type\"] != \"http\":\n await self.app(scope, receive, send)\n return\n\n request = Request(scope)\n tracked_request = TrackedRequest.instance()\n # Can't name controller until post-routing - see final clause\n controller_span = tracked_request.start_span(operation=\"Controller/Unknown\")\n\n tracked_request.tag(\n \"path\",\n create_filtered_path(request.url.path, request.query_params.multi_items()),\n )\n if ignore_path(request.url.path):\n tracked_request.tag(\"ignore_transaction\", True)\n\n user_ip = (\n request.headers.get(\"x-forwarded-for\", default=\"\").split(\",\")[0]\n or request.headers.get(\"client-ip\", default=\"\").split(\",\")[0]\n or request.client.host\n )\n tracked_request.tag(\"user_ip\", user_ip)\n\n queue_time = request.headers.get(\n \"x-queue-start\", default=\"\"\n ) or request.headers.get(\"x-request-start\", default=\"\")\n tracked_queue_time = track_request_queue_time(queue_time, tracked_request)\n if not tracked_queue_time:\n amazon_queue_time = request.headers.get(\"x-amzn-trace-id\", default=\"\")\n track_amazon_request_queue_time(amazon_queue_time, tracked_request)\n\n def grab_extra_data():\n if \"endpoint\" in scope:\n # Rename top 
span\n endpoint = scope[\"endpoint\"]\n controller_span.operation = \"Controller/{}.{}\".format(\n endpoint.__module__, endpoint.__qualname__\n )\n tracked_request.is_real_request = True\n\n # From AuthenticationMiddleware - bypass request.user because it\n # throws AssertionError if 'user' is not in Scope, and we need a\n # try/except already\n try:\n username = scope[\"user\"].display_name\n except (KeyError, AttributeError):\n pass\n else:\n tracked_request.tag(\"username\", username)\n\n async def wrapped_send(data):\n # Finish HTTP span when body finishes sending, not later (e.g.\n # after background tasks)\n if data.get(\"type\", None) == \"http.response.body\" and not data.get(\n \"more_body\", False\n ):\n grab_extra_data()\n tracked_request.stop_span()\n return await send(data)\n\n try:\n await self.app(scope, receive, wrapped_send)\n except Exception as exc:\n tracked_request.tag(\"error\", \"true\")\n raise exc\n finally:\n if tracked_request.end_time is None:\n grab_extra_data()\n tracked_request.stop_span()\n\n\nbackground_instrumentation_installed = False\n\n\ndef install_background_instrumentation():\n global background_instrumentation_installed\n if background_instrumentation_installed:\n return\n background_instrumentation_installed = True\n\n @wrapt.decorator\n async def wrapped_background_call(wrapped, instance, args, kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.is_real_request = True\n tracked_request.start_span(\n operation=\"Job/{}.{}\".format(\n instance.func.__module__, instance.func.__qualname__\n )\n )\n try:\n return await wrapped(*args, **kwargs)\n finally:\n tracked_request.stop_span()\n\n BackgroundTask.__call__ = wrapped_background_call(BackgroundTask.__call__)\n", "path": "src/scout_apm/async_/starlette.py"}]}
| 1,318 | 380 |
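The golden diff above does two things: it renames the span-renaming helper to `grab_extra_data` and tags the authenticated user's display name by reading `scope["user"]` defensively, since Starlette's `request.user` raises an assertion when no authentication middleware has populated the scope. A stripped-down sketch of that defensive lookup in a bare ASGI middleware; `record_username` is a stand-in for the tracked-request tagging call, not a real Scout API:

```python
# Minimal ASGI middleware sketch: pick up the display name that Starlette's
# AuthenticationMiddleware stores on the scope, tolerating its absence.
class UsernameTaggingMiddleware:
    def __init__(self, app, record_username):
        self.app = app
        # Stand-in for tracked_request.tag("username", ...) in the real patch.
        self.record_username = record_username

    async def __call__(self, scope, receive, send):
        if scope["type"] == "http":
            # Bypass request.user on purpose: it asserts when 'user' is missing
            # from the scope, and a try/except is needed either way.
            try:
                username = scope["user"].display_name
            except (KeyError, AttributeError):
                pass
            else:
                self.record_username(username)
        await self.app(scope, receive, send)
```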
gh_patches_debug_25796
|
rasdani/github-patches
|
git_diff
|
akvo__akvo-rsr-3426
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Results framework loads very slowly for projects with lot of indicator dimensions
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `akvo/rest/views/indicator_dimension.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Akvo RSR is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6
7
8 from akvo.rsr.models import IndicatorDimension
9
10 from ..serializers import IndicatorDimensionSerializer
11 from ..viewsets import PublicProjectViewSet
12
13
14 class IndicatorDimensionViewSet(PublicProjectViewSet):
15 """
16 """
17 queryset = IndicatorDimension.objects.all()
18 serializer_class = IndicatorDimensionSerializer
19 project_relation = 'indicator__result__project__'
20
```
Path: `akvo/rest/pagination.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Akvo RSR is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6
7 from rest_framework import pagination
8 from rest_framework.response import Response
9
10
11 class LimitSizePageNumberPagination(pagination.PageNumberPagination):
12 page_size = 30
13 page_size_query_param = 'limit'
14 max_page_size = 100
15
16
17 class TastypieOffsetPagination(pagination.LimitOffsetPagination):
18
19 def get_paginated_response(self, data):
20 """ Emulate the old style Tastypie format if the URL contains /api/v1/
21 """
22 return Response({
23 'meta': {
24 'next': self.get_next_link(),
25 'previous': self.get_previous_link(),
26 'total_count': self.count,
27 'limit': self.limit,
28 'offset': self.offset,
29 },
30 'objects': data
31 })
32
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/akvo/rest/pagination.py b/akvo/rest/pagination.py
--- a/akvo/rest/pagination.py
+++ b/akvo/rest/pagination.py
@@ -8,12 +8,18 @@
from rest_framework.response import Response
-class LimitSizePageNumberPagination(pagination.PageNumberPagination):
+class StandardSizePageNumberPagination(pagination.PageNumberPagination):
page_size = 30
page_size_query_param = 'limit'
max_page_size = 100
+class LargeSizePageNumberPagination(pagination.PageNumberPagination):
+ page_size = 100
+ page_size_query_param = 'limit'
+ max_page_size = 1000
+
+
class TastypieOffsetPagination(pagination.LimitOffsetPagination):
def get_paginated_response(self, data):
diff --git a/akvo/rest/views/indicator_dimension.py b/akvo/rest/views/indicator_dimension.py
--- a/akvo/rest/views/indicator_dimension.py
+++ b/akvo/rest/views/indicator_dimension.py
@@ -6,6 +6,7 @@
from akvo.rsr.models import IndicatorDimension
+from akvo.rest.pagination import LargeSizePageNumberPagination
from ..serializers import IndicatorDimensionSerializer
from ..viewsets import PublicProjectViewSet
@@ -17,3 +18,4 @@
queryset = IndicatorDimension.objects.all()
serializer_class = IndicatorDimensionSerializer
project_relation = 'indicator__result__project__'
+ pagination_class = LargeSizePageNumberPagination
|
{"golden_diff": "diff --git a/akvo/rest/pagination.py b/akvo/rest/pagination.py\n--- a/akvo/rest/pagination.py\n+++ b/akvo/rest/pagination.py\n@@ -8,12 +8,18 @@\n from rest_framework.response import Response\n \n \n-class LimitSizePageNumberPagination(pagination.PageNumberPagination):\n+class StandardSizePageNumberPagination(pagination.PageNumberPagination):\n page_size = 30\n page_size_query_param = 'limit'\n max_page_size = 100\n \n \n+class LargeSizePageNumberPagination(pagination.PageNumberPagination):\n+ page_size = 100\n+ page_size_query_param = 'limit'\n+ max_page_size = 1000\n+\n+\n class TastypieOffsetPagination(pagination.LimitOffsetPagination):\n \n def get_paginated_response(self, data):\ndiff --git a/akvo/rest/views/indicator_dimension.py b/akvo/rest/views/indicator_dimension.py\n--- a/akvo/rest/views/indicator_dimension.py\n+++ b/akvo/rest/views/indicator_dimension.py\n@@ -6,6 +6,7 @@\n \n \n from akvo.rsr.models import IndicatorDimension\n+from akvo.rest.pagination import LargeSizePageNumberPagination\n \n from ..serializers import IndicatorDimensionSerializer\n from ..viewsets import PublicProjectViewSet\n@@ -17,3 +18,4 @@\n queryset = IndicatorDimension.objects.all()\n serializer_class = IndicatorDimensionSerializer\n project_relation = 'indicator__result__project__'\n+ pagination_class = LargeSizePageNumberPagination\n", "issue": "Results framework loads very slowly for projects with lot of indicator dimensions\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\n\nfrom akvo.rsr.models import IndicatorDimension\n\nfrom ..serializers import IndicatorDimensionSerializer\nfrom ..viewsets import PublicProjectViewSet\n\n\nclass IndicatorDimensionViewSet(PublicProjectViewSet):\n \"\"\"\n \"\"\"\n queryset = IndicatorDimension.objects.all()\n serializer_class = IndicatorDimensionSerializer\n project_relation = 'indicator__result__project__'\n", "path": "akvo/rest/views/indicator_dimension.py"}, {"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\nfrom rest_framework import pagination\nfrom rest_framework.response import Response\n\n\nclass LimitSizePageNumberPagination(pagination.PageNumberPagination):\n page_size = 30\n page_size_query_param = 'limit'\n max_page_size = 100\n\n\nclass TastypieOffsetPagination(pagination.LimitOffsetPagination):\n\n def get_paginated_response(self, data):\n \"\"\" Emulate the old style Tastypie format if the URL contains /api/v1/\n \"\"\"\n return Response({\n 'meta': {\n 'next': self.get_next_link(),\n 'previous': self.get_previous_link(),\n 'total_count': self.count,\n 'limit': self.limit,\n 'offset': self.offset,\n },\n 'objects': data\n })\n", "path": "akvo/rest/pagination.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\n\nfrom akvo.rsr.models import 
IndicatorDimension\nfrom akvo.rest.pagination import LargeSizePageNumberPagination\n\nfrom ..serializers import IndicatorDimensionSerializer\nfrom ..viewsets import PublicProjectViewSet\n\n\nclass IndicatorDimensionViewSet(PublicProjectViewSet):\n \"\"\"\n \"\"\"\n queryset = IndicatorDimension.objects.all()\n serializer_class = IndicatorDimensionSerializer\n project_relation = 'indicator__result__project__'\n pagination_class = LargeSizePageNumberPagination\n", "path": "akvo/rest/views/indicator_dimension.py"}, {"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\nfrom rest_framework import pagination\nfrom rest_framework.response import Response\n\n\nclass StandardSizePageNumberPagination(pagination.PageNumberPagination):\n page_size = 30\n page_size_query_param = 'limit'\n max_page_size = 100\n\n\nclass LargeSizePageNumberPagination(pagination.PageNumberPagination):\n page_size = 100\n page_size_query_param = 'limit'\n max_page_size = 1000\n\n\nclass TastypieOffsetPagination(pagination.LimitOffsetPagination):\n\n def get_paginated_response(self, data):\n \"\"\" Emulate the old style Tastypie format if the URL contains /api/v1/\n \"\"\"\n return Response({\n 'meta': {\n 'next': self.get_next_link(),\n 'previous': self.get_previous_link(),\n 'total_count': self.count,\n 'limit': self.limit,\n 'offset': self.offset,\n },\n 'objects': data\n })\n", "path": "akvo/rest/pagination.py"}]}
| 752 | 336 |
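The slow results framework in the record above is addressed purely through pagination: a larger `PageNumberPagination` subclass is attached to the indicator-dimension endpoint, so a project's dimensions arrive in far fewer round trips than with the 30-item default. The underlying Django REST Framework pattern, sketched here with placeholder model and serializer names rather than Akvo's:

```python
# DRF pattern from the patch: a per-viewset pagination class with a bigger page.
from rest_framework import pagination, serializers, viewsets

from myapp.models import Dimension  # placeholder model for illustration


class LargePageNumberPagination(pagination.PageNumberPagination):
    page_size = 100                  # default page size for this endpoint
    page_size_query_param = "limit"  # clients may ask for more via ?limit=...
    max_page_size = 1000             # ceiling on what a client can request


class DimensionSerializer(serializers.ModelSerializer):
    class Meta:
        model = Dimension
        fields = "__all__"


class DimensionViewSet(viewsets.ReadOnlyModelViewSet):
    queryset = Dimension.objects.all()
    serializer_class = DimensionSerializer
    pagination_class = LargePageNumberPagination  # overrides the project default
```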
gh_patches_debug_9421
|
rasdani/github-patches
|
git_diff
|
arviz-devs__arviz-374
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add API docs for plot_ppc
https://arviz-devs.github.io/arviz/generated/arviz.plot_ppc.html#arviz.plot_ppc
See Posterior Docs for example
https://arviz-devs.github.io/arviz/generated/arviz.plot_posterior.html#arviz.plot_posterior
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `arviz/plots/ppcplot.py`
Content:
```
1 """Posterior predictive plot."""
2 import numpy as np
3 from .kdeplot import plot_kde, _fast_kde
4 from .plot_utils import _scale_fig_size, _create_axes_grid, default_grid
5
6
7 def plot_ppc(
8 data, kind="density", alpha=0.2, mean=True, figsize=None, textsize=None, data_pairs=None
9 ):
10 """
11 Plot for Posterior Predictive checks.
12
13 Note that this plot will flatten out any dimensions in the posterior predictive variables.
14
15 Parameters
16 ----------
17 data : Array-like
18 Observed values
19 kind : str
20 Type of plot to display (density or cumulative)
21 alpha : float
22 Opacity of posterior predictive density curves
23 mean : bool
24 Whether or not to plot the mean posterior predictive distribution. Defaults to True
25 figsize : tuple
26 Figure size. If None it will be defined automatically.
27 textsize: float
28 Text size scaling factor for labels, titles and lines. If None it will be autoscaled based
29 on figsize.
30 data_pairs : dict
31 Dictionary containing relations between observed data and posterior predictive data.
32 Dictionary structure:
33 Key = data var_name
34 Value = posterior predictive var_name
35 Example: `data_pairs = {'y' : 'y_hat'}`
36
37 Returns
38 -------
39 axes : matplotlib axes
40 """
41 for group in ("posterior_predictive", "observed_data"):
42 if not hasattr(data, group):
43 raise TypeError(
44 '`data` argument must have the group "{group}" for ppcplot'.format(group=group)
45 )
46
47 if kind.lower() not in ("density", "cumulative"):
48 raise TypeError("`kind` argument must be either `density` or `cumulative`")
49
50 if data_pairs is None:
51 data_pairs = {}
52
53 observed = data.observed_data
54 posterior_predictive = data.posterior_predictive
55
56 rows, cols = default_grid(len(observed.data_vars))
57
58 (figsize, ax_labelsize, _, xt_labelsize, linewidth, _) = _scale_fig_size(
59 figsize, textsize, rows, cols
60 )
61
62 _, axes = _create_axes_grid(len(observed.data_vars), rows, cols, figsize=figsize)
63
64 for ax, var_name in zip(np.atleast_1d(axes), observed.data_vars):
65 dtype = observed[var_name].dtype.kind
66 if kind == "density":
67 if dtype == "f":
68 plot_kde(
69 observed[var_name].values.flatten(),
70 label="Observed {}".format(var_name),
71 plot_kwargs={"color": "k", "linewidth": linewidth, "zorder": 3},
72 fill_kwargs={"alpha": 0},
73 ax=ax,
74 )
75 else:
76 vals = observed[var_name].values.flatten()
77 nbins = round(len(vals) ** 0.5)
78 hist, bin_edges = np.histogram(vals, bins=nbins, density=True)
79 hist = np.concatenate((hist[:1], hist))
80 ax.plot(
81 bin_edges,
82 hist,
83 label="Observed {}".format(var_name),
84 color="k",
85 linewidth=linewidth,
86 zorder=3,
87 drawstyle="steps-pre",
88 )
89 pp_var_name = data_pairs.get(var_name, var_name)
90 # run plot_kde manually with one plot call
91 pp_densities = []
92 for _, chain_vals in posterior_predictive[pp_var_name].groupby("chain"):
93 for _, vals in chain_vals.groupby("draw"):
94 if dtype == "f":
95 pp_density, lower, upper = _fast_kde(vals)
96 pp_x = np.linspace(lower, upper, len(pp_density))
97 pp_densities.extend([pp_x, pp_density])
98 else:
99 nbins = round(len(vals) ** 0.5)
100 hist, bin_edges = np.histogram(vals, bins=nbins, density=True)
101 hist = np.concatenate((hist[:1], hist))
102 pp_densities.extend([bin_edges, hist])
103 plot_kwargs = {"color": "C5", "alpha": alpha, "linewidth": 0.5 * linewidth}
104 if dtype == "i":
105 plot_kwargs["drawstyle"] = "steps-pre"
106 ax.plot(*pp_densities, **plot_kwargs)
107 ax.plot([], color="C5", label="Posterior predictive {}".format(pp_var_name))
108 if mean:
109 if dtype == "f":
110 plot_kde(
111 posterior_predictive[pp_var_name].values.flatten(),
112 plot_kwargs={
113 "color": "C0",
114 "linestyle": "--",
115 "linewidth": linewidth,
116 "zorder": 2,
117 },
118 label="Posterior predictive mean {}".format(pp_var_name),
119 ax=ax,
120 )
121 else:
122 vals = posterior_predictive[pp_var_name].values.flatten()
123 nbins = round(len(vals) ** 0.5)
124 hist, bin_edges = np.histogram(vals, bins=nbins, density=True)
125 hist = np.concatenate((hist[:1], hist))
126 ax.plot(
127 bin_edges,
128 hist,
129 color="C0",
130 linewidth=linewidth,
131 label="Posterior predictive mean {}".format(pp_var_name),
132 zorder=2,
133 linestyle="--",
134 drawstyle="steps-pre",
135 )
136 if var_name != pp_var_name:
137 xlabel = "{} / {}".format(var_name, pp_var_name)
138 else:
139 xlabel = var_name
140 ax.set_xlabel(xlabel, fontsize=ax_labelsize)
141 ax.tick_params(labelsize=xt_labelsize)
142 ax.set_yticks([])
143
144 elif kind == "cumulative":
145 if dtype == "f":
146 ax.plot(
147 *_empirical_cdf(observed[var_name].values.flatten()),
148 color="k",
149 linewidth=linewidth,
150 label="Observed {}".format(var_name),
151 zorder=3
152 )
153 else:
154 ax.plot(
155 *_empirical_cdf(observed[var_name].values.flatten()),
156 color="k",
157 linewidth=linewidth,
158 label="Observed {}".format(var_name),
159 drawstyle="steps-pre",
160 zorder=3
161 )
162 pp_var_name = data_pairs.get(var_name, var_name)
163 # run plot_kde manually with one plot call
164 pp_densities = []
165 for _, chain_vals in posterior_predictive[pp_var_name].groupby("chain"):
166 for _, vals in chain_vals.groupby("draw"):
167 pp_x, pp_density = _empirical_cdf(vals)
168 pp_densities.extend([pp_x, pp_density])
169 if dtype == "f":
170 ax.plot(*pp_densities, alpha=alpha, color="C5", linewidth=linewidth)
171 else:
172 ax.plot(
173 *pp_densities,
174 alpha=alpha,
175 color="C5",
176 drawstyle="steps-pre",
177 linewidth=linewidth
178 )
179 ax.plot([], color="C5", label="Posterior predictive {}".format(pp_var_name))
180 if mean:
181 if dtype == "f":
182 ax.plot(
183 *_empirical_cdf(posterior_predictive[pp_var_name].values.flatten()),
184 color="C0",
185 linestyle="--",
186 linewidth=linewidth,
187 label="Posterior predictive mean {}".format(pp_var_name)
188 )
189 else:
190 ax.plot(
191 *_empirical_cdf(posterior_predictive[pp_var_name].values.flatten()),
192 color="C0",
193 linestyle="--",
194 linewidth=linewidth,
195 drawstyle="steps-pre",
196 label="Posterior predictive mean {}".format(pp_var_name)
197 )
198 if var_name != pp_var_name:
199 xlabel = "{} / {}".format(var_name, pp_var_name)
200 else:
201 xlabel = var_name
202 ax.set_xlabel(var_name, fontsize=ax_labelsize)
203 ax.set_yticks([0, 0.5, 1])
204 ax.legend(fontsize=xt_labelsize)
205 return axes
206
207
208 def _empirical_cdf(data):
209 """Compute empirical cdf of a numpy array.
210
211 Parameters
212 ----------
213 data : np.array
214 1d array
215
216 Returns
217 -------
218 np.array, np.array
219 x and y coordinates for the empirical cdf of the data
220 """
221 return np.sort(data), np.linspace(0, 1, len(data))
222
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/arviz/plots/ppcplot.py b/arviz/plots/ppcplot.py
--- a/arviz/plots/ppcplot.py
+++ b/arviz/plots/ppcplot.py
@@ -37,6 +37,25 @@
Returns
-------
axes : matplotlib axes
+
+ Examples
+ --------
+ Plot the observed data KDE overlaid on posterior predictive KDEs.
+
+ .. plot::
+ :context: close-figs
+
+ >>> import arviz as az
+ >>> data = az.load_arviz_data('radon')
+ >>> az.plot_ppc(data)
+
+ Plot the overlay with empirical CDFs.
+
+ .. plot::
+ :context: close-figs
+
+ >>> az.plot_ppc(data, kind='cumulative')
+
"""
for group in ("posterior_predictive", "observed_data"):
if not hasattr(data, group):
|
{"golden_diff": "diff --git a/arviz/plots/ppcplot.py b/arviz/plots/ppcplot.py\n--- a/arviz/plots/ppcplot.py\n+++ b/arviz/plots/ppcplot.py\n@@ -37,6 +37,25 @@\n Returns\n -------\n axes : matplotlib axes\n+\n+ Examples\n+ --------\n+ Plot the observed data KDE overlaid on posterior predictive KDEs.\n+\n+ .. plot::\n+ :context: close-figs\n+\n+ >>> import arviz as az\n+ >>> data = az.load_arviz_data('radon')\n+ >>> az.plot_ppc(data)\n+\n+ Plot the overlay with empirical CDFs.\n+\n+ .. plot::\n+ :context: close-figs\n+\n+ >>> az.plot_ppc(data, kind='cumulative')\n+\n \"\"\"\n for group in (\"posterior_predictive\", \"observed_data\"):\n if not hasattr(data, group):\n", "issue": "Add API docs for plot_ppc\nhttps://arviz-devs.github.io/arviz/generated/arviz.plot_ppc.html#arviz.plot_ppc\r\n\r\nSee Posterior Docs for example\r\nhttps://arviz-devs.github.io/arviz/generated/arviz.plot_posterior.html#arviz.plot_posterior\n", "before_files": [{"content": "\"\"\"Posterior predictive plot.\"\"\"\nimport numpy as np\nfrom .kdeplot import plot_kde, _fast_kde\nfrom .plot_utils import _scale_fig_size, _create_axes_grid, default_grid\n\n\ndef plot_ppc(\n data, kind=\"density\", alpha=0.2, mean=True, figsize=None, textsize=None, data_pairs=None\n):\n \"\"\"\n Plot for Posterior Predictive checks.\n\n Note that this plot will flatten out any dimensions in the posterior predictive variables.\n\n Parameters\n ----------\n data : Array-like\n Observed values\n kind : str\n Type of plot to display (density or cumulative)\n alpha : float\n Opacity of posterior predictive density curves\n mean : bool\n Whether or not to plot the mean posterior predictive distribution. Defaults to True\n figsize : tuple\n Figure size. If None it will be defined automatically.\n textsize: float\n Text size scaling factor for labels, titles and lines. 
If None it will be autoscaled based\n on figsize.\n data_pairs : dict\n Dictionary containing relations between observed data and posterior predictive data.\n Dictionary structure:\n Key = data var_name\n Value = posterior predictive var_name\n Example: `data_pairs = {'y' : 'y_hat'}`\n\n Returns\n -------\n axes : matplotlib axes\n \"\"\"\n for group in (\"posterior_predictive\", \"observed_data\"):\n if not hasattr(data, group):\n raise TypeError(\n '`data` argument must have the group \"{group}\" for ppcplot'.format(group=group)\n )\n\n if kind.lower() not in (\"density\", \"cumulative\"):\n raise TypeError(\"`kind` argument must be either `density` or `cumulative`\")\n\n if data_pairs is None:\n data_pairs = {}\n\n observed = data.observed_data\n posterior_predictive = data.posterior_predictive\n\n rows, cols = default_grid(len(observed.data_vars))\n\n (figsize, ax_labelsize, _, xt_labelsize, linewidth, _) = _scale_fig_size(\n figsize, textsize, rows, cols\n )\n\n _, axes = _create_axes_grid(len(observed.data_vars), rows, cols, figsize=figsize)\n\n for ax, var_name in zip(np.atleast_1d(axes), observed.data_vars):\n dtype = observed[var_name].dtype.kind\n if kind == \"density\":\n if dtype == \"f\":\n plot_kde(\n observed[var_name].values.flatten(),\n label=\"Observed {}\".format(var_name),\n plot_kwargs={\"color\": \"k\", \"linewidth\": linewidth, \"zorder\": 3},\n fill_kwargs={\"alpha\": 0},\n ax=ax,\n )\n else:\n vals = observed[var_name].values.flatten()\n nbins = round(len(vals) ** 0.5)\n hist, bin_edges = np.histogram(vals, bins=nbins, density=True)\n hist = np.concatenate((hist[:1], hist))\n ax.plot(\n bin_edges,\n hist,\n label=\"Observed {}\".format(var_name),\n color=\"k\",\n linewidth=linewidth,\n zorder=3,\n drawstyle=\"steps-pre\",\n )\n pp_var_name = data_pairs.get(var_name, var_name)\n # run plot_kde manually with one plot call\n pp_densities = []\n for _, chain_vals in posterior_predictive[pp_var_name].groupby(\"chain\"):\n for _, vals in chain_vals.groupby(\"draw\"):\n if dtype == \"f\":\n pp_density, lower, upper = _fast_kde(vals)\n pp_x = np.linspace(lower, upper, len(pp_density))\n pp_densities.extend([pp_x, pp_density])\n else:\n nbins = round(len(vals) ** 0.5)\n hist, bin_edges = np.histogram(vals, bins=nbins, density=True)\n hist = np.concatenate((hist[:1], hist))\n pp_densities.extend([bin_edges, hist])\n plot_kwargs = {\"color\": \"C5\", \"alpha\": alpha, \"linewidth\": 0.5 * linewidth}\n if dtype == \"i\":\n plot_kwargs[\"drawstyle\"] = \"steps-pre\"\n ax.plot(*pp_densities, **plot_kwargs)\n ax.plot([], color=\"C5\", label=\"Posterior predictive {}\".format(pp_var_name))\n if mean:\n if dtype == \"f\":\n plot_kde(\n posterior_predictive[pp_var_name].values.flatten(),\n plot_kwargs={\n \"color\": \"C0\",\n \"linestyle\": \"--\",\n \"linewidth\": linewidth,\n \"zorder\": 2,\n },\n label=\"Posterior predictive mean {}\".format(pp_var_name),\n ax=ax,\n )\n else:\n vals = posterior_predictive[pp_var_name].values.flatten()\n nbins = round(len(vals) ** 0.5)\n hist, bin_edges = np.histogram(vals, bins=nbins, density=True)\n hist = np.concatenate((hist[:1], hist))\n ax.plot(\n bin_edges,\n hist,\n color=\"C0\",\n linewidth=linewidth,\n label=\"Posterior predictive mean {}\".format(pp_var_name),\n zorder=2,\n linestyle=\"--\",\n drawstyle=\"steps-pre\",\n )\n if var_name != pp_var_name:\n xlabel = \"{} / {}\".format(var_name, pp_var_name)\n else:\n xlabel = var_name\n ax.set_xlabel(xlabel, fontsize=ax_labelsize)\n ax.tick_params(labelsize=xt_labelsize)\n 
ax.set_yticks([])\n\n elif kind == \"cumulative\":\n if dtype == \"f\":\n ax.plot(\n *_empirical_cdf(observed[var_name].values.flatten()),\n color=\"k\",\n linewidth=linewidth,\n label=\"Observed {}\".format(var_name),\n zorder=3\n )\n else:\n ax.plot(\n *_empirical_cdf(observed[var_name].values.flatten()),\n color=\"k\",\n linewidth=linewidth,\n label=\"Observed {}\".format(var_name),\n drawstyle=\"steps-pre\",\n zorder=3\n )\n pp_var_name = data_pairs.get(var_name, var_name)\n # run plot_kde manually with one plot call\n pp_densities = []\n for _, chain_vals in posterior_predictive[pp_var_name].groupby(\"chain\"):\n for _, vals in chain_vals.groupby(\"draw\"):\n pp_x, pp_density = _empirical_cdf(vals)\n pp_densities.extend([pp_x, pp_density])\n if dtype == \"f\":\n ax.plot(*pp_densities, alpha=alpha, color=\"C5\", linewidth=linewidth)\n else:\n ax.plot(\n *pp_densities,\n alpha=alpha,\n color=\"C5\",\n drawstyle=\"steps-pre\",\n linewidth=linewidth\n )\n ax.plot([], color=\"C5\", label=\"Posterior predictive {}\".format(pp_var_name))\n if mean:\n if dtype == \"f\":\n ax.plot(\n *_empirical_cdf(posterior_predictive[pp_var_name].values.flatten()),\n color=\"C0\",\n linestyle=\"--\",\n linewidth=linewidth,\n label=\"Posterior predictive mean {}\".format(pp_var_name)\n )\n else:\n ax.plot(\n *_empirical_cdf(posterior_predictive[pp_var_name].values.flatten()),\n color=\"C0\",\n linestyle=\"--\",\n linewidth=linewidth,\n drawstyle=\"steps-pre\",\n label=\"Posterior predictive mean {}\".format(pp_var_name)\n )\n if var_name != pp_var_name:\n xlabel = \"{} / {}\".format(var_name, pp_var_name)\n else:\n xlabel = var_name\n ax.set_xlabel(var_name, fontsize=ax_labelsize)\n ax.set_yticks([0, 0.5, 1])\n ax.legend(fontsize=xt_labelsize)\n return axes\n\n\ndef _empirical_cdf(data):\n \"\"\"Compute empirical cdf of a numpy array.\n\n Parameters\n ----------\n data : np.array\n 1d array\n\n Returns\n -------\n np.array, np.array\n x and y coordinates for the empirical cdf of the data\n \"\"\"\n return np.sort(data), np.linspace(0, 1, len(data))\n", "path": "arviz/plots/ppcplot.py"}], "after_files": [{"content": "\"\"\"Posterior predictive plot.\"\"\"\nimport numpy as np\nfrom .kdeplot import plot_kde, _fast_kde\nfrom .plot_utils import _scale_fig_size, _create_axes_grid, default_grid\n\n\ndef plot_ppc(\n data, kind=\"density\", alpha=0.2, mean=True, figsize=None, textsize=None, data_pairs=None\n):\n \"\"\"\n Plot for Posterior Predictive checks.\n\n Note that this plot will flatten out any dimensions in the posterior predictive variables.\n\n Parameters\n ----------\n data : Array-like\n Observed values\n kind : str\n Type of plot to display (density or cumulative)\n alpha : float\n Opacity of posterior predictive density curves\n mean : bool\n Whether or not to plot the mean posterior predictive distribution. Defaults to True\n figsize : tuple\n Figure size. If None it will be defined automatically.\n textsize: float\n Text size scaling factor for labels, titles and lines. If None it will be autoscaled based\n on figsize.\n data_pairs : dict\n Dictionary containing relations between observed data and posterior predictive data.\n Dictionary structure:\n Key = data var_name\n Value = posterior predictive var_name\n Example: `data_pairs = {'y' : 'y_hat'}`\n\n Returns\n -------\n axes : matplotlib axes\n\n Examples\n --------\n Plot the observed data KDE overlaid on posterior predictive KDEs.\n\n .. 
plot::\n :context: close-figs\n\n >>> import arviz as az\n >>> data = az.load_arviz_data('radon')\n >>> az.plot_ppc(data)\n\n Plot the overlay with empirical CDFs.\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_ppc(data, kind='cumulative')\n\n \"\"\"\n for group in (\"posterior_predictive\", \"observed_data\"):\n if not hasattr(data, group):\n raise TypeError(\n '`data` argument must have the group \"{group}\" for ppcplot'.format(group=group)\n )\n\n if kind.lower() not in (\"density\", \"cumulative\"):\n raise TypeError(\"`kind` argument must be either `density` or `cumulative`\")\n\n if data_pairs is None:\n data_pairs = {}\n\n observed = data.observed_data\n posterior_predictive = data.posterior_predictive\n\n rows, cols = default_grid(len(observed.data_vars))\n\n (figsize, ax_labelsize, _, xt_labelsize, linewidth, _) = _scale_fig_size(\n figsize, textsize, rows, cols\n )\n\n _, axes = _create_axes_grid(len(observed.data_vars), rows, cols, figsize=figsize)\n\n for ax, var_name in zip(np.atleast_1d(axes), observed.data_vars):\n dtype = observed[var_name].dtype.kind\n if kind == \"density\":\n if dtype == \"f\":\n plot_kde(\n observed[var_name].values.flatten(),\n label=\"Observed {}\".format(var_name),\n plot_kwargs={\"color\": \"k\", \"linewidth\": linewidth, \"zorder\": 3},\n fill_kwargs={\"alpha\": 0},\n ax=ax,\n )\n else:\n vals = observed[var_name].values.flatten()\n nbins = round(len(vals) ** 0.5)\n hist, bin_edges = np.histogram(vals, bins=nbins, density=True)\n hist = np.concatenate((hist[:1], hist))\n ax.plot(\n bin_edges,\n hist,\n label=\"Observed {}\".format(var_name),\n color=\"k\",\n linewidth=linewidth,\n zorder=3,\n drawstyle=\"steps-pre\",\n )\n pp_var_name = data_pairs.get(var_name, var_name)\n # run plot_kde manually with one plot call\n pp_densities = []\n for _, chain_vals in posterior_predictive[pp_var_name].groupby(\"chain\"):\n for _, vals in chain_vals.groupby(\"draw\"):\n if dtype == \"f\":\n pp_density, lower, upper = _fast_kde(vals)\n pp_x = np.linspace(lower, upper, len(pp_density))\n pp_densities.extend([pp_x, pp_density])\n else:\n nbins = round(len(vals) ** 0.5)\n hist, bin_edges = np.histogram(vals, bins=nbins, density=True)\n hist = np.concatenate((hist[:1], hist))\n pp_densities.extend([bin_edges, hist])\n plot_kwargs = {\"color\": \"C5\", \"alpha\": alpha, \"linewidth\": 0.5 * linewidth}\n if dtype == \"i\":\n plot_kwargs[\"drawstyle\"] = \"steps-pre\"\n ax.plot(*pp_densities, **plot_kwargs)\n ax.plot([], color=\"C5\", label=\"Posterior predictive {}\".format(pp_var_name))\n if mean:\n if dtype == \"f\":\n plot_kde(\n posterior_predictive[pp_var_name].values.flatten(),\n plot_kwargs={\n \"color\": \"C0\",\n \"linestyle\": \"--\",\n \"linewidth\": linewidth,\n \"zorder\": 2,\n },\n label=\"Posterior predictive mean {}\".format(pp_var_name),\n ax=ax,\n )\n else:\n vals = posterior_predictive[pp_var_name].values.flatten()\n nbins = round(len(vals) ** 0.5)\n hist, bin_edges = np.histogram(vals, bins=nbins, density=True)\n hist = np.concatenate((hist[:1], hist))\n ax.plot(\n bin_edges,\n hist,\n color=\"C0\",\n linewidth=linewidth,\n label=\"Posterior predictive mean {}\".format(pp_var_name),\n zorder=2,\n linestyle=\"--\",\n drawstyle=\"steps-pre\",\n )\n if var_name != pp_var_name:\n xlabel = \"{} / {}\".format(var_name, pp_var_name)\n else:\n xlabel = var_name\n ax.set_xlabel(xlabel, fontsize=ax_labelsize)\n ax.tick_params(labelsize=xt_labelsize)\n ax.set_yticks([])\n\n elif kind == \"cumulative\":\n if dtype == \"f\":\n ax.plot(\n 
*_empirical_cdf(observed[var_name].values.flatten()),\n color=\"k\",\n linewidth=linewidth,\n label=\"Observed {}\".format(var_name),\n zorder=3\n )\n else:\n ax.plot(\n *_empirical_cdf(observed[var_name].values.flatten()),\n color=\"k\",\n linewidth=linewidth,\n label=\"Observed {}\".format(var_name),\n drawstyle=\"steps-pre\",\n zorder=3\n )\n pp_var_name = data_pairs.get(var_name, var_name)\n # run plot_kde manually with one plot call\n pp_densities = []\n for _, chain_vals in posterior_predictive[pp_var_name].groupby(\"chain\"):\n for _, vals in chain_vals.groupby(\"draw\"):\n pp_x, pp_density = _empirical_cdf(vals)\n pp_densities.extend([pp_x, pp_density])\n if dtype == \"f\":\n ax.plot(*pp_densities, alpha=alpha, color=\"C5\", linewidth=linewidth)\n else:\n ax.plot(\n *pp_densities,\n alpha=alpha,\n color=\"C5\",\n drawstyle=\"steps-pre\",\n linewidth=linewidth\n )\n ax.plot([], color=\"C5\", label=\"Posterior predictive {}\".format(pp_var_name))\n if mean:\n if dtype == \"f\":\n ax.plot(\n *_empirical_cdf(posterior_predictive[pp_var_name].values.flatten()),\n color=\"C0\",\n linestyle=\"--\",\n linewidth=linewidth,\n label=\"Posterior predictive mean {}\".format(pp_var_name)\n )\n else:\n ax.plot(\n *_empirical_cdf(posterior_predictive[pp_var_name].values.flatten()),\n color=\"C0\",\n linestyle=\"--\",\n linewidth=linewidth,\n drawstyle=\"steps-pre\",\n label=\"Posterior predictive mean {}\".format(pp_var_name)\n )\n if var_name != pp_var_name:\n xlabel = \"{} / {}\".format(var_name, pp_var_name)\n else:\n xlabel = var_name\n ax.set_xlabel(var_name, fontsize=ax_labelsize)\n ax.set_yticks([0, 0.5, 1])\n ax.legend(fontsize=xt_labelsize)\n return axes\n\n\ndef _empirical_cdf(data):\n \"\"\"Compute empirical cdf of a numpy array.\n\n Parameters\n ----------\n data : np.array\n 1d array\n\n Returns\n -------\n np.array, np.array\n x and y coordinates for the empirical cdf of the data\n \"\"\"\n return np.sort(data), np.linspace(0, 1, len(data))\n", "path": "arviz/plots/ppcplot.py"}]}
| 2,661 | 209 |
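The ArviZ fix above is documentation-only: it adds a numpydoc-style `Examples` section whose `.. plot::` blocks get rendered as figures on the generated API page. For those blocks to render, the Sphinx build must load matplotlib's plot directive; the snippet below is a generic `conf.py` sketch of that wiring, not ArviZ's actual configuration, which may differ:

```python
# conf.py sketch: enable the extension that understands ``.. plot::`` blocks
# like the ones added to the plot_ppc docstring in the diff above.
extensions = [
    "sphinx.ext.autodoc",                   # pull API docstrings into the docs
    "sphinx.ext.napoleon",                  # parse numpy-style sections such as Examples
    "matplotlib.sphinxext.plot_directive",  # execute and render .. plot:: blocks
]

# Show the example source code above each rendered figure.
plot_include_source = True
# Hide the per-figure "Source code" download link.
plot_html_show_source_link = False
```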
gh_patches_debug_23266
|
rasdani/github-patches
|
git_diff
|
feast-dev__feast-3203
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support requested features on Postgresql online store
**Is your feature request related to a problem? Please describe.**
As a user, I want to retrieve subset of features on Postgresql online store instead of fetching all features
**Describe the solution you'd like**
Update online_read method to retrieve subset of features
**Additional context**
I'm close to creating a pull request for this issue myself.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sdk/python/feast/infra/online_stores/contrib/postgres.py`
Content:
```
1 import logging
2 from collections import defaultdict
3 from datetime import datetime
4 from typing import Any, Callable, Dict, List, Optional, Sequence, Tuple
5
6 import psycopg2
7 import pytz
8 from psycopg2 import sql
9 from psycopg2.extras import execute_values
10 from pydantic.schema import Literal
11
12 from feast import Entity
13 from feast.feature_view import FeatureView
14 from feast.infra.key_encoding_utils import serialize_entity_key
15 from feast.infra.online_stores.online_store import OnlineStore
16 from feast.infra.utils.postgres.connection_utils import _get_conn
17 from feast.infra.utils.postgres.postgres_config import PostgreSQLConfig
18 from feast.protos.feast.types.EntityKey_pb2 import EntityKey as EntityKeyProto
19 from feast.protos.feast.types.Value_pb2 import Value as ValueProto
20 from feast.repo_config import RepoConfig
21 from feast.usage import log_exceptions_and_usage
22
23
24 class PostgreSQLOnlineStoreConfig(PostgreSQLConfig):
25 type: Literal["postgres"] = "postgres"
26
27
28 class PostgreSQLOnlineStore(OnlineStore):
29 _conn: Optional[psycopg2._psycopg.connection] = None
30
31 def _get_conn(self, config: RepoConfig):
32 if not self._conn:
33 assert config.online_store.type == "postgres"
34 self._conn = _get_conn(config.online_store)
35 return self._conn
36
37 @log_exceptions_and_usage(online_store="postgres")
38 def online_write_batch(
39 self,
40 config: RepoConfig,
41 table: FeatureView,
42 data: List[
43 Tuple[EntityKeyProto, Dict[str, ValueProto], datetime, Optional[datetime]]
44 ],
45 progress: Optional[Callable[[int], Any]],
46 ) -> None:
47 project = config.project
48
49 with self._get_conn(config) as conn, conn.cursor() as cur:
50 insert_values = []
51 for entity_key, values, timestamp, created_ts in data:
52 entity_key_bin = serialize_entity_key(
53 entity_key,
54 entity_key_serialization_version=config.entity_key_serialization_version,
55 )
56 timestamp = _to_naive_utc(timestamp)
57 if created_ts is not None:
58 created_ts = _to_naive_utc(created_ts)
59
60 for feature_name, val in values.items():
61 insert_values.append(
62 (
63 entity_key_bin,
64 feature_name,
65 val.SerializeToString(),
66 timestamp,
67 created_ts,
68 )
69 )
70 # Control the batch so that we can update the progress
71 batch_size = 5000
72 for i in range(0, len(insert_values), batch_size):
73 cur_batch = insert_values[i : i + batch_size]
74 execute_values(
75 cur,
76 sql.SQL(
77 """
78 INSERT INTO {}
79 (entity_key, feature_name, value, event_ts, created_ts)
80 VALUES %s
81 ON CONFLICT (entity_key, feature_name) DO
82 UPDATE SET
83 value = EXCLUDED.value,
84 event_ts = EXCLUDED.event_ts,
85 created_ts = EXCLUDED.created_ts;
86 """,
87 ).format(sql.Identifier(_table_id(project, table))),
88 cur_batch,
89 page_size=batch_size,
90 )
91 if progress:
92 progress(len(cur_batch))
93
94 @log_exceptions_and_usage(online_store="postgres")
95 def online_read(
96 self,
97 config: RepoConfig,
98 table: FeatureView,
99 entity_keys: List[EntityKeyProto],
100 requested_features: Optional[List[str]] = None,
101 ) -> List[Tuple[Optional[datetime], Optional[Dict[str, ValueProto]]]]:
102 result: List[Tuple[Optional[datetime], Optional[Dict[str, ValueProto]]]] = []
103
104 project = config.project
105 with self._get_conn(config) as conn, conn.cursor() as cur:
106 # Collecting all the keys to a list allows us to make fewer round trips
107 # to PostgreSQL
108 keys = []
109 for entity_key in entity_keys:
110 keys.append(
111 serialize_entity_key(
112 entity_key,
113 entity_key_serialization_version=config.entity_key_serialization_version,
114 )
115 )
116
117 cur.execute(
118 sql.SQL(
119 """
120 SELECT entity_key, feature_name, value, event_ts
121 FROM {} WHERE entity_key = ANY(%s);
122 """
123 ).format(
124 sql.Identifier(_table_id(project, table)),
125 ),
126 (keys,),
127 )
128
129 rows = cur.fetchall()
130
131 # Since we don't know the order returned from PostgreSQL we'll need
132 # to construct a dict to be able to quickly look up the correct row
133 # when we iterate through the keys since they are in the correct order
134 values_dict = defaultdict(list)
135 for row in rows if rows is not None else []:
136 values_dict[row[0].tobytes()].append(row[1:])
137
138 for key in keys:
139 if key in values_dict:
140 value = values_dict[key]
141 res = {}
142 for feature_name, value_bin, event_ts in value:
143 val = ValueProto()
144 val.ParseFromString(bytes(value_bin))
145 res[feature_name] = val
146 result.append((event_ts, res))
147 else:
148 result.append((None, None))
149
150 return result
151
152 @log_exceptions_and_usage(online_store="postgres")
153 def update(
154 self,
155 config: RepoConfig,
156 tables_to_delete: Sequence[FeatureView],
157 tables_to_keep: Sequence[FeatureView],
158 entities_to_delete: Sequence[Entity],
159 entities_to_keep: Sequence[Entity],
160 partial: bool,
161 ):
162 project = config.project
163 schema_name = config.online_store.db_schema or config.online_store.user
164 with self._get_conn(config) as conn, conn.cursor() as cur:
165 # If a db_schema is provided, then that schema gets created if it doesn't
166 # exist. Else a schema is created for the feature store user.
167
168 cur.execute(
169 """
170 SELECT schema_name
171 FROM information_schema.schemata
172 WHERE schema_name = %s
173 """,
174 (schema_name,),
175 )
176 schema_exists = cur.fetchone()
177 if not schema_exists:
178 cur.execute(
179 sql.SQL("CREATE SCHEMA IF NOT EXISTS {} AUTHORIZATION {}").format(
180 sql.Identifier(schema_name),
181 sql.Identifier(config.online_store.user),
182 ),
183 )
184
185 for table in tables_to_delete:
186 table_name = _table_id(project, table)
187 cur.execute(_drop_table_and_index(table_name))
188
189 for table in tables_to_keep:
190 table_name = _table_id(project, table)
191 cur.execute(
192 sql.SQL(
193 """
194 CREATE TABLE IF NOT EXISTS {}
195 (
196 entity_key BYTEA,
197 feature_name TEXT,
198 value BYTEA,
199 event_ts TIMESTAMPTZ,
200 created_ts TIMESTAMPTZ,
201 PRIMARY KEY(entity_key, feature_name)
202 );
203 CREATE INDEX IF NOT EXISTS {} ON {} (entity_key);
204 """
205 ).format(
206 sql.Identifier(table_name),
207 sql.Identifier(f"{table_name}_ek"),
208 sql.Identifier(table_name),
209 )
210 )
211
212 conn.commit()
213
214 def teardown(
215 self,
216 config: RepoConfig,
217 tables: Sequence[FeatureView],
218 entities: Sequence[Entity],
219 ):
220 project = config.project
221 try:
222 with self._get_conn(config) as conn, conn.cursor() as cur:
223 for table in tables:
224 table_name = _table_id(project, table)
225 cur.execute(_drop_table_and_index(table_name))
226 except Exception:
227 logging.exception("Teardown failed")
228 raise
229
230
231 def _table_id(project: str, table: FeatureView) -> str:
232 return f"{project}_{table.name}"
233
234
235 def _drop_table_and_index(table_name):
236 return sql.SQL(
237 """
238 DROP TABLE IF EXISTS {};
239 DROP INDEX IF EXISTS {};
240 """
241 ).format(
242 sql.Identifier(table_name),
243 sql.Identifier(f"{table_name}_ek"),
244 )
245
246
247 def _to_naive_utc(ts: datetime):
248 if ts.tzinfo is None:
249 return ts
250 else:
251 return ts.astimezone(pytz.utc).replace(tzinfo=None)
252
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sdk/python/feast/infra/online_stores/contrib/postgres.py b/sdk/python/feast/infra/online_stores/contrib/postgres.py
--- a/sdk/python/feast/infra/online_stores/contrib/postgres.py
+++ b/sdk/python/feast/infra/online_stores/contrib/postgres.py
@@ -114,17 +114,30 @@
)
)
- cur.execute(
- sql.SQL(
- """
- SELECT entity_key, feature_name, value, event_ts
- FROM {} WHERE entity_key = ANY(%s);
- """
- ).format(
- sql.Identifier(_table_id(project, table)),
- ),
- (keys,),
- )
+ if not requested_features:
+ cur.execute(
+ sql.SQL(
+ """
+ SELECT entity_key, feature_name, value, event_ts
+ FROM {} WHERE entity_key = ANY(%s);
+ """
+ ).format(
+ sql.Identifier(_table_id(project, table)),
+ ),
+ (keys,),
+ )
+ else:
+ cur.execute(
+ sql.SQL(
+ """
+ SELECT entity_key, feature_name, value, event_ts
+ FROM {} WHERE entity_key = ANY(%s) and feature_name = ANY(%s);
+ """
+ ).format(
+ sql.Identifier(_table_id(project, table)),
+ ),
+ (keys, requested_features),
+ )
rows = cur.fetchall()
|
{"golden_diff": "diff --git a/sdk/python/feast/infra/online_stores/contrib/postgres.py b/sdk/python/feast/infra/online_stores/contrib/postgres.py\n--- a/sdk/python/feast/infra/online_stores/contrib/postgres.py\n+++ b/sdk/python/feast/infra/online_stores/contrib/postgres.py\n@@ -114,17 +114,30 @@\n )\n )\n \n- cur.execute(\n- sql.SQL(\n- \"\"\"\n- SELECT entity_key, feature_name, value, event_ts\n- FROM {} WHERE entity_key = ANY(%s);\n- \"\"\"\n- ).format(\n- sql.Identifier(_table_id(project, table)),\n- ),\n- (keys,),\n- )\n+ if not requested_features:\n+ cur.execute(\n+ sql.SQL(\n+ \"\"\"\n+ SELECT entity_key, feature_name, value, event_ts\n+ FROM {} WHERE entity_key = ANY(%s);\n+ \"\"\"\n+ ).format(\n+ sql.Identifier(_table_id(project, table)),\n+ ),\n+ (keys,),\n+ )\n+ else:\n+ cur.execute(\n+ sql.SQL(\n+ \"\"\"\n+ SELECT entity_key, feature_name, value, event_ts\n+ FROM {} WHERE entity_key = ANY(%s) and feature_name = ANY(%s);\n+ \"\"\"\n+ ).format(\n+ sql.Identifier(_table_id(project, table)),\n+ ),\n+ (keys, requested_features),\n+ )\n \n rows = cur.fetchall()\n", "issue": "Support requested features on Postgresql online store\n**Is your feature request related to a problem? Please describe.**\r\nAs a user, I want to retrieve subset of features on Postgresql online store instead of fetching all features\r\n\r\n**Describe the solution you'd like**\r\nUpdate online_read method to retrieve subset of features\r\n\r\n**Additional context**\r\nI'm close to creating a pull request for this issue myself.\n", "before_files": [{"content": "import logging\nfrom collections import defaultdict\nfrom datetime import datetime\nfrom typing import Any, Callable, Dict, List, Optional, Sequence, Tuple\n\nimport psycopg2\nimport pytz\nfrom psycopg2 import sql\nfrom psycopg2.extras import execute_values\nfrom pydantic.schema import Literal\n\nfrom feast import Entity\nfrom feast.feature_view import FeatureView\nfrom feast.infra.key_encoding_utils import serialize_entity_key\nfrom feast.infra.online_stores.online_store import OnlineStore\nfrom feast.infra.utils.postgres.connection_utils import _get_conn\nfrom feast.infra.utils.postgres.postgres_config import PostgreSQLConfig\nfrom feast.protos.feast.types.EntityKey_pb2 import EntityKey as EntityKeyProto\nfrom feast.protos.feast.types.Value_pb2 import Value as ValueProto\nfrom feast.repo_config import RepoConfig\nfrom feast.usage import log_exceptions_and_usage\n\n\nclass PostgreSQLOnlineStoreConfig(PostgreSQLConfig):\n type: Literal[\"postgres\"] = \"postgres\"\n\n\nclass PostgreSQLOnlineStore(OnlineStore):\n _conn: Optional[psycopg2._psycopg.connection] = None\n\n def _get_conn(self, config: RepoConfig):\n if not self._conn:\n assert config.online_store.type == \"postgres\"\n self._conn = _get_conn(config.online_store)\n return self._conn\n\n @log_exceptions_and_usage(online_store=\"postgres\")\n def online_write_batch(\n self,\n config: RepoConfig,\n table: FeatureView,\n data: List[\n Tuple[EntityKeyProto, Dict[str, ValueProto], datetime, Optional[datetime]]\n ],\n progress: Optional[Callable[[int], Any]],\n ) -> None:\n project = config.project\n\n with self._get_conn(config) as conn, conn.cursor() as cur:\n insert_values = []\n for entity_key, values, timestamp, created_ts in data:\n entity_key_bin = serialize_entity_key(\n entity_key,\n entity_key_serialization_version=config.entity_key_serialization_version,\n )\n timestamp = _to_naive_utc(timestamp)\n if created_ts is not None:\n created_ts = _to_naive_utc(created_ts)\n\n for feature_name, val in 
values.items():\n insert_values.append(\n (\n entity_key_bin,\n feature_name,\n val.SerializeToString(),\n timestamp,\n created_ts,\n )\n )\n # Control the batch so that we can update the progress\n batch_size = 5000\n for i in range(0, len(insert_values), batch_size):\n cur_batch = insert_values[i : i + batch_size]\n execute_values(\n cur,\n sql.SQL(\n \"\"\"\n INSERT INTO {}\n (entity_key, feature_name, value, event_ts, created_ts)\n VALUES %s\n ON CONFLICT (entity_key, feature_name) DO\n UPDATE SET\n value = EXCLUDED.value,\n event_ts = EXCLUDED.event_ts,\n created_ts = EXCLUDED.created_ts;\n \"\"\",\n ).format(sql.Identifier(_table_id(project, table))),\n cur_batch,\n page_size=batch_size,\n )\n if progress:\n progress(len(cur_batch))\n\n @log_exceptions_and_usage(online_store=\"postgres\")\n def online_read(\n self,\n config: RepoConfig,\n table: FeatureView,\n entity_keys: List[EntityKeyProto],\n requested_features: Optional[List[str]] = None,\n ) -> List[Tuple[Optional[datetime], Optional[Dict[str, ValueProto]]]]:\n result: List[Tuple[Optional[datetime], Optional[Dict[str, ValueProto]]]] = []\n\n project = config.project\n with self._get_conn(config) as conn, conn.cursor() as cur:\n # Collecting all the keys to a list allows us to make fewer round trips\n # to PostgreSQL\n keys = []\n for entity_key in entity_keys:\n keys.append(\n serialize_entity_key(\n entity_key,\n entity_key_serialization_version=config.entity_key_serialization_version,\n )\n )\n\n cur.execute(\n sql.SQL(\n \"\"\"\n SELECT entity_key, feature_name, value, event_ts\n FROM {} WHERE entity_key = ANY(%s);\n \"\"\"\n ).format(\n sql.Identifier(_table_id(project, table)),\n ),\n (keys,),\n )\n\n rows = cur.fetchall()\n\n # Since we don't know the order returned from PostgreSQL we'll need\n # to construct a dict to be able to quickly look up the correct row\n # when we iterate through the keys since they are in the correct order\n values_dict = defaultdict(list)\n for row in rows if rows is not None else []:\n values_dict[row[0].tobytes()].append(row[1:])\n\n for key in keys:\n if key in values_dict:\n value = values_dict[key]\n res = {}\n for feature_name, value_bin, event_ts in value:\n val = ValueProto()\n val.ParseFromString(bytes(value_bin))\n res[feature_name] = val\n result.append((event_ts, res))\n else:\n result.append((None, None))\n\n return result\n\n @log_exceptions_and_usage(online_store=\"postgres\")\n def update(\n self,\n config: RepoConfig,\n tables_to_delete: Sequence[FeatureView],\n tables_to_keep: Sequence[FeatureView],\n entities_to_delete: Sequence[Entity],\n entities_to_keep: Sequence[Entity],\n partial: bool,\n ):\n project = config.project\n schema_name = config.online_store.db_schema or config.online_store.user\n with self._get_conn(config) as conn, conn.cursor() as cur:\n # If a db_schema is provided, then that schema gets created if it doesn't\n # exist. 
Else a schema is created for the feature store user.\n\n cur.execute(\n \"\"\"\n SELECT schema_name\n FROM information_schema.schemata\n WHERE schema_name = %s\n \"\"\",\n (schema_name,),\n )\n schema_exists = cur.fetchone()\n if not schema_exists:\n cur.execute(\n sql.SQL(\"CREATE SCHEMA IF NOT EXISTS {} AUTHORIZATION {}\").format(\n sql.Identifier(schema_name),\n sql.Identifier(config.online_store.user),\n ),\n )\n\n for table in tables_to_delete:\n table_name = _table_id(project, table)\n cur.execute(_drop_table_and_index(table_name))\n\n for table in tables_to_keep:\n table_name = _table_id(project, table)\n cur.execute(\n sql.SQL(\n \"\"\"\n CREATE TABLE IF NOT EXISTS {}\n (\n entity_key BYTEA,\n feature_name TEXT,\n value BYTEA,\n event_ts TIMESTAMPTZ,\n created_ts TIMESTAMPTZ,\n PRIMARY KEY(entity_key, feature_name)\n );\n CREATE INDEX IF NOT EXISTS {} ON {} (entity_key);\n \"\"\"\n ).format(\n sql.Identifier(table_name),\n sql.Identifier(f\"{table_name}_ek\"),\n sql.Identifier(table_name),\n )\n )\n\n conn.commit()\n\n def teardown(\n self,\n config: RepoConfig,\n tables: Sequence[FeatureView],\n entities: Sequence[Entity],\n ):\n project = config.project\n try:\n with self._get_conn(config) as conn, conn.cursor() as cur:\n for table in tables:\n table_name = _table_id(project, table)\n cur.execute(_drop_table_and_index(table_name))\n except Exception:\n logging.exception(\"Teardown failed\")\n raise\n\n\ndef _table_id(project: str, table: FeatureView) -> str:\n return f\"{project}_{table.name}\"\n\n\ndef _drop_table_and_index(table_name):\n return sql.SQL(\n \"\"\"\n DROP TABLE IF EXISTS {};\n DROP INDEX IF EXISTS {};\n \"\"\"\n ).format(\n sql.Identifier(table_name),\n sql.Identifier(f\"{table_name}_ek\"),\n )\n\n\ndef _to_naive_utc(ts: datetime):\n if ts.tzinfo is None:\n return ts\n else:\n return ts.astimezone(pytz.utc).replace(tzinfo=None)\n", "path": "sdk/python/feast/infra/online_stores/contrib/postgres.py"}], "after_files": [{"content": "import logging\nfrom collections import defaultdict\nfrom datetime import datetime\nfrom typing import Any, Callable, Dict, List, Optional, Sequence, Tuple\n\nimport psycopg2\nimport pytz\nfrom psycopg2 import sql\nfrom psycopg2.extras import execute_values\nfrom pydantic.schema import Literal\n\nfrom feast import Entity\nfrom feast.feature_view import FeatureView\nfrom feast.infra.key_encoding_utils import serialize_entity_key\nfrom feast.infra.online_stores.online_store import OnlineStore\nfrom feast.infra.utils.postgres.connection_utils import _get_conn\nfrom feast.infra.utils.postgres.postgres_config import PostgreSQLConfig\nfrom feast.protos.feast.types.EntityKey_pb2 import EntityKey as EntityKeyProto\nfrom feast.protos.feast.types.Value_pb2 import Value as ValueProto\nfrom feast.repo_config import RepoConfig\nfrom feast.usage import log_exceptions_and_usage\n\n\nclass PostgreSQLOnlineStoreConfig(PostgreSQLConfig):\n type: Literal[\"postgres\"] = \"postgres\"\n\n\nclass PostgreSQLOnlineStore(OnlineStore):\n _conn: Optional[psycopg2._psycopg.connection] = None\n\n def _get_conn(self, config: RepoConfig):\n if not self._conn:\n assert config.online_store.type == \"postgres\"\n self._conn = _get_conn(config.online_store)\n return self._conn\n\n @log_exceptions_and_usage(online_store=\"postgres\")\n def online_write_batch(\n self,\n config: RepoConfig,\n table: FeatureView,\n data: List[\n Tuple[EntityKeyProto, Dict[str, ValueProto], datetime, Optional[datetime]]\n ],\n progress: Optional[Callable[[int], Any]],\n ) -> None:\n project = 
config.project\n\n with self._get_conn(config) as conn, conn.cursor() as cur:\n insert_values = []\n for entity_key, values, timestamp, created_ts in data:\n entity_key_bin = serialize_entity_key(\n entity_key,\n entity_key_serialization_version=config.entity_key_serialization_version,\n )\n timestamp = _to_naive_utc(timestamp)\n if created_ts is not None:\n created_ts = _to_naive_utc(created_ts)\n\n for feature_name, val in values.items():\n insert_values.append(\n (\n entity_key_bin,\n feature_name,\n val.SerializeToString(),\n timestamp,\n created_ts,\n )\n )\n # Control the batch so that we can update the progress\n batch_size = 5000\n for i in range(0, len(insert_values), batch_size):\n cur_batch = insert_values[i : i + batch_size]\n execute_values(\n cur,\n sql.SQL(\n \"\"\"\n INSERT INTO {}\n (entity_key, feature_name, value, event_ts, created_ts)\n VALUES %s\n ON CONFLICT (entity_key, feature_name) DO\n UPDATE SET\n value = EXCLUDED.value,\n event_ts = EXCLUDED.event_ts,\n created_ts = EXCLUDED.created_ts;\n \"\"\",\n ).format(sql.Identifier(_table_id(project, table))),\n cur_batch,\n page_size=batch_size,\n )\n if progress:\n progress(len(cur_batch))\n\n @log_exceptions_and_usage(online_store=\"postgres\")\n def online_read(\n self,\n config: RepoConfig,\n table: FeatureView,\n entity_keys: List[EntityKeyProto],\n requested_features: Optional[List[str]] = None,\n ) -> List[Tuple[Optional[datetime], Optional[Dict[str, ValueProto]]]]:\n result: List[Tuple[Optional[datetime], Optional[Dict[str, ValueProto]]]] = []\n\n project = config.project\n with self._get_conn(config) as conn, conn.cursor() as cur:\n # Collecting all the keys to a list allows us to make fewer round trips\n # to PostgreSQL\n keys = []\n for entity_key in entity_keys:\n keys.append(\n serialize_entity_key(\n entity_key,\n entity_key_serialization_version=config.entity_key_serialization_version,\n )\n )\n\n if not requested_features:\n cur.execute(\n sql.SQL(\n \"\"\"\n SELECT entity_key, feature_name, value, event_ts\n FROM {} WHERE entity_key = ANY(%s);\n \"\"\"\n ).format(\n sql.Identifier(_table_id(project, table)),\n ),\n (keys,),\n )\n else:\n cur.execute(\n sql.SQL(\n \"\"\"\n SELECT entity_key, feature_name, value, event_ts\n FROM {} WHERE entity_key = ANY(%s) and feature_name = ANY(%s);\n \"\"\"\n ).format(\n sql.Identifier(_table_id(project, table)),\n ),\n (keys, requested_features),\n )\n\n rows = cur.fetchall()\n\n # Since we don't know the order returned from PostgreSQL we'll need\n # to construct a dict to be able to quickly look up the correct row\n # when we iterate through the keys since they are in the correct order\n values_dict = defaultdict(list)\n for row in rows if rows is not None else []:\n values_dict[row[0].tobytes()].append(row[1:])\n\n for key in keys:\n if key in values_dict:\n value = values_dict[key]\n res = {}\n for feature_name, value_bin, event_ts in value:\n val = ValueProto()\n val.ParseFromString(bytes(value_bin))\n res[feature_name] = val\n result.append((event_ts, res))\n else:\n result.append((None, None))\n\n return result\n\n @log_exceptions_and_usage(online_store=\"postgres\")\n def update(\n self,\n config: RepoConfig,\n tables_to_delete: Sequence[FeatureView],\n tables_to_keep: Sequence[FeatureView],\n entities_to_delete: Sequence[Entity],\n entities_to_keep: Sequence[Entity],\n partial: bool,\n ):\n project = config.project\n schema_name = config.online_store.db_schema or config.online_store.user\n with self._get_conn(config) as conn, conn.cursor() as cur:\n # If a 
db_schema is provided, then that schema gets created if it doesn't\n # exist. Else a schema is created for the feature store user.\n\n cur.execute(\n \"\"\"\n SELECT schema_name\n FROM information_schema.schemata\n WHERE schema_name = %s\n \"\"\",\n (schema_name,),\n )\n schema_exists = cur.fetchone()\n if not schema_exists:\n cur.execute(\n sql.SQL(\"CREATE SCHEMA IF NOT EXISTS {} AUTHORIZATION {}\").format(\n sql.Identifier(schema_name),\n sql.Identifier(config.online_store.user),\n ),\n )\n\n for table in tables_to_delete:\n table_name = _table_id(project, table)\n cur.execute(_drop_table_and_index(table_name))\n\n for table in tables_to_keep:\n table_name = _table_id(project, table)\n cur.execute(\n sql.SQL(\n \"\"\"\n CREATE TABLE IF NOT EXISTS {}\n (\n entity_key BYTEA,\n feature_name TEXT,\n value BYTEA,\n event_ts TIMESTAMPTZ,\n created_ts TIMESTAMPTZ,\n PRIMARY KEY(entity_key, feature_name)\n );\n CREATE INDEX IF NOT EXISTS {} ON {} (entity_key);\n \"\"\"\n ).format(\n sql.Identifier(table_name),\n sql.Identifier(f\"{table_name}_ek\"),\n sql.Identifier(table_name),\n )\n )\n\n conn.commit()\n\n def teardown(\n self,\n config: RepoConfig,\n tables: Sequence[FeatureView],\n entities: Sequence[Entity],\n ):\n project = config.project\n try:\n with self._get_conn(config) as conn, conn.cursor() as cur:\n for table in tables:\n table_name = _table_id(project, table)\n cur.execute(_drop_table_and_index(table_name))\n except Exception:\n logging.exception(\"Teardown failed\")\n raise\n\n\ndef _table_id(project: str, table: FeatureView) -> str:\n return f\"{project}_{table.name}\"\n\n\ndef _drop_table_and_index(table_name):\n return sql.SQL(\n \"\"\"\n DROP TABLE IF EXISTS {};\n DROP INDEX IF EXISTS {};\n \"\"\"\n ).format(\n sql.Identifier(table_name),\n sql.Identifier(f\"{table_name}_ek\"),\n )\n\n\ndef _to_naive_utc(ts: datetime):\n if ts.tzinfo is None:\n return ts\n else:\n return ts.astimezone(pytz.utc).replace(tzinfo=None)\n", "path": "sdk/python/feast/infra/online_stores/contrib/postgres.py"}]}
| 2,718 | 333 |
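As a rough, standalone illustration of the branching that the golden diff above introduces into `online_read` (not the actual Feast code — the real implementation composes the query with psycopg2's `sql.SQL`/`sql.Identifier` rather than string formatting, and the table name `proj_fv` and sample keys here are made up), a sketch of how the query and parameter tuple differ depending on whether `requested_features` is supplied:

```python
from typing import List, Optional, Tuple


def build_online_read_query(
    table: str,
    keys: List[bytes],
    requested_features: Optional[List[str]] = None,
) -> Tuple[str, tuple]:
    """Build (query, params) with the same branching as the patched online_read."""
    # Illustrative only: real code should use psycopg2 sql composition, not f-strings.
    base = f'SELECT entity_key, feature_name, value, event_ts FROM "{table}"'
    if not requested_features:
        # No feature filter: fetch every stored feature for the given entity keys.
        return base + " WHERE entity_key = ANY(%s);", (keys,)
    # Feature filter: add a feature_name predicate and pass a second parameter.
    return (
        base + " WHERE entity_key = ANY(%s) AND feature_name = ANY(%s);",
        (keys, requested_features),
    )


if __name__ == "__main__":
    q, params = build_online_read_query("proj_fv", [b"k1"], ["f1", "f2"])
    assert "feature_name = ANY(%s)" in q and params == ([b"k1"], ["f1", "f2"])
    q, params = build_online_read_query("proj_fv", [b"k1"])
    assert "feature_name = ANY" not in q and params == ([b"k1"],)
```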
gh_patches_debug_37322
|
rasdani/github-patches
|
git_diff
|
quantumlib__Cirq-3218
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
sqrtISWAP serializer doesn't like rz, ry, rx
The `cirq.rz` helper function ([cirq.ops.common_gates:1045](https://github.com/quantumlib/Cirq/blob/d998b3afe20dd6783e00dcc2590ba0f162b95af7/cirq/ops/common_gates.py#L1045)) converts inputs to half turns (radians?) by dividing by sympy.pi for parametrized inputs, but this division is actually expressed as `sympy.Mul(sympy.Pow(sympy.pi, sympy.Integer(-1)))`
The `sympy.Pow` part of that expression then causes problems with serializers that expect only `Mul` and `Add` in Sympy expressions (i.e. [cirq.google.arg_func_langs:143](https://github.com/quantumlib/Cirq/blob/d998b3afe20dd6783e00dcc2590ba0f162b95af7/cirq/google/arg_func_langs.py#L143))
Would it make more sense to convert all forms of `rz` inputs (symbolic or not) with division by `np.pi`?
reproducing code:
```
# This introduces `Pow`:
contains_pow = sympy.Symbol('x') / sympy.pi
for arg in contains_pow.args:
print(arg, type(arg))
>>> x <class 'sympy.core.symbol.Symbol'>
>>> 1/pi <class 'sympy.core.power.Pow'>
# This casts to float:
no_pow = sympy.Symbol('x') / np.pi
for arg in no_pow.args:
print(arg, type(arg))
>>> 0.318309886183791 <class 'sympy.core.numbers.Float'>
>>> x <class 'sympy.core.symbol.Symbol'>
# Serializer chokes up on Pow due to sympy.pi division
q0 = cirq.GridQubit(0, 0)
c0 = cirq.Circuit(cirq.rz(sympy.Symbol('x')).on(q0))
c1 = cirq.Circuit(cirq.ZPowGate(exponent=sympy.Symbol('x')/np.pi, global_shift=-0.5).on(q0) )
cirq.google.SQRT_ISWAP_GATESET.serialize(c0) # Throws ValueError: Unrecognized arg type: <class 'sympy.core.power.Pow'>
cirq.google.SQRT_ISWAP_GATESET.serialize(c1) # Works fine
```
@dstrain115
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cirq/google/arg_func_langs.py`
Content:
```
1 # Copyright 2019 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import math
15 from typing import (
16 List,
17 Union,
18 Optional,
19 Iterator,
20 Iterable,
21 cast,
22 Set,
23 Dict,
24 FrozenSet,
25 )
26
27 import numpy as np
28 import sympy
29 from cirq.google.api import v2
30
31 SUPPORTED_FUNCTIONS_FOR_LANGUAGE: Dict[Optional[str], FrozenSet[str]] = {
32 '': frozenset(),
33 'linear': frozenset({'add', 'mul'}),
34 # None means any. Is used when inferring the language during serialization.
35 None: frozenset({'add', 'mul'}),
36 }
37
38 SUPPORTED_SYMPY_OPS = (sympy.Symbol, sympy.Add, sympy.Mul)
39
40 # Argument types for gates.
41 ARG_LIKE = Union[int, float, List[bool], str, sympy.Symbol, sympy.Add, sympy.
42 Mul]
43
44 # Supported function languages in order from least to most flexible.
45 # Clients should use the least flexible language they can, to make it easier
46 # to gradually roll out new capabilities to clients and servers.
47 LANGUAGE_ORDER = [
48 '',
49 'linear',
50 ]
51
52
53 def _max_lang(langs: Iterable[str]) -> str:
54 i = max((LANGUAGE_ORDER.index(e) for e in langs), default=0)
55 return LANGUAGE_ORDER[i]
56
57
58 def _infer_function_language_from_circuit(value: v2.program_pb2.Circuit) -> str:
59 return _max_lang({
60 e for moment in value.moments for op in moment.operations
61 for e in _function_languages_from_operation(op)
62 })
63
64
65 def _infer_function_language_from_schedule(value: v2.program_pb2.Schedule
66 ) -> str:
67 return _max_lang({
68 e for op in value.scheduled_operations
69 for e in _function_languages_from_operation(op.operation)
70 })
71
72
73 def _function_languages_from_operation(value: v2.program_pb2.Operation
74 ) -> Iterator[str]:
75 for arg in value.args.values():
76 yield from _function_languages_from_arg(arg)
77
78
79 def _function_languages_from_arg(arg_proto: v2.program_pb2.Arg
80 ) -> Iterator[str]:
81
82 which = arg_proto.WhichOneof('arg')
83 if which == 'func':
84 if arg_proto.func.type in ['add', 'mul']:
85 yield 'linear'
86 for a in arg_proto.func.args:
87 yield from _function_languages_from_arg(a)
88
89
90 def _arg_to_proto(value: ARG_LIKE,
91 *,
92 arg_function_language: Optional[str],
93 out: Optional[v2.program_pb2.Arg] = None
94 ) -> v2.program_pb2.Arg:
95 """Writes an argument value into an Arg proto.
96
97 Args:
98 value: The value to encode.
99 arg_function_language: The language to use when encoding functions. If
100 this is set to None, it will be set to the minimal language
101 necessary to support the features that were actually used.
102 out: The proto to write the result into. Defaults to a new instance.
103
104 Returns:
105 The proto that was written into as well as the `arg_function_language`
106 that was used.
107 """
108
109 if arg_function_language not in SUPPORTED_FUNCTIONS_FOR_LANGUAGE:
110 raise ValueError(f'Unrecognized arg_function_language: '
111 f'{arg_function_language!r}')
112 supported = SUPPORTED_FUNCTIONS_FOR_LANGUAGE[arg_function_language]
113
114 msg = v2.program_pb2.Arg() if out is None else out
115
116 def check_support(func_type: str) -> str:
117 if func_type not in supported:
118 lang = (repr(arg_function_language)
119 if arg_function_language is not None else '[any]')
120 raise ValueError(f'Function type {func_type!r} not supported by '
121 f'arg_function_language {lang}')
122 return func_type
123
124 if isinstance(value, (float, int, sympy.Integer, sympy.Float,
125 sympy.Rational, sympy.NumberSymbol)):
126 msg.arg_value.float_value = float(value)
127 elif isinstance(value, str):
128 msg.arg_value.string_value = value
129 elif (isinstance(value, (list, tuple, np.ndarray)) and
130 all(isinstance(x, (bool, np.bool_)) for x in value)):
131 # Some protobuf / numpy combinations do not support np.bool_, so cast.
132 msg.arg_value.bool_values.values.extend([bool(x) for x in value])
133 elif isinstance(value, sympy.Symbol):
134 msg.symbol = str(value.free_symbols.pop())
135 elif isinstance(value, sympy.Add):
136 msg.func.type = check_support('add')
137 for arg in value.args:
138 _arg_to_proto(arg,
139 arg_function_language=arg_function_language,
140 out=msg.func.args.add())
141 elif isinstance(value, sympy.Mul):
142 msg.func.type = check_support('mul')
143 for arg in value.args:
144 _arg_to_proto(arg,
145 arg_function_language=arg_function_language,
146 out=msg.func.args.add())
147 else:
148 raise ValueError(f'Unrecognized arg type: {type(value)}')
149
150 return msg
151
152
153 def _arg_from_proto(
154 arg_proto: v2.program_pb2.Arg,
155 *,
156 arg_function_language: str,
157 required_arg_name: Optional[str] = None,
158 ) -> Optional[ARG_LIKE]:
159 """Extracts a python value from an argument value proto.
160
161 Args:
162 arg_proto: The proto containing a serialized value.
163 arg_function_language: The `arg_function_language` field from
164 `Program.Language`.
165 required_arg_name: If set to `None`, the method will return `None` when
166 given an unset proto value. If set to a string, the method will
167 instead raise an error complaining that the value is missing in that
168 situation.
169
170 Returns:
171 The deserialized value, or else None if there was no set value and
172 `required_arg_name` was set to `None`.
173 """
174 supported = SUPPORTED_FUNCTIONS_FOR_LANGUAGE.get(arg_function_language)
175 if supported is None:
176 raise ValueError(f'Unrecognized arg_function_language: '
177 f'{arg_function_language!r}')
178
179 which = arg_proto.WhichOneof('arg')
180 if which == 'arg_value':
181 arg_value = arg_proto.arg_value
182 which_val = arg_value.WhichOneof('arg_value')
183 if which_val == 'float_value' or which_val == 'double_value':
184 if which_val == 'double_value':
185 result = float(arg_value.double_value)
186 else:
187 result = float(arg_value.float_value)
188 if math.ceil(result) == math.floor(result):
189 result = int(result)
190 return result
191 if which_val == 'bool_values':
192 return list(arg_value.bool_values.values)
193 if which_val == 'string_value':
194 return str(arg_value.string_value)
195 raise ValueError(f'Unrecognized value type: {which_val!r}')
196
197 if which == 'symbol':
198 return sympy.Symbol(arg_proto.symbol)
199
200 if which == 'func':
201 func = arg_proto.func
202
203 if func.type not in cast(Set[str], supported):
204 raise ValueError(
205 f'Unrecognized function type {func.type!r} '
206 f'for arg_function_language={arg_function_language!r}')
207
208 if func.type == 'add':
209 return sympy.Add(*[
210 _arg_from_proto(a,
211 arg_function_language=arg_function_language,
212 required_arg_name='An addition argument')
213 for a in func.args
214 ])
215
216 if func.type == 'mul':
217 return sympy.Mul(*[
218 _arg_from_proto(a,
219 arg_function_language=arg_function_language,
220 required_arg_name='A multiplication argument')
221 for a in func.args
222 ])
223
224 if required_arg_name is not None:
225 raise ValueError(
226 f'{required_arg_name} is missing or has an unrecognized '
227 f'argument type (WhichOneof("arg")={which!r}).')
228
229 return None
230
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/cirq/google/arg_func_langs.py b/cirq/google/arg_func_langs.py
--- a/cirq/google/arg_func_langs.py
+++ b/cirq/google/arg_func_langs.py
@@ -31,11 +31,12 @@
SUPPORTED_FUNCTIONS_FOR_LANGUAGE: Dict[Optional[str], FrozenSet[str]] = {
'': frozenset(),
'linear': frozenset({'add', 'mul'}),
+ 'exp': frozenset({'add', 'mul', 'pow'}),
# None means any. Is used when inferring the language during serialization.
- None: frozenset({'add', 'mul'}),
+ None: frozenset({'add', 'mul', 'pow'}),
}
-SUPPORTED_SYMPY_OPS = (sympy.Symbol, sympy.Add, sympy.Mul)
+SUPPORTED_SYMPY_OPS = (sympy.Symbol, sympy.Add, sympy.Mul, sympy.Pow)
# Argument types for gates.
ARG_LIKE = Union[int, float, List[bool], str, sympy.Symbol, sympy.Add, sympy.
@@ -47,6 +48,7 @@
LANGUAGE_ORDER = [
'',
'linear',
+ 'exp',
]
@@ -85,6 +87,10 @@
yield 'linear'
for a in arg_proto.func.args:
yield from _function_languages_from_arg(a)
+ if arg_proto.func.type in ['pow']:
+ yield 'exp'
+ for a in arg_proto.func.args:
+ yield from _function_languages_from_arg(a)
def _arg_to_proto(value: ARG_LIKE,
@@ -144,6 +150,12 @@
_arg_to_proto(arg,
arg_function_language=arg_function_language,
out=msg.func.args.add())
+ elif isinstance(value, sympy.Pow):
+ msg.func.type = check_support('pow')
+ for arg in value.args:
+ _arg_to_proto(arg,
+ arg_function_language=arg_function_language,
+ out=msg.func.args.add())
else:
raise ValueError(f'Unrecognized arg type: {type(value)}')
@@ -221,6 +233,14 @@
for a in func.args
])
+ if func.type == 'pow':
+ return sympy.Pow(*[
+ _arg_from_proto(a,
+ arg_function_language=arg_function_language,
+ required_arg_name='A power argument')
+ for a in func.args
+ ])
+
if required_arg_name is not None:
raise ValueError(
f'{required_arg_name} is missing or has an unrecognized '
|
{"golden_diff": "diff --git a/cirq/google/arg_func_langs.py b/cirq/google/arg_func_langs.py\n--- a/cirq/google/arg_func_langs.py\n+++ b/cirq/google/arg_func_langs.py\n@@ -31,11 +31,12 @@\n SUPPORTED_FUNCTIONS_FOR_LANGUAGE: Dict[Optional[str], FrozenSet[str]] = {\n '': frozenset(),\n 'linear': frozenset({'add', 'mul'}),\n+ 'exp': frozenset({'add', 'mul', 'pow'}),\n # None means any. Is used when inferring the language during serialization.\n- None: frozenset({'add', 'mul'}),\n+ None: frozenset({'add', 'mul', 'pow'}),\n }\n \n-SUPPORTED_SYMPY_OPS = (sympy.Symbol, sympy.Add, sympy.Mul)\n+SUPPORTED_SYMPY_OPS = (sympy.Symbol, sympy.Add, sympy.Mul, sympy.Pow)\n \n # Argument types for gates.\n ARG_LIKE = Union[int, float, List[bool], str, sympy.Symbol, sympy.Add, sympy.\n@@ -47,6 +48,7 @@\n LANGUAGE_ORDER = [\n '',\n 'linear',\n+ 'exp',\n ]\n \n \n@@ -85,6 +87,10 @@\n yield 'linear'\n for a in arg_proto.func.args:\n yield from _function_languages_from_arg(a)\n+ if arg_proto.func.type in ['pow']:\n+ yield 'exp'\n+ for a in arg_proto.func.args:\n+ yield from _function_languages_from_arg(a)\n \n \n def _arg_to_proto(value: ARG_LIKE,\n@@ -144,6 +150,12 @@\n _arg_to_proto(arg,\n arg_function_language=arg_function_language,\n out=msg.func.args.add())\n+ elif isinstance(value, sympy.Pow):\n+ msg.func.type = check_support('pow')\n+ for arg in value.args:\n+ _arg_to_proto(arg,\n+ arg_function_language=arg_function_language,\n+ out=msg.func.args.add())\n else:\n raise ValueError(f'Unrecognized arg type: {type(value)}')\n \n@@ -221,6 +233,14 @@\n for a in func.args\n ])\n \n+ if func.type == 'pow':\n+ return sympy.Pow(*[\n+ _arg_from_proto(a,\n+ arg_function_language=arg_function_language,\n+ required_arg_name='A power argument')\n+ for a in func.args\n+ ])\n+\n if required_arg_name is not None:\n raise ValueError(\n f'{required_arg_name} is missing or has an unrecognized '\n", "issue": "sqrtISWAP serializer doesn't like rz, ry, rx\nThe `cirq.rz` helper function ([cirq.ops.common_gates:1045](https://github.com/quantumlib/Cirq/blob/d998b3afe20dd6783e00dcc2590ba0f162b95af7/cirq/ops/common_gates.py#L1045)) converts inputs to half turns (radians?) by dividing by sympy.pi for parametrized inputs, but this division is actually expressed as `sympy.Mul(sympy.Pow(sympy.pi, sympy.Integer(-1))`\r\n\r\nThe `sympy.Pow` part of that expression then causes problems with serializers that expect only `Mul` and `Add` in Sympy expressions (i.e. 
[cirq.google.arg_func_langs:143](https://github.com/quantumlib/Cirq/blob/d998b3afe20dd6783e00dcc2590ba0f162b95af7/cirq/google/arg_func_langs.py#L143)) \r\n\r\nWould it make more sense to convert all forms of `rz` inputs (symbolic or not) with division by `np.pi`?\r\n\r\nreproducing code:\r\n```\r\n# This introduces `Pow`:\r\ncontains_pow = sympy.Symbol('x') / sympy.pi\r\nfor arg in contains_pow.args:\r\n print(arg, type(arg))\r\n>>> x <class 'sympy.core.symbol.Symbol'>\r\n>>> 1/pi <class 'sympy.core.power.Pow'>\r\n\r\n# This casts to float:\r\nno_pow = sympy.Symbol('x') / np.pi\r\nfor arg in no_pow.args:\r\n print(arg, type(arg))\r\n\r\n>>> 0.318309886183791 <class 'sympy.core.numbers.Float'>\r\n>>> x <class 'sympy.core.symbol.Symbol'>\r\n\r\n# Serializer chokes up on Pow due to sympy.pi division\r\nq0 = cirq.GridQubit(0, 0)\r\nc0 = cirq.Circuit(cirq.rz(sympy.Symbol('x')).on(q0))\r\nc1 = cirq.Circuit(cirq.ZPowGate(exponent=sympy.Symbol('x')/np.pi, global_shift=-0.5).on(q0) )\r\n\r\ncirq.google.SQRT_ISWAP_GATESET.serialize(c0) # Throws ValueError: Unrecognized arg type: <class 'sympy.core.power.Pow'>\r\ncirq.google.SQRT_ISWAP_GATESET.serialize(c1) # Works fine\r\n```\r\n\r\n@dstrain115 \n", "before_files": [{"content": "# Copyright 2019 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport math\nfrom typing import (\n List,\n Union,\n Optional,\n Iterator,\n Iterable,\n cast,\n Set,\n Dict,\n FrozenSet,\n)\n\nimport numpy as np\nimport sympy\nfrom cirq.google.api import v2\n\nSUPPORTED_FUNCTIONS_FOR_LANGUAGE: Dict[Optional[str], FrozenSet[str]] = {\n '': frozenset(),\n 'linear': frozenset({'add', 'mul'}),\n # None means any. 
Is used when inferring the language during serialization.\n None: frozenset({'add', 'mul'}),\n}\n\nSUPPORTED_SYMPY_OPS = (sympy.Symbol, sympy.Add, sympy.Mul)\n\n# Argument types for gates.\nARG_LIKE = Union[int, float, List[bool], str, sympy.Symbol, sympy.Add, sympy.\n Mul]\n\n# Supported function languages in order from least to most flexible.\n# Clients should use the least flexible language they can, to make it easier\n# to gradually roll out new capabilities to clients and servers.\nLANGUAGE_ORDER = [\n '',\n 'linear',\n]\n\n\ndef _max_lang(langs: Iterable[str]) -> str:\n i = max((LANGUAGE_ORDER.index(e) for e in langs), default=0)\n return LANGUAGE_ORDER[i]\n\n\ndef _infer_function_language_from_circuit(value: v2.program_pb2.Circuit) -> str:\n return _max_lang({\n e for moment in value.moments for op in moment.operations\n for e in _function_languages_from_operation(op)\n })\n\n\ndef _infer_function_language_from_schedule(value: v2.program_pb2.Schedule\n ) -> str:\n return _max_lang({\n e for op in value.scheduled_operations\n for e in _function_languages_from_operation(op.operation)\n })\n\n\ndef _function_languages_from_operation(value: v2.program_pb2.Operation\n ) -> Iterator[str]:\n for arg in value.args.values():\n yield from _function_languages_from_arg(arg)\n\n\ndef _function_languages_from_arg(arg_proto: v2.program_pb2.Arg\n ) -> Iterator[str]:\n\n which = arg_proto.WhichOneof('arg')\n if which == 'func':\n if arg_proto.func.type in ['add', 'mul']:\n yield 'linear'\n for a in arg_proto.func.args:\n yield from _function_languages_from_arg(a)\n\n\ndef _arg_to_proto(value: ARG_LIKE,\n *,\n arg_function_language: Optional[str],\n out: Optional[v2.program_pb2.Arg] = None\n ) -> v2.program_pb2.Arg:\n \"\"\"Writes an argument value into an Arg proto.\n\n Args:\n value: The value to encode.\n arg_function_language: The language to use when encoding functions. If\n this is set to None, it will be set to the minimal language\n necessary to support the features that were actually used.\n out: The proto to write the result into. 
Defaults to a new instance.\n\n Returns:\n The proto that was written into as well as the `arg_function_language`\n that was used.\n \"\"\"\n\n if arg_function_language not in SUPPORTED_FUNCTIONS_FOR_LANGUAGE:\n raise ValueError(f'Unrecognized arg_function_language: '\n f'{arg_function_language!r}')\n supported = SUPPORTED_FUNCTIONS_FOR_LANGUAGE[arg_function_language]\n\n msg = v2.program_pb2.Arg() if out is None else out\n\n def check_support(func_type: str) -> str:\n if func_type not in supported:\n lang = (repr(arg_function_language)\n if arg_function_language is not None else '[any]')\n raise ValueError(f'Function type {func_type!r} not supported by '\n f'arg_function_language {lang}')\n return func_type\n\n if isinstance(value, (float, int, sympy.Integer, sympy.Float,\n sympy.Rational, sympy.NumberSymbol)):\n msg.arg_value.float_value = float(value)\n elif isinstance(value, str):\n msg.arg_value.string_value = value\n elif (isinstance(value, (list, tuple, np.ndarray)) and\n all(isinstance(x, (bool, np.bool_)) for x in value)):\n # Some protobuf / numpy combinations do not support np.bool_, so cast.\n msg.arg_value.bool_values.values.extend([bool(x) for x in value])\n elif isinstance(value, sympy.Symbol):\n msg.symbol = str(value.free_symbols.pop())\n elif isinstance(value, sympy.Add):\n msg.func.type = check_support('add')\n for arg in value.args:\n _arg_to_proto(arg,\n arg_function_language=arg_function_language,\n out=msg.func.args.add())\n elif isinstance(value, sympy.Mul):\n msg.func.type = check_support('mul')\n for arg in value.args:\n _arg_to_proto(arg,\n arg_function_language=arg_function_language,\n out=msg.func.args.add())\n else:\n raise ValueError(f'Unrecognized arg type: {type(value)}')\n\n return msg\n\n\ndef _arg_from_proto(\n arg_proto: v2.program_pb2.Arg,\n *,\n arg_function_language: str,\n required_arg_name: Optional[str] = None,\n) -> Optional[ARG_LIKE]:\n \"\"\"Extracts a python value from an argument value proto.\n\n Args:\n arg_proto: The proto containing a serialized value.\n arg_function_language: The `arg_function_language` field from\n `Program.Language`.\n required_arg_name: If set to `None`, the method will return `None` when\n given an unset proto value. 
If set to a string, the method will\n instead raise an error complaining that the value is missing in that\n situation.\n\n Returns:\n The deserialized value, or else None if there was no set value and\n `required_arg_name` was set to `None`.\n \"\"\"\n supported = SUPPORTED_FUNCTIONS_FOR_LANGUAGE.get(arg_function_language)\n if supported is None:\n raise ValueError(f'Unrecognized arg_function_language: '\n f'{arg_function_language!r}')\n\n which = arg_proto.WhichOneof('arg')\n if which == 'arg_value':\n arg_value = arg_proto.arg_value\n which_val = arg_value.WhichOneof('arg_value')\n if which_val == 'float_value' or which_val == 'double_value':\n if which_val == 'double_value':\n result = float(arg_value.double_value)\n else:\n result = float(arg_value.float_value)\n if math.ceil(result) == math.floor(result):\n result = int(result)\n return result\n if which_val == 'bool_values':\n return list(arg_value.bool_values.values)\n if which_val == 'string_value':\n return str(arg_value.string_value)\n raise ValueError(f'Unrecognized value type: {which_val!r}')\n\n if which == 'symbol':\n return sympy.Symbol(arg_proto.symbol)\n\n if which == 'func':\n func = arg_proto.func\n\n if func.type not in cast(Set[str], supported):\n raise ValueError(\n f'Unrecognized function type {func.type!r} '\n f'for arg_function_language={arg_function_language!r}')\n\n if func.type == 'add':\n return sympy.Add(*[\n _arg_from_proto(a,\n arg_function_language=arg_function_language,\n required_arg_name='An addition argument')\n for a in func.args\n ])\n\n if func.type == 'mul':\n return sympy.Mul(*[\n _arg_from_proto(a,\n arg_function_language=arg_function_language,\n required_arg_name='A multiplication argument')\n for a in func.args\n ])\n\n if required_arg_name is not None:\n raise ValueError(\n f'{required_arg_name} is missing or has an unrecognized '\n f'argument type (WhichOneof(\"arg\")={which!r}).')\n\n return None\n", "path": "cirq/google/arg_func_langs.py"}], "after_files": [{"content": "# Copyright 2019 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport math\nfrom typing import (\n List,\n Union,\n Optional,\n Iterator,\n Iterable,\n cast,\n Set,\n Dict,\n FrozenSet,\n)\n\nimport numpy as np\nimport sympy\nfrom cirq.google.api import v2\n\nSUPPORTED_FUNCTIONS_FOR_LANGUAGE: Dict[Optional[str], FrozenSet[str]] = {\n '': frozenset(),\n 'linear': frozenset({'add', 'mul'}),\n 'exp': frozenset({'add', 'mul', 'pow'}),\n # None means any. 
Is used when inferring the language during serialization.\n None: frozenset({'add', 'mul', 'pow'}),\n}\n\nSUPPORTED_SYMPY_OPS = (sympy.Symbol, sympy.Add, sympy.Mul, sympy.Pow)\n\n# Argument types for gates.\nARG_LIKE = Union[int, float, List[bool], str, sympy.Symbol, sympy.Add, sympy.\n Mul]\n\n# Supported function languages in order from least to most flexible.\n# Clients should use the least flexible language they can, to make it easier\n# to gradually roll out new capabilities to clients and servers.\nLANGUAGE_ORDER = [\n '',\n 'linear',\n 'exp',\n]\n\n\ndef _max_lang(langs: Iterable[str]) -> str:\n i = max((LANGUAGE_ORDER.index(e) for e in langs), default=0)\n return LANGUAGE_ORDER[i]\n\n\ndef _infer_function_language_from_circuit(value: v2.program_pb2.Circuit) -> str:\n return _max_lang({\n e for moment in value.moments for op in moment.operations\n for e in _function_languages_from_operation(op)\n })\n\n\ndef _infer_function_language_from_schedule(value: v2.program_pb2.Schedule\n ) -> str:\n return _max_lang({\n e for op in value.scheduled_operations\n for e in _function_languages_from_operation(op.operation)\n })\n\n\ndef _function_languages_from_operation(value: v2.program_pb2.Operation\n ) -> Iterator[str]:\n for arg in value.args.values():\n yield from _function_languages_from_arg(arg)\n\n\ndef _function_languages_from_arg(arg_proto: v2.program_pb2.Arg\n ) -> Iterator[str]:\n\n which = arg_proto.WhichOneof('arg')\n if which == 'func':\n if arg_proto.func.type in ['add', 'mul']:\n yield 'linear'\n for a in arg_proto.func.args:\n yield from _function_languages_from_arg(a)\n if arg_proto.func.type in ['pow']:\n yield 'exp'\n for a in arg_proto.func.args:\n yield from _function_languages_from_arg(a)\n\n\ndef _arg_to_proto(value: ARG_LIKE,\n *,\n arg_function_language: Optional[str],\n out: Optional[v2.program_pb2.Arg] = None\n ) -> v2.program_pb2.Arg:\n \"\"\"Writes an argument value into an Arg proto.\n\n Args:\n value: The value to encode.\n arg_function_language: The language to use when encoding functions. If\n this is set to None, it will be set to the minimal language\n necessary to support the features that were actually used.\n out: The proto to write the result into. 
Defaults to a new instance.\n\n Returns:\n The proto that was written into as well as the `arg_function_language`\n that was used.\n \"\"\"\n\n if arg_function_language not in SUPPORTED_FUNCTIONS_FOR_LANGUAGE:\n raise ValueError(f'Unrecognized arg_function_language: '\n f'{arg_function_language!r}')\n supported = SUPPORTED_FUNCTIONS_FOR_LANGUAGE[arg_function_language]\n\n msg = v2.program_pb2.Arg() if out is None else out\n\n def check_support(func_type: str) -> str:\n if func_type not in supported:\n lang = (repr(arg_function_language)\n if arg_function_language is not None else '[any]')\n raise ValueError(f'Function type {func_type!r} not supported by '\n f'arg_function_language {lang}')\n return func_type\n\n if isinstance(value, (float, int, sympy.Integer, sympy.Float,\n sympy.Rational, sympy.NumberSymbol)):\n msg.arg_value.float_value = float(value)\n elif isinstance(value, str):\n msg.arg_value.string_value = value\n elif (isinstance(value, (list, tuple, np.ndarray)) and\n all(isinstance(x, (bool, np.bool_)) for x in value)):\n # Some protobuf / numpy combinations do not support np.bool_, so cast.\n msg.arg_value.bool_values.values.extend([bool(x) for x in value])\n elif isinstance(value, sympy.Symbol):\n msg.symbol = str(value.free_symbols.pop())\n elif isinstance(value, sympy.Add):\n msg.func.type = check_support('add')\n for arg in value.args:\n _arg_to_proto(arg,\n arg_function_language=arg_function_language,\n out=msg.func.args.add())\n elif isinstance(value, sympy.Mul):\n msg.func.type = check_support('mul')\n for arg in value.args:\n _arg_to_proto(arg,\n arg_function_language=arg_function_language,\n out=msg.func.args.add())\n elif isinstance(value, sympy.Pow):\n msg.func.type = check_support('pow')\n for arg in value.args:\n _arg_to_proto(arg,\n arg_function_language=arg_function_language,\n out=msg.func.args.add())\n else:\n raise ValueError(f'Unrecognized arg type: {type(value)}')\n\n return msg\n\n\ndef _arg_from_proto(\n arg_proto: v2.program_pb2.Arg,\n *,\n arg_function_language: str,\n required_arg_name: Optional[str] = None,\n) -> Optional[ARG_LIKE]:\n \"\"\"Extracts a python value from an argument value proto.\n\n Args:\n arg_proto: The proto containing a serialized value.\n arg_function_language: The `arg_function_language` field from\n `Program.Language`.\n required_arg_name: If set to `None`, the method will return `None` when\n given an unset proto value. 
If set to a string, the method will\n instead raise an error complaining that the value is missing in that\n situation.\n\n Returns:\n The deserialized value, or else None if there was no set value and\n `required_arg_name` was set to `None`.\n \"\"\"\n supported = SUPPORTED_FUNCTIONS_FOR_LANGUAGE.get(arg_function_language)\n if supported is None:\n raise ValueError(f'Unrecognized arg_function_language: '\n f'{arg_function_language!r}')\n\n which = arg_proto.WhichOneof('arg')\n if which == 'arg_value':\n arg_value = arg_proto.arg_value\n which_val = arg_value.WhichOneof('arg_value')\n if which_val == 'float_value' or which_val == 'double_value':\n if which_val == 'double_value':\n result = float(arg_value.double_value)\n else:\n result = float(arg_value.float_value)\n if math.ceil(result) == math.floor(result):\n result = int(result)\n return result\n if which_val == 'bool_values':\n return list(arg_value.bool_values.values)\n if which_val == 'string_value':\n return str(arg_value.string_value)\n raise ValueError(f'Unrecognized value type: {which_val!r}')\n\n if which == 'symbol':\n return sympy.Symbol(arg_proto.symbol)\n\n if which == 'func':\n func = arg_proto.func\n\n if func.type not in cast(Set[str], supported):\n raise ValueError(\n f'Unrecognized function type {func.type!r} '\n f'for arg_function_language={arg_function_language!r}')\n\n if func.type == 'add':\n return sympy.Add(*[\n _arg_from_proto(a,\n arg_function_language=arg_function_language,\n required_arg_name='An addition argument')\n for a in func.args\n ])\n\n if func.type == 'mul':\n return sympy.Mul(*[\n _arg_from_proto(a,\n arg_function_language=arg_function_language,\n required_arg_name='A multiplication argument')\n for a in func.args\n ])\n\n if func.type == 'pow':\n return sympy.Pow(*[\n _arg_from_proto(a,\n arg_function_language=arg_function_language,\n required_arg_name='A power argument')\n for a in func.args\n ])\n\n if required_arg_name is not None:\n raise ValueError(\n f'{required_arg_name} is missing or has an unrecognized '\n f'argument type (WhichOneof(\"arg\")={which!r}).')\n\n return None\n", "path": "cirq/google/arg_func_langs.py"}]}
| 3,236 | 583 |
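To see why the `pow` branch added by the diff above is needed, a minimal sketch of a recursive expression walker in the same spirit as `_arg_to_proto` (this is not the actual Cirq proto serializer; it maps expressions to plain dicts and assumes only that sympy is installed):

```python
# Sketch only: dict output stands in for the v2.program_pb2.Arg proto.
import sympy


def expr_to_dict(value):
    # Numeric leaves (including sympy.pi, a NumberSymbol) collapse to floats.
    if isinstance(value, (int, float, sympy.Integer, sympy.Float,
                          sympy.Rational, sympy.NumberSymbol)):
        return {"arg_value": float(value)}
    if isinstance(value, sympy.Symbol):
        return {"symbol": str(value)}
    if isinstance(value, sympy.Add):
        return {"func": "add", "args": [expr_to_dict(a) for a in value.args]}
    if isinstance(value, sympy.Mul):
        return {"func": "mul", "args": [expr_to_dict(a) for a in value.args]}
    if isinstance(value, sympy.Pow):
        return {"func": "pow", "args": [expr_to_dict(a) for a in value.args]}
    raise ValueError(f"Unrecognized arg type: {type(value)}")


x = sympy.Symbol('x')
# x / sympy.pi is Mul(x, Pow(pi, -1)), so serialization must handle 'pow'.
print(expr_to_dict(x / sympy.pi))
# x / 3.14159... collapses to Mul(Float, x) and never reaches the 'pow' branch.
print(expr_to_dict(x / 3.141592653589793))
```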
gh_patches_debug_15993
|
rasdani/github-patches
|
git_diff
|
certbot__certbot-4897
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add more context to "Easy vs Secure" prompt
Right now you get this prompt for redirect
```
Please choose whether HTTPS access is required or optional.
-------------------------------------------------------------------------------
1: Easy - Allow both HTTP and HTTPS access to these sites
2: Secure - Make all requests redirect to secure HTTPS access
-------------------------------------------------------------------------------
```
One confusing thing: The title asks "required or optional," but those words don't appear in either option.
Also, we should provide more advice on how to choose. I.e.
```
1: Easy - Allow both HTTP and HTTPS access to these sites. Choose this if you haven't yet tested that your sites function properly when visited on HTTPS.
2: Secure - Make all requests redirect to secure HTTPS access. Choose this for new sites, or if you're confident your site works on HTTPS. You can undo this change by editing your web server's configuration.
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `certbot/display/enhancements.py`
Content:
```
1 """Certbot Enhancement Display"""
2 import logging
3
4 import zope.component
5
6 from certbot import errors
7 from certbot import interfaces
8 from certbot.display import util as display_util
9
10
11 logger = logging.getLogger(__name__)
12
13 # Define a helper function to avoid verbose code
14 util = zope.component.getUtility
15
16
17 def ask(enhancement):
18 """Display the enhancement to the user.
19
20 :param str enhancement: One of the
21 :class:`certbot.CONFIG.ENHANCEMENTS` enhancements
22
23 :returns: True if feature is desired, False otherwise
24 :rtype: bool
25
26 :raises .errors.Error: if the enhancement provided is not supported
27
28 """
29 try:
30 # Call the appropriate function based on the enhancement
31 return DISPATCH[enhancement]()
32 except KeyError:
33 logger.error("Unsupported enhancement given to ask(): %s", enhancement)
34 raise errors.Error("Unsupported Enhancement")
35
36
37 def redirect_by_default():
38 """Determines whether the user would like to redirect to HTTPS.
39
40 :returns: True if redirect is desired, False otherwise
41 :rtype: bool
42
43 """
44 choices = [
45 ("Easy", "Allow both HTTP and HTTPS access to these sites"),
46 ("Secure", "Make all requests redirect to secure HTTPS access"),
47 ]
48
49 code, selection = util(interfaces.IDisplay).menu(
50 "Please choose whether HTTPS access is required or optional.",
51 choices, default=0,
52 cli_flag="--redirect / --no-redirect", force_interactive=True)
53
54 if code != display_util.OK:
55 return False
56
57 return selection == 1
58
59
60 DISPATCH = {
61 "redirect": redirect_by_default
62 }
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/certbot/display/enhancements.py b/certbot/display/enhancements.py
--- a/certbot/display/enhancements.py
+++ b/certbot/display/enhancements.py
@@ -42,12 +42,14 @@
"""
choices = [
- ("Easy", "Allow both HTTP and HTTPS access to these sites"),
- ("Secure", "Make all requests redirect to secure HTTPS access"),
+ ("No redirect", "Make no further changes to the webserver configuration."),
+ ("Redirect", "Make all requests redirect to secure HTTPS access. "
+ "Choose this for new sites, or if you're confident your site works on HTTPS. "
+ "You can undo this change by editing your web server's configuration."),
]
code, selection = util(interfaces.IDisplay).menu(
- "Please choose whether HTTPS access is required or optional.",
+ "Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.",
choices, default=0,
cli_flag="--redirect / --no-redirect", force_interactive=True)
|
{"golden_diff": "diff --git a/certbot/display/enhancements.py b/certbot/display/enhancements.py\n--- a/certbot/display/enhancements.py\n+++ b/certbot/display/enhancements.py\n@@ -42,12 +42,14 @@\n \n \"\"\"\n choices = [\n- (\"Easy\", \"Allow both HTTP and HTTPS access to these sites\"),\n- (\"Secure\", \"Make all requests redirect to secure HTTPS access\"),\n+ (\"No redirect\", \"Make no further changes to the webserver configuration.\"),\n+ (\"Redirect\", \"Make all requests redirect to secure HTTPS access. \"\n+ \"Choose this for new sites, or if you're confident your site works on HTTPS. \"\n+ \"You can undo this change by editing your web server's configuration.\"),\n ]\n \n code, selection = util(interfaces.IDisplay).menu(\n- \"Please choose whether HTTPS access is required or optional.\",\n+ \"Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.\",\n choices, default=0,\n cli_flag=\"--redirect / --no-redirect\", force_interactive=True)\n", "issue": "Add more context to \"Easy vs Secure\" prompt\nRight now you get this prompt for redirect\n\n```\nPlease choose whether HTTPS access is required or optional.\n-------------------------------------------------------------------------------\n1: Easy - Allow both HTTP and HTTPS access to these sites\n2: Secure - Make all requests redirect to secure HTTPS access\n-------------------------------------------------------------------------------\n```\n\nOne confusing thing: The title asks \"required or optional,\" but those words don't appear in either option.\n\nAlso, we should provide more advice on how to choose. I.e. \n\n```\n1: Easy - Allow both HTTP and HTTPS access to these sites. Choose this if you haven't yet tested that your sites function properly when visited on HTTPS.\n2: Secure - Make all requests redirect to secure HTTPS access. Choose this for new sites, or if you're confident your site works on HTTPS. 
You can undo this change by editing your web server's configuration.\n```\n\n", "before_files": [{"content": "\"\"\"Certbot Enhancement Display\"\"\"\nimport logging\n\nimport zope.component\n\nfrom certbot import errors\nfrom certbot import interfaces\nfrom certbot.display import util as display_util\n\n\nlogger = logging.getLogger(__name__)\n\n# Define a helper function to avoid verbose code\nutil = zope.component.getUtility\n\n\ndef ask(enhancement):\n \"\"\"Display the enhancement to the user.\n\n :param str enhancement: One of the\n :class:`certbot.CONFIG.ENHANCEMENTS` enhancements\n\n :returns: True if feature is desired, False otherwise\n :rtype: bool\n\n :raises .errors.Error: if the enhancement provided is not supported\n\n \"\"\"\n try:\n # Call the appropriate function based on the enhancement\n return DISPATCH[enhancement]()\n except KeyError:\n logger.error(\"Unsupported enhancement given to ask(): %s\", enhancement)\n raise errors.Error(\"Unsupported Enhancement\")\n\n\ndef redirect_by_default():\n \"\"\"Determines whether the user would like to redirect to HTTPS.\n\n :returns: True if redirect is desired, False otherwise\n :rtype: bool\n\n \"\"\"\n choices = [\n (\"Easy\", \"Allow both HTTP and HTTPS access to these sites\"),\n (\"Secure\", \"Make all requests redirect to secure HTTPS access\"),\n ]\n\n code, selection = util(interfaces.IDisplay).menu(\n \"Please choose whether HTTPS access is required or optional.\",\n choices, default=0,\n cli_flag=\"--redirect / --no-redirect\", force_interactive=True)\n\n if code != display_util.OK:\n return False\n\n return selection == 1\n\n\nDISPATCH = {\n \"redirect\": redirect_by_default\n}\n", "path": "certbot/display/enhancements.py"}], "after_files": [{"content": "\"\"\"Certbot Enhancement Display\"\"\"\nimport logging\n\nimport zope.component\n\nfrom certbot import errors\nfrom certbot import interfaces\nfrom certbot.display import util as display_util\n\n\nlogger = logging.getLogger(__name__)\n\n# Define a helper function to avoid verbose code\nutil = zope.component.getUtility\n\n\ndef ask(enhancement):\n \"\"\"Display the enhancement to the user.\n\n :param str enhancement: One of the\n :class:`certbot.CONFIG.ENHANCEMENTS` enhancements\n\n :returns: True if feature is desired, False otherwise\n :rtype: bool\n\n :raises .errors.Error: if the enhancement provided is not supported\n\n \"\"\"\n try:\n # Call the appropriate function based on the enhancement\n return DISPATCH[enhancement]()\n except KeyError:\n logger.error(\"Unsupported enhancement given to ask(): %s\", enhancement)\n raise errors.Error(\"Unsupported Enhancement\")\n\n\ndef redirect_by_default():\n \"\"\"Determines whether the user would like to redirect to HTTPS.\n\n :returns: True if redirect is desired, False otherwise\n :rtype: bool\n\n \"\"\"\n choices = [\n (\"No redirect\", \"Make no further changes to the webserver configuration.\"),\n (\"Redirect\", \"Make all requests redirect to secure HTTPS access. \"\n \"Choose this for new sites, or if you're confident your site works on HTTPS. 
\"\n \"You can undo this change by editing your web server's configuration.\"),\n ]\n\n code, selection = util(interfaces.IDisplay).menu(\n \"Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.\",\n choices, default=0,\n cli_flag=\"--redirect / --no-redirect\", force_interactive=True)\n\n if code != display_util.OK:\n return False\n\n return selection == 1\n\n\nDISPATCH = {\n \"redirect\": redirect_by_default\n}\n", "path": "certbot/display/enhancements.py"}]}
| 918 | 237 |
gh_patches_debug_13250
|
rasdani/github-patches
|
git_diff
|
scrapy__scrapy-6099
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Deprecate Reppy support
https://github.com/scrapy/scrapy/issues/5226#issuecomment-897621867
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/robotstxt.py`
Content:
```
1 import logging
2 import sys
3 from abc import ABCMeta, abstractmethod
4
5 from scrapy.utils.python import to_unicode
6
7 logger = logging.getLogger(__name__)
8
9
10 def decode_robotstxt(robotstxt_body, spider, to_native_str_type=False):
11 try:
12 if to_native_str_type:
13 robotstxt_body = to_unicode(robotstxt_body)
14 else:
15 robotstxt_body = robotstxt_body.decode("utf-8")
16 except UnicodeDecodeError:
17 # If we found garbage or robots.txt in an encoding other than UTF-8, disregard it.
18 # Switch to 'allow all' state.
19 logger.warning(
20 "Failure while parsing robots.txt. File either contains garbage or "
21 "is in an encoding other than UTF-8, treating it as an empty file.",
22 exc_info=sys.exc_info(),
23 extra={"spider": spider},
24 )
25 robotstxt_body = ""
26 return robotstxt_body
27
28
29 class RobotParser(metaclass=ABCMeta):
30 @classmethod
31 @abstractmethod
32 def from_crawler(cls, crawler, robotstxt_body):
33 """Parse the content of a robots.txt_ file as bytes. This must be a class method.
34 It must return a new instance of the parser backend.
35
36 :param crawler: crawler which made the request
37 :type crawler: :class:`~scrapy.crawler.Crawler` instance
38
39 :param robotstxt_body: content of a robots.txt_ file.
40 :type robotstxt_body: bytes
41 """
42 pass
43
44 @abstractmethod
45 def allowed(self, url, user_agent):
46 """Return ``True`` if ``user_agent`` is allowed to crawl ``url``, otherwise return ``False``.
47
48 :param url: Absolute URL
49 :type url: str
50
51 :param user_agent: User agent
52 :type user_agent: str
53 """
54 pass
55
56
57 class PythonRobotParser(RobotParser):
58 def __init__(self, robotstxt_body, spider):
59 from urllib.robotparser import RobotFileParser
60
61 self.spider = spider
62 robotstxt_body = decode_robotstxt(
63 robotstxt_body, spider, to_native_str_type=True
64 )
65 self.rp = RobotFileParser()
66 self.rp.parse(robotstxt_body.splitlines())
67
68 @classmethod
69 def from_crawler(cls, crawler, robotstxt_body):
70 spider = None if not crawler else crawler.spider
71 o = cls(robotstxt_body, spider)
72 return o
73
74 def allowed(self, url, user_agent):
75 user_agent = to_unicode(user_agent)
76 url = to_unicode(url)
77 return self.rp.can_fetch(user_agent, url)
78
79
80 class ReppyRobotParser(RobotParser):
81 def __init__(self, robotstxt_body, spider):
82 from reppy.robots import Robots
83
84 self.spider = spider
85 self.rp = Robots.parse("", robotstxt_body)
86
87 @classmethod
88 def from_crawler(cls, crawler, robotstxt_body):
89 spider = None if not crawler else crawler.spider
90 o = cls(robotstxt_body, spider)
91 return o
92
93 def allowed(self, url, user_agent):
94 return self.rp.allowed(url, user_agent)
95
96
97 class RerpRobotParser(RobotParser):
98 def __init__(self, robotstxt_body, spider):
99 from robotexclusionrulesparser import RobotExclusionRulesParser
100
101 self.spider = spider
102 self.rp = RobotExclusionRulesParser()
103 robotstxt_body = decode_robotstxt(robotstxt_body, spider)
104 self.rp.parse(robotstxt_body)
105
106 @classmethod
107 def from_crawler(cls, crawler, robotstxt_body):
108 spider = None if not crawler else crawler.spider
109 o = cls(robotstxt_body, spider)
110 return o
111
112 def allowed(self, url, user_agent):
113 user_agent = to_unicode(user_agent)
114 url = to_unicode(url)
115 return self.rp.is_allowed(user_agent, url)
116
117
118 class ProtegoRobotParser(RobotParser):
119 def __init__(self, robotstxt_body, spider):
120 from protego import Protego
121
122 self.spider = spider
123 robotstxt_body = decode_robotstxt(robotstxt_body, spider)
124 self.rp = Protego.parse(robotstxt_body)
125
126 @classmethod
127 def from_crawler(cls, crawler, robotstxt_body):
128 spider = None if not crawler else crawler.spider
129 o = cls(robotstxt_body, spider)
130 return o
131
132 def allowed(self, url, user_agent):
133 user_agent = to_unicode(user_agent)
134 url = to_unicode(url)
135 return self.rp.can_fetch(url, user_agent)
136
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scrapy/robotstxt.py b/scrapy/robotstxt.py
--- a/scrapy/robotstxt.py
+++ b/scrapy/robotstxt.py
@@ -1,7 +1,9 @@
import logging
import sys
from abc import ABCMeta, abstractmethod
+from warnings import warn
+from scrapy.exceptions import ScrapyDeprecationWarning
from scrapy.utils.python import to_unicode
logger = logging.getLogger(__name__)
@@ -79,6 +81,7 @@
class ReppyRobotParser(RobotParser):
def __init__(self, robotstxt_body, spider):
+ warn("ReppyRobotParser is deprecated.", ScrapyDeprecationWarning, stacklevel=2)
from reppy.robots import Robots
self.spider = spider
|
{"golden_diff": "diff --git a/scrapy/robotstxt.py b/scrapy/robotstxt.py\n--- a/scrapy/robotstxt.py\n+++ b/scrapy/robotstxt.py\n@@ -1,7 +1,9 @@\n import logging\n import sys\n from abc import ABCMeta, abstractmethod\n+from warnings import warn\n \n+from scrapy.exceptions import ScrapyDeprecationWarning\n from scrapy.utils.python import to_unicode\n \n logger = logging.getLogger(__name__)\n@@ -79,6 +81,7 @@\n \n class ReppyRobotParser(RobotParser):\n def __init__(self, robotstxt_body, spider):\n+ warn(\"ReppyRobotParser is deprecated.\", ScrapyDeprecationWarning, stacklevel=2)\n from reppy.robots import Robots\n \n self.spider = spider\n", "issue": "Deprecate Reppy support\nhttps://github.com/scrapy/scrapy/issues/5226#issuecomment-897621867\n", "before_files": [{"content": "import logging\nimport sys\nfrom abc import ABCMeta, abstractmethod\n\nfrom scrapy.utils.python import to_unicode\n\nlogger = logging.getLogger(__name__)\n\n\ndef decode_robotstxt(robotstxt_body, spider, to_native_str_type=False):\n try:\n if to_native_str_type:\n robotstxt_body = to_unicode(robotstxt_body)\n else:\n robotstxt_body = robotstxt_body.decode(\"utf-8\")\n except UnicodeDecodeError:\n # If we found garbage or robots.txt in an encoding other than UTF-8, disregard it.\n # Switch to 'allow all' state.\n logger.warning(\n \"Failure while parsing robots.txt. File either contains garbage or \"\n \"is in an encoding other than UTF-8, treating it as an empty file.\",\n exc_info=sys.exc_info(),\n extra={\"spider\": spider},\n )\n robotstxt_body = \"\"\n return robotstxt_body\n\n\nclass RobotParser(metaclass=ABCMeta):\n @classmethod\n @abstractmethod\n def from_crawler(cls, crawler, robotstxt_body):\n \"\"\"Parse the content of a robots.txt_ file as bytes. This must be a class method.\n It must return a new instance of the parser backend.\n\n :param crawler: crawler which made the request\n :type crawler: :class:`~scrapy.crawler.Crawler` instance\n\n :param robotstxt_body: content of a robots.txt_ file.\n :type robotstxt_body: bytes\n \"\"\"\n pass\n\n @abstractmethod\n def allowed(self, url, user_agent):\n \"\"\"Return ``True`` if ``user_agent`` is allowed to crawl ``url``, otherwise return ``False``.\n\n :param url: Absolute URL\n :type url: str\n\n :param user_agent: User agent\n :type user_agent: str\n \"\"\"\n pass\n\n\nclass PythonRobotParser(RobotParser):\n def __init__(self, robotstxt_body, spider):\n from urllib.robotparser import RobotFileParser\n\n self.spider = spider\n robotstxt_body = decode_robotstxt(\n robotstxt_body, spider, to_native_str_type=True\n )\n self.rp = RobotFileParser()\n self.rp.parse(robotstxt_body.splitlines())\n\n @classmethod\n def from_crawler(cls, crawler, robotstxt_body):\n spider = None if not crawler else crawler.spider\n o = cls(robotstxt_body, spider)\n return o\n\n def allowed(self, url, user_agent):\n user_agent = to_unicode(user_agent)\n url = to_unicode(url)\n return self.rp.can_fetch(user_agent, url)\n\n\nclass ReppyRobotParser(RobotParser):\n def __init__(self, robotstxt_body, spider):\n from reppy.robots import Robots\n\n self.spider = spider\n self.rp = Robots.parse(\"\", robotstxt_body)\n\n @classmethod\n def from_crawler(cls, crawler, robotstxt_body):\n spider = None if not crawler else crawler.spider\n o = cls(robotstxt_body, spider)\n return o\n\n def allowed(self, url, user_agent):\n return self.rp.allowed(url, user_agent)\n\n\nclass RerpRobotParser(RobotParser):\n def __init__(self, robotstxt_body, spider):\n from robotexclusionrulesparser import 
RobotExclusionRulesParser\n\n self.spider = spider\n self.rp = RobotExclusionRulesParser()\n robotstxt_body = decode_robotstxt(robotstxt_body, spider)\n self.rp.parse(robotstxt_body)\n\n @classmethod\n def from_crawler(cls, crawler, robotstxt_body):\n spider = None if not crawler else crawler.spider\n o = cls(robotstxt_body, spider)\n return o\n\n def allowed(self, url, user_agent):\n user_agent = to_unicode(user_agent)\n url = to_unicode(url)\n return self.rp.is_allowed(user_agent, url)\n\n\nclass ProtegoRobotParser(RobotParser):\n def __init__(self, robotstxt_body, spider):\n from protego import Protego\n\n self.spider = spider\n robotstxt_body = decode_robotstxt(robotstxt_body, spider)\n self.rp = Protego.parse(robotstxt_body)\n\n @classmethod\n def from_crawler(cls, crawler, robotstxt_body):\n spider = None if not crawler else crawler.spider\n o = cls(robotstxt_body, spider)\n return o\n\n def allowed(self, url, user_agent):\n user_agent = to_unicode(user_agent)\n url = to_unicode(url)\n return self.rp.can_fetch(url, user_agent)\n", "path": "scrapy/robotstxt.py"}], "after_files": [{"content": "import logging\nimport sys\nfrom abc import ABCMeta, abstractmethod\nfrom warnings import warn\n\nfrom scrapy.exceptions import ScrapyDeprecationWarning\nfrom scrapy.utils.python import to_unicode\n\nlogger = logging.getLogger(__name__)\n\n\ndef decode_robotstxt(robotstxt_body, spider, to_native_str_type=False):\n try:\n if to_native_str_type:\n robotstxt_body = to_unicode(robotstxt_body)\n else:\n robotstxt_body = robotstxt_body.decode(\"utf-8\")\n except UnicodeDecodeError:\n # If we found garbage or robots.txt in an encoding other than UTF-8, disregard it.\n # Switch to 'allow all' state.\n logger.warning(\n \"Failure while parsing robots.txt. File either contains garbage or \"\n \"is in an encoding other than UTF-8, treating it as an empty file.\",\n exc_info=sys.exc_info(),\n extra={\"spider\": spider},\n )\n robotstxt_body = \"\"\n return robotstxt_body\n\n\nclass RobotParser(metaclass=ABCMeta):\n @classmethod\n @abstractmethod\n def from_crawler(cls, crawler, robotstxt_body):\n \"\"\"Parse the content of a robots.txt_ file as bytes. 
This must be a class method.\n It must return a new instance of the parser backend.\n\n :param crawler: crawler which made the request\n :type crawler: :class:`~scrapy.crawler.Crawler` instance\n\n :param robotstxt_body: content of a robots.txt_ file.\n :type robotstxt_body: bytes\n \"\"\"\n pass\n\n @abstractmethod\n def allowed(self, url, user_agent):\n \"\"\"Return ``True`` if ``user_agent`` is allowed to crawl ``url``, otherwise return ``False``.\n\n :param url: Absolute URL\n :type url: str\n\n :param user_agent: User agent\n :type user_agent: str\n \"\"\"\n pass\n\n\nclass PythonRobotParser(RobotParser):\n def __init__(self, robotstxt_body, spider):\n from urllib.robotparser import RobotFileParser\n\n self.spider = spider\n robotstxt_body = decode_robotstxt(\n robotstxt_body, spider, to_native_str_type=True\n )\n self.rp = RobotFileParser()\n self.rp.parse(robotstxt_body.splitlines())\n\n @classmethod\n def from_crawler(cls, crawler, robotstxt_body):\n spider = None if not crawler else crawler.spider\n o = cls(robotstxt_body, spider)\n return o\n\n def allowed(self, url, user_agent):\n user_agent = to_unicode(user_agent)\n url = to_unicode(url)\n return self.rp.can_fetch(user_agent, url)\n\n\nclass ReppyRobotParser(RobotParser):\n def __init__(self, robotstxt_body, spider):\n warn(\"ReppyRobotParser is deprecated.\", ScrapyDeprecationWarning, stacklevel=2)\n from reppy.robots import Robots\n\n self.spider = spider\n self.rp = Robots.parse(\"\", robotstxt_body)\n\n @classmethod\n def from_crawler(cls, crawler, robotstxt_body):\n spider = None if not crawler else crawler.spider\n o = cls(robotstxt_body, spider)\n return o\n\n def allowed(self, url, user_agent):\n return self.rp.allowed(url, user_agent)\n\n\nclass RerpRobotParser(RobotParser):\n def __init__(self, robotstxt_body, spider):\n from robotexclusionrulesparser import RobotExclusionRulesParser\n\n self.spider = spider\n self.rp = RobotExclusionRulesParser()\n robotstxt_body = decode_robotstxt(robotstxt_body, spider)\n self.rp.parse(robotstxt_body)\n\n @classmethod\n def from_crawler(cls, crawler, robotstxt_body):\n spider = None if not crawler else crawler.spider\n o = cls(robotstxt_body, spider)\n return o\n\n def allowed(self, url, user_agent):\n user_agent = to_unicode(user_agent)\n url = to_unicode(url)\n return self.rp.is_allowed(user_agent, url)\n\n\nclass ProtegoRobotParser(RobotParser):\n def __init__(self, robotstxt_body, spider):\n from protego import Protego\n\n self.spider = spider\n robotstxt_body = decode_robotstxt(robotstxt_body, spider)\n self.rp = Protego.parse(robotstxt_body)\n\n @classmethod\n def from_crawler(cls, crawler, robotstxt_body):\n spider = None if not crawler else crawler.spider\n o = cls(robotstxt_body, spider)\n return o\n\n def allowed(self, url, user_agent):\n user_agent = to_unicode(user_agent)\n url = to_unicode(url)\n return self.rp.can_fetch(url, user_agent)\n", "path": "scrapy/robotstxt.py"}]}
| 1,627 | 173 |
gh_patches_debug_1829
|
rasdani/github-patches
|
git_diff
|
googleapis__google-api-python-client-1125
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Incorrect logging level for `oauth2client.contrib.locked_file` (ref #427)
This is - as per request in the original #427 - a follow-up issue.
The function documentation says:
> Detects an appropriate cache module and returns it. Returns `googleapiclient.discovery_cache.base.Cache`, a cache object which is auto detected, or `None` if no cache object is available.
Exceptions in this context only provide a pythonic way to do control flow, and do not indicate anomalies or malfunctioning of the code. `None` is a perfectly fine, as-per-specifications behaviour. The correct level should probably be `INFO`.
Also, even if you would stand by the `WARNING` level, you should tidy up the messy log message. This is what the user is met with:
```WARNING:googleapiclient.discovery_cache:file_cache is unavailable when using oauth2client >= 4.0.0 or google-auth
Traceback (most recent call last):
File "/home/mac/dev/flashfaces/flashfaces/lib/python3.8/site-packages/googleapiclient/discovery_cache/__init__.py", line 36, in autodetect
from google.appengine.api import memcache
ModuleNotFoundError: No module named 'google.appengine'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/mac/dev/flashfaces/flashfaces/lib/python3.8/site-packages/googleapiclient/discovery_cache/file_cache.py", line 33, in <module>
from oauth2client.contrib.locked_file import LockedFile
ModuleNotFoundError: No module named 'oauth2client.contrib.locked_file'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/mac/dev/flashfaces/flashfaces/lib/python3.8/site-packages/googleapiclient/discovery_cache/file_cache.py", line 37, in <module>
from oauth2client.locked_file import LockedFile
ModuleNotFoundError: No module named 'oauth2client.locked_file'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/mac/dev/flashfaces/flashfaces/lib/python3.8/site-packages/googleapiclient/discovery_cache/__init__.py", line 42, in autodetect
from . import file_cache
File "/home/mac/dev/flashfaces/flashfaces/lib/python3.8/site-packages/googleapiclient/discovery_cache/file_cache.py", line 40, in <module>
raise ImportError(
ImportError: file_cache is unavailable when using oauth2client >= 4.0.0 or google-auth
```
A saner way to to communicate with the user could potentially be:
`WARNING - No caching available (file_cache is unavailable when using oauth2client >= 4.0.0 or google-auth)`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `googleapiclient/discovery_cache/__init__.py`
Content:
```
1 # Copyright 2014 Google Inc. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Caching utility for the discovery document."""
16
17 from __future__ import absolute_import
18
19 import logging
20 import datetime
21 import os
22
23 LOGGER = logging.getLogger(__name__)
24
25 DISCOVERY_DOC_MAX_AGE = 60 * 60 * 24 # 1 day
26
27
28 def autodetect():
29 """Detects an appropriate cache module and returns it.
30
31 Returns:
32 googleapiclient.discovery_cache.base.Cache, a cache object which
33 is auto detected, or None if no cache object is available.
34 """
35 if 'APPENGINE_RUNTIME' in os.environ:
36 try:
37 from google.appengine.api import memcache
38 from . import appengine_memcache
39
40 return appengine_memcache.cache
41 except Exception:
42 pass
43 try:
44 from . import file_cache
45
46 return file_cache.cache
47 except Exception as e:
48 LOGGER.warning(e, exc_info=True)
49 return None
50
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/googleapiclient/discovery_cache/__init__.py b/googleapiclient/discovery_cache/__init__.py
--- a/googleapiclient/discovery_cache/__init__.py
+++ b/googleapiclient/discovery_cache/__init__.py
@@ -44,6 +44,7 @@
from . import file_cache
return file_cache.cache
- except Exception as e:
- LOGGER.warning(e, exc_info=True)
+ except Exception:
+ LOGGER.info("file_cache is only supported with oauth2client<4.0.0",
+ exc_info=False)
return None
|
{"golden_diff": "diff --git a/googleapiclient/discovery_cache/__init__.py b/googleapiclient/discovery_cache/__init__.py\n--- a/googleapiclient/discovery_cache/__init__.py\n+++ b/googleapiclient/discovery_cache/__init__.py\n@@ -44,6 +44,7 @@\n from . import file_cache\n \n return file_cache.cache\n- except Exception as e:\n- LOGGER.warning(e, exc_info=True)\n+ except Exception:\n+ LOGGER.info(\"file_cache is only supported with oauth2client<4.0.0\",\n+ exc_info=False)\n return None\n", "issue": "Incorrect logging level for `oauth2client.contrib.locked_file` (ref #427)\nThis is - as per request in the original #427 - a follow-up issue.\r\n\r\nThe function documentation says:\r\n\r\n> Detects an appropriate cache module and returns it. Returns `googleapiclient.discovery_cache.base.Cache`, a cache object which is auto detected, or `None` if no cache object is available.\r\n\r\nExceptions in this context only provide a pythonic way to do control flow, and do not indicate anomalies or malfunctioning of the code. `None` is a perfectly fine, as-per-specifications behaviour. The correct level should probably be `INFO`.\r\n\r\nAlso, even if you would stand by the `WARNING` level, you should tidy up the messy log message. This is what the user is met with:\r\n\r\n```WARNING:googleapiclient.discovery_cache:file_cache is unavailable when using oauth2client >= 4.0.0 or google-auth\r\nTraceback (most recent call last):\r\n File \"/home/mac/dev/flashfaces/flashfaces/lib/python3.8/site-packages/googleapiclient/discovery_cache/__init__.py\", line 36, in autodetect\r\n from google.appengine.api import memcache\r\nModuleNotFoundError: No module named 'google.appengine'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/mac/dev/flashfaces/flashfaces/lib/python3.8/site-packages/googleapiclient/discovery_cache/file_cache.py\", line 33, in <module>\r\n from oauth2client.contrib.locked_file import LockedFile\r\nModuleNotFoundError: No module named 'oauth2client.contrib.locked_file'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/mac/dev/flashfaces/flashfaces/lib/python3.8/site-packages/googleapiclient/discovery_cache/file_cache.py\", line 37, in <module>\r\n from oauth2client.locked_file import LockedFile\r\nModuleNotFoundError: No module named 'oauth2client.locked_file'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/mac/dev/flashfaces/flashfaces/lib/python3.8/site-packages/googleapiclient/discovery_cache/__init__.py\", line 42, in autodetect\r\n from . import file_cache\r\n File \"/home/mac/dev/flashfaces/flashfaces/lib/python3.8/site-packages/googleapiclient/discovery_cache/file_cache.py\", line 40, in <module>\r\n raise ImportError(\r\nImportError: file_cache is unavailable when using oauth2client >= 4.0.0 or google-auth\r\n```\r\n\r\nA saner way to to communicate with the user could potentially be:\r\n\r\n`WARNING - No caching available (file_cache is unavailable when using oauth2client >= 4.0.0 or google-auth)`\r\n\n", "before_files": [{"content": "# Copyright 2014 Google Inc. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Caching utility for the discovery document.\"\"\"\n\nfrom __future__ import absolute_import\n\nimport logging\nimport datetime\nimport os\n\nLOGGER = logging.getLogger(__name__)\n\nDISCOVERY_DOC_MAX_AGE = 60 * 60 * 24 # 1 day\n\n\ndef autodetect():\n \"\"\"Detects an appropriate cache module and returns it.\n\n Returns:\n googleapiclient.discovery_cache.base.Cache, a cache object which\n is auto detected, or None if no cache object is available.\n \"\"\"\n if 'APPENGINE_RUNTIME' in os.environ:\n try:\n from google.appengine.api import memcache\n from . import appengine_memcache\n\n return appengine_memcache.cache\n except Exception:\n pass\n try:\n from . import file_cache\n\n return file_cache.cache\n except Exception as e:\n LOGGER.warning(e, exc_info=True)\n return None\n", "path": "googleapiclient/discovery_cache/__init__.py"}], "after_files": [{"content": "# Copyright 2014 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Caching utility for the discovery document.\"\"\"\n\nfrom __future__ import absolute_import\n\nimport logging\nimport datetime\nimport os\n\nLOGGER = logging.getLogger(__name__)\n\nDISCOVERY_DOC_MAX_AGE = 60 * 60 * 24 # 1 day\n\n\ndef autodetect():\n \"\"\"Detects an appropriate cache module and returns it.\n\n Returns:\n googleapiclient.discovery_cache.base.Cache, a cache object which\n is auto detected, or None if no cache object is available.\n \"\"\"\n if 'APPENGINE_RUNTIME' in os.environ:\n try:\n from google.appengine.api import memcache\n from . import appengine_memcache\n\n return appengine_memcache.cache\n except Exception:\n pass\n try:\n from . import file_cache\n\n return file_cache.cache\n except Exception:\n LOGGER.info(\"file_cache is only supported with oauth2client<4.0.0\",\n exc_info=False)\n return None\n", "path": "googleapiclient/discovery_cache/__init__.py"}]}
| 1,313 | 134 |