problem_id (stringlengths 18-22) | source (stringclasses, 1 value) | task_type (stringclasses, 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.1k-10.2k) | golden_diff (stringlengths 151-4.94k) | verification_info (stringlengths 582-21k) | num_tokens (int64, 271-2.05k) | num_tokens_diff (int64, 47-1.02k)
---|---|---|---|---|---|---|---|---|
gh_patches_debug_14854 | rasdani/github-patches | git_diff | getsentry__sentry-python-143 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Package not classified as Python 3 compatible
* `sentry-sdk` is classified as not Python 3 compatible by [pyup](https://pyup.io) when it actually is - e.g. [https://pyup.io/repos/github/pushresume/backend](https://pyup.io/repos/github/pushresume/backend)

* `setup.py` classifiers should be updated with `Programming Language :: Python :: 3`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 """
4 Sentry-Python - Sentry SDK for Python
5 =====================================
6
7 **Sentry-Python is an experimental SDK for Sentry.** Check out `GitHub
8 <https://github.com/getsentry/sentry-python>`_ to find out more.
9 """
10
11 from setuptools import setup, find_packages
12
13 setup(
14 name="sentry-sdk",
15 version="0.5.1",
16 author="Sentry Team and Contributors",
17 author_email="[email protected]",
18 url="https://github.com/getsentry/sentry-python",
19 description="Python client for Sentry (https://getsentry.com)",
20 long_description=__doc__,
21 packages=find_packages(exclude=("tests", "tests.*")),
22 zip_safe=False,
23 license="BSD",
24 install_requires=["urllib3", "certifi"],
25 extras_require={"flask": ["flask>=0.8", "blinker>=1.1"]},
26 )
27
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -23,4 +23,20 @@
license="BSD",
install_requires=["urllib3", "certifi"],
extras_require={"flask": ["flask>=0.8", "blinker>=1.1"]},
+ classifiers=[
+ 'Development Status :: 5 - Production/Stable',
+ 'Environment :: Web Environment',
+ 'Intended Audience :: Developers',
+ 'License :: OSI Approved :: BSD License',
+ 'Operating System :: OS Independent',
+ 'Programming Language :: Python',
+ 'Programming Language :: Python :: 2',
+ 'Programming Language :: Python :: 2.7',
+ 'Programming Language :: Python :: 3',
+ 'Programming Language :: Python :: 3.4',
+ 'Programming Language :: Python :: 3.5',
+ 'Programming Language :: Python :: 3.6',
+ 'Programming Language :: Python :: 3.7',
+ 'Topic :: Software Development :: Libraries :: Python Modules',
+ ],
)
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -23,4 +23,20 @@\n license=\"BSD\",\n install_requires=[\"urllib3\", \"certifi\"],\n extras_require={\"flask\": [\"flask>=0.8\", \"blinker>=1.1\"]},\n+ classifiers=[\n+ 'Development Status :: 5 - Production/Stable',\n+ 'Environment :: Web Environment',\n+ 'Intended Audience :: Developers',\n+ 'License :: OSI Approved :: BSD License',\n+ 'Operating System :: OS Independent',\n+ 'Programming Language :: Python',\n+ 'Programming Language :: Python :: 2',\n+ 'Programming Language :: Python :: 2.7',\n+ 'Programming Language :: Python :: 3',\n+ 'Programming Language :: Python :: 3.4',\n+ 'Programming Language :: Python :: 3.5',\n+ 'Programming Language :: Python :: 3.6',\n+ 'Programming Language :: Python :: 3.7',\n+ 'Topic :: Software Development :: Libraries :: Python Modules',\n+ ],\n )\n", "issue": "Package not classified as Python 3 compatible \n* `sentry-sdk` is classified as not Python 3 compatible by [pyup](https://pyup.io) when it actually is - e.g. [https://pyup.io/repos/github/pushresume/backend](https://pyup.io/repos/github/pushresume/backend)\r\n\r\n\r\n\r\n* `setup.py` classifiers should be updated with `Programming Language :: Python :: 3`\n", "before_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"\nSentry-Python - Sentry SDK for Python\n=====================================\n\n**Sentry-Python is an experimental SDK for Sentry.** Check out `GitHub\n<https://github.com/getsentry/sentry-python>`_ to find out more.\n\"\"\"\n\nfrom setuptools import setup, find_packages\n\nsetup(\n name=\"sentry-sdk\",\n version=\"0.5.1\",\n author=\"Sentry Team and Contributors\",\n author_email=\"[email protected]\",\n url=\"https://github.com/getsentry/sentry-python\",\n description=\"Python client for Sentry (https://getsentry.com)\",\n long_description=__doc__,\n packages=find_packages(exclude=(\"tests\", \"tests.*\")),\n zip_safe=False,\n license=\"BSD\",\n install_requires=[\"urllib3\", \"certifi\"],\n extras_require={\"flask\": [\"flask>=0.8\", \"blinker>=1.1\"]},\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"\nSentry-Python - Sentry SDK for Python\n=====================================\n\n**Sentry-Python is an experimental SDK for Sentry.** Check out `GitHub\n<https://github.com/getsentry/sentry-python>`_ to find out more.\n\"\"\"\n\nfrom setuptools import setup, find_packages\n\nsetup(\n name=\"sentry-sdk\",\n version=\"0.5.1\",\n author=\"Sentry Team and Contributors\",\n author_email=\"[email protected]\",\n url=\"https://github.com/getsentry/sentry-python\",\n description=\"Python client for Sentry (https://getsentry.com)\",\n long_description=__doc__,\n packages=find_packages(exclude=(\"tests\", \"tests.*\")),\n zip_safe=False,\n license=\"BSD\",\n install_requires=[\"urllib3\", \"certifi\"],\n extras_require={\"flask\": [\"flask>=0.8\", \"blinker>=1.1\"]},\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Software Development :: Libraries :: Python 
Modules',\n ],\n)\n", "path": "setup.py"}]}
| 648 | 240 |
gh_patches_debug_15869 | rasdani/github-patches | git_diff | Pylons__pyramid-1033 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
prequest does not support PUT
Title says it all. Give me the green light and I can whip up a patch (no pun intended).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyramid/scripts/prequest.py`
Content:
```
1 import optparse
2 import sys
3 import textwrap
4
5 from pyramid.compat import url_unquote
6 from pyramid.request import Request
7 from pyramid.paster import get_app
8 from pyramid.scripts.common import parse_vars
9
10 def main(argv=sys.argv, quiet=False):
11 command = PRequestCommand(argv, quiet)
12 return command.run()
13
14 class PRequestCommand(object):
15 description = """\
16 Run a request for the described application.
17
18 This command makes an artifical request to a web application that uses a
19 PasteDeploy (.ini) configuration file for the server and application.
20
21 Use "prequest config.ini /path" to request "/path". Use "prequest
22 --method=POST config.ini /path < data" to do a POST with the given
23 request body.
24
25 If the path is relative (doesn't begin with "/") it is interpreted as
26 relative to "/". The path passed to this script should be URL-quoted.
27 The path can be succeeded with a query string (e.g. `/path?a=1&=b2').
28
29 The variable "environ['paste.command_request']" will be set to "True" in
30 the request's WSGI environment, so your application can distinguish these
31 calls from normal requests.
32 """
33 usage = "usage: %prog config_uri path_info [args/options]"
34 parser = optparse.OptionParser(
35 usage=usage,
36 description=textwrap.dedent(description)
37 )
38 parser.add_option(
39 '-n', '--app-name',
40 dest='app_name',
41 metavar= 'NAME',
42 help="Load the named application from the config file (default 'main')",
43 type="string",
44 )
45 parser.add_option(
46 '--header',
47 dest='headers',
48 metavar='NAME:VALUE',
49 type='string',
50 action='append',
51 help="Header to add to request (you can use this option multiple times)"
52 )
53 parser.add_option(
54 '-d', '--display-headers',
55 dest='display_headers',
56 action='store_true',
57 help='Display status and headers before the response body'
58 )
59 parser.add_option(
60 '-m', '--method',
61 dest='method',
62 choices=['GET', 'HEAD', 'POST', 'DELETE'],
63 type='choice',
64 help='Request method type (GET, POST, DELETE)',
65 )
66
67 get_app = staticmethod(get_app)
68 stdin = sys.stdin
69
70 def __init__(self, argv, quiet=False):
71 self.quiet = quiet
72 self.options, self.args = self.parser.parse_args(argv[1:])
73
74 def out(self, msg): # pragma: no cover
75 if not self.quiet:
76 print(msg)
77
78 def run(self):
79 if not len(self.args) >= 2:
80 self.out('You must provide at least two arguments')
81 return 2
82 app_spec = self.args[0]
83 path = self.args[1]
84 if not path.startswith('/'):
85 path = '/' + path
86
87 try:
88 path, qs = path.split('?', 1)
89 except ValueError:
90 qs = ''
91
92 path = url_unquote(path)
93
94 headers = {}
95 if self.options.headers:
96 for item in self.options.headers:
97 if ':' not in item:
98 self.out(
99 "Bad --header=%s option, value must be in the form "
100 "'name:value'" % item)
101 return 2
102 name, value = item.split(':', 1)
103 headers[name] = value.strip()
104
105 app = self.get_app(app_spec, self.options.app_name,
106 options=parse_vars(self.args[2:]))
107
108 request_method = (self.options.method or 'GET').upper()
109
110 environ = {
111 'REQUEST_METHOD': request_method,
112 'SCRIPT_NAME': '', # may be empty if app is at the root
113 'PATH_INFO': path,
114 'SERVER_NAME': 'localhost', # always mandatory
115 'SERVER_PORT': '80', # always mandatory
116 'SERVER_PROTOCOL': 'HTTP/1.0',
117 'CONTENT_TYPE': 'text/plain',
118 'REMOTE_ADDR':'127.0.0.1',
119 'wsgi.run_once': True,
120 'wsgi.multithread': False,
121 'wsgi.multiprocess': False,
122 'wsgi.errors': sys.stderr,
123 'wsgi.url_scheme': 'http',
124 'wsgi.version': (1, 0),
125 'QUERY_STRING': qs,
126 'HTTP_ACCEPT': 'text/plain;q=1.0, */*;q=0.1',
127 'paste.command_request': True,
128 }
129
130 if request_method == 'POST':
131 environ['wsgi.input'] = self.stdin
132 environ['CONTENT_LENGTH'] = '-1'
133
134 for name, value in headers.items():
135 if name.lower() == 'content-type':
136 name = 'CONTENT_TYPE'
137 else:
138 name = 'HTTP_'+name.upper().replace('-', '_')
139 environ[name] = value
140
141 request = Request.blank(path, environ=environ)
142 response = request.get_response(app)
143 if self.options.display_headers:
144 self.out(response.status)
145 for name, value in response.headerlist:
146 self.out('%s: %s' % (name, value))
147 if response.charset:
148 self.out(response.ubody)
149 else:
150 self.out(response.body)
151 return 0
152
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pyramid/scripts/prequest.py b/pyramid/scripts/prequest.py
--- a/pyramid/scripts/prequest.py
+++ b/pyramid/scripts/prequest.py
@@ -59,9 +59,9 @@
parser.add_option(
'-m', '--method',
dest='method',
- choices=['GET', 'HEAD', 'POST', 'DELETE'],
+ choices=['GET', 'HEAD', 'POST', 'PUT', 'PATCH', 'DELETE'],
type='choice',
- help='Request method type (GET, POST, DELETE)',
+ help='Request method type',
)
get_app = staticmethod(get_app)
@@ -127,7 +127,7 @@
'paste.command_request': True,
}
- if request_method == 'POST':
+ if request_method in ('POST', 'PUT', 'PATCH'):
environ['wsgi.input'] = self.stdin
environ['CONTENT_LENGTH'] = '-1'
|
{"golden_diff": "diff --git a/pyramid/scripts/prequest.py b/pyramid/scripts/prequest.py\n--- a/pyramid/scripts/prequest.py\n+++ b/pyramid/scripts/prequest.py\n@@ -59,9 +59,9 @@\n parser.add_option(\n '-m', '--method',\n dest='method',\n- choices=['GET', 'HEAD', 'POST', 'DELETE'],\n+ choices=['GET', 'HEAD', 'POST', 'PUT', 'PATCH', 'DELETE'],\n type='choice',\n- help='Request method type (GET, POST, DELETE)',\n+ help='Request method type',\n )\n \n get_app = staticmethod(get_app)\n@@ -127,7 +127,7 @@\n 'paste.command_request': True,\n }\n \n- if request_method == 'POST':\n+ if request_method in ('POST', 'PUT', 'PATCH'):\n environ['wsgi.input'] = self.stdin\n environ['CONTENT_LENGTH'] = '-1'\n", "issue": "prequest does not support PUT\nTitle says it all. Give me the green light and I can whip up a patch (no pun intended).\n\n", "before_files": [{"content": "import optparse\nimport sys\nimport textwrap\n\nfrom pyramid.compat import url_unquote\nfrom pyramid.request import Request\nfrom pyramid.paster import get_app\nfrom pyramid.scripts.common import parse_vars\n\ndef main(argv=sys.argv, quiet=False):\n command = PRequestCommand(argv, quiet)\n return command.run()\n\nclass PRequestCommand(object):\n description = \"\"\"\\\n Run a request for the described application.\n\n This command makes an artifical request to a web application that uses a\n PasteDeploy (.ini) configuration file for the server and application.\n\n Use \"prequest config.ini /path\" to request \"/path\". Use \"prequest\n --method=POST config.ini /path < data\" to do a POST with the given\n request body.\n\n If the path is relative (doesn't begin with \"/\") it is interpreted as\n relative to \"/\". The path passed to this script should be URL-quoted.\n The path can be succeeded with a query string (e.g. 
`/path?a=1&=b2').\n\n The variable \"environ['paste.command_request']\" will be set to \"True\" in\n the request's WSGI environment, so your application can distinguish these\n calls from normal requests.\n \"\"\"\n usage = \"usage: %prog config_uri path_info [args/options]\"\n parser = optparse.OptionParser(\n usage=usage,\n description=textwrap.dedent(description)\n )\n parser.add_option(\n '-n', '--app-name',\n dest='app_name',\n metavar= 'NAME',\n help=\"Load the named application from the config file (default 'main')\",\n type=\"string\",\n )\n parser.add_option(\n '--header',\n dest='headers',\n metavar='NAME:VALUE',\n type='string',\n action='append',\n help=\"Header to add to request (you can use this option multiple times)\"\n )\n parser.add_option(\n '-d', '--display-headers',\n dest='display_headers',\n action='store_true',\n help='Display status and headers before the response body'\n )\n parser.add_option(\n '-m', '--method',\n dest='method',\n choices=['GET', 'HEAD', 'POST', 'DELETE'],\n type='choice',\n help='Request method type (GET, POST, DELETE)',\n )\n\n get_app = staticmethod(get_app)\n stdin = sys.stdin\n\n def __init__(self, argv, quiet=False):\n self.quiet = quiet\n self.options, self.args = self.parser.parse_args(argv[1:])\n\n def out(self, msg): # pragma: no cover\n if not self.quiet:\n print(msg)\n\n def run(self):\n if not len(self.args) >= 2:\n self.out('You must provide at least two arguments')\n return 2\n app_spec = self.args[0]\n path = self.args[1]\n if not path.startswith('/'):\n path = '/' + path\n\n try:\n path, qs = path.split('?', 1)\n except ValueError:\n qs = ''\n\n path = url_unquote(path)\n\n headers = {}\n if self.options.headers:\n for item in self.options.headers:\n if ':' not in item:\n self.out(\n \"Bad --header=%s option, value must be in the form \"\n \"'name:value'\" % item)\n return 2\n name, value = item.split(':', 1)\n headers[name] = value.strip()\n\n app = self.get_app(app_spec, self.options.app_name,\n options=parse_vars(self.args[2:]))\n\n request_method = (self.options.method or 'GET').upper()\n\n environ = {\n 'REQUEST_METHOD': request_method,\n 'SCRIPT_NAME': '', # may be empty if app is at the root\n 'PATH_INFO': path, \n 'SERVER_NAME': 'localhost', # always mandatory\n 'SERVER_PORT': '80', # always mandatory \n 'SERVER_PROTOCOL': 'HTTP/1.0',\n 'CONTENT_TYPE': 'text/plain',\n 'REMOTE_ADDR':'127.0.0.1',\n 'wsgi.run_once': True,\n 'wsgi.multithread': False,\n 'wsgi.multiprocess': False,\n 'wsgi.errors': sys.stderr,\n 'wsgi.url_scheme': 'http',\n 'wsgi.version': (1, 0),\n 'QUERY_STRING': qs,\n 'HTTP_ACCEPT': 'text/plain;q=1.0, */*;q=0.1',\n 'paste.command_request': True,\n }\n\n if request_method == 'POST':\n environ['wsgi.input'] = self.stdin\n environ['CONTENT_LENGTH'] = '-1'\n\n for name, value in headers.items():\n if name.lower() == 'content-type':\n name = 'CONTENT_TYPE'\n else:\n name = 'HTTP_'+name.upper().replace('-', '_')\n environ[name] = value\n\n request = Request.blank(path, environ=environ)\n response = request.get_response(app)\n if self.options.display_headers:\n self.out(response.status)\n for name, value in response.headerlist:\n self.out('%s: %s' % (name, value))\n if response.charset:\n self.out(response.ubody)\n else:\n self.out(response.body)\n return 0\n", "path": "pyramid/scripts/prequest.py"}], "after_files": [{"content": "import optparse\nimport sys\nimport textwrap\n\nfrom pyramid.compat import url_unquote\nfrom pyramid.request import Request\nfrom pyramid.paster import get_app\nfrom 
pyramid.scripts.common import parse_vars\n\ndef main(argv=sys.argv, quiet=False):\n command = PRequestCommand(argv, quiet)\n return command.run()\n\nclass PRequestCommand(object):\n description = \"\"\"\\\n Run a request for the described application.\n\n This command makes an artifical request to a web application that uses a\n PasteDeploy (.ini) configuration file for the server and application.\n\n Use \"prequest config.ini /path\" to request \"/path\". Use \"prequest\n --method=POST config.ini /path < data\" to do a POST with the given\n request body.\n\n If the path is relative (doesn't begin with \"/\") it is interpreted as\n relative to \"/\". The path passed to this script should be URL-quoted.\n The path can be succeeded with a query string (e.g. `/path?a=1&=b2').\n\n The variable \"environ['paste.command_request']\" will be set to \"True\" in\n the request's WSGI environment, so your application can distinguish these\n calls from normal requests.\n \"\"\"\n usage = \"usage: %prog config_uri path_info [args/options]\"\n parser = optparse.OptionParser(\n usage=usage,\n description=textwrap.dedent(description)\n )\n parser.add_option(\n '-n', '--app-name',\n dest='app_name',\n metavar= 'NAME',\n help=\"Load the named application from the config file (default 'main')\",\n type=\"string\",\n )\n parser.add_option(\n '--header',\n dest='headers',\n metavar='NAME:VALUE',\n type='string',\n action='append',\n help=\"Header to add to request (you can use this option multiple times)\"\n )\n parser.add_option(\n '-d', '--display-headers',\n dest='display_headers',\n action='store_true',\n help='Display status and headers before the response body'\n )\n parser.add_option(\n '-m', '--method',\n dest='method',\n choices=['GET', 'HEAD', 'POST', 'PUT', 'PATCH', 'DELETE'],\n type='choice',\n help='Request method type',\n )\n\n get_app = staticmethod(get_app)\n stdin = sys.stdin\n\n def __init__(self, argv, quiet=False):\n self.quiet = quiet\n self.options, self.args = self.parser.parse_args(argv[1:])\n\n def out(self, msg): # pragma: no cover\n if not self.quiet:\n print(msg)\n\n def run(self):\n if not len(self.args) >= 2:\n self.out('You must provide at least two arguments')\n return 2\n app_spec = self.args[0]\n path = self.args[1]\n if not path.startswith('/'):\n path = '/' + path\n\n try:\n path, qs = path.split('?', 1)\n except ValueError:\n qs = ''\n\n path = url_unquote(path)\n\n headers = {}\n if self.options.headers:\n for item in self.options.headers:\n if ':' not in item:\n self.out(\n \"Bad --header=%s option, value must be in the form \"\n \"'name:value'\" % item)\n return 2\n name, value = item.split(':', 1)\n headers[name] = value.strip()\n\n app = self.get_app(app_spec, self.options.app_name,\n options=parse_vars(self.args[2:]))\n\n request_method = (self.options.method or 'GET').upper()\n\n environ = {\n 'REQUEST_METHOD': request_method,\n 'SCRIPT_NAME': '', # may be empty if app is at the root\n 'PATH_INFO': path, \n 'SERVER_NAME': 'localhost', # always mandatory\n 'SERVER_PORT': '80', # always mandatory \n 'SERVER_PROTOCOL': 'HTTP/1.0',\n 'CONTENT_TYPE': 'text/plain',\n 'REMOTE_ADDR':'127.0.0.1',\n 'wsgi.run_once': True,\n 'wsgi.multithread': False,\n 'wsgi.multiprocess': False,\n 'wsgi.errors': sys.stderr,\n 'wsgi.url_scheme': 'http',\n 'wsgi.version': (1, 0),\n 'QUERY_STRING': qs,\n 'HTTP_ACCEPT': 'text/plain;q=1.0, */*;q=0.1',\n 'paste.command_request': True,\n }\n\n if request_method in ('POST', 'PUT', 'PATCH'):\n environ['wsgi.input'] = self.stdin\n 
environ['CONTENT_LENGTH'] = '-1'\n\n for name, value in headers.items():\n if name.lower() == 'content-type':\n name = 'CONTENT_TYPE'\n else:\n name = 'HTTP_'+name.upper().replace('-', '_')\n environ[name] = value\n\n request = Request.blank(path, environ=environ)\n response = request.get_response(app)\n if self.options.display_headers:\n self.out(response.status)\n for name, value in response.headerlist:\n self.out('%s: %s' % (name, value))\n if response.charset:\n self.out(response.ubody)\n else:\n self.out(response.body)\n return 0\n", "path": "pyramid/scripts/prequest.py"}]}
| 1,802 | 210 |
gh_patches_debug_9258 | rasdani/github-patches | git_diff | pytorch__ignite-2173 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve the example of Timer's usage
## 📚 Documentation
The example in the `Timer's documentation suggest to estimate the processing time for a batch as follows:
```python
timer.attach(
trainer,
start=Events.EPOCH_STARTED,
resume=Events.ITERATION_STARTED,
pause=Events.ITERATION_COMPLETED,
step=Events.ITERATION_COMPLETED)
```
It is to note that this timer will be reset at the start of each epoch. When we use multiple workers in the data loader, the first couple of iterations at each epoch often take longer than the later ones. The reset behavior will incur an inaccurate estimation of the remaining training time (ETA), even when the `average` flag is set to `True`. Specifically, the ETA is computed as `(engine.state.max_iters - engine.state.iteration) * time_per_iter`. So the small fluctuation in `time_per_iter` will be magnified by remaining number of iterations. To address this problem, we can let the timer only start once in the whole training process:
```python
timer.attach(
trainer,
start=Events.EPOCH_STARTED(once=1),
resume=Events.ITERATION_STARTED,
pause=Events.ITERATION_COMPLETED,
step=Events.ITERATION_COMPLETED)
```
I have empirically verified the effectiveness of this modification.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ignite/handlers/timing.py`
Content:
```
1 from time import perf_counter
2 from typing import Any, Optional
3
4 from ignite.engine import Engine, Events
5
6 __all__ = ["Timer"]
7
8
9 class Timer:
10 """ Timer object can be used to measure (average) time between events.
11
12 Args:
13 average: if True, then when ``.value()`` method is called, the returned value
14 will be equal to total time measured, divided by the value of internal counter.
15
16 Attributes:
17 total (float): total time elapsed when the Timer was running (in seconds).
18 step_count (int): internal counter, useful to measure average time, e.g. of processing a single batch.
19 Incremented with the ``.step()`` method.
20 running (bool): flag indicating if timer is measuring time.
21
22 Note:
23 When using ``Timer(average=True)`` do not forget to call ``timer.step()`` every time an event occurs. See
24 the examples below.
25
26 Examples:
27
28 Measuring total time of the epoch:
29
30 >>> from ignite.handlers import Timer
31 >>> import time
32 >>> work = lambda : time.sleep(0.1)
33 >>> idle = lambda : time.sleep(0.1)
34 >>> t = Timer(average=False)
35 >>> for _ in range(10):
36 ... work()
37 ... idle()
38 ...
39 >>> t.value()
40 2.003073937026784
41
42 Measuring average time of the epoch:
43
44 >>> t = Timer(average=True)
45 >>> for _ in range(10):
46 ... work()
47 ... idle()
48 ... t.step()
49 ...
50 >>> t.value()
51 0.2003182829997968
52
53 Measuring average time it takes to execute a single ``work()`` call:
54
55 >>> t = Timer(average=True)
56 >>> for _ in range(10):
57 ... t.resume()
58 ... work()
59 ... t.pause()
60 ... idle()
61 ... t.step()
62 ...
63 >>> t.value()
64 0.10016545779653825
65
66 Using the Timer to measure average time it takes to process a single batch of examples:
67
68 >>> from ignite.engine import Engine, Events
69 >>> from ignite.handlers import Timer
70 >>> trainer = Engine(training_update_function)
71 >>> timer = Timer(average=True)
72 >>> timer.attach(trainer,
73 ... start=Events.EPOCH_STARTED,
74 ... resume=Events.ITERATION_STARTED,
75 ... pause=Events.ITERATION_COMPLETED,
76 ... step=Events.ITERATION_COMPLETED)
77 """
78
79 def __init__(self, average: bool = False):
80 self._average = average
81
82 self.reset()
83
84 def attach(
85 self,
86 engine: Engine,
87 start: Events = Events.STARTED,
88 pause: Events = Events.COMPLETED,
89 resume: Optional[Events] = None,
90 step: Optional[Events] = None,
91 ) -> "Timer":
92 """ Register callbacks to control the timer.
93
94 Args:
95 engine: Engine that this timer will be attached to.
96 start: Event which should start (reset) the timer.
97 pause: Event which should pause the timer.
98 resume: Event which should resume the timer.
99 step: Event which should call the `step` method of the counter.
100
101 Returns:
102 this timer
103 """
104
105 engine.add_event_handler(start, self.reset)
106 engine.add_event_handler(pause, self.pause)
107
108 if resume is not None:
109 engine.add_event_handler(resume, self.resume)
110
111 if step is not None:
112 engine.add_event_handler(step, self.step)
113
114 return self
115
116 def reset(self, *args: Any) -> "Timer":
117 """Reset the timer to zero."""
118 self._t0 = perf_counter()
119 self.total = 0.0
120 self.step_count = 0.0
121 self.running = True
122
123 return self
124
125 def pause(self, *args: Any) -> None:
126 """Pause the current running timer."""
127 if self.running:
128 self.total += self._elapsed()
129 self.running = False
130
131 def resume(self, *args: Any) -> None:
132 """Resume the current running timer."""
133 if not self.running:
134 self.running = True
135 self._t0 = perf_counter()
136
137 def value(self) -> float:
138 """Return the average timer value."""
139 total = self.total
140 if self.running:
141 total += self._elapsed()
142
143 if self._average:
144 denominator = max(self.step_count, 1.0)
145 else:
146 denominator = 1.0
147
148 return total / denominator
149
150 def step(self, *args: Any) -> None:
151 """Increment the timer."""
152 self.step_count += 1.0
153
154 def _elapsed(self) -> float:
155 return perf_counter() - self._t0
156
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ignite/handlers/timing.py b/ignite/handlers/timing.py
--- a/ignite/handlers/timing.py
+++ b/ignite/handlers/timing.py
@@ -70,7 +70,7 @@
>>> trainer = Engine(training_update_function)
>>> timer = Timer(average=True)
>>> timer.attach(trainer,
- ... start=Events.EPOCH_STARTED,
+ ... start=Events.STARTED,
... resume=Events.ITERATION_STARTED,
... pause=Events.ITERATION_COMPLETED,
... step=Events.ITERATION_COMPLETED)
|
{"golden_diff": "diff --git a/ignite/handlers/timing.py b/ignite/handlers/timing.py\n--- a/ignite/handlers/timing.py\n+++ b/ignite/handlers/timing.py\n@@ -70,7 +70,7 @@\n >>> trainer = Engine(training_update_function)\n >>> timer = Timer(average=True)\n >>> timer.attach(trainer,\n- ... start=Events.EPOCH_STARTED,\n+ ... start=Events.STARTED,\n ... resume=Events.ITERATION_STARTED,\n ... pause=Events.ITERATION_COMPLETED,\n ... step=Events.ITERATION_COMPLETED)\n", "issue": "Improve the example of Timer's usage\n## \ud83d\udcda Documentation\r\n\r\nThe example in the `Timer's documentation suggest to estimate the processing time for a batch as follows:\r\n```python\r\ntimer.attach(\r\n trainer, \r\n start=Events.EPOCH_STARTED, \r\n resume=Events.ITERATION_STARTED, \r\n pause=Events.ITERATION_COMPLETED, \r\n step=Events.ITERATION_COMPLETED)\r\n```\r\nIt is to note that this timer will be reset at the start of each epoch. When we use multiple workers in the data loader, the first couple of iterations at each epoch often take longer than the later ones. The reset behavior will incur an inaccurate estimation of the remaining training time (ETA), even when the `average` flag is set to `True`. Specifically, the ETA is computed as `(engine.state.max_iters - engine.state.iteration) * time_per_iter`. So the small fluctuation in `time_per_iter` will be magnified by remaining number of iterations. To address this problem, we can let the timer only start once in the whole training process:\r\n```python\r\n timer.attach(\r\n trainer, \r\n start=Events.EPOCH_STARTED(once=1), \r\n resume=Events.ITERATION_STARTED, \r\n pause=Events.ITERATION_COMPLETED, \r\n step=Events.ITERATION_COMPLETED)\r\n```\r\nI have empirically verified the effectiveness of this modification.\n", "before_files": [{"content": "from time import perf_counter\nfrom typing import Any, Optional\n\nfrom ignite.engine import Engine, Events\n\n__all__ = [\"Timer\"]\n\n\nclass Timer:\n \"\"\" Timer object can be used to measure (average) time between events.\n\n Args:\n average: if True, then when ``.value()`` method is called, the returned value\n will be equal to total time measured, divided by the value of internal counter.\n\n Attributes:\n total (float): total time elapsed when the Timer was running (in seconds).\n step_count (int): internal counter, useful to measure average time, e.g. of processing a single batch.\n Incremented with the ``.step()`` method.\n running (bool): flag indicating if timer is measuring time.\n\n Note:\n When using ``Timer(average=True)`` do not forget to call ``timer.step()`` every time an event occurs. See\n the examples below.\n\n Examples:\n\n Measuring total time of the epoch:\n\n >>> from ignite.handlers import Timer\n >>> import time\n >>> work = lambda : time.sleep(0.1)\n >>> idle = lambda : time.sleep(0.1)\n >>> t = Timer(average=False)\n >>> for _ in range(10):\n ... work()\n ... idle()\n ...\n >>> t.value()\n 2.003073937026784\n\n Measuring average time of the epoch:\n\n >>> t = Timer(average=True)\n >>> for _ in range(10):\n ... work()\n ... idle()\n ... t.step()\n ...\n >>> t.value()\n 0.2003182829997968\n\n Measuring average time it takes to execute a single ``work()`` call:\n\n >>> t = Timer(average=True)\n >>> for _ in range(10):\n ... t.resume()\n ... work()\n ... t.pause()\n ... idle()\n ... 
t.step()\n ...\n >>> t.value()\n 0.10016545779653825\n\n Using the Timer to measure average time it takes to process a single batch of examples:\n\n >>> from ignite.engine import Engine, Events\n >>> from ignite.handlers import Timer\n >>> trainer = Engine(training_update_function)\n >>> timer = Timer(average=True)\n >>> timer.attach(trainer,\n ... start=Events.EPOCH_STARTED,\n ... resume=Events.ITERATION_STARTED,\n ... pause=Events.ITERATION_COMPLETED,\n ... step=Events.ITERATION_COMPLETED)\n \"\"\"\n\n def __init__(self, average: bool = False):\n self._average = average\n\n self.reset()\n\n def attach(\n self,\n engine: Engine,\n start: Events = Events.STARTED,\n pause: Events = Events.COMPLETED,\n resume: Optional[Events] = None,\n step: Optional[Events] = None,\n ) -> \"Timer\":\n \"\"\" Register callbacks to control the timer.\n\n Args:\n engine: Engine that this timer will be attached to.\n start: Event which should start (reset) the timer.\n pause: Event which should pause the timer.\n resume: Event which should resume the timer.\n step: Event which should call the `step` method of the counter.\n\n Returns:\n this timer\n \"\"\"\n\n engine.add_event_handler(start, self.reset)\n engine.add_event_handler(pause, self.pause)\n\n if resume is not None:\n engine.add_event_handler(resume, self.resume)\n\n if step is not None:\n engine.add_event_handler(step, self.step)\n\n return self\n\n def reset(self, *args: Any) -> \"Timer\":\n \"\"\"Reset the timer to zero.\"\"\"\n self._t0 = perf_counter()\n self.total = 0.0\n self.step_count = 0.0\n self.running = True\n\n return self\n\n def pause(self, *args: Any) -> None:\n \"\"\"Pause the current running timer.\"\"\"\n if self.running:\n self.total += self._elapsed()\n self.running = False\n\n def resume(self, *args: Any) -> None:\n \"\"\"Resume the current running timer.\"\"\"\n if not self.running:\n self.running = True\n self._t0 = perf_counter()\n\n def value(self) -> float:\n \"\"\"Return the average timer value.\"\"\"\n total = self.total\n if self.running:\n total += self._elapsed()\n\n if self._average:\n denominator = max(self.step_count, 1.0)\n else:\n denominator = 1.0\n\n return total / denominator\n\n def step(self, *args: Any) -> None:\n \"\"\"Increment the timer.\"\"\"\n self.step_count += 1.0\n\n def _elapsed(self) -> float:\n return perf_counter() - self._t0\n", "path": "ignite/handlers/timing.py"}], "after_files": [{"content": "from time import perf_counter\nfrom typing import Any, Optional\n\nfrom ignite.engine import Engine, Events\n\n__all__ = [\"Timer\"]\n\n\nclass Timer:\n \"\"\" Timer object can be used to measure (average) time between events.\n\n Args:\n average: if True, then when ``.value()`` method is called, the returned value\n will be equal to total time measured, divided by the value of internal counter.\n\n Attributes:\n total (float): total time elapsed when the Timer was running (in seconds).\n step_count (int): internal counter, useful to measure average time, e.g. of processing a single batch.\n Incremented with the ``.step()`` method.\n running (bool): flag indicating if timer is measuring time.\n\n Note:\n When using ``Timer(average=True)`` do not forget to call ``timer.step()`` every time an event occurs. See\n the examples below.\n\n Examples:\n\n Measuring total time of the epoch:\n\n >>> from ignite.handlers import Timer\n >>> import time\n >>> work = lambda : time.sleep(0.1)\n >>> idle = lambda : time.sleep(0.1)\n >>> t = Timer(average=False)\n >>> for _ in range(10):\n ... work()\n ... 
idle()\n ...\n >>> t.value()\n 2.003073937026784\n\n Measuring average time of the epoch:\n\n >>> t = Timer(average=True)\n >>> for _ in range(10):\n ... work()\n ... idle()\n ... t.step()\n ...\n >>> t.value()\n 0.2003182829997968\n\n Measuring average time it takes to execute a single ``work()`` call:\n\n >>> t = Timer(average=True)\n >>> for _ in range(10):\n ... t.resume()\n ... work()\n ... t.pause()\n ... idle()\n ... t.step()\n ...\n >>> t.value()\n 0.10016545779653825\n\n Using the Timer to measure average time it takes to process a single batch of examples:\n\n >>> from ignite.engine import Engine, Events\n >>> from ignite.handlers import Timer\n >>> trainer = Engine(training_update_function)\n >>> timer = Timer(average=True)\n >>> timer.attach(trainer,\n ... start=Events.STARTED,\n ... resume=Events.ITERATION_STARTED,\n ... pause=Events.ITERATION_COMPLETED,\n ... step=Events.ITERATION_COMPLETED)\n \"\"\"\n\n def __init__(self, average: bool = False):\n self._average = average\n\n self.reset()\n\n def attach(\n self,\n engine: Engine,\n start: Events = Events.STARTED,\n pause: Events = Events.COMPLETED,\n resume: Optional[Events] = None,\n step: Optional[Events] = None,\n ) -> \"Timer\":\n \"\"\" Register callbacks to control the timer.\n\n Args:\n engine: Engine that this timer will be attached to.\n start: Event which should start (reset) the timer.\n pause: Event which should pause the timer.\n resume: Event which should resume the timer.\n step: Event which should call the `step` method of the counter.\n\n Returns:\n this timer\n \"\"\"\n\n engine.add_event_handler(start, self.reset)\n engine.add_event_handler(pause, self.pause)\n\n if resume is not None:\n engine.add_event_handler(resume, self.resume)\n\n if step is not None:\n engine.add_event_handler(step, self.step)\n\n return self\n\n def reset(self, *args: Any) -> \"Timer\":\n \"\"\"Reset the timer to zero.\"\"\"\n self._t0 = perf_counter()\n self.total = 0.0\n self.step_count = 0.0\n self.running = True\n\n return self\n\n def pause(self, *args: Any) -> None:\n \"\"\"Pause the current running timer.\"\"\"\n if self.running:\n self.total += self._elapsed()\n self.running = False\n\n def resume(self, *args: Any) -> None:\n \"\"\"Resume the current running timer.\"\"\"\n if not self.running:\n self.running = True\n self._t0 = perf_counter()\n\n def value(self) -> float:\n \"\"\"Return the average timer value.\"\"\"\n total = self.total\n if self.running:\n total += self._elapsed()\n\n if self._average:\n denominator = max(self.step_count, 1.0)\n else:\n denominator = 1.0\n\n return total / denominator\n\n def step(self, *args: Any) -> None:\n \"\"\"Increment the timer.\"\"\"\n self.step_count += 1.0\n\n def _elapsed(self) -> float:\n return perf_counter() - self._t0\n", "path": "ignite/handlers/timing.py"}]}
| 1,994 | 135 |
gh_patches_debug_33887 | rasdani/github-patches | git_diff | zulip__zulip-1065 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Make `scripts/setup/generate-secrets -d` use existing setting values
Currently, `generate-secrets -d` regenerates all the secret keys each time it is run, modifying zproject/settings.py, which means that if you use the same Zulip checkout for both a development VM on the host and a Vagrant guest, every time you provision on the Vagrant guest, you need to rebuild the database (to match the new keys). I think this would also affect someone using both Docker and Vagrant with the same checkout (as I imagine happens with some frequency when testing).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/setup/generate_secrets.py`
Content:
```
1 #!/usr/bin/env python
2 # This tools generates local_settings_generated.py using the template
3
4 from __future__ import print_function
5 import sys, os, os.path
6
7 sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..'))
8 os.environ['DJANGO_SETTINGS_MODULE'] = 'zproject.settings'
9
10 from django.utils.crypto import get_random_string
11 from zerver.lib.utils import generate_random_token
12
13 os.chdir(os.path.join(os.path.dirname(__file__), '..', '..'))
14
15 CAMO_CONFIG_FILENAME = '/etc/default/camo'
16
17 AUTOGENERATED_SETTINGS = ['shared_secret', 'avatar_salt', 'rabbitmq_password', 'local_database_password',
18 'initial_password_salt']
19
20 def generate_camo_config_file(camo_key):
21 camo_config = """ENABLED=yes
22 PORT=9292
23 CAMO_KEY=%s
24 """ % (camo_key,)
25 with open(CAMO_CONFIG_FILENAME, 'w') as camo_file:
26 camo_file.write(camo_config)
27 print("Generated Camo config file %s" % (CAMO_CONFIG_FILENAME,))
28
29 def generate_django_secretkey():
30 # Secret key generation taken from Django's startproject.py
31 chars = 'abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)'
32 return get_random_string(50, chars)
33
34 def generate_secrets(development=False):
35 if development:
36 OUTPUT_SETTINGS_FILENAME = "zproject/dev-secrets.conf"
37 else:
38 OUTPUT_SETTINGS_FILENAME = "/etc/zulip/zulip-secrets.conf"
39
40 lines = ['[secrets]\n']
41
42 def config_line(var, value):
43 return "%s = %s\n" % (var, value)
44
45 for name in AUTOGENERATED_SETTINGS:
46 lines.append(config_line(name, generate_random_token(64)))
47
48 lines.append(config_line('secret_key', generate_django_secretkey()))
49 camo_key = get_random_string(64)
50 lines.append(config_line('camo_key', camo_key))
51 if not development:
52 # Write the Camo config file directly
53 generate_camo_config_file(camo_key)
54
55 out = open(OUTPUT_SETTINGS_FILENAME, 'w')
56 out.write("".join(lines))
57 out.close()
58
59 print("Generated %s with auto-generated secrets!" % (OUTPUT_SETTINGS_FILENAME,))
60
61 if __name__ == '__main__':
62
63 development = False
64 extra_args = sys.argv[1:]
65
66 if len(extra_args) and extra_args[0] in ('-d', '--development'):
67 development = True
68
69 generate_secrets(development)
70
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scripts/setup/generate_secrets.py b/scripts/setup/generate_secrets.py
--- a/scripts/setup/generate_secrets.py
+++ b/scripts/setup/generate_secrets.py
@@ -8,6 +8,8 @@
os.environ['DJANGO_SETTINGS_MODULE'] = 'zproject.settings'
from django.utils.crypto import get_random_string
+import six
+
from zerver.lib.utils import generate_random_token
os.chdir(os.path.join(os.path.dirname(__file__), '..', '..'))
@@ -31,6 +33,21 @@
chars = 'abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)'
return get_random_string(50, chars)
+def get_old_conf(output_filename):
+ if not os.path.exists(output_filename):
+ return {}
+
+ secrets_file = six.moves.configparser.RawConfigParser()
+ secrets_file.read(output_filename)
+
+ def get_secret(key):
+ if secrets_file.has_option('secrets', key):
+ return secrets_file.get('secrets', key)
+ return None
+
+ fields = AUTOGENERATED_SETTINGS + ['secret_key', 'camo_key']
+ return {name: get_secret(name) for name in fields}
+
def generate_secrets(development=False):
if development:
OUTPUT_SETTINGS_FILENAME = "zproject/dev-secrets.conf"
@@ -42,12 +59,16 @@
def config_line(var, value):
return "%s = %s\n" % (var, value)
+ old_conf = get_old_conf(OUTPUT_SETTINGS_FILENAME)
for name in AUTOGENERATED_SETTINGS:
- lines.append(config_line(name, generate_random_token(64)))
+ lines.append(config_line(name, old_conf.get(name, generate_random_token(64))))
+
+ secret_key = old_conf.get('secret_key', generate_django_secretkey())
+ lines.append(config_line('secret_key', secret_key))
- lines.append(config_line('secret_key', generate_django_secretkey()))
- camo_key = get_random_string(64)
+ camo_key = old_conf.get('camo_key', get_random_string(64))
lines.append(config_line('camo_key', camo_key))
+
if not development:
# Write the Camo config file directly
generate_camo_config_file(camo_key)
|
{"golden_diff": "diff --git a/scripts/setup/generate_secrets.py b/scripts/setup/generate_secrets.py\n--- a/scripts/setup/generate_secrets.py\n+++ b/scripts/setup/generate_secrets.py\n@@ -8,6 +8,8 @@\n os.environ['DJANGO_SETTINGS_MODULE'] = 'zproject.settings'\n \n from django.utils.crypto import get_random_string\n+import six\n+\n from zerver.lib.utils import generate_random_token\n \n os.chdir(os.path.join(os.path.dirname(__file__), '..', '..'))\n@@ -31,6 +33,21 @@\n chars = 'abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)'\n return get_random_string(50, chars)\n \n+def get_old_conf(output_filename):\n+ if not os.path.exists(output_filename):\n+ return {}\n+\n+ secrets_file = six.moves.configparser.RawConfigParser()\n+ secrets_file.read(output_filename)\n+\n+ def get_secret(key):\n+ if secrets_file.has_option('secrets', key):\n+ return secrets_file.get('secrets', key)\n+ return None\n+\n+ fields = AUTOGENERATED_SETTINGS + ['secret_key', 'camo_key']\n+ return {name: get_secret(name) for name in fields}\n+\n def generate_secrets(development=False):\n if development:\n OUTPUT_SETTINGS_FILENAME = \"zproject/dev-secrets.conf\"\n@@ -42,12 +59,16 @@\n def config_line(var, value):\n return \"%s = %s\\n\" % (var, value)\n \n+ old_conf = get_old_conf(OUTPUT_SETTINGS_FILENAME)\n for name in AUTOGENERATED_SETTINGS:\n- lines.append(config_line(name, generate_random_token(64)))\n+ lines.append(config_line(name, old_conf.get(name, generate_random_token(64))))\n+\n+ secret_key = old_conf.get('secret_key', generate_django_secretkey())\n+ lines.append(config_line('secret_key', secret_key))\n \n- lines.append(config_line('secret_key', generate_django_secretkey()))\n- camo_key = get_random_string(64)\n+ camo_key = old_conf.get('camo_key', get_random_string(64))\n lines.append(config_line('camo_key', camo_key))\n+\n if not development:\n # Write the Camo config file directly\n generate_camo_config_file(camo_key)\n", "issue": "Make `scripts/setup/generate-secrets -d` use existing setting values\nCurrently, `generate-secrets -d` regenerates all the secret keys each time it is run, modifying zproject/settings.py, which means that if you use the same Zulip checkout for both a development VM on the host and a Vagrant guest, every time you provision on the Vagrant guest, you need to rebuild the database (to match the new keys). 
I think this would also affect someone using both Docker and Vagrant with the same checkout (as I imagine happens with some frequency when testing).\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# This tools generates local_settings_generated.py using the template\n\nfrom __future__ import print_function\nimport sys, os, os.path\n\nsys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..'))\nos.environ['DJANGO_SETTINGS_MODULE'] = 'zproject.settings'\n\nfrom django.utils.crypto import get_random_string\nfrom zerver.lib.utils import generate_random_token\n\nos.chdir(os.path.join(os.path.dirname(__file__), '..', '..'))\n\nCAMO_CONFIG_FILENAME = '/etc/default/camo'\n\nAUTOGENERATED_SETTINGS = ['shared_secret', 'avatar_salt', 'rabbitmq_password', 'local_database_password',\n 'initial_password_salt']\n\ndef generate_camo_config_file(camo_key):\n camo_config = \"\"\"ENABLED=yes\nPORT=9292\nCAMO_KEY=%s\n\"\"\" % (camo_key,)\n with open(CAMO_CONFIG_FILENAME, 'w') as camo_file:\n camo_file.write(camo_config)\n print(\"Generated Camo config file %s\" % (CAMO_CONFIG_FILENAME,))\n\ndef generate_django_secretkey():\n # Secret key generation taken from Django's startproject.py\n chars = 'abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)'\n return get_random_string(50, chars)\n\ndef generate_secrets(development=False):\n if development:\n OUTPUT_SETTINGS_FILENAME = \"zproject/dev-secrets.conf\"\n else:\n OUTPUT_SETTINGS_FILENAME = \"/etc/zulip/zulip-secrets.conf\"\n\n lines = ['[secrets]\\n']\n\n def config_line(var, value):\n return \"%s = %s\\n\" % (var, value)\n\n for name in AUTOGENERATED_SETTINGS:\n lines.append(config_line(name, generate_random_token(64)))\n\n lines.append(config_line('secret_key', generate_django_secretkey()))\n camo_key = get_random_string(64)\n lines.append(config_line('camo_key', camo_key))\n if not development:\n # Write the Camo config file directly\n generate_camo_config_file(camo_key)\n\n out = open(OUTPUT_SETTINGS_FILENAME, 'w')\n out.write(\"\".join(lines))\n out.close()\n\n print(\"Generated %s with auto-generated secrets!\" % (OUTPUT_SETTINGS_FILENAME,))\n\nif __name__ == '__main__':\n\n development = False\n extra_args = sys.argv[1:]\n\n if len(extra_args) and extra_args[0] in ('-d', '--development'):\n development = True\n\n generate_secrets(development)\n", "path": "scripts/setup/generate_secrets.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# This tools generates local_settings_generated.py using the template\n\nfrom __future__ import print_function\nimport sys, os, os.path\n\nsys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..'))\nos.environ['DJANGO_SETTINGS_MODULE'] = 'zproject.settings'\n\nfrom django.utils.crypto import get_random_string\nimport six\n\nfrom zerver.lib.utils import generate_random_token\n\nos.chdir(os.path.join(os.path.dirname(__file__), '..', '..'))\n\nCAMO_CONFIG_FILENAME = '/etc/default/camo'\n\nAUTOGENERATED_SETTINGS = ['shared_secret', 'avatar_salt', 'rabbitmq_password', 'local_database_password',\n 'initial_password_salt']\n\ndef generate_camo_config_file(camo_key):\n camo_config = \"\"\"ENABLED=yes\nPORT=9292\nCAMO_KEY=%s\n\"\"\" % (camo_key,)\n with open(CAMO_CONFIG_FILENAME, 'w') as camo_file:\n camo_file.write(camo_config)\n print(\"Generated Camo config file %s\" % (CAMO_CONFIG_FILENAME,))\n\ndef generate_django_secretkey():\n # Secret key generation taken from Django's startproject.py\n chars = 'abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)'\n return get_random_string(50, 
chars)\n\ndef get_old_conf(output_filename):\n if not os.path.exists(output_filename):\n return {}\n\n secrets_file = six.moves.configparser.RawConfigParser()\n secrets_file.read(output_filename)\n\n def get_secret(key):\n if secrets_file.has_option('secrets', key):\n return secrets_file.get('secrets', key)\n return None\n\n fields = AUTOGENERATED_SETTINGS + ['secret_key', 'camo_key']\n return {name: get_secret(name) for name in fields}\n\ndef generate_secrets(development=False):\n if development:\n OUTPUT_SETTINGS_FILENAME = \"zproject/dev-secrets.conf\"\n else:\n OUTPUT_SETTINGS_FILENAME = \"/etc/zulip/zulip-secrets.conf\"\n\n lines = ['[secrets]\\n']\n\n def config_line(var, value):\n return \"%s = %s\\n\" % (var, value)\n\n old_conf = get_old_conf(OUTPUT_SETTINGS_FILENAME)\n for name in AUTOGENERATED_SETTINGS:\n lines.append(config_line(name, old_conf.get(name, generate_random_token(64))))\n\n secret_key = old_conf.get('secret_key', generate_django_secretkey())\n lines.append(config_line('secret_key', secret_key))\n\n camo_key = old_conf.get('camo_key', get_random_string(64))\n lines.append(config_line('camo_key', camo_key))\n\n if not development:\n # Write the Camo config file directly\n generate_camo_config_file(camo_key)\n\n out = open(OUTPUT_SETTINGS_FILENAME, 'w')\n out.write(\"\".join(lines))\n out.close()\n\n print(\"Generated %s with auto-generated secrets!\" % (OUTPUT_SETTINGS_FILENAME,))\n\nif __name__ == '__main__':\n\n development = False\n extra_args = sys.argv[1:]\n\n if len(extra_args) and extra_args[0] in ('-d', '--development'):\n development = True\n\n generate_secrets(development)\n", "path": "scripts/setup/generate_secrets.py"}]}
| 1,074 | 512 |
gh_patches_debug_606 | rasdani/github-patches | git_diff | pex-tool__pex-1559 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.61
On the docket:
+ [x] Merge packages for --venv-site-packages-copies. #1557
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.60"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.60"
+__version__ = "2.1.61"
|
{"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.60\"\n+__version__ = \"2.1.61\"\n", "issue": "Release 2.1.61\nOn the docket:\r\n+ [x] Merge packages for --venv-site-packages-copies. #1557 \n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.60\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.61\"\n", "path": "pex/version.py"}]}
| 343 | 96 |
gh_patches_debug_29836 | rasdani/github-patches | git_diff | archlinux__archinstall-482 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add ability to load a config file from a URL and use that for installation
This would compliment the feature to use a configuration file
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `archinstall/__init__.py`
Content:
```
1 """Arch Linux installer - guided, templates etc."""
2 from argparse import ArgumentParser, FileType
3
4 from .lib.disk import *
5 from .lib.exceptions import *
6 from .lib.general import *
7 from .lib.hardware import *
8 from .lib.installer import __packages__, Installer
9 from .lib.locale_helpers import *
10 from .lib.luks import *
11 from .lib.mirrors import *
12 from .lib.networking import *
13 from .lib.output import *
14 from .lib.packages import *
15 from .lib.profiles import *
16 from .lib.services import *
17 from .lib.storage import *
18 from .lib.systemd import *
19 from .lib.user_interaction import *
20
21 parser = ArgumentParser()
22
23 __version__ = "2.2.0.dev1"
24
25
26 def initialize_arguments():
27 config = {}
28 parser.add_argument("--config", nargs="?", help="json config file", type=FileType("r", encoding="UTF-8"))
29 parser.add_argument("--silent", action="store_true",
30 help="Warning!!! No prompts, ignored if config is not passed")
31 parser.add_argument("--script", default="guided", nargs="?", help="Script to run for installation", type=str)
32 parser.add_argument("--vars",
33 metavar="KEY=VALUE",
34 nargs='?',
35 help="Set a number of key-value pairs "
36 "(do not put spaces before or after the = sign). "
37 "If a value contains spaces, you should define "
38 "it with double quotes: "
39 'foo="this is a sentence". Note that '
40 "values are always treated as strings.")
41 args = parser.parse_args()
42 if args.config is not None:
43 try:
44 config = json.load(args.config)
45 except Exception as e:
46 print(e)
47 # Installation can't be silent if config is not passed
48 config["silent"] = args.silent
49 if args.vars is not None:
50 try:
51 for var in args.vars.split(' '):
52 key, val = var.split("=")
53 config[key] = val
54 except Exception as e:
55 print(e)
56 config["script"] = args.script
57 return config
58
59
60 arguments = initialize_arguments()
61
62
63 # TODO: Learn the dark arts of argparse... (I summon thee dark spawn of cPython)
64
65
66 def run_as_a_module():
67 """
68 Since we're running this as a 'python -m archinstall' module OR
69 a nuitka3 compiled version of the project.
70 This function and the file __main__ acts as a entry point.
71 """
72
73 # Add another path for finding profiles, so that list_profiles() in Script() can find guided.py, unattended.py etc.
74 storage['PROFILE_PATH'].append(os.path.abspath(f'{os.path.dirname(__file__)}/examples'))
75 try:
76 script = Script(arguments.get('script', None))
77 except ProfileNotFound as err:
78 print(f"Couldn't find file: {err}")
79 sys.exit(1)
80
81 os.chdir(os.path.abspath(os.path.dirname(__file__)))
82
83 # Remove the example directory from the PROFILE_PATH, to avoid guided.py etc shows up in user input questions.
84 storage['PROFILE_PATH'].pop()
85 script.execute()
86
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/archinstall/__init__.py b/archinstall/__init__.py
--- a/archinstall/__init__.py
+++ b/archinstall/__init__.py
@@ -1,5 +1,8 @@
"""Arch Linux installer - guided, templates etc."""
-from argparse import ArgumentParser, FileType
+import urllib.error
+import urllib.parse
+import urllib.request
+from argparse import ArgumentParser
from .lib.disk import *
from .lib.exceptions import *
@@ -25,7 +28,7 @@
def initialize_arguments():
config = {}
- parser.add_argument("--config", nargs="?", help="json config file", type=FileType("r", encoding="UTF-8"))
+ parser.add_argument("--config", nargs="?", help="JSON configuration file or URL")
parser.add_argument("--silent", action="store_true",
help="Warning!!! No prompts, ignored if config is not passed")
parser.add_argument("--script", default="guided", nargs="?", help="Script to run for installation", type=str)
@@ -41,7 +44,15 @@
args = parser.parse_args()
if args.config is not None:
try:
- config = json.load(args.config)
+ # First, let's check if this is a URL scheme instead of a filename
+ parsed_url = urllib.parse.urlparse(args.config)
+
+ if not parsed_url.scheme: # The Profile was not a direct match on a remote URL, it must be a local file.
+ with open(args.config) as file:
+ config = json.load(file)
+ else: # Attempt to load the configuration from the URL.
+ with urllib.request.urlopen(args.config) as response:
+ config = json.loads(response.read())
except Exception as e:
print(e)
# Installation can't be silent if config is not passed
|
{"golden_diff": "diff --git a/archinstall/__init__.py b/archinstall/__init__.py\n--- a/archinstall/__init__.py\n+++ b/archinstall/__init__.py\n@@ -1,5 +1,8 @@\n \"\"\"Arch Linux installer - guided, templates etc.\"\"\"\n-from argparse import ArgumentParser, FileType\n+import urllib.error\n+import urllib.parse\n+import urllib.request\n+from argparse import ArgumentParser\n \n from .lib.disk import *\n from .lib.exceptions import *\n@@ -25,7 +28,7 @@\n \n def initialize_arguments():\n \tconfig = {}\n-\tparser.add_argument(\"--config\", nargs=\"?\", help=\"json config file\", type=FileType(\"r\", encoding=\"UTF-8\"))\n+\tparser.add_argument(\"--config\", nargs=\"?\", help=\"JSON configuration file or URL\")\n \tparser.add_argument(\"--silent\", action=\"store_true\",\n \t\t\t\t\t\thelp=\"Warning!!! No prompts, ignored if config is not passed\")\n \tparser.add_argument(\"--script\", default=\"guided\", nargs=\"?\", help=\"Script to run for installation\", type=str)\n@@ -41,7 +44,15 @@\n \targs = parser.parse_args()\n \tif args.config is not None:\n \t\ttry:\n-\t\t\tconfig = json.load(args.config)\n+\t\t\t# First, let's check if this is a URL scheme instead of a filename\n+\t\t\tparsed_url = urllib.parse.urlparse(args.config)\n+\n+\t\t\tif not parsed_url.scheme: # The Profile was not a direct match on a remote URL, it must be a local file.\n+\t\t\t\twith open(args.config) as file:\n+\t\t\t\t\tconfig = json.load(file)\n+\t\t\telse: # Attempt to load the configuration from the URL.\n+\t\t\t\twith urllib.request.urlopen(args.config) as response:\n+\t\t\t\t\tconfig = json.loads(response.read())\n \t\texcept Exception as e:\n \t\t\tprint(e)\n \t\t# Installation can't be silent if config is not passed\n", "issue": "Add ability to load a config file from a URL and use that for installation\nThis would compliment the feature to use a configuration file\n", "before_files": [{"content": "\"\"\"Arch Linux installer - guided, templates etc.\"\"\"\nfrom argparse import ArgumentParser, FileType\n\nfrom .lib.disk import *\nfrom .lib.exceptions import *\nfrom .lib.general import *\nfrom .lib.hardware import *\nfrom .lib.installer import __packages__, Installer\nfrom .lib.locale_helpers import *\nfrom .lib.luks import *\nfrom .lib.mirrors import *\nfrom .lib.networking import *\nfrom .lib.output import *\nfrom .lib.packages import *\nfrom .lib.profiles import *\nfrom .lib.services import *\nfrom .lib.storage import *\nfrom .lib.systemd import *\nfrom .lib.user_interaction import *\n\nparser = ArgumentParser()\n\n__version__ = \"2.2.0.dev1\"\n\n\ndef initialize_arguments():\n\tconfig = {}\n\tparser.add_argument(\"--config\", nargs=\"?\", help=\"json config file\", type=FileType(\"r\", encoding=\"UTF-8\"))\n\tparser.add_argument(\"--silent\", action=\"store_true\",\n\t\t\t\t\t\thelp=\"Warning!!! No prompts, ignored if config is not passed\")\n\tparser.add_argument(\"--script\", default=\"guided\", nargs=\"?\", help=\"Script to run for installation\", type=str)\n\tparser.add_argument(\"--vars\",\n\t\t\t\t\t\tmetavar=\"KEY=VALUE\",\n\t\t\t\t\t\tnargs='?',\n\t\t\t\t\t\thelp=\"Set a number of key-value pairs \"\n\t\t\t\t\t\t\t \"(do not put spaces before or after the = sign). \"\n\t\t\t\t\t\t\t \"If a value contains spaces, you should define \"\n\t\t\t\t\t\t\t \"it with double quotes: \"\n\t\t\t\t\t\t\t 'foo=\"this is a sentence\". 
Note that '\n\t\t\t\t\t\t\t \"values are always treated as strings.\")\n\targs = parser.parse_args()\n\tif args.config is not None:\n\t\ttry:\n\t\t\tconfig = json.load(args.config)\n\t\texcept Exception as e:\n\t\t\tprint(e)\n\t\t# Installation can't be silent if config is not passed\n\t\tconfig[\"silent\"] = args.silent\n\tif args.vars is not None:\n\t\ttry:\n\t\t\tfor var in args.vars.split(' '):\n\t\t\t\tkey, val = var.split(\"=\")\n\t\t\t\tconfig[key] = val\n\t\texcept Exception as e:\n\t\t\tprint(e)\n\tconfig[\"script\"] = args.script\n\treturn config\n\n\narguments = initialize_arguments()\n\n\n# TODO: Learn the dark arts of argparse... (I summon thee dark spawn of cPython)\n\n\ndef run_as_a_module():\n\t\"\"\"\n\tSince we're running this as a 'python -m archinstall' module OR\n\ta nuitka3 compiled version of the project.\n\tThis function and the file __main__ acts as a entry point.\n\t\"\"\"\n\n\t# Add another path for finding profiles, so that list_profiles() in Script() can find guided.py, unattended.py etc.\n\tstorage['PROFILE_PATH'].append(os.path.abspath(f'{os.path.dirname(__file__)}/examples'))\n\ttry:\n\t\tscript = Script(arguments.get('script', None))\n\texcept ProfileNotFound as err:\n\t\tprint(f\"Couldn't find file: {err}\")\n\t\tsys.exit(1)\n\n\tos.chdir(os.path.abspath(os.path.dirname(__file__)))\n\n\t# Remove the example directory from the PROFILE_PATH, to avoid guided.py etc shows up in user input questions.\n\tstorage['PROFILE_PATH'].pop()\n\tscript.execute()\n", "path": "archinstall/__init__.py"}], "after_files": [{"content": "\"\"\"Arch Linux installer - guided, templates etc.\"\"\"\nimport urllib.error\nimport urllib.parse\nimport urllib.request\nfrom argparse import ArgumentParser\n\nfrom .lib.disk import *\nfrom .lib.exceptions import *\nfrom .lib.general import *\nfrom .lib.hardware import *\nfrom .lib.installer import __packages__, Installer\nfrom .lib.locale_helpers import *\nfrom .lib.luks import *\nfrom .lib.mirrors import *\nfrom .lib.networking import *\nfrom .lib.output import *\nfrom .lib.packages import *\nfrom .lib.profiles import *\nfrom .lib.services import *\nfrom .lib.storage import *\nfrom .lib.systemd import *\nfrom .lib.user_interaction import *\n\nparser = ArgumentParser()\n\n__version__ = \"2.2.0.dev1\"\n\n\ndef initialize_arguments():\n\tconfig = {}\n\tparser.add_argument(\"--config\", nargs=\"?\", help=\"JSON configuration file or URL\")\n\tparser.add_argument(\"--silent\", action=\"store_true\",\n\t\t\t\t\t\thelp=\"Warning!!! No prompts, ignored if config is not passed\")\n\tparser.add_argument(\"--script\", default=\"guided\", nargs=\"?\", help=\"Script to run for installation\", type=str)\n\tparser.add_argument(\"--vars\",\n\t\t\t\t\t\tmetavar=\"KEY=VALUE\",\n\t\t\t\t\t\tnargs='?',\n\t\t\t\t\t\thelp=\"Set a number of key-value pairs \"\n\t\t\t\t\t\t\t \"(do not put spaces before or after the = sign). \"\n\t\t\t\t\t\t\t \"If a value contains spaces, you should define \"\n\t\t\t\t\t\t\t \"it with double quotes: \"\n\t\t\t\t\t\t\t 'foo=\"this is a sentence\". 
Note that '\n\t\t\t\t\t\t\t \"values are always treated as strings.\")\n\targs = parser.parse_args()\n\tif args.config is not None:\n\t\ttry:\n\t\t\t# First, let's check if this is a URL scheme instead of a filename\n\t\t\tparsed_url = urllib.parse.urlparse(args.config)\n\n\t\t\tif not parsed_url.scheme: # The Profile was not a direct match on a remote URL, it must be a local file.\n\t\t\t\twith open(args.config) as file:\n\t\t\t\t\tconfig = json.load(file)\n\t\t\telse: # Attempt to load the configuration from the URL.\n\t\t\t\twith urllib.request.urlopen(args.config) as response:\n\t\t\t\t\tconfig = json.loads(response.read())\n\t\texcept Exception as e:\n\t\t\tprint(e)\n\t\t# Installation can't be silent if config is not passed\n\t\tconfig[\"silent\"] = args.silent\n\tif args.vars is not None:\n\t\ttry:\n\t\t\tfor var in args.vars.split(' '):\n\t\t\t\tkey, val = var.split(\"=\")\n\t\t\t\tconfig[key] = val\n\t\texcept Exception as e:\n\t\t\tprint(e)\n\tconfig[\"script\"] = args.script\n\treturn config\n\n\narguments = initialize_arguments()\n\n\n# TODO: Learn the dark arts of argparse... (I summon thee dark spawn of cPython)\n\n\ndef run_as_a_module():\n\t\"\"\"\n\tSince we're running this as a 'python -m archinstall' module OR\n\ta nuitka3 compiled version of the project.\n\tThis function and the file __main__ acts as a entry point.\n\t\"\"\"\n\n\t# Add another path for finding profiles, so that list_profiles() in Script() can find guided.py, unattended.py etc.\n\tstorage['PROFILE_PATH'].append(os.path.abspath(f'{os.path.dirname(__file__)}/examples'))\n\ttry:\n\t\tscript = Script(arguments.get('script', None))\n\texcept ProfileNotFound as err:\n\t\tprint(f\"Couldn't find file: {err}\")\n\t\tsys.exit(1)\n\n\tos.chdir(os.path.abspath(os.path.dirname(__file__)))\n\n\t# Remove the example directory from the PROFILE_PATH, to avoid guided.py etc shows up in user input questions.\n\tstorage['PROFILE_PATH'].pop()\n\tscript.execute()\n", "path": "archinstall/__init__.py"}]}
| 1,131 | 398 |
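A minimal, self-contained sketch of the path-or-URL loading pattern used in the patch above. The function name `load_config` and the example paths/URLs are illustrative assumptions, not part of the archinstall codebase:

```python
import json
import urllib.parse
import urllib.request


def load_config(source: str) -> dict:
    """Load a JSON configuration from a local file path or an http(s) URL."""
    parsed = urllib.parse.urlparse(source)
    if not parsed.scheme:
        # No URL scheme, so treat the argument as a local file path.
        with open(source) as handle:
            return json.load(handle)
    # Otherwise fetch the configuration from the remote URL.
    with urllib.request.urlopen(source) as response:
        return json.loads(response.read())


# Hypothetical usage:
# load_config("/tmp/archinstall-config.json")
# load_config("https://example.com/archinstall-config.json")
```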
gh_patches_debug_30071
|
rasdani/github-patches
|
git_diff
|
coala__coala-bears-917
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CheckstyleBear should error when use_spaces is False or indent_size is not 2
If `checkstyle_configs = Google`, indentation is checked using 2 spaces, according to https://google.github.io/styleguide/javaguide.html#s4.2-block-indentation
`use_spaces=False` must emit an error
`indent_size` must be set to 2, otherwise emit an error.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bears/java/CheckstyleBear.py`
Content:
```
1 from coalib.bearlib.abstractions.Linter import linter
2 from coalib.settings.Setting import path
3
4
5 known_checkstyles = {
6 "google": "https://raw.githubusercontent.com/checkstyle/checkstyle/master/src/main/resources/google_checks.xml",
7 "sun": 'https://raw.githubusercontent.com/checkstyle/checkstyle/master/src/main/resources/sun_checks.xml',
8 "android-check-easy": "https://raw.githubusercontent.com/noveogroup/android-check/master/android-check-plugin/src/main/resources/checkstyle/checkstyle-easy.xml",
9 "android-check-hard": "https://raw.githubusercontent.com/noveogroup/android-check/master/android-check-plugin/src/main/resources/checkstyle/checkstyle-hard.xml",
10 "geosoft": "http://geosoft.no/development/geosoft_checks.xml"}
11
12
13 def known_checkstyle_or_path(setting):
14 if str(setting) in known_checkstyles.keys():
15 return str(setting)
16 else:
17 return path(setting)
18
19
20 @linter(executable='java',
21 output_format='regex',
22 output_regex=r'\[(?P<severity>WARN|INFO)\].*?'
23 r'(?P<line>\d+):?(?P<column>\d+)?. '
24 r'(?P<message>.*?) *\[(?P<origin>[a-zA-Z]+?)\]')
25 class CheckstyleBear:
26 """
27 Check Java code for possible style, semantic and design issues.
28
29 For more information, consult
30 <http://checkstyle.sourceforge.net/checks.html>.
31 """
32
33 LANGUAGES = {"Java"}
34 AUTHORS = {'The coala developers'}
35 AUTHORS_EMAILS = {'[email protected]'}
36 LICENSE = 'AGPL-3.0'
37 CAN_DETECT = {'Formatting', 'Smell'}
38
39 def setup_dependencies(self):
40 type(self).checkstyle_jar_file = self.download_cached_file(
41 'http://sourceforge.net/projects/checkstyle/files/checkstyle/6.15'
42 '/checkstyle-6.15-all.jar',
43 "checkstyle.jar")
44
45 def create_arguments(
46 self, filename, file, config_file,
47 checkstyle_configs: known_checkstyle_or_path="google"):
48 """
49 :param checkstyle_configs:
50 A file containing configs to use in ``checkstyle``. It can also
51 have the special values:
52
53 - google - Google's Java style. More info at
54 <http://checkstyle.sourceforge.net/style_configs.html>.
55 - sun - Sun's Java style. These are the same
56 as the default Eclipse checks. More info at
57 <http://checkstyle.sourceforge.net/style_configs.html>.
58 - android-check-easy - The easy Android configs provided by the
59 android-check eclipse plugin. More info at
60 <https://github.com/noveogroup/android-check>.
61 - android-check-hard - The hard Android confis provided by the
62 android-check eclipse plugin. More info at
63 <https://github.com/noveogroup/android-check>.
64 - geosoft - The Java style followed by GeoSoft. More info at
65 <http://geosoft.no/development/javastyle.html>
66 """
67 if checkstyle_configs in known_checkstyles:
68 checkstyle_configs = self.download_cached_file(
69 known_checkstyles[checkstyle_configs],
70 checkstyle_configs + ".xml")
71
72 return ('-jar', self.checkstyle_jar_file, '-c',
73 checkstyle_configs, filename)
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bears/java/CheckstyleBear.py b/bears/java/CheckstyleBear.py
--- a/bears/java/CheckstyleBear.py
+++ b/bears/java/CheckstyleBear.py
@@ -10,6 +10,13 @@
"geosoft": "http://geosoft.no/development/geosoft_checks.xml"}
+def check_invalid_configuration(checkstyle_configs, use_spaces, indent_size):
+ if (checkstyle_configs is 'google' and
+ (not use_spaces or indent_size != 2)):
+ raise ValueError('Google checkstyle config does not support '
+ 'use_spaces=False or indent_size != 2')
+
+
def known_checkstyle_or_path(setting):
if str(setting) in known_checkstyles.keys():
return str(setting)
@@ -44,7 +51,8 @@
def create_arguments(
self, filename, file, config_file,
- checkstyle_configs: known_checkstyle_or_path="google"):
+ checkstyle_configs: known_checkstyle_or_path="google",
+ use_spaces: bool=True, indent_size: int=2):
"""
:param checkstyle_configs:
A file containing configs to use in ``checkstyle``. It can also
@@ -64,6 +72,9 @@
- geosoft - The Java style followed by GeoSoft. More info at
<http://geosoft.no/development/javastyle.html>
"""
+ check_invalid_configuration(
+ checkstyle_configs, use_spaces, indent_size)
+
if checkstyle_configs in known_checkstyles:
checkstyle_configs = self.download_cached_file(
known_checkstyles[checkstyle_configs],
|
{"golden_diff": "diff --git a/bears/java/CheckstyleBear.py b/bears/java/CheckstyleBear.py\n--- a/bears/java/CheckstyleBear.py\n+++ b/bears/java/CheckstyleBear.py\n@@ -10,6 +10,13 @@\n \"geosoft\": \"http://geosoft.no/development/geosoft_checks.xml\"}\n \n \n+def check_invalid_configuration(checkstyle_configs, use_spaces, indent_size):\n+ if (checkstyle_configs is 'google' and\n+ (not use_spaces or indent_size != 2)):\n+ raise ValueError('Google checkstyle config does not support '\n+ 'use_spaces=False or indent_size != 2')\n+\n+\n def known_checkstyle_or_path(setting):\n if str(setting) in known_checkstyles.keys():\n return str(setting)\n@@ -44,7 +51,8 @@\n \n def create_arguments(\n self, filename, file, config_file,\n- checkstyle_configs: known_checkstyle_or_path=\"google\"):\n+ checkstyle_configs: known_checkstyle_or_path=\"google\",\n+ use_spaces: bool=True, indent_size: int=2):\n \"\"\"\n :param checkstyle_configs:\n A file containing configs to use in ``checkstyle``. It can also\n@@ -64,6 +72,9 @@\n - geosoft - The Java style followed by GeoSoft. More info at\n <http://geosoft.no/development/javastyle.html>\n \"\"\"\n+ check_invalid_configuration(\n+ checkstyle_configs, use_spaces, indent_size)\n+\n if checkstyle_configs in known_checkstyles:\n checkstyle_configs = self.download_cached_file(\n known_checkstyles[checkstyle_configs],\n", "issue": "CheckstyleBear should error when use_spaces is False or indent_size is not 2\nIf `checkstyle_configs = Google`, indentation checking using 2 spaces, according to https://google.github.io/styleguide/javaguide.html#s4.2-block-indentation\n\n`use_spaces=False` must emit an error\n`indent_size` must be set to 2, otherwise emit an error.\n\n", "before_files": [{"content": "from coalib.bearlib.abstractions.Linter import linter\nfrom coalib.settings.Setting import path\n\n\nknown_checkstyles = {\n \"google\": \"https://raw.githubusercontent.com/checkstyle/checkstyle/master/src/main/resources/google_checks.xml\",\n \"sun\": 'https://raw.githubusercontent.com/checkstyle/checkstyle/master/src/main/resources/sun_checks.xml',\n \"android-check-easy\": \"https://raw.githubusercontent.com/noveogroup/android-check/master/android-check-plugin/src/main/resources/checkstyle/checkstyle-easy.xml\",\n \"android-check-hard\": \"https://raw.githubusercontent.com/noveogroup/android-check/master/android-check-plugin/src/main/resources/checkstyle/checkstyle-hard.xml\",\n \"geosoft\": \"http://geosoft.no/development/geosoft_checks.xml\"}\n\n\ndef known_checkstyle_or_path(setting):\n if str(setting) in known_checkstyles.keys():\n return str(setting)\n else:\n return path(setting)\n\n\n@linter(executable='java',\n output_format='regex',\n output_regex=r'\\[(?P<severity>WARN|INFO)\\].*?'\n r'(?P<line>\\d+):?(?P<column>\\d+)?. '\n r'(?P<message>.*?) 
*\\[(?P<origin>[a-zA-Z]+?)\\]')\nclass CheckstyleBear:\n \"\"\"\n Check Java code for possible style, semantic and design issues.\n\n For more information, consult\n <http://checkstyle.sourceforge.net/checks.html>.\n \"\"\"\n\n LANGUAGES = {\"Java\"}\n AUTHORS = {'The coala developers'}\n AUTHORS_EMAILS = {'[email protected]'}\n LICENSE = 'AGPL-3.0'\n CAN_DETECT = {'Formatting', 'Smell'}\n\n def setup_dependencies(self):\n type(self).checkstyle_jar_file = self.download_cached_file(\n 'http://sourceforge.net/projects/checkstyle/files/checkstyle/6.15'\n '/checkstyle-6.15-all.jar',\n \"checkstyle.jar\")\n\n def create_arguments(\n self, filename, file, config_file,\n checkstyle_configs: known_checkstyle_or_path=\"google\"):\n \"\"\"\n :param checkstyle_configs:\n A file containing configs to use in ``checkstyle``. It can also\n have the special values:\n\n - google - Google's Java style. More info at\n <http://checkstyle.sourceforge.net/style_configs.html>.\n - sun - Sun's Java style. These are the same\n as the default Eclipse checks. More info at\n <http://checkstyle.sourceforge.net/style_configs.html>.\n - android-check-easy - The easy Android configs provided by the\n android-check eclipse plugin. More info at\n <https://github.com/noveogroup/android-check>.\n - android-check-hard - The hard Android confis provided by the\n android-check eclipse plugin. More info at\n <https://github.com/noveogroup/android-check>.\n - geosoft - The Java style followed by GeoSoft. More info at\n <http://geosoft.no/development/javastyle.html>\n \"\"\"\n if checkstyle_configs in known_checkstyles:\n checkstyle_configs = self.download_cached_file(\n known_checkstyles[checkstyle_configs],\n checkstyle_configs + \".xml\")\n\n return ('-jar', self.checkstyle_jar_file, '-c',\n checkstyle_configs, filename)\n", "path": "bears/java/CheckstyleBear.py"}], "after_files": [{"content": "from coalib.bearlib.abstractions.Linter import linter\nfrom coalib.settings.Setting import path\n\n\nknown_checkstyles = {\n \"google\": \"https://raw.githubusercontent.com/checkstyle/checkstyle/master/src/main/resources/google_checks.xml\",\n \"sun\": 'https://raw.githubusercontent.com/checkstyle/checkstyle/master/src/main/resources/sun_checks.xml',\n \"android-check-easy\": \"https://raw.githubusercontent.com/noveogroup/android-check/master/android-check-plugin/src/main/resources/checkstyle/checkstyle-easy.xml\",\n \"android-check-hard\": \"https://raw.githubusercontent.com/noveogroup/android-check/master/android-check-plugin/src/main/resources/checkstyle/checkstyle-hard.xml\",\n \"geosoft\": \"http://geosoft.no/development/geosoft_checks.xml\"}\n\n\ndef check_invalid_configuration(checkstyle_configs, use_spaces, indent_size):\n if (checkstyle_configs is 'google' and\n (not use_spaces or indent_size != 2)):\n raise ValueError('Google checkstyle config does not support '\n 'use_spaces=False or indent_size != 2')\n\n\ndef known_checkstyle_or_path(setting):\n if str(setting) in known_checkstyles.keys():\n return str(setting)\n else:\n return path(setting)\n\n\n@linter(executable='java',\n output_format='regex',\n output_regex=r'\\[(?P<severity>WARN|INFO)\\].*?'\n r'(?P<line>\\d+):?(?P<column>\\d+)?. '\n r'(?P<message>.*?) 
*\\[(?P<origin>[a-zA-Z]+?)\\]')\nclass CheckstyleBear:\n \"\"\"\n Check Java code for possible style, semantic and design issues.\n\n For more information, consult\n <http://checkstyle.sourceforge.net/checks.html>.\n \"\"\"\n\n LANGUAGES = {\"Java\"}\n AUTHORS = {'The coala developers'}\n AUTHORS_EMAILS = {'[email protected]'}\n LICENSE = 'AGPL-3.0'\n CAN_DETECT = {'Formatting', 'Smell'}\n\n def setup_dependencies(self):\n type(self).checkstyle_jar_file = self.download_cached_file(\n 'http://sourceforge.net/projects/checkstyle/files/checkstyle/6.15'\n '/checkstyle-6.15-all.jar',\n \"checkstyle.jar\")\n\n def create_arguments(\n self, filename, file, config_file,\n checkstyle_configs: known_checkstyle_or_path=\"google\",\n use_spaces: bool=True, indent_size: int=2):\n \"\"\"\n :param checkstyle_configs:\n A file containing configs to use in ``checkstyle``. It can also\n have the special values:\n\n - google - Google's Java style. More info at\n <http://checkstyle.sourceforge.net/style_configs.html>.\n - sun - Sun's Java style. These are the same\n as the default Eclipse checks. More info at\n <http://checkstyle.sourceforge.net/style_configs.html>.\n - android-check-easy - The easy Android configs provided by the\n android-check eclipse plugin. More info at\n <https://github.com/noveogroup/android-check>.\n - android-check-hard - The hard Android confis provided by the\n android-check eclipse plugin. More info at\n <https://github.com/noveogroup/android-check>.\n - geosoft - The Java style followed by GeoSoft. More info at\n <http://geosoft.no/development/javastyle.html>\n \"\"\"\n check_invalid_configuration(\n checkstyle_configs, use_spaces, indent_size)\n\n if checkstyle_configs in known_checkstyles:\n checkstyle_configs = self.download_cached_file(\n known_checkstyles[checkstyle_configs],\n checkstyle_configs + \".xml\")\n\n return ('-jar', self.checkstyle_jar_file, '-c',\n checkstyle_configs, filename)\n", "path": "bears/java/CheckstyleBear.py"}]}
| 1,208 | 364 |
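A hedged sketch of the validation idea in the patch above: fail fast when the selected checkstyle config cannot work with the requested indentation settings. The standalone function below is an illustration rather than the bear's real API, and it compares with `==`, since string identity (`is`) is not guaranteed in Python:

```python
def check_invalid_configuration(checkstyle_configs: str,
                                use_spaces: bool,
                                indent_size: int) -> None:
    """Google's checkstyle config mandates 2-space indentation."""
    if checkstyle_configs == "google" and (not use_spaces or indent_size != 2):
        raise ValueError("Google checkstyle config does not support "
                         "use_spaces=False or indent_size != 2")


check_invalid_configuration("google", use_spaces=True, indent_size=2)    # passes
# check_invalid_configuration("google", use_spaces=False, indent_size=2) # raises ValueError
```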
gh_patches_debug_38665
|
rasdani/github-patches
|
git_diff
|
kedro-org__kedro-1105
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`APIDataSet` ease of use
## Description
Howdy team!
I was working with the `APIDataSet` recently and had two issues out of the box.
#### 1. Specifying the `auth` keyword argument in yaml
The `requests` library expects the `auth` parameter of a request to be either an `HTTPBasicAuth` or a `tuple` (lists are not allowed, see [here](https://github.com/ianwhale/kedro-kaggle-starter/blob/master/%7B%7B%20cookiecutter.repo_name%20%7D%7D/src/%7B%7B%20cookiecutter.python_package%20%7D%7D/api.py) in requests). At the moment, neither is possible to specify in my `catalog.yml`.
From what I hear, you're already working on this (#1011). So maybe this point is moot.
#### 2. The `auth` keyword argument and `credentials.yml`
I would like to specify my `(username, password)` tuple inside `credentials.yml`. However, the `APIDataSet`'s `auth` keyword wouldn't get filled in by the config loader.
To get this working, you'd have to extend `APIDataSet` to have a `credentials` keyword that is filled in for `auth` in an upcall.
It would be great to either have this by default, or even have the loader fill `auth` keywords in addition to `credentials`. Although that might have unintended consequences.
## Context
Hopefully this would unify the experience a bit. Right now, the `credentials` keyword in a dataset and `credentials.yml` are the main points of access to secrets. Which is probably good.
## Possible Implementation
I whipped up [my own `APIDataSet`](https://github.com/ianwhale/kedro-kaggle-starter/blob/master/%7B%7B%20cookiecutter.repo_name%20%7D%7D/src/%7B%7B%20cookiecutter.python_package%20%7D%7D/api.py) to solve both the problems above.
## Possible Alternatives
To get this working with no changes to `APIDataSet`, we'd have to implement the changes in #1011 so we can specify tuples in `credentials.yml` and have the config loader fill in `auth` as well.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kedro/extras/datasets/api/api_dataset.py`
Content:
```
1 """``APIDataSet`` loads the data from HTTP(S) APIs.
2 It uses the python requests library: https://requests.readthedocs.io/en/master/
3 """
4 from typing import Any, Dict, List, Tuple, Union
5
6 import requests
7 from requests.auth import AuthBase
8
9 from kedro.io.core import AbstractDataSet, DataSetError
10
11
12 class APIDataSet(AbstractDataSet):
13 """``APIDataSet`` loads the data from HTTP(S) APIs.
14 It uses the python requests library: https://requests.readthedocs.io/en/master/
15
16 Example:
17 ::
18
19 >>> from kedro.extras.datasets.api import APIDataSet
20 >>>
21 >>>
22 >>> data_set = APIDataSet(
23 >>> url="https://quickstats.nass.usda.gov",
24 >>> params={
25 >>> "key": "SOME_TOKEN",
26 >>> "format": "JSON",
27 >>> "commodity_desc": "CORN",
28 >>> "statisticcat_des": "YIELD",
29 >>> "agg_level_desc": "STATE",
30 >>> "year": 2000
31 >>> }
32 >>> )
33 >>> data = data_set.load()
34 """
35
36 # pylint: disable=too-many-arguments
37 def __init__(
38 self,
39 url: str,
40 method: str = "GET",
41 data: Any = None,
42 params: Dict[str, Any] = None,
43 headers: Dict[str, Any] = None,
44 auth: Union[Tuple[str], AuthBase] = None,
45 json: Union[List, Dict[str, Any]] = None,
46 timeout: int = 60,
47 ) -> None:
48 """Creates a new instance of ``APIDataSet`` to fetch data from an API endpoint.
49
50 Args:
51 url: The API URL endpoint.
52 method: The Method of the request, GET, POST, PUT, DELETE, HEAD, etc...
53 data: The request payload, used for POST, PUT, etc requests
54 https://requests.readthedocs.io/en/master/user/quickstart/#more-complicated-post-requests
55 params: The url parameters of the API.
56 https://requests.readthedocs.io/en/master/user/quickstart/#passing-parameters-in-urls
57 headers: The HTTP headers.
58 https://requests.readthedocs.io/en/master/user/quickstart/#custom-headers
59 auth: Anything ``requests`` accepts. Normally it's either ``('login', 'password')``,
60 or ``AuthBase``, ``HTTPBasicAuth`` instance for more complex cases.
61 json: The request payload, used for POST, PUT, etc requests, passed in
62 to the json kwarg in the requests object.
63 https://requests.readthedocs.io/en/master/user/quickstart/#more-complicated-post-requests
64 timeout: The wait time in seconds for a response, defaults to 1 minute.
65 https://requests.readthedocs.io/en/master/user/quickstart/#timeouts
66
67 """
68 super().__init__()
69 self._request_args: Dict[str, Any] = {
70 "url": url,
71 "method": method,
72 "data": data,
73 "params": params,
74 "headers": headers,
75 "auth": auth,
76 "json": json,
77 "timeout": timeout,
78 }
79
80 def _describe(self) -> Dict[str, Any]:
81 return dict(**self._request_args)
82
83 def _execute_request(self) -> requests.Response:
84 try:
85 response = requests.request(**self._request_args)
86 response.raise_for_status()
87 except requests.exceptions.HTTPError as exc:
88 raise DataSetError("Failed to fetch data", exc) from exc
89 except OSError as exc:
90 raise DataSetError("Failed to connect to the remote server") from exc
91
92 return response
93
94 def _load(self) -> requests.Response:
95 return self._execute_request()
96
97 def _save(self, data: Any) -> None:
98 raise DataSetError(f"{self.__class__.__name__} is a read only data set type")
99
100 def _exists(self) -> bool:
101 response = self._execute_request()
102
103 return response.ok
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kedro/extras/datasets/api/api_dataset.py b/kedro/extras/datasets/api/api_dataset.py
--- a/kedro/extras/datasets/api/api_dataset.py
+++ b/kedro/extras/datasets/api/api_dataset.py
@@ -1,7 +1,7 @@
"""``APIDataSet`` loads the data from HTTP(S) APIs.
It uses the python requests library: https://requests.readthedocs.io/en/master/
"""
-from typing import Any, Dict, List, Tuple, Union
+from typing import Any, Dict, Iterable, List, Union
import requests
from requests.auth import AuthBase
@@ -41,9 +41,10 @@
data: Any = None,
params: Dict[str, Any] = None,
headers: Dict[str, Any] = None,
- auth: Union[Tuple[str], AuthBase] = None,
+ auth: Union[Iterable[str], AuthBase] = None,
json: Union[List, Dict[str, Any]] = None,
timeout: int = 60,
+ credentials: Union[Iterable[str], AuthBase] = None,
) -> None:
"""Creates a new instance of ``APIDataSet`` to fetch data from an API endpoint.
@@ -57,15 +58,29 @@
headers: The HTTP headers.
https://requests.readthedocs.io/en/master/user/quickstart/#custom-headers
auth: Anything ``requests`` accepts. Normally it's either ``('login', 'password')``,
- or ``AuthBase``, ``HTTPBasicAuth`` instance for more complex cases.
+ or ``AuthBase``, ``HTTPBasicAuth`` instance for more complex cases. Any
+ iterable will be cast to a tuple.
json: The request payload, used for POST, PUT, etc requests, passed in
to the json kwarg in the requests object.
https://requests.readthedocs.io/en/master/user/quickstart/#more-complicated-post-requests
timeout: The wait time in seconds for a response, defaults to 1 minute.
https://requests.readthedocs.io/en/master/user/quickstart/#timeouts
+ credentials: same as ``auth``. Allows specifying ``auth`` secrets in
+ credentials.yml.
+ Raises:
+ ValueError: if both ``credentials`` and ``auth`` are specified.
"""
super().__init__()
+
+ if credentials is not None and auth is not None:
+ raise ValueError("Cannot specify both auth and credentials.")
+
+ auth = credentials or auth
+
+ if isinstance(auth, Iterable):
+ auth = tuple(auth)
+
self._request_args: Dict[str, Any] = {
"url": url,
"method": method,
|
{"golden_diff": "diff --git a/kedro/extras/datasets/api/api_dataset.py b/kedro/extras/datasets/api/api_dataset.py\n--- a/kedro/extras/datasets/api/api_dataset.py\n+++ b/kedro/extras/datasets/api/api_dataset.py\n@@ -1,7 +1,7 @@\n \"\"\"``APIDataSet`` loads the data from HTTP(S) APIs.\n It uses the python requests library: https://requests.readthedocs.io/en/master/\n \"\"\"\n-from typing import Any, Dict, List, Tuple, Union\n+from typing import Any, Dict, Iterable, List, Union\n \n import requests\n from requests.auth import AuthBase\n@@ -41,9 +41,10 @@\n data: Any = None,\n params: Dict[str, Any] = None,\n headers: Dict[str, Any] = None,\n- auth: Union[Tuple[str], AuthBase] = None,\n+ auth: Union[Iterable[str], AuthBase] = None,\n json: Union[List, Dict[str, Any]] = None,\n timeout: int = 60,\n+ credentials: Union[Iterable[str], AuthBase] = None,\n ) -> None:\n \"\"\"Creates a new instance of ``APIDataSet`` to fetch data from an API endpoint.\n \n@@ -57,15 +58,29 @@\n headers: The HTTP headers.\n https://requests.readthedocs.io/en/master/user/quickstart/#custom-headers\n auth: Anything ``requests`` accepts. Normally it's either ``('login', 'password')``,\n- or ``AuthBase``, ``HTTPBasicAuth`` instance for more complex cases.\n+ or ``AuthBase``, ``HTTPBasicAuth`` instance for more complex cases. Any\n+ iterable will be cast to a tuple.\n json: The request payload, used for POST, PUT, etc requests, passed in\n to the json kwarg in the requests object.\n https://requests.readthedocs.io/en/master/user/quickstart/#more-complicated-post-requests\n timeout: The wait time in seconds for a response, defaults to 1 minute.\n https://requests.readthedocs.io/en/master/user/quickstart/#timeouts\n+ credentials: same as ``auth``. Allows specifying ``auth`` secrets in\n+ credentials.yml.\n \n+ Raises:\n+ ValueError: if both ``credentials`` and ``auth`` are specified.\n \"\"\"\n super().__init__()\n+\n+ if credentials is not None and auth is not None:\n+ raise ValueError(\"Cannot specify both auth and credentials.\")\n+\n+ auth = credentials or auth\n+\n+ if isinstance(auth, Iterable):\n+ auth = tuple(auth)\n+\n self._request_args: Dict[str, Any] = {\n \"url\": url,\n \"method\": method,\n", "issue": "`APIDataSet` ease of use\n## Description\r\n\r\nHowdy team!\r\n\r\nI was working with the `APIDataSet` recently and had two issues out of the box.\r\n\r\n#### 1. Specifying the `auth` keyword argument in yaml\r\n\r\nThe `requests` library expects the `auth` parameter of a request to be either a `HTTPBasicAuth` or a `tuple` (lists are not allowed, see [here](https://github.com/ianwhale/kedro-kaggle-starter/blob/master/%7B%7B%20cookiecutter.repo_name%20%7D%7D/src/%7B%7B%20cookiecutter.python_package%20%7D%7D/api.py) in requests). At the moment, neither are possible to specify in my `catalog.yml`. \r\n\r\nFrom what I hear, you're already working on this (#1011). So maybe this point is moot.\r\n\r\n#### 2. The `auth` keyword argument and `credentials.yml`\r\n\r\nI would like to specify my `(username, password)` tuple inside `credentials.yml`. However, the `APIDataSet`'s `auth` keyword wouldn't get filled in by the config loader. \r\n\r\nTo get this working, you'd have to extend `APIDataSet` to have a `credentials` keyword that is filled in for `auth` in an upcall.\r\n\r\nIt would be great to either have this by default, or even have the loader fill `auth` keywords in addition to `credentials`. Although that might have unintended consequences. \r\n\r\n## Context\r\n\r\nHopefully this would unify the experience a bit. 
Right now, the `credentials` keyword in a dataset and `credentials.yml` are the main points of access to secrets. Which is probably good.\r\n\r\n## Possible Implementation\r\n\r\nI whipped up [my own `APIDataSet`](https://github.com/ianwhale/kedro-kaggle-starter/blob/master/%7B%7B%20cookiecutter.repo_name%20%7D%7D/src/%7B%7B%20cookiecutter.python_package%20%7D%7D/api.py) to solve both the problems above.\r\n\r\n## Possible Alternatives\r\n\r\nTo get this working with no changes to `APIDataSet`, we'd have to implement the changes in #1011 so we can specify tuples in `credentials.yml` and have the config loader fill in `auth` as well.\n", "before_files": [{"content": "\"\"\"``APIDataSet`` loads the data from HTTP(S) APIs.\nIt uses the python requests library: https://requests.readthedocs.io/en/master/\n\"\"\"\nfrom typing import Any, Dict, List, Tuple, Union\n\nimport requests\nfrom requests.auth import AuthBase\n\nfrom kedro.io.core import AbstractDataSet, DataSetError\n\n\nclass APIDataSet(AbstractDataSet):\n \"\"\"``APIDataSet`` loads the data from HTTP(S) APIs.\n It uses the python requests library: https://requests.readthedocs.io/en/master/\n\n Example:\n ::\n\n >>> from kedro.extras.datasets.api import APIDataSet\n >>>\n >>>\n >>> data_set = APIDataSet(\n >>> url=\"https://quickstats.nass.usda.gov\",\n >>> params={\n >>> \"key\": \"SOME_TOKEN\",\n >>> \"format\": \"JSON\",\n >>> \"commodity_desc\": \"CORN\",\n >>> \"statisticcat_des\": \"YIELD\",\n >>> \"agg_level_desc\": \"STATE\",\n >>> \"year\": 2000\n >>> }\n >>> )\n >>> data = data_set.load()\n \"\"\"\n\n # pylint: disable=too-many-arguments\n def __init__(\n self,\n url: str,\n method: str = \"GET\",\n data: Any = None,\n params: Dict[str, Any] = None,\n headers: Dict[str, Any] = None,\n auth: Union[Tuple[str], AuthBase] = None,\n json: Union[List, Dict[str, Any]] = None,\n timeout: int = 60,\n ) -> None:\n \"\"\"Creates a new instance of ``APIDataSet`` to fetch data from an API endpoint.\n\n Args:\n url: The API URL endpoint.\n method: The Method of the request, GET, POST, PUT, DELETE, HEAD, etc...\n data: The request payload, used for POST, PUT, etc requests\n https://requests.readthedocs.io/en/master/user/quickstart/#more-complicated-post-requests\n params: The url parameters of the API.\n https://requests.readthedocs.io/en/master/user/quickstart/#passing-parameters-in-urls\n headers: The HTTP headers.\n https://requests.readthedocs.io/en/master/user/quickstart/#custom-headers\n auth: Anything ``requests`` accepts. 
Normally it's either ``('login', 'password')``,\n or ``AuthBase``, ``HTTPBasicAuth`` instance for more complex cases.\n json: The request payload, used for POST, PUT, etc requests, passed in\n to the json kwarg in the requests object.\n https://requests.readthedocs.io/en/master/user/quickstart/#more-complicated-post-requests\n timeout: The wait time in seconds for a response, defaults to 1 minute.\n https://requests.readthedocs.io/en/master/user/quickstart/#timeouts\n\n \"\"\"\n super().__init__()\n self._request_args: Dict[str, Any] = {\n \"url\": url,\n \"method\": method,\n \"data\": data,\n \"params\": params,\n \"headers\": headers,\n \"auth\": auth,\n \"json\": json,\n \"timeout\": timeout,\n }\n\n def _describe(self) -> Dict[str, Any]:\n return dict(**self._request_args)\n\n def _execute_request(self) -> requests.Response:\n try:\n response = requests.request(**self._request_args)\n response.raise_for_status()\n except requests.exceptions.HTTPError as exc:\n raise DataSetError(\"Failed to fetch data\", exc) from exc\n except OSError as exc:\n raise DataSetError(\"Failed to connect to the remote server\") from exc\n\n return response\n\n def _load(self) -> requests.Response:\n return self._execute_request()\n\n def _save(self, data: Any) -> None:\n raise DataSetError(f\"{self.__class__.__name__} is a read only data set type\")\n\n def _exists(self) -> bool:\n response = self._execute_request()\n\n return response.ok\n", "path": "kedro/extras/datasets/api/api_dataset.py"}], "after_files": [{"content": "\"\"\"``APIDataSet`` loads the data from HTTP(S) APIs.\nIt uses the python requests library: https://requests.readthedocs.io/en/master/\n\"\"\"\nfrom typing import Any, Dict, Iterable, List, Union\n\nimport requests\nfrom requests.auth import AuthBase\n\nfrom kedro.io.core import AbstractDataSet, DataSetError\n\n\nclass APIDataSet(AbstractDataSet):\n \"\"\"``APIDataSet`` loads the data from HTTP(S) APIs.\n It uses the python requests library: https://requests.readthedocs.io/en/master/\n\n Example:\n ::\n\n >>> from kedro.extras.datasets.api import APIDataSet\n >>>\n >>>\n >>> data_set = APIDataSet(\n >>> url=\"https://quickstats.nass.usda.gov\",\n >>> params={\n >>> \"key\": \"SOME_TOKEN\",\n >>> \"format\": \"JSON\",\n >>> \"commodity_desc\": \"CORN\",\n >>> \"statisticcat_des\": \"YIELD\",\n >>> \"agg_level_desc\": \"STATE\",\n >>> \"year\": 2000\n >>> }\n >>> )\n >>> data = data_set.load()\n \"\"\"\n\n # pylint: disable=too-many-arguments\n def __init__(\n self,\n url: str,\n method: str = \"GET\",\n data: Any = None,\n params: Dict[str, Any] = None,\n headers: Dict[str, Any] = None,\n auth: Union[Iterable[str], AuthBase] = None,\n json: Union[List, Dict[str, Any]] = None,\n timeout: int = 60,\n credentials: Union[Iterable[str], AuthBase] = None,\n ) -> None:\n \"\"\"Creates a new instance of ``APIDataSet`` to fetch data from an API endpoint.\n\n Args:\n url: The API URL endpoint.\n method: The Method of the request, GET, POST, PUT, DELETE, HEAD, etc...\n data: The request payload, used for POST, PUT, etc requests\n https://requests.readthedocs.io/en/master/user/quickstart/#more-complicated-post-requests\n params: The url parameters of the API.\n https://requests.readthedocs.io/en/master/user/quickstart/#passing-parameters-in-urls\n headers: The HTTP headers.\n https://requests.readthedocs.io/en/master/user/quickstart/#custom-headers\n auth: Anything ``requests`` accepts. 
Normally it's either ``('login', 'password')``,\n or ``AuthBase``, ``HTTPBasicAuth`` instance for more complex cases. Any\n iterable will be cast to a tuple.\n json: The request payload, used for POST, PUT, etc requests, passed in\n to the json kwarg in the requests object.\n https://requests.readthedocs.io/en/master/user/quickstart/#more-complicated-post-requests\n timeout: The wait time in seconds for a response, defaults to 1 minute.\n https://requests.readthedocs.io/en/master/user/quickstart/#timeouts\n credentials: same as ``auth``. Allows specifying ``auth`` secrets in\n credentials.yml.\n\n Raises:\n ValueError: if both ``credentials`` and ``auth`` are specified.\n \"\"\"\n super().__init__()\n\n if credentials is not None and auth is not None:\n raise ValueError(\"Cannot specify both auth and credentials.\")\n\n auth = credentials or auth\n\n if isinstance(auth, Iterable):\n auth = tuple(auth)\n\n self._request_args: Dict[str, Any] = {\n \"url\": url,\n \"method\": method,\n \"data\": data,\n \"params\": params,\n \"headers\": headers,\n \"auth\": auth,\n \"json\": json,\n \"timeout\": timeout,\n }\n\n def _describe(self) -> Dict[str, Any]:\n return dict(**self._request_args)\n\n def _execute_request(self) -> requests.Response:\n try:\n response = requests.request(**self._request_args)\n response.raise_for_status()\n except requests.exceptions.HTTPError as exc:\n raise DataSetError(\"Failed to fetch data\", exc) from exc\n except OSError as exc:\n raise DataSetError(\"Failed to connect to the remote server\") from exc\n\n return response\n\n def _load(self) -> requests.Response:\n return self._execute_request()\n\n def _save(self, data: Any) -> None:\n raise DataSetError(f\"{self.__class__.__name__} is a read only data set type\")\n\n def _exists(self) -> bool:\n response = self._execute_request()\n\n return response.ok\n", "path": "kedro/extras/datasets/api/api_dataset.py"}]}
| 1,871 | 598 |
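A small sketch of the precedence rules the patch above introduces for `auth` versus `credentials`. The helper name `resolve_auth` and the example values are assumptions made for illustration, and the snippet requires the `requests` package:

```python
from collections.abc import Iterable

from requests.auth import AuthBase, HTTPBasicAuth


def resolve_auth(auth=None, credentials=None):
    """Mirror the precedence rules added to APIDataSet.__init__ by the patch."""
    if credentials is not None and auth is not None:
        raise ValueError("Cannot specify both auth and credentials.")
    auth = credentials or auth
    if isinstance(auth, Iterable) and not isinstance(auth, AuthBase):
        auth = tuple(auth)  # requests accepts a ('user', 'pass') tuple, not a list
    return auth


assert resolve_auth(credentials=["user", "pass"]) == ("user", "pass")
assert isinstance(resolve_auth(auth=HTTPBasicAuth("user", "pass")), AuthBase)
```

This is why a `(username, password)` pair stored in `credentials.yml` can be passed straight through: the config loader fills the `credentials` argument, and the data set casts it to the tuple form `requests` expects.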
gh_patches_debug_11797
|
rasdani/github-patches
|
git_diff
|
PrefectHQ__prefect-509
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
All logs aren't making it through
It seems that the QueueHandler for our logs doesn't block correctly, so some logs don't make it to our logger service.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/prefect/utilities/logging.py`
Content:
```
1 # Licensed under LICENSE.md; also available at https://www.prefect.io/licenses/alpha-eula
2 import logging
3 import os
4 import queue
5 from logging.handlers import QueueHandler, QueueListener
6
7 import prefect
8 from prefect.configuration import config
9
10
11 class RemoteHandler(logging.StreamHandler):
12 def __init__(self) -> None:
13 super().__init__()
14 self.logger_server = config.cloud.log
15 self.client = None
16
17 def emit(self, record):
18 if self.client is None:
19 from prefect.client import Client
20
21 self.client = Client()
22 r = self.client.post(path="", server=self.logger_server, **record.__dict__)
23
24
25 old_factory = logging.getLogRecordFactory()
26
27
28 def cloud_record_factory(*args, **kwargs):
29 record = old_factory(*args, **kwargs)
30 record.flowrunid = prefect.context.get("flow_run_id", "")
31 record.taskrunid = prefect.context.get("task_run_id", "")
32 return record
33
34
35 def configure_logging() -> logging.Logger:
36 """
37 Creates a "prefect" root logger with a `StreamHandler` that has level and formatting
38 set from `prefect.config`.
39
40 Returns:
41 logging.Logger
42 """
43 logger = logging.getLogger("prefect")
44 handler = logging.StreamHandler()
45 formatter = logging.Formatter(config.logging.format)
46 handler.setFormatter(formatter)
47 logger.addHandler(handler)
48 logger.setLevel(config.logging.level)
49
50 # send logs to server
51 if config.logging.log_to_cloud:
52 logging.setLogRecordFactory(cloud_record_factory)
53 log_queue = queue.Queue(-1) # unlimited size queue
54 queue_handler = QueueHandler(log_queue)
55 remote_handler = RemoteHandler()
56 remote_listener = QueueListener(log_queue, remote_handler)
57 logger.addHandler(queue_handler)
58 remote_listener.start()
59
60 return logger
61
62
63 prefect_logger = configure_logging()
64
65
66 def get_logger(name: str = None) -> logging.Logger:
67 """
68 Returns a "prefect" logger.
69
70 Args:
71 - name (str): if `None`, the root Prefect logger is returned. If provided, a child
72 logger of the name `"prefect.{name}"` is returned. The child logger inherits
73 the root logger's settings.
74
75 Returns:
76 logging.Logger
77 """
78 if name is None:
79 return prefect_logger
80 else:
81 return prefect_logger.getChild(name)
82
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/prefect/utilities/logging.py b/src/prefect/utilities/logging.py
--- a/src/prefect/utilities/logging.py
+++ b/src/prefect/utilities/logging.py
@@ -1,4 +1,5 @@
# Licensed under LICENSE.md; also available at https://www.prefect.io/licenses/alpha-eula
+import atexit
import logging
import os
import queue
@@ -56,6 +57,8 @@
remote_listener = QueueListener(log_queue, remote_handler)
logger.addHandler(queue_handler)
remote_listener.start()
+ stopper = lambda listener: listener.stop()
+ atexit.register(stopper, remote_listener)
return logger
|
{"golden_diff": "diff --git a/src/prefect/utilities/logging.py b/src/prefect/utilities/logging.py\n--- a/src/prefect/utilities/logging.py\n+++ b/src/prefect/utilities/logging.py\n@@ -1,4 +1,5 @@\n # Licensed under LICENSE.md; also available at https://www.prefect.io/licenses/alpha-eula\n+import atexit\n import logging\n import os\n import queue\n@@ -56,6 +57,8 @@\n remote_listener = QueueListener(log_queue, remote_handler)\n logger.addHandler(queue_handler)\n remote_listener.start()\n+ stopper = lambda listener: listener.stop()\n+ atexit.register(stopper, remote_listener)\n \n return logger\n", "issue": "All logs aren't making it through\nIt seems that the QueueHandler for our logs doesn't block correctly, so some logs don't make it to our logger service.\n", "before_files": [{"content": "# Licensed under LICENSE.md; also available at https://www.prefect.io/licenses/alpha-eula\nimport logging\nimport os\nimport queue\nfrom logging.handlers import QueueHandler, QueueListener\n\nimport prefect\nfrom prefect.configuration import config\n\n\nclass RemoteHandler(logging.StreamHandler):\n def __init__(self) -> None:\n super().__init__()\n self.logger_server = config.cloud.log\n self.client = None\n\n def emit(self, record):\n if self.client is None:\n from prefect.client import Client\n\n self.client = Client()\n r = self.client.post(path=\"\", server=self.logger_server, **record.__dict__)\n\n\nold_factory = logging.getLogRecordFactory()\n\n\ndef cloud_record_factory(*args, **kwargs):\n record = old_factory(*args, **kwargs)\n record.flowrunid = prefect.context.get(\"flow_run_id\", \"\")\n record.taskrunid = prefect.context.get(\"task_run_id\", \"\")\n return record\n\n\ndef configure_logging() -> logging.Logger:\n \"\"\"\n Creates a \"prefect\" root logger with a `StreamHandler` that has level and formatting\n set from `prefect.config`.\n\n Returns:\n logging.Logger\n \"\"\"\n logger = logging.getLogger(\"prefect\")\n handler = logging.StreamHandler()\n formatter = logging.Formatter(config.logging.format)\n handler.setFormatter(formatter)\n logger.addHandler(handler)\n logger.setLevel(config.logging.level)\n\n # send logs to server\n if config.logging.log_to_cloud:\n logging.setLogRecordFactory(cloud_record_factory)\n log_queue = queue.Queue(-1) # unlimited size queue\n queue_handler = QueueHandler(log_queue)\n remote_handler = RemoteHandler()\n remote_listener = QueueListener(log_queue, remote_handler)\n logger.addHandler(queue_handler)\n remote_listener.start()\n\n return logger\n\n\nprefect_logger = configure_logging()\n\n\ndef get_logger(name: str = None) -> logging.Logger:\n \"\"\"\n Returns a \"prefect\" logger.\n\n Args:\n - name (str): if `None`, the root Prefect logger is returned. If provided, a child\n logger of the name `\"prefect.{name}\"` is returned. 
The child logger inherits\n the root logger's settings.\n\n Returns:\n logging.Logger\n \"\"\"\n if name is None:\n return prefect_logger\n else:\n return prefect_logger.getChild(name)\n", "path": "src/prefect/utilities/logging.py"}], "after_files": [{"content": "# Licensed under LICENSE.md; also available at https://www.prefect.io/licenses/alpha-eula\nimport atexit\nimport logging\nimport os\nimport queue\nfrom logging.handlers import QueueHandler, QueueListener\n\nimport prefect\nfrom prefect.configuration import config\n\n\nclass RemoteHandler(logging.StreamHandler):\n def __init__(self) -> None:\n super().__init__()\n self.logger_server = config.cloud.log\n self.client = None\n\n def emit(self, record):\n if self.client is None:\n from prefect.client import Client\n\n self.client = Client()\n r = self.client.post(path=\"\", server=self.logger_server, **record.__dict__)\n\n\nold_factory = logging.getLogRecordFactory()\n\n\ndef cloud_record_factory(*args, **kwargs):\n record = old_factory(*args, **kwargs)\n record.flowrunid = prefect.context.get(\"flow_run_id\", \"\")\n record.taskrunid = prefect.context.get(\"task_run_id\", \"\")\n return record\n\n\ndef configure_logging() -> logging.Logger:\n \"\"\"\n Creates a \"prefect\" root logger with a `StreamHandler` that has level and formatting\n set from `prefect.config`.\n\n Returns:\n logging.Logger\n \"\"\"\n logger = logging.getLogger(\"prefect\")\n handler = logging.StreamHandler()\n formatter = logging.Formatter(config.logging.format)\n handler.setFormatter(formatter)\n logger.addHandler(handler)\n logger.setLevel(config.logging.level)\n\n # send logs to server\n if config.logging.log_to_cloud:\n logging.setLogRecordFactory(cloud_record_factory)\n log_queue = queue.Queue(-1) # unlimited size queue\n queue_handler = QueueHandler(log_queue)\n remote_handler = RemoteHandler()\n remote_listener = QueueListener(log_queue, remote_handler)\n logger.addHandler(queue_handler)\n remote_listener.start()\n stopper = lambda listener: listener.stop()\n atexit.register(stopper, remote_listener)\n\n return logger\n\n\nprefect_logger = configure_logging()\n\n\ndef get_logger(name: str = None) -> logging.Logger:\n \"\"\"\n Returns a \"prefect\" logger.\n\n Args:\n - name (str): if `None`, the root Prefect logger is returned. If provided, a child\n logger of the name `\"prefect.{name}\"` is returned. The child logger inherits\n the root logger's settings.\n\n Returns:\n logging.Logger\n \"\"\"\n if name is None:\n return prefect_logger\n else:\n return prefect_logger.getChild(name)\n", "path": "src/prefect/utilities/logging.py"}]}
| 954 | 151 |
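A self-contained sketch of why the `atexit` hook in the patch above prevents dropped records: `QueueListener` drains the queue on a daemon thread, so without an explicit `stop()` at interpreter shutdown any still-queued records can be lost. The logger name and message below are illustrative:

```python
import atexit
import logging
import queue
from logging.handlers import QueueHandler, QueueListener

log_queue = queue.Queue(-1)
stream_handler = logging.StreamHandler()         # stand-in for the remote handler
listener = QueueListener(log_queue, stream_handler)
listener.start()
atexit.register(listener.stop)                   # drain pending records at exit

logger = logging.getLogger("example")
logger.addHandler(QueueHandler(log_queue))
logger.warning("this record is flushed by the listener before the process exits")
```

`atexit.register(listener.stop)` is equivalent to the lambda-based registration in the patch; both ensure the listener is stopped, and its queue drained, before shutdown.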
gh_patches_debug_16717
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-474
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Scanning IAM Role Public Access
Describe the bug
It seems that when specifying more than one statement in the JSON, the check does not scan all principals; rather, it only looks at the first one.
To Reproduce
Steps to reproduce the behavior:
Create policy with more than one SID
`resource "aws_iam_role" "lambdaRole" {
name = "test-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal" : {"Service": "lambda.amazonaws.com"},
"Effect": "Allow"
},
{
"Action": "sts:AssumeRole",
"Principal" : {"AWS": "*"},
"Effect": "Allow"
},
{
"Action": "sts:AssumeRole",
"Principal" : {"Service": "events.amazonaws.com"},
"Effect": "Allow"
},
]
}
EOF
}`
Run Checkov against policy
Expected behavior
I would expect the scan to check each statement within the policy rather than just the first one
Desktop (please complete the following information):
OS: Mac
Checkov Version: 1.0.459
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/aws/IAMRoleAllowsPublicAssume.py`
Content:
```
1 from checkov.common.models.enums import CheckResult, CheckCategories
2 from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
3 import json
4
5
6 class IAMRoleAllowsPublicAssume(BaseResourceCheck):
7
8 def __init__(self):
9 name = "Ensure IAM role allows only specific services or principals to assume it"
10 id = "CKV_AWS_60"
11 supported_resources = ['aws_iam_role']
12 categories = [CheckCategories.IAM]
13 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
14
15 def scan_resource_conf(self, conf):
16 if isinstance(conf['assume_role_policy'][0], str):
17 try:
18 assume_role_block = json.loads(conf['assume_role_policy'][0])
19 if 'Statement' in assume_role_block.keys():
20 if 'Principal' in assume_role_block['Statement'][0]:
21 if 'AWS' in assume_role_block['Statement'][0]['Principal']:
22 if assume_role_block['Statement'][0]['Principal']['AWS'] == '*':
23 return CheckResult.FAILED
24 except: # nosec
25 pass
26 return CheckResult.PASSED
27
28
29 check = IAMRoleAllowsPublicAssume()
30
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/checkov/terraform/checks/resource/aws/IAMRoleAllowsPublicAssume.py b/checkov/terraform/checks/resource/aws/IAMRoleAllowsPublicAssume.py
--- a/checkov/terraform/checks/resource/aws/IAMRoleAllowsPublicAssume.py
+++ b/checkov/terraform/checks/resource/aws/IAMRoleAllowsPublicAssume.py
@@ -17,10 +17,10 @@
try:
assume_role_block = json.loads(conf['assume_role_policy'][0])
if 'Statement' in assume_role_block.keys():
- if 'Principal' in assume_role_block['Statement'][0]:
- if 'AWS' in assume_role_block['Statement'][0]['Principal']:
- if assume_role_block['Statement'][0]['Principal']['AWS'] == '*':
- return CheckResult.FAILED
+ for statement in assume_role_block['Statement']:
+ if 'AWS' in statement['Principal']:
+ if statement['Principal']['AWS'] == '*':
+ return CheckResult.FAILED
except: # nosec
pass
return CheckResult.PASSED
|
{"golden_diff": "diff --git a/checkov/terraform/checks/resource/aws/IAMRoleAllowsPublicAssume.py b/checkov/terraform/checks/resource/aws/IAMRoleAllowsPublicAssume.py\n--- a/checkov/terraform/checks/resource/aws/IAMRoleAllowsPublicAssume.py\n+++ b/checkov/terraform/checks/resource/aws/IAMRoleAllowsPublicAssume.py\n@@ -17,10 +17,10 @@\n try:\n assume_role_block = json.loads(conf['assume_role_policy'][0])\n if 'Statement' in assume_role_block.keys():\n- if 'Principal' in assume_role_block['Statement'][0]:\n- if 'AWS' in assume_role_block['Statement'][0]['Principal']:\n- if assume_role_block['Statement'][0]['Principal']['AWS'] == '*':\n- return CheckResult.FAILED\n+ for statement in assume_role_block['Statement']:\n+ if 'AWS' in statement['Principal']:\n+ if statement['Principal']['AWS'] == '*':\n+ return CheckResult.FAILED\n except: # nosec\n pass\n return CheckResult.PASSED\n", "issue": "Scanning IAM Role Public Access\nDescribe the bug\r\nIt seems when specifying more than one json, the policy does not scan all principals, rather it looks at the first one. \r\n\r\nTo Reproduce\r\nSteps to reproduce the behavior:\r\n\r\nCreate policy with more than one SID\r\n`resource \"aws_iam_role\" \"lambdaRole\" {\r\n name = \"test-role\"\r\n assume_role_policy = <<EOF\r\n{\r\n \"Version\": \"2012-10-17\",\r\n \"Statement\": [\r\n {\r\n \"Action\": \"sts:AssumeRole\",\r\n \"Principal\" : {\"Service\": \"lambda.amazonaws.com\"},\r\n \"Effect\": \"Allow\"\r\n },\r\n {\r\n \"Action\": \"sts:AssumeRole\",\r\n \"Principal\" : {\"AWS\": \"*\"},\r\n \"Effect\": \"Allow\"\r\n },\r\n {\r\n \"Action\": \"sts:AssumeRole\",\r\n \"Principal\" : {\"Service\": \"events.amazonaws.com\"},\r\n \"Effect\": \"Allow\"\r\n },\r\n ]\r\n}\r\n\r\nEOF\r\n}`\r\nRun Checkov against policy\r\nExpected behavior\r\nI would expect the scan to check each json within the policy rather than the first one\r\n\r\nDesktop (please complete the following information):\r\n\r\nOS: Mac\r\nCheckov Version: 1.0.459\r\n\n", "before_files": [{"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\nimport json\n\n\nclass IAMRoleAllowsPublicAssume(BaseResourceCheck):\n\n def __init__(self):\n name = \"Ensure IAM role allows only specific services or principals to assume it\"\n id = \"CKV_AWS_60\"\n supported_resources = ['aws_iam_role']\n categories = [CheckCategories.IAM]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n if isinstance(conf['assume_role_policy'][0], str):\n try:\n assume_role_block = json.loads(conf['assume_role_policy'][0])\n if 'Statement' in assume_role_block.keys():\n if 'Principal' in assume_role_block['Statement'][0]:\n if 'AWS' in assume_role_block['Statement'][0]['Principal']:\n if assume_role_block['Statement'][0]['Principal']['AWS'] == '*':\n return CheckResult.FAILED\n except: # nosec\n pass\n return CheckResult.PASSED\n\n\ncheck = IAMRoleAllowsPublicAssume()\n", "path": "checkov/terraform/checks/resource/aws/IAMRoleAllowsPublicAssume.py"}], "after_files": [{"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\nimport json\n\n\nclass IAMRoleAllowsPublicAssume(BaseResourceCheck):\n\n def __init__(self):\n name = \"Ensure IAM role allows only specific services or principals to assume it\"\n id = \"CKV_AWS_60\"\n 
supported_resources = ['aws_iam_role']\n categories = [CheckCategories.IAM]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n if isinstance(conf['assume_role_policy'][0], str):\n try:\n assume_role_block = json.loads(conf['assume_role_policy'][0])\n if 'Statement' in assume_role_block.keys():\n for statement in assume_role_block['Statement']:\n if 'AWS' in statement['Principal']:\n if statement['Principal']['AWS'] == '*':\n return CheckResult.FAILED\n except: # nosec\n pass\n return CheckResult.PASSED\n\n\ncheck = IAMRoleAllowsPublicAssume()\n", "path": "checkov/terraform/checks/resource/aws/IAMRoleAllowsPublicAssume.py"}]}
| 848 | 239 |
gh_patches_debug_20622
|
rasdani/github-patches
|
git_diff
|
ManimCommunity__manim-573
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG-General] Terminal output not formatted and uses ASCII in windows
**Describe the bug**
I am on Windows and command prompt and I `cd` into example_scenes directory and run
```sh
manim basic.py
```
and I get a output like below.

I should get in green colour though.
**To Reproduce**
Just running the one in example_scene in enough.
**Expected behavior**
The ill formatted thing should be in green colour.
**Logs**
<details><summary>Terminal output (Screenshots acceptable)</summary>

<!-- Paste screenshot here -->
</details>
**System Specifications**
<details><summary>System Details</summary>
- OS (with version, e.g Windows 10 v2004 or macOS 10.15 (Catalina)): Windows 7
- Python version (`python/py/python3 --version`): 3.8
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `manim/config/logger.py`
Content:
```
1 """
2 logger.py
3 ---------
4 This is the logging library for manim.
5 This library uses rich for coloured log outputs.
6
7 """
8
9
10 __all__ = ["logger", "console"]
11
12
13 import configparser
14 import logging
15
16 from rich.console import Console
17 from rich.logging import RichHandler
18 from rich.theme import Theme
19 from rich import print as printf
20 from rich import errors, color
21 import json
22 import copy
23
24
25 class JSONFormatter(logging.Formatter):
26 """Subclass of `:class:`logging.Formatter`, to build our own format of the logs (JSON)."""
27
28 def format(self, record):
29 record_c = copy.deepcopy(record)
30 if record_c.args:
31 for arg in record_c.args:
32 record_c.args[arg] = "<>"
33 return json.dumps(
34 {
35 "levelname": record_c.levelname,
36 "module": record_c.module,
37 "message": super().format(record_c),
38 }
39 )
40
41
42 def _parse_theme(config_logger):
43 theme = dict(
44 zip(
45 [key.replace("_", ".") for key in config_logger.keys()],
46 list(config_logger.values()),
47 )
48 )
49
50 theme["log.width"] = None if theme["log.width"] == "-1" else int(theme["log.width"])
51
52 theme["log.height"] = (
53 None if theme["log.height"] == "-1" else int(theme["log.height"])
54 )
55 theme["log.timestamps"] = False
56 try:
57 customTheme = Theme(
58 {
59 k: v
60 for k, v in theme.items()
61 if k not in ["log.width", "log.height", "log.timestamps"]
62 }
63 )
64 except (color.ColorParseError, errors.StyleSyntaxError):
65 customTheme = None
66 printf(
67 "[logging.level.error]It seems your colour configuration couldn't be parsed. Loading the default color configuration...[/logging.level.error]"
68 )
69 return customTheme
70
71
72 def set_rich_logger(config_logger, verbosity):
73 """Will set the RichHandler of the logger.
74
75 Parameter
76 ----------
77 config_logger :class:
78 Config object of the logger.
79 """
80 theme = _parse_theme(config_logger)
81 global console
82 console = Console(theme=theme)
83 # These keywords Are Highlighted specially.
84 RichHandler.KEYWORDS = [
85 "Played",
86 "animations",
87 "scene",
88 "Reading",
89 "Writing",
90 "script",
91 "arguments",
92 "Invalid",
93 "Aborting",
94 "module",
95 "File",
96 "Rendering",
97 "Rendered",
98 ]
99 rich_handler = RichHandler(
100 console=console, show_time=config_logger.getboolean("log_timestamps")
101 )
102 global logger
103 rich_handler.setLevel(verbosity)
104 logger.addHandler(rich_handler)
105
106
107 def set_file_logger(log_file_path):
108 file_handler = logging.FileHandler(log_file_path, mode="w")
109 file_handler.setFormatter(JSONFormatter())
110 global logger
111 logger.addHandler(file_handler)
112
113
114 logger = logging.getLogger("manim")
115 # The console is set to None as it will be changed by set_rich_logger.
116 console = None
117
118 # TODO : This is only temporary to keep the terminal output clean when working with ImageMobject and matplotlib plots
119 logging.getLogger("PIL").setLevel(logging.INFO)
120 logging.getLogger("matplotlib").setLevel(logging.INFO)
121
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/manim/config/logger.py b/manim/config/logger.py
--- a/manim/config/logger.py
+++ b/manim/config/logger.py
@@ -10,12 +10,12 @@
__all__ = ["logger", "console"]
-import configparser
import logging
from rich.console import Console
from rich.logging import RichHandler
from rich.theme import Theme
+from rich.traceback import install
from rich import print as printf
from rich import errors, color
import json
@@ -114,7 +114,7 @@
logger = logging.getLogger("manim")
# The console is set to None as it will be changed by set_rich_logger.
console = None
-
+install()
# TODO : This is only temporary to keep the terminal output clean when working with ImageMobject and matplotlib plots
logging.getLogger("PIL").setLevel(logging.INFO)
logging.getLogger("matplotlib").setLevel(logging.INFO)
|
{"golden_diff": "diff --git a/manim/config/logger.py b/manim/config/logger.py\n--- a/manim/config/logger.py\n+++ b/manim/config/logger.py\n@@ -10,12 +10,12 @@\n __all__ = [\"logger\", \"console\"]\n \n \n-import configparser\n import logging\n \n from rich.console import Console\n from rich.logging import RichHandler\n from rich.theme import Theme\n+from rich.traceback import install\n from rich import print as printf\n from rich import errors, color\n import json\n@@ -114,7 +114,7 @@\n logger = logging.getLogger(\"manim\")\n # The console is set to None as it will be changed by set_rich_logger.\n console = None\n-\n+install()\n # TODO : This is only temporary to keep the terminal output clean when working with ImageMobject and matplotlib plots\n logging.getLogger(\"PIL\").setLevel(logging.INFO)\n logging.getLogger(\"matplotlib\").setLevel(logging.INFO)\n", "issue": " [BUG-General] Terminal output not formatted and uses ASCII in windows\n**Describe the bug**\r\nI am on Windows and command prompt and I `cd` into example_scenes directory and run \r\n```sh\r\nmanim basic.py\r\n``` \r\nand I get a output like below.\r\n\r\nI should get in green colour though.\r\n\r\n**To Reproduce**\r\nJust running the one in example_scene in enough.\r\n\r\n**Expected behavior**\r\nThe ill formatted thing should be in green colour.\r\n\r\n**Logs**\r\n<details><summary>Terminal output (Screenshots acceptable)</summary>\r\n\r\n\r\n\r\n<!-- Paste screenshot here -->\r\n\r\n</details>\r\n\r\n**System Specifications**\r\n\r\n<details><summary>System Details</summary>\r\n\r\n- OS (with version, e.g Windows 10 v2004 or macOS 10.15 (Catalina)): Windows 7\r\n- Python version (`python/py/python3 --version`): 3.8\r\n\n", "before_files": [{"content": "\"\"\"\nlogger.py\n---------\nThis is the logging library for manim.\nThis library uses rich for coloured log outputs.\n\n\"\"\"\n\n\n__all__ = [\"logger\", \"console\"]\n\n\nimport configparser\nimport logging\n\nfrom rich.console import Console\nfrom rich.logging import RichHandler\nfrom rich.theme import Theme\nfrom rich import print as printf\nfrom rich import errors, color\nimport json\nimport copy\n\n\nclass JSONFormatter(logging.Formatter):\n \"\"\"Subclass of `:class:`logging.Formatter`, to build our own format of the logs (JSON).\"\"\"\n\n def format(self, record):\n record_c = copy.deepcopy(record)\n if record_c.args:\n for arg in record_c.args:\n record_c.args[arg] = \"<>\"\n return json.dumps(\n {\n \"levelname\": record_c.levelname,\n \"module\": record_c.module,\n \"message\": super().format(record_c),\n }\n )\n\n\ndef _parse_theme(config_logger):\n theme = dict(\n zip(\n [key.replace(\"_\", \".\") for key in config_logger.keys()],\n list(config_logger.values()),\n )\n )\n\n theme[\"log.width\"] = None if theme[\"log.width\"] == \"-1\" else int(theme[\"log.width\"])\n\n theme[\"log.height\"] = (\n None if theme[\"log.height\"] == \"-1\" else int(theme[\"log.height\"])\n )\n theme[\"log.timestamps\"] = False\n try:\n customTheme = Theme(\n {\n k: v\n for k, v in theme.items()\n if k not in [\"log.width\", \"log.height\", \"log.timestamps\"]\n }\n )\n except (color.ColorParseError, errors.StyleSyntaxError):\n customTheme = None\n printf(\n \"[logging.level.error]It seems your colour configuration couldn't be parsed. 
Loading the default color configuration...[/logging.level.error]\"\n )\n return customTheme\n\n\ndef set_rich_logger(config_logger, verbosity):\n \"\"\"Will set the RichHandler of the logger.\n\n Parameter\n ----------\n config_logger :class:\n Config object of the logger.\n \"\"\"\n theme = _parse_theme(config_logger)\n global console\n console = Console(theme=theme)\n # These keywords Are Highlighted specially.\n RichHandler.KEYWORDS = [\n \"Played\",\n \"animations\",\n \"scene\",\n \"Reading\",\n \"Writing\",\n \"script\",\n \"arguments\",\n \"Invalid\",\n \"Aborting\",\n \"module\",\n \"File\",\n \"Rendering\",\n \"Rendered\",\n ]\n rich_handler = RichHandler(\n console=console, show_time=config_logger.getboolean(\"log_timestamps\")\n )\n global logger\n rich_handler.setLevel(verbosity)\n logger.addHandler(rich_handler)\n\n\ndef set_file_logger(log_file_path):\n file_handler = logging.FileHandler(log_file_path, mode=\"w\")\n file_handler.setFormatter(JSONFormatter())\n global logger\n logger.addHandler(file_handler)\n\n\nlogger = logging.getLogger(\"manim\")\n# The console is set to None as it will be changed by set_rich_logger.\nconsole = None\n\n# TODO : This is only temporary to keep the terminal output clean when working with ImageMobject and matplotlib plots\nlogging.getLogger(\"PIL\").setLevel(logging.INFO)\nlogging.getLogger(\"matplotlib\").setLevel(logging.INFO)\n", "path": "manim/config/logger.py"}], "after_files": [{"content": "\"\"\"\nlogger.py\n---------\nThis is the logging library for manim.\nThis library uses rich for coloured log outputs.\n\n\"\"\"\n\n\n__all__ = [\"logger\", \"console\"]\n\n\nimport logging\n\nfrom rich.console import Console\nfrom rich.logging import RichHandler\nfrom rich.theme import Theme\nfrom rich.traceback import install\nfrom rich import print as printf\nfrom rich import errors, color\nimport json\nimport copy\n\n\nclass JSONFormatter(logging.Formatter):\n \"\"\"Subclass of `:class:`logging.Formatter`, to build our own format of the logs (JSON).\"\"\"\n\n def format(self, record):\n record_c = copy.deepcopy(record)\n if record_c.args:\n for arg in record_c.args:\n record_c.args[arg] = \"<>\"\n return json.dumps(\n {\n \"levelname\": record_c.levelname,\n \"module\": record_c.module,\n \"message\": super().format(record_c),\n }\n )\n\n\ndef _parse_theme(config_logger):\n theme = dict(\n zip(\n [key.replace(\"_\", \".\") for key in config_logger.keys()],\n list(config_logger.values()),\n )\n )\n\n theme[\"log.width\"] = None if theme[\"log.width\"] == \"-1\" else int(theme[\"log.width\"])\n\n theme[\"log.height\"] = (\n None if theme[\"log.height\"] == \"-1\" else int(theme[\"log.height\"])\n )\n theme[\"log.timestamps\"] = False\n try:\n customTheme = Theme(\n {\n k: v\n for k, v in theme.items()\n if k not in [\"log.width\", \"log.height\", \"log.timestamps\"]\n }\n )\n except (color.ColorParseError, errors.StyleSyntaxError):\n customTheme = None\n printf(\n \"[logging.level.error]It seems your colour configuration couldn't be parsed. 
Loading the default color configuration...[/logging.level.error]\"\n )\n return customTheme\n\n\ndef set_rich_logger(config_logger, verbosity):\n \"\"\"Will set the RichHandler of the logger.\n\n Parameter\n ----------\n config_logger :class:\n Config object of the logger.\n \"\"\"\n theme = _parse_theme(config_logger)\n global console\n console = Console(theme=theme)\n # These keywords Are Highlighted specially.\n RichHandler.KEYWORDS = [\n \"Played\",\n \"animations\",\n \"scene\",\n \"Reading\",\n \"Writing\",\n \"script\",\n \"arguments\",\n \"Invalid\",\n \"Aborting\",\n \"module\",\n \"File\",\n \"Rendering\",\n \"Rendered\",\n ]\n rich_handler = RichHandler(\n console=console, show_time=config_logger.getboolean(\"log_timestamps\")\n )\n global logger\n rich_handler.setLevel(verbosity)\n logger.addHandler(rich_handler)\n\n\ndef set_file_logger(log_file_path):\n file_handler = logging.FileHandler(log_file_path, mode=\"w\")\n file_handler.setFormatter(JSONFormatter())\n global logger\n logger.addHandler(file_handler)\n\n\nlogger = logging.getLogger(\"manim\")\n# The console is set to None as it will be changed by set_rich_logger.\nconsole = None\ninstall()\n# TODO : This is only temporary to keep the terminal output clean when working with ImageMobject and matplotlib plots\nlogging.getLogger(\"PIL\").setLevel(logging.INFO)\nlogging.getLogger(\"matplotlib\").setLevel(logging.INFO)\n", "path": "manim/config/logger.py"}]}
| 1,516 | 199 |
gh_patches_debug_27175
|
rasdani/github-patches
|
git_diff
|
xorbitsai__inference-143
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ENH: auto find available port for API
### Describe the bug
```
~ ❯ xinference 6s base 18:24:18
Traceback (most recent call last):
File "/Users/hekaisheng/miniconda3/bin/xinference", line 33, in <module>
sys.exit(load_entry_point('xinference', 'console_scripts', 'xinference')())
File "/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/core.py", line 1128, in __call__
return self.main(*args, **kwargs)
File "/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/core.py", line 1053, in main
rv = self.invoke(ctx)
File "/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/core.py", line 1637, in invoke
super().invoke(ctx)
File "/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/core.py", line 1395, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/core.py", line 754, in invoke
return __callback(*args, **kwargs)
File "/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
File "/Users/hekaisheng/Documents/projects/inference/xinference/deploy/cmdline.py", line 51, in cli
main(
File "/Users/hekaisheng/Documents/projects/inference/xinference/deploy/local.py", line 50, in main
loop.run_until_complete(task)
File "/Users/hekaisheng/miniconda3/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete
return future.result()
File "/Users/hekaisheng/Documents/projects/inference/xinference/deploy/local.py", line 36, in _start_local_cluster
url = await start_supervisor_components(address=address, host=host, port=port)
File "/Users/hekaisheng/Documents/projects/inference/xinference/deploy/supervisor.py", line 35, in start_supervisor_components
sock.bind((host, port))
OSError: [Errno 48] Address already in use
```
Use available port if users not specify.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `xinference/deploy/supervisor.py`
Content:
```
1 # Copyright 2022-2023 XProbe Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import asyncio
16 import logging
17 import socket
18 from typing import Dict, Optional
19
20 import xoscar as xo
21
22 from ..core.gradio import GradioApp
23 from ..core.restful_api import RESTfulAPIActor
24 from ..core.service import SupervisorActor
25
26 logger = logging.getLogger("xinference")
27
28
29 async def start_supervisor_components(address: str, host: str, port: int):
30 await xo.create_actor(SupervisorActor, address=address, uid=SupervisorActor.uid())
31 gradio_block = GradioApp(address).build()
32 # create a socket for RESTful API
33 sockets = []
34 sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
35 sock.bind((host, port))
36 sockets.append(sock)
37 restful_actor = await xo.create_actor(
38 RESTfulAPIActor,
39 address=address,
40 uid=RESTfulAPIActor.uid(),
41 sockets=sockets,
42 gradio_block=gradio_block,
43 )
44 await restful_actor.serve()
45 url = f"http://{host}:{port}"
46 logger.info(f"Server address: {url}")
47 return url
48
49
50 async def _start_supervisor(
51 address: str, host: str, port: int, logging_conf: Optional[Dict] = None
52 ):
53 pool = None
54 try:
55 pool = await xo.create_actor_pool(
56 address=address, n_process=0, logging_conf=logging_conf
57 )
58 await start_supervisor_components(address=address, host=host, port=port)
59 await pool.join()
60 except asyncio.exceptions.CancelledError:
61 if pool is not None:
62 await pool.stop()
63
64
65 def main(*args, **kwargs):
66 loop = asyncio.get_event_loop()
67 task = loop.create_task(_start_supervisor(*args, **kwargs))
68
69 try:
70 loop.run_until_complete(task)
71 except KeyboardInterrupt:
72 task.cancel()
73 loop.run_until_complete(task)
74 # avoid displaying exception-unhandled warnings
75 task.exception()
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/xinference/deploy/supervisor.py b/xinference/deploy/supervisor.py
--- a/xinference/deploy/supervisor.py
+++ b/xinference/deploy/supervisor.py
@@ -18,7 +18,9 @@
from typing import Dict, Optional
import xoscar as xo
+from xoscar.utils import get_next_port
+from ..constants import XINFERENCE_DEFAULT_ENDPOINT_PORT
from ..core.gradio import GradioApp
from ..core.restful_api import RESTfulAPIActor
from ..core.service import SupervisorActor
@@ -30,10 +32,26 @@
await xo.create_actor(SupervisorActor, address=address, uid=SupervisorActor.uid())
gradio_block = GradioApp(address).build()
# create a socket for RESTful API
- sockets = []
- sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
- sock.bind((host, port))
- sockets.append(sock)
+ try:
+ sockets = []
+ sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+ sock.bind((host, port))
+ sockets.append(sock)
+ except OSError:
+ if port is XINFERENCE_DEFAULT_ENDPOINT_PORT:
+ while True:
+ try:
+ sockets = []
+ sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+ port = get_next_port()
+ sock.bind((host, port))
+ sockets.append(sock)
+ break
+ except OSError:
+ pass
+ else:
+ raise OSError
+
restful_actor = await xo.create_actor(
RESTfulAPIActor,
address=address,
|
{"golden_diff": "diff --git a/xinference/deploy/supervisor.py b/xinference/deploy/supervisor.py\n--- a/xinference/deploy/supervisor.py\n+++ b/xinference/deploy/supervisor.py\n@@ -18,7 +18,9 @@\n from typing import Dict, Optional\n \n import xoscar as xo\n+from xoscar.utils import get_next_port\n \n+from ..constants import XINFERENCE_DEFAULT_ENDPOINT_PORT\n from ..core.gradio import GradioApp\n from ..core.restful_api import RESTfulAPIActor\n from ..core.service import SupervisorActor\n@@ -30,10 +32,26 @@\n await xo.create_actor(SupervisorActor, address=address, uid=SupervisorActor.uid())\n gradio_block = GradioApp(address).build()\n # create a socket for RESTful API\n- sockets = []\n- sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n- sock.bind((host, port))\n- sockets.append(sock)\n+ try:\n+ sockets = []\n+ sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n+ sock.bind((host, port))\n+ sockets.append(sock)\n+ except OSError:\n+ if port is XINFERENCE_DEFAULT_ENDPOINT_PORT:\n+ while True:\n+ try:\n+ sockets = []\n+ sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n+ port = get_next_port()\n+ sock.bind((host, port))\n+ sockets.append(sock)\n+ break\n+ except OSError:\n+ pass\n+ else:\n+ raise OSError\n+\n restful_actor = await xo.create_actor(\n RESTfulAPIActor,\n address=address,\n", "issue": "ENH: auto find available port for API\n### Describe the bug\r\n```\r\n~ \u276f xinference 6s \ue73c base 18:24:18\r\nTraceback (most recent call last):\r\n File \"/Users/hekaisheng/miniconda3/bin/xinference\", line 33, in <module>\r\n sys.exit(load_entry_point('xinference', 'console_scripts', 'xinference')())\r\n File \"/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/core.py\", line 1128, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/core.py\", line 1053, in main\r\n rv = self.invoke(ctx)\r\n File \"/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/core.py\", line 1637, in invoke\r\n super().invoke(ctx)\r\n File \"/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/core.py\", line 1395, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/core.py\", line 754, in invoke\r\n return __callback(*args, **kwargs)\r\n File \"/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/decorators.py\", line 26, in new_func\r\n return f(get_current_context(), *args, **kwargs)\r\n File \"/Users/hekaisheng/Documents/projects/inference/xinference/deploy/cmdline.py\", line 51, in cli\r\n main(\r\n File \"/Users/hekaisheng/Documents/projects/inference/xinference/deploy/local.py\", line 50, in main\r\n loop.run_until_complete(task)\r\n File \"/Users/hekaisheng/miniconda3/lib/python3.9/asyncio/base_events.py\", line 647, in run_until_complete\r\n return future.result()\r\n File \"/Users/hekaisheng/Documents/projects/inference/xinference/deploy/local.py\", line 36, in _start_local_cluster\r\n url = await start_supervisor_components(address=address, host=host, port=port)\r\n File \"/Users/hekaisheng/Documents/projects/inference/xinference/deploy/supervisor.py\", line 35, in start_supervisor_components\r\n sock.bind((host, port))\r\nOSError: [Errno 48] Address already in use\r\n```\r\n\r\nUse available port if users not specify.\n", "before_files": [{"content": "# Copyright 2022-2023 XProbe Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file 
except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport asyncio\nimport logging\nimport socket\nfrom typing import Dict, Optional\n\nimport xoscar as xo\n\nfrom ..core.gradio import GradioApp\nfrom ..core.restful_api import RESTfulAPIActor\nfrom ..core.service import SupervisorActor\n\nlogger = logging.getLogger(\"xinference\")\n\n\nasync def start_supervisor_components(address: str, host: str, port: int):\n await xo.create_actor(SupervisorActor, address=address, uid=SupervisorActor.uid())\n gradio_block = GradioApp(address).build()\n # create a socket for RESTful API\n sockets = []\n sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n sock.bind((host, port))\n sockets.append(sock)\n restful_actor = await xo.create_actor(\n RESTfulAPIActor,\n address=address,\n uid=RESTfulAPIActor.uid(),\n sockets=sockets,\n gradio_block=gradio_block,\n )\n await restful_actor.serve()\n url = f\"http://{host}:{port}\"\n logger.info(f\"Server address: {url}\")\n return url\n\n\nasync def _start_supervisor(\n address: str, host: str, port: int, logging_conf: Optional[Dict] = None\n):\n pool = None\n try:\n pool = await xo.create_actor_pool(\n address=address, n_process=0, logging_conf=logging_conf\n )\n await start_supervisor_components(address=address, host=host, port=port)\n await pool.join()\n except asyncio.exceptions.CancelledError:\n if pool is not None:\n await pool.stop()\n\n\ndef main(*args, **kwargs):\n loop = asyncio.get_event_loop()\n task = loop.create_task(_start_supervisor(*args, **kwargs))\n\n try:\n loop.run_until_complete(task)\n except KeyboardInterrupt:\n task.cancel()\n loop.run_until_complete(task)\n # avoid displaying exception-unhandled warnings\n task.exception()\n", "path": "xinference/deploy/supervisor.py"}], "after_files": [{"content": "# Copyright 2022-2023 XProbe Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport asyncio\nimport logging\nimport socket\nfrom typing import Dict, Optional\n\nimport xoscar as xo\nfrom xoscar.utils import get_next_port\n\nfrom ..constants import XINFERENCE_DEFAULT_ENDPOINT_PORT\nfrom ..core.gradio import GradioApp\nfrom ..core.restful_api import RESTfulAPIActor\nfrom ..core.service import SupervisorActor\n\nlogger = logging.getLogger(\"xinference\")\n\n\nasync def start_supervisor_components(address: str, host: str, port: int):\n await xo.create_actor(SupervisorActor, address=address, uid=SupervisorActor.uid())\n gradio_block = GradioApp(address).build()\n # create a socket for RESTful API\n try:\n sockets = []\n sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n sock.bind((host, port))\n sockets.append(sock)\n except 
OSError:\n if port is XINFERENCE_DEFAULT_ENDPOINT_PORT:\n while True:\n try:\n sockets = []\n sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n port = get_next_port()\n sock.bind((host, port))\n sockets.append(sock)\n break\n except OSError:\n pass\n else:\n raise OSError\n\n restful_actor = await xo.create_actor(\n RESTfulAPIActor,\n address=address,\n uid=RESTfulAPIActor.uid(),\n sockets=sockets,\n gradio_block=gradio_block,\n )\n await restful_actor.serve()\n url = f\"http://{host}:{port}\"\n logger.info(f\"Server address: {url}\")\n return url\n\n\nasync def _start_supervisor(\n address: str, host: str, port: int, logging_conf: Optional[Dict] = None\n):\n pool = None\n try:\n pool = await xo.create_actor_pool(\n address=address, n_process=0, logging_conf=logging_conf\n )\n await start_supervisor_components(address=address, host=host, port=port)\n await pool.join()\n except asyncio.exceptions.CancelledError:\n if pool is not None:\n await pool.stop()\n\n\ndef main(*args, **kwargs):\n loop = asyncio.get_event_loop()\n task = loop.create_task(_start_supervisor(*args, **kwargs))\n\n try:\n loop.run_until_complete(task)\n except KeyboardInterrupt:\n task.cancel()\n loop.run_until_complete(task)\n # avoid displaying exception-unhandled warnings\n task.exception()\n", "path": "xinference/deploy/supervisor.py"}]}
| 1,543 | 368 |
gh_patches_debug_52467
|
rasdani/github-patches
|
git_diff
|
iterative__dvc-1859
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Repro: logger doesn't work correctly on exception
DVC version: 0.35.5+d80137,
Platform: Linux
Method of installation: pip install from git
https://github.com/iterative/dvc/blob/54072d70b542115a78a374fa702129b6959a1d02/dvc/command/repro.py#L50-L51
This lines should be:
```
except DvcException, msg:
logger.exception(msg)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dvc/command/repro.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import argparse
4 import os
5 import logging
6
7 from dvc.command.base import CmdBase, append_doc_link
8 from dvc.command.metrics import show_metrics
9 from dvc.command.status import CmdDataStatus
10 from dvc.exceptions import DvcException
11
12
13 logger = logging.getLogger(__name__)
14
15
16 class CmdRepro(CmdBase):
17 def run(self):
18 recursive = not self.args.single_item
19 saved_dir = os.path.realpath(os.curdir)
20 if self.args.cwd:
21 os.chdir(self.args.cwd)
22
23 # Dirty hack so the for loop below can at least enter once
24 if self.args.all_pipelines:
25 self.args.targets = [None]
26 elif not self.args.targets:
27 self.args.targets = self.default_targets
28
29 ret = 0
30 for target in self.args.targets:
31 try:
32 stages = self.repo.reproduce(
33 target,
34 recursive=recursive,
35 force=self.args.force,
36 dry=self.args.dry,
37 interactive=self.args.interactive,
38 pipeline=self.args.pipeline,
39 all_pipelines=self.args.all_pipelines,
40 ignore_build_cache=self.args.ignore_build_cache,
41 no_commit=self.args.no_commit,
42 )
43
44 if len(stages) == 0:
45 logger.info(CmdDataStatus.UP_TO_DATE_MSG)
46
47 if self.args.metrics:
48 metrics = self.repo.metrics.show()
49 show_metrics(metrics)
50 except DvcException:
51 logger.exception()
52 ret = 1
53 break
54
55 os.chdir(saved_dir)
56 return ret
57
58
59 def add_parser(subparsers, parent_parser):
60 REPRO_HELP = "Check for changes and reproduce DVC file and dependencies."
61 repro_parser = subparsers.add_parser(
62 "repro",
63 parents=[parent_parser],
64 description=append_doc_link(REPRO_HELP, "repro"),
65 help=REPRO_HELP,
66 formatter_class=argparse.RawDescriptionHelpFormatter,
67 )
68 repro_parser.add_argument(
69 "targets",
70 nargs="*",
71 help="DVC file to reproduce (default - 'Dvcfile').",
72 )
73 repro_parser.add_argument(
74 "-f",
75 "--force",
76 action="store_true",
77 default=False,
78 help="Reproduce even if dependencies were not changed.",
79 )
80 repro_parser.add_argument(
81 "-s",
82 "--single-item",
83 action="store_true",
84 default=False,
85 help="Reproduce only single data item without recursive dependencies "
86 "check.",
87 )
88 repro_parser.add_argument(
89 "-c",
90 "--cwd",
91 default=os.path.curdir,
92 help="Directory within your repo to reproduce from.",
93 )
94 repro_parser.add_argument(
95 "-m",
96 "--metrics",
97 action="store_true",
98 default=False,
99 help="Show metrics after reproduction.",
100 )
101 repro_parser.add_argument(
102 "--dry",
103 action="store_true",
104 default=False,
105 help="Only print the commands that would be executed without "
106 "actually executing.",
107 )
108 repro_parser.add_argument(
109 "-i",
110 "--interactive",
111 action="store_true",
112 default=False,
113 help="Ask for confirmation before reproducing each stage.",
114 )
115 repro_parser.add_argument(
116 "-p",
117 "--pipeline",
118 action="store_true",
119 default=False,
120 help="Reproduce the whole pipeline that the specified stage file "
121 "belongs to.",
122 )
123 repro_parser.add_argument(
124 "-P",
125 "--all-pipelines",
126 action="store_true",
127 default=False,
128 help="Reproduce all pipelines in the repo.",
129 )
130 repro_parser.add_argument(
131 "--ignore-build-cache",
132 action="store_true",
133 default=False,
134 help="Reproduce all descendants of a changed stage even if their "
135 "direct dependencies didn't change.",
136 )
137 repro_parser.add_argument(
138 "--no-commit",
139 action="store_true",
140 default=False,
141 help="Don't put files/directories into cache.",
142 )
143 repro_parser.set_defaults(func=CmdRepro)
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/dvc/command/repro.py b/dvc/command/repro.py
--- a/dvc/command/repro.py
+++ b/dvc/command/repro.py
@@ -48,7 +48,7 @@
metrics = self.repo.metrics.show()
show_metrics(metrics)
except DvcException:
- logger.exception()
+ logger.exception("")
ret = 1
break
|
{"golden_diff": "diff --git a/dvc/command/repro.py b/dvc/command/repro.py\n--- a/dvc/command/repro.py\n+++ b/dvc/command/repro.py\n@@ -48,7 +48,7 @@\n metrics = self.repo.metrics.show()\n show_metrics(metrics)\n except DvcException:\n- logger.exception()\n+ logger.exception(\"\")\n ret = 1\n break\n", "issue": "Repro: logger doesn't work correctly on exception\nDVC version: 0.35.5+d80137,\r\nPlatform: Linux\r\nMethod of installation: pip install from git\r\n\r\nhttps://github.com/iterative/dvc/blob/54072d70b542115a78a374fa702129b6959a1d02/dvc/command/repro.py#L50-L51 \r\n\r\nThis lines should be:\r\n```\r\nexcept DvcException, msg:\r\n logger.exception(msg)\r\n```\r\n\r\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport argparse\nimport os\nimport logging\n\nfrom dvc.command.base import CmdBase, append_doc_link\nfrom dvc.command.metrics import show_metrics\nfrom dvc.command.status import CmdDataStatus\nfrom dvc.exceptions import DvcException\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass CmdRepro(CmdBase):\n def run(self):\n recursive = not self.args.single_item\n saved_dir = os.path.realpath(os.curdir)\n if self.args.cwd:\n os.chdir(self.args.cwd)\n\n # Dirty hack so the for loop below can at least enter once\n if self.args.all_pipelines:\n self.args.targets = [None]\n elif not self.args.targets:\n self.args.targets = self.default_targets\n\n ret = 0\n for target in self.args.targets:\n try:\n stages = self.repo.reproduce(\n target,\n recursive=recursive,\n force=self.args.force,\n dry=self.args.dry,\n interactive=self.args.interactive,\n pipeline=self.args.pipeline,\n all_pipelines=self.args.all_pipelines,\n ignore_build_cache=self.args.ignore_build_cache,\n no_commit=self.args.no_commit,\n )\n\n if len(stages) == 0:\n logger.info(CmdDataStatus.UP_TO_DATE_MSG)\n\n if self.args.metrics:\n metrics = self.repo.metrics.show()\n show_metrics(metrics)\n except DvcException:\n logger.exception()\n ret = 1\n break\n\n os.chdir(saved_dir)\n return ret\n\n\ndef add_parser(subparsers, parent_parser):\n REPRO_HELP = \"Check for changes and reproduce DVC file and dependencies.\"\n repro_parser = subparsers.add_parser(\n \"repro\",\n parents=[parent_parser],\n description=append_doc_link(REPRO_HELP, \"repro\"),\n help=REPRO_HELP,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n )\n repro_parser.add_argument(\n \"targets\",\n nargs=\"*\",\n help=\"DVC file to reproduce (default - 'Dvcfile').\",\n )\n repro_parser.add_argument(\n \"-f\",\n \"--force\",\n action=\"store_true\",\n default=False,\n help=\"Reproduce even if dependencies were not changed.\",\n )\n repro_parser.add_argument(\n \"-s\",\n \"--single-item\",\n action=\"store_true\",\n default=False,\n help=\"Reproduce only single data item without recursive dependencies \"\n \"check.\",\n )\n repro_parser.add_argument(\n \"-c\",\n \"--cwd\",\n default=os.path.curdir,\n help=\"Directory within your repo to reproduce from.\",\n )\n repro_parser.add_argument(\n \"-m\",\n \"--metrics\",\n action=\"store_true\",\n default=False,\n help=\"Show metrics after reproduction.\",\n )\n repro_parser.add_argument(\n \"--dry\",\n action=\"store_true\",\n default=False,\n help=\"Only print the commands that would be executed without \"\n \"actually executing.\",\n )\n repro_parser.add_argument(\n \"-i\",\n \"--interactive\",\n action=\"store_true\",\n default=False,\n help=\"Ask for confirmation before reproducing each stage.\",\n )\n repro_parser.add_argument(\n \"-p\",\n \"--pipeline\",\n 
action=\"store_true\",\n default=False,\n help=\"Reproduce the whole pipeline that the specified stage file \"\n \"belongs to.\",\n )\n repro_parser.add_argument(\n \"-P\",\n \"--all-pipelines\",\n action=\"store_true\",\n default=False,\n help=\"Reproduce all pipelines in the repo.\",\n )\n repro_parser.add_argument(\n \"--ignore-build-cache\",\n action=\"store_true\",\n default=False,\n help=\"Reproduce all descendants of a changed stage even if their \"\n \"direct dependencies didn't change.\",\n )\n repro_parser.add_argument(\n \"--no-commit\",\n action=\"store_true\",\n default=False,\n help=\"Don't put files/directories into cache.\",\n )\n repro_parser.set_defaults(func=CmdRepro)\n", "path": "dvc/command/repro.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport argparse\nimport os\nimport logging\n\nfrom dvc.command.base import CmdBase, append_doc_link\nfrom dvc.command.metrics import show_metrics\nfrom dvc.command.status import CmdDataStatus\nfrom dvc.exceptions import DvcException\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass CmdRepro(CmdBase):\n def run(self):\n recursive = not self.args.single_item\n saved_dir = os.path.realpath(os.curdir)\n if self.args.cwd:\n os.chdir(self.args.cwd)\n\n # Dirty hack so the for loop below can at least enter once\n if self.args.all_pipelines:\n self.args.targets = [None]\n elif not self.args.targets:\n self.args.targets = self.default_targets\n\n ret = 0\n for target in self.args.targets:\n try:\n stages = self.repo.reproduce(\n target,\n recursive=recursive,\n force=self.args.force,\n dry=self.args.dry,\n interactive=self.args.interactive,\n pipeline=self.args.pipeline,\n all_pipelines=self.args.all_pipelines,\n ignore_build_cache=self.args.ignore_build_cache,\n no_commit=self.args.no_commit,\n )\n\n if len(stages) == 0:\n logger.info(CmdDataStatus.UP_TO_DATE_MSG)\n\n if self.args.metrics:\n metrics = self.repo.metrics.show()\n show_metrics(metrics)\n except DvcException:\n logger.exception(\"\")\n ret = 1\n break\n\n os.chdir(saved_dir)\n return ret\n\n\ndef add_parser(subparsers, parent_parser):\n REPRO_HELP = \"Check for changes and reproduce DVC file and dependencies.\"\n repro_parser = subparsers.add_parser(\n \"repro\",\n parents=[parent_parser],\n description=append_doc_link(REPRO_HELP, \"repro\"),\n help=REPRO_HELP,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n )\n repro_parser.add_argument(\n \"targets\",\n nargs=\"*\",\n help=\"DVC file to reproduce (default - 'Dvcfile').\",\n )\n repro_parser.add_argument(\n \"-f\",\n \"--force\",\n action=\"store_true\",\n default=False,\n help=\"Reproduce even if dependencies were not changed.\",\n )\n repro_parser.add_argument(\n \"-s\",\n \"--single-item\",\n action=\"store_true\",\n default=False,\n help=\"Reproduce only single data item without recursive dependencies \"\n \"check.\",\n )\n repro_parser.add_argument(\n \"-c\",\n \"--cwd\",\n default=os.path.curdir,\n help=\"Directory within your repo to reproduce from.\",\n )\n repro_parser.add_argument(\n \"-m\",\n \"--metrics\",\n action=\"store_true\",\n default=False,\n help=\"Show metrics after reproduction.\",\n )\n repro_parser.add_argument(\n \"--dry\",\n action=\"store_true\",\n default=False,\n help=\"Only print the commands that would be executed without \"\n \"actually executing.\",\n )\n repro_parser.add_argument(\n \"-i\",\n \"--interactive\",\n action=\"store_true\",\n default=False,\n help=\"Ask for confirmation before reproducing each stage.\",\n )\n 
repro_parser.add_argument(\n \"-p\",\n \"--pipeline\",\n action=\"store_true\",\n default=False,\n help=\"Reproduce the whole pipeline that the specified stage file \"\n \"belongs to.\",\n )\n repro_parser.add_argument(\n \"-P\",\n \"--all-pipelines\",\n action=\"store_true\",\n default=False,\n help=\"Reproduce all pipelines in the repo.\",\n )\n repro_parser.add_argument(\n \"--ignore-build-cache\",\n action=\"store_true\",\n default=False,\n help=\"Reproduce all descendants of a changed stage even if their \"\n \"direct dependencies didn't change.\",\n )\n repro_parser.add_argument(\n \"--no-commit\",\n action=\"store_true\",\n default=False,\n help=\"Don't put files/directories into cache.\",\n )\n repro_parser.set_defaults(func=CmdRepro)\n", "path": "dvc/command/repro.py"}]}
| 1,563 | 86 |
gh_patches_debug_1606
|
rasdani/github-patches
|
git_diff
|
Cloud-CV__EvalAI-3370
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Frontend V2] Fix the media assets endpoint
### Description
We recently moved to `https://evalai.s3.amazonaws.com/` endpoint for our media assets. Frontend v2 is still using `https://staging-evalai.s3.amazonaws.com/` endpoint. We should switch to new enpdoint in frontend v2.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `settings/staging.py`
Content:
```
1 from .prod import * # noqa: ignore=F405
2
3 ALLOWED_HOSTS = ["staging.eval.ai"]
4
5 CORS_ORIGIN_ALLOW_ALL = False
6
7 CORS_ORIGIN_WHITELIST = (
8 "https://staging-evalai.s3.amazonaws.com",
9 "https://staging.eval.ai",
10 "https://beta-staging.eval.ai",
11 )
12
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/settings/staging.py b/settings/staging.py
--- a/settings/staging.py
+++ b/settings/staging.py
@@ -5,6 +5,7 @@
CORS_ORIGIN_ALLOW_ALL = False
CORS_ORIGIN_WHITELIST = (
+ "https://evalai.s3.amazonaws.com",
"https://staging-evalai.s3.amazonaws.com",
"https://staging.eval.ai",
"https://beta-staging.eval.ai",
|
{"golden_diff": "diff --git a/settings/staging.py b/settings/staging.py\n--- a/settings/staging.py\n+++ b/settings/staging.py\n@@ -5,6 +5,7 @@\n CORS_ORIGIN_ALLOW_ALL = False\n \n CORS_ORIGIN_WHITELIST = (\n+ \"https://evalai.s3.amazonaws.com\",\n \"https://staging-evalai.s3.amazonaws.com\",\n \"https://staging.eval.ai\",\n \"https://beta-staging.eval.ai\",\n", "issue": "[Frontend V2] Fix the media assets endpoint\n### Description\r\n\r\nWe recently moved to `https://evalai.s3.amazonaws.com/` endpoint for our media assets. Frontend v2 is still using `https://staging-evalai.s3.amazonaws.com/` endpoint. We should switch to new enpdoint in frontend v2.\n", "before_files": [{"content": "from .prod import * # noqa: ignore=F405\n\nALLOWED_HOSTS = [\"staging.eval.ai\"]\n\nCORS_ORIGIN_ALLOW_ALL = False\n\nCORS_ORIGIN_WHITELIST = (\n \"https://staging-evalai.s3.amazonaws.com\",\n \"https://staging.eval.ai\",\n \"https://beta-staging.eval.ai\",\n)\n", "path": "settings/staging.py"}], "after_files": [{"content": "from .prod import * # noqa: ignore=F405\n\nALLOWED_HOSTS = [\"staging.eval.ai\"]\n\nCORS_ORIGIN_ALLOW_ALL = False\n\nCORS_ORIGIN_WHITELIST = (\n \"https://evalai.s3.amazonaws.com\",\n \"https://staging-evalai.s3.amazonaws.com\",\n \"https://staging.eval.ai\",\n \"https://beta-staging.eval.ai\",\n)\n", "path": "settings/staging.py"}]}
| 423 | 98 |
gh_patches_debug_1467
|
rasdani/github-patches
|
git_diff
|
ckan__ckan-7881
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Invalid session timeout value on CKAN 2.10 (logged out users unexpectedly)
## CKAN version
2.10
## Describe the bug
According to our config declaration for [`beaker.session.timeout`](https://github.com/ckan/ckan/blob/656a39de2e7ed0ce47e15080f0f5d42b66b4929b/ckan/config/config_declaration.yaml#L306):
> Defaults to never expiring.
But the defined default value is 600 :upside_down_face:
Apart from the inconsistency, this is problematic because now that the logged-in user id is stored in the session by Flask-login, this means that users are logged out every 10 minutes.
The fix is to default it to never expire as described on the docs (which is also the [Beaker default](https://beaker.readthedocs.io/en/latest/configuration.html#session-options)), but the problem is that I can set it to `None` because then Beaker complains that the value is not an int:
```
File "/home/adria/dev/pyenvs/gates/lib/python3.8/site-packages/beaker/util.py", line 290, in verify_rules
params[key] = verify_options(params[key], types, message)
File "/home/adria/dev/pyenvs/gates/lib/python3.8/site-packages/beaker/util.py", line 281, in verify_options
raise Exception(error)
Exception: Session timeout must be an integer.
```
This is because our config parsing does not support "int or None", and leaves the string "None" as the value. I guess the alternative is to put a really big number but would be good to handle it better.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckan/cli/shell.py`
Content:
```
1 # encoding: utf-8
2 import click
3 import logging
4
5 import ckan.model as model
6
7 from typing import Any, Mapping
8
9 from ckan.plugins import toolkit
10
11
12 log = logging.getLogger(__name__)
13
14
15 _banner = """
16 ****** Welcome to the CKAN shell ******
17
18 This session has some variables pre-populated:
19 - app (CKAN Application object)
20 - config (CKAN config dictionary)
21 - model (CKAN model module to access the Database)
22 - toolkit (CKAN toolkit module)
23 """
24
25
26 def ipython(namespace: Mapping[str, Any], banner: str) -> None:
27 import IPython
28 from traitlets.config.loader import Config
29
30 c = Config()
31 c.TerminalInteractiveShell.banner2 = banner # type: ignore
32
33 IPython.start_ipython([], user_ns=namespace, config=c)
34
35
36 def python(namespace: Mapping[str, Any], banner: str) -> None:
37 import code
38 code.interact(banner=banner, local=namespace)
39
40
41 @click.command()
42 @click.help_option("-h", "--help")
43 @click.pass_context
44 def shell(ctx: click.Context):
45 """Run an interactive IPython shell with the context of the
46 CKAN instance.
47
48 It will try to use IPython, if not installed it will callback
49 to the default Python's shell.
50 """
51
52 namespace = {
53 "app": ctx.obj.app._wsgi_app,
54 "model": model,
55 "config": ctx.obj.config,
56 "toolkit": toolkit,
57 }
58
59 try:
60 ipython(namespace, _banner)
61 except ImportError:
62 log.debug("`ipython` library is missing. Using default python shell.")
63 python(namespace, _banner)
64
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ckan/cli/shell.py b/ckan/cli/shell.py
--- a/ckan/cli/shell.py
+++ b/ckan/cli/shell.py
@@ -28,7 +28,7 @@
from traitlets.config.loader import Config
c = Config()
- c.TerminalInteractiveShell.banner2 = banner # type: ignore
+ c.TerminalInteractiveShell.banner2 = banner
IPython.start_ipython([], user_ns=namespace, config=c)
|
{"golden_diff": "diff --git a/ckan/cli/shell.py b/ckan/cli/shell.py\n--- a/ckan/cli/shell.py\n+++ b/ckan/cli/shell.py\n@@ -28,7 +28,7 @@\n from traitlets.config.loader import Config\n \n c = Config()\n- c.TerminalInteractiveShell.banner2 = banner # type: ignore\n+ c.TerminalInteractiveShell.banner2 = banner\n \n IPython.start_ipython([], user_ns=namespace, config=c)\n", "issue": "Invalid session timeout value on CKAN 2.10 (logged out users unexpectedly)\n## CKAN version\r\n2.10\r\n\r\n## Describe the bug\r\n\r\nAccording to our config declaration for [`beaker.session.timeout`](https://github.com/ckan/ckan/blob/656a39de2e7ed0ce47e15080f0f5d42b66b4929b/ckan/config/config_declaration.yaml#L306):\r\n\r\n> Defaults to never expiring.\r\n\r\nBut the defined default value is 600 :upside_down_face: \r\nApart from the inconsistency, this is problematic because now that the logged-in user id is stored in the session by Flask-login, this means that users are logged out every 10 minutes.\r\n\r\nThe fix is to default it to never expire as described on the docs (which is also the [Beaker default](https://beaker.readthedocs.io/en/latest/configuration.html#session-options)), but the problem is that I can set it to `None` because then Beaker complains that the value is not an int:\r\n\r\n```\r\n File \"/home/adria/dev/pyenvs/gates/lib/python3.8/site-packages/beaker/util.py\", line 290, in verify_rules\r\n params[key] = verify_options(params[key], types, message)\r\n File \"/home/adria/dev/pyenvs/gates/lib/python3.8/site-packages/beaker/util.py\", line 281, in verify_options\r\n raise Exception(error)\r\nException: Session timeout must be an integer.\r\n```\r\nThis is because our config parsing does not support \"int or None\", and leaves the string \"None\" as the value. I guess the alternative is to put a really big number but would be good to handle it better.\r\n\n", "before_files": [{"content": "# encoding: utf-8\nimport click\nimport logging\n\nimport ckan.model as model\n\nfrom typing import Any, Mapping\n\nfrom ckan.plugins import toolkit\n\n\nlog = logging.getLogger(__name__)\n\n\n_banner = \"\"\"\n****** Welcome to the CKAN shell ******\n\nThis session has some variables pre-populated:\n - app (CKAN Application object)\n - config (CKAN config dictionary)\n - model (CKAN model module to access the Database)\n - toolkit (CKAN toolkit module)\n \"\"\"\n\n\ndef ipython(namespace: Mapping[str, Any], banner: str) -> None:\n import IPython\n from traitlets.config.loader import Config\n\n c = Config()\n c.TerminalInteractiveShell.banner2 = banner # type: ignore\n\n IPython.start_ipython([], user_ns=namespace, config=c)\n\n\ndef python(namespace: Mapping[str, Any], banner: str) -> None:\n import code\n code.interact(banner=banner, local=namespace)\n\n\[email protected]()\[email protected]_option(\"-h\", \"--help\")\[email protected]_context\ndef shell(ctx: click.Context):\n \"\"\"Run an interactive IPython shell with the context of the\n CKAN instance.\n\n It will try to use IPython, if not installed it will callback\n to the default Python's shell.\n \"\"\"\n\n namespace = {\n \"app\": ctx.obj.app._wsgi_app,\n \"model\": model,\n \"config\": ctx.obj.config,\n \"toolkit\": toolkit,\n }\n\n try:\n ipython(namespace, _banner)\n except ImportError:\n log.debug(\"`ipython` library is missing. 
Using default python shell.\")\n python(namespace, _banner)\n", "path": "ckan/cli/shell.py"}], "after_files": [{"content": "# encoding: utf-8\nimport click\nimport logging\n\nimport ckan.model as model\n\nfrom typing import Any, Mapping\n\nfrom ckan.plugins import toolkit\n\n\nlog = logging.getLogger(__name__)\n\n\n_banner = \"\"\"\n****** Welcome to the CKAN shell ******\n\nThis session has some variables pre-populated:\n - app (CKAN Application object)\n - config (CKAN config dictionary)\n - model (CKAN model module to access the Database)\n - toolkit (CKAN toolkit module)\n \"\"\"\n\n\ndef ipython(namespace: Mapping[str, Any], banner: str) -> None:\n import IPython\n from traitlets.config.loader import Config\n\n c = Config()\n c.TerminalInteractiveShell.banner2 = banner\n\n IPython.start_ipython([], user_ns=namespace, config=c)\n\n\ndef python(namespace: Mapping[str, Any], banner: str) -> None:\n import code\n code.interact(banner=banner, local=namespace)\n\n\[email protected]()\[email protected]_option(\"-h\", \"--help\")\[email protected]_context\ndef shell(ctx: click.Context):\n \"\"\"Run an interactive IPython shell with the context of the\n CKAN instance.\n\n It will try to use IPython, if not installed it will callback\n to the default Python's shell.\n \"\"\"\n\n namespace = {\n \"app\": ctx.obj.app._wsgi_app,\n \"model\": model,\n \"config\": ctx.obj.config,\n \"toolkit\": toolkit,\n }\n\n try:\n ipython(namespace, _banner)\n except ImportError:\n log.debug(\"`ipython` library is missing. Using default python shell.\")\n python(namespace, _banner)\n", "path": "ckan/cli/shell.py"}]}
| 1,135 | 111 |
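The patch that closes the row above simply drops a `# type: ignore` from the traitlets `Config` assignment used to embed IPython. For reference, a minimal standalone sketch of that embedding pattern — the banner text and namespace here are illustrative assumptions, not CKAN's real values:

```python
# Minimal sketch of embedding IPython with a custom banner via a traitlets Config.
import IPython
from traitlets.config.loader import Config

c = Config()
c.TerminalInteractiveShell.banner2 = "*** demo shell: `answer` is pre-populated ***"

# Starts an interactive shell whose global namespace includes the given variables.
IPython.start_ipython([], user_ns={"answer": 42}, config=c)
```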
gh_patches_debug_2699
|
rasdani/github-patches
|
git_diff
|
pwr-Solaar__Solaar-1003
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Please create an AppData file for Solaar
Please consider writing and installing an AppData file with the application description and some screenshots, else Solaar looks really bad in the GNOME and KDE Software Centers. We'd love to showcase more applications, but without the extra data file we can't. See http://people.freedesktop.org/~hughsient/appdata/ for details; thanks!
Richard
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python3
2
3 from glob import glob as _glob
4
5 try:
6 from setuptools import setup
7 except ImportError:
8 from distutils.core import setup
9
10 # from solaar import NAME, __version__
11 __version__ = '1.0.4'
12 NAME = 'Solaar'
13
14
15 def _data_files():
16 from os.path import dirname as _dirname
17
18 yield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')
19 yield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png')
20 yield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']
21
22 for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):
23 yield _dirname(mo), [mo]
24
25 yield 'share/applications', ['share/applications/solaar.desktop']
26 yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']
27
28 del _dirname
29
30
31 setup(
32 name=NAME.lower(),
33 version=__version__,
34 description='Linux devices manager for the Logitech Unifying Receiver.',
35 long_description='''
36 Solaar is a Linux device manager for Logitech's Unifying Receiver peripherals.
37 It is able to pair/unpair devices with the receiver, for many devices show
38 battery status, and show and modify some of the modifiable features of devices.
39 '''.strip(),
40 author='Daniel Pavel',
41 license='GPLv2',
42 url='http://pwr-solaar.github.io/Solaar/',
43 classifiers=[
44 'Development Status :: 4 - Beta',
45 'Environment :: X11 Applications :: GTK',
46 'Environment :: Console',
47 'Intended Audience :: End Users/Desktop',
48 'License :: DFSG approved',
49 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',
50 'Natural Language :: English',
51 'Programming Language :: Python :: 3 :: Only',
52 'Operating System :: POSIX :: Linux',
53 'Topic :: Utilities',
54 ],
55 platforms=['linux'],
56
57 # sudo apt install python-gi python3-gi \
58 # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-ayatanaappindicator3-0.1
59 # os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],
60 python_requires='>=3.6',
61 install_requires=[
62 'pyudev (>= 0.13)',
63 'PyYAML (>= 5.1)',
64 'python-xlib (>= 0.27)',
65 'pynput (>= 1.7.0)',
66 'psutil (>= 5.7.3)',
67 ],
68 package_dir={'': 'lib'},
69 packages=['hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],
70 data_files=list(_data_files()),
71 scripts=_glob('bin/*'),
72 )
73
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -24,6 +24,7 @@
yield 'share/applications', ['share/applications/solaar.desktop']
yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']
+ yield 'share/metainfo/io.github.pwr_solaar.solaar.metainfo.xml', ['share/solaar/metainfo.xml']
del _dirname
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -24,6 +24,7 @@\n \n yield 'share/applications', ['share/applications/solaar.desktop']\n yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']\n+ yield 'share/metainfo/io.github.pwr_solaar.solaar.metainfo.xml', ['share/solaar/metainfo.xml']\n \n del _dirname\n", "issue": "Please create an AppData file for Solaar\nPlease consider writing and installing an AppData file with the application description and some screenshots, else Solaar looks really bad in the GNOME and KDE Software Centers. We'd love to showcase more applications, but without the extra data file we can't. See http://people.freedesktop.org/~hughsient/appdata/ for details; thanks!\n\nRichard\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nfrom glob import glob as _glob\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\n# from solaar import NAME, __version__\n__version__ = '1.0.4'\nNAME = 'Solaar'\n\n\ndef _data_files():\n from os.path import dirname as _dirname\n\n yield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')\n yield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png')\n yield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']\n\n for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):\n yield _dirname(mo), [mo]\n\n yield 'share/applications', ['share/applications/solaar.desktop']\n yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']\n\n del _dirname\n\n\nsetup(\n name=NAME.lower(),\n version=__version__,\n description='Linux devices manager for the Logitech Unifying Receiver.',\n long_description='''\nSolaar is a Linux device manager for Logitech's Unifying Receiver peripherals.\nIt is able to pair/unpair devices with the receiver, for many devices show\nbattery status, and show and modify some of the modifiable features of devices.\n'''.strip(),\n author='Daniel Pavel',\n license='GPLv2',\n url='http://pwr-solaar.github.io/Solaar/',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: X11 Applications :: GTK',\n 'Environment :: Console',\n 'Intended Audience :: End Users/Desktop',\n 'License :: DFSG approved',\n 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3 :: Only',\n 'Operating System :: POSIX :: Linux',\n 'Topic :: Utilities',\n ],\n platforms=['linux'],\n\n # sudo apt install python-gi python3-gi \\\n # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-ayatanaappindicator3-0.1\n # os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],\n python_requires='>=3.6',\n install_requires=[\n 'pyudev (>= 0.13)',\n 'PyYAML (>= 5.1)',\n 'python-xlib (>= 0.27)',\n 'pynput (>= 1.7.0)',\n 'psutil (>= 5.7.3)',\n ],\n package_dir={'': 'lib'},\n packages=['hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],\n data_files=list(_data_files()),\n scripts=_glob('bin/*'),\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nfrom glob import glob as _glob\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\n# from solaar import NAME, __version__\n__version__ = '1.0.4'\nNAME = 'Solaar'\n\n\ndef _data_files():\n from os.path import dirname as _dirname\n\n yield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')\n yield 'share/solaar/icons', 
_glob('share/solaar/icons/light_*.png')\n yield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']\n\n for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):\n yield _dirname(mo), [mo]\n\n yield 'share/applications', ['share/applications/solaar.desktop']\n yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']\n yield 'share/metainfo/io.github.pwr_solaar.solaar.metainfo.xml', ['share/solaar/metainfo.xml']\n\n del _dirname\n\n\nsetup(\n name=NAME.lower(),\n version=__version__,\n description='Linux devices manager for the Logitech Unifying Receiver.',\n long_description='''\nSolaar is a Linux device manager for Logitech's Unifying Receiver peripherals.\nIt is able to pair/unpair devices with the receiver, for many devices show\nbattery status, and show and modify some of the modifiable features of devices.\n'''.strip(),\n author='Daniel Pavel',\n license='GPLv2',\n url='http://pwr-solaar.github.io/Solaar/',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: X11 Applications :: GTK',\n 'Environment :: Console',\n 'Intended Audience :: End Users/Desktop',\n 'License :: DFSG approved',\n 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3 :: Only',\n 'Operating System :: POSIX :: Linux',\n 'Topic :: Utilities',\n ],\n platforms=['linux'],\n\n # sudo apt install python-gi python3-gi \\\n # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-ayatanaappindicator3-0.1\n # os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],\n python_requires='>=3.6',\n install_requires=[\n 'pyudev (>= 0.13)',\n 'PyYAML (>= 5.1)',\n 'python-xlib (>= 0.27)',\n 'pynput (>= 1.7.0)',\n 'psutil (>= 5.7.3)',\n ],\n package_dir={'': 'lib'},\n packages=['hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],\n data_files=list(_data_files()),\n scripts=_glob('bin/*'),\n)\n", "path": "setup.py"}]}
| 1,143 | 115 |
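The golden diff above adds an AppStream metainfo file to `data_files` in Solaar's `setup.py`. As a quick illustration of how `setuptools` interprets such entries — the project name and paths below are made-up assumptions, not Solaar's actual layout:

```python
# Hedged sketch: each data_files entry is a (target directory, [source files]) pair;
# relative target directories are installed under the installation prefix.
from setuptools import setup

setup(
    name="example-app",
    version="0.1",
    data_files=[
        ("share/applications", ["share/example-app/example-app.desktop"]),
        ("share/metainfo", ["share/example-app/io.example.example_app.metainfo.xml"]),
    ],
)
```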
gh_patches_debug_1447
|
rasdani/github-patches
|
git_diff
|
strawberry-graphql__strawberry-945
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Can't get DataLoader to work
Hello! I'm trying examples from this page https://strawberry.rocks/docs/guides/dataloaders.
Running the following code on Python 3.8:
```python
import strawberry
from strawberry.dataloader import DataLoader
from typing import List
@strawberry.type
class User:
id: strawberry.ID
async def load_users(keys) -> List[User]:
return [User(id=key) for key in keys]
loader = DataLoader(load_fn=load_users)
@strawberry.type
class Query:
@strawberry.field
async def get_user(self, id: strawberry.ID) -> User:
return await loader.load(id)
schema = strawberry.Schema(query=Query)
```
I get the following error message:
```
Task <Task pending name='Task-8' coro=<ExecutionContext.resolve_field.<locals>.await_result()
running at /Users/-/Documents/src/dataservice-poc/virtualenv/lib/python3.8/site-packages/graphql/execution/execute.py:625>
cb=[gather.<locals>._done_callback() at /usr/local/Cellar/[email protected]/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/asyncio/tasks.py:758]>
got Future <Future pending> attached to a different loop
```
When I try my own code (which is pretty much the same, but the loader is real - it reads data from the db) I get this: "RuntimeError: await wasn't used with future".
I'm stuck and don't really know where to look. I thought Strawberry was supposed to manage async processing, but it looks like it doesn't work that way. Any help would be greatly appreciated.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `strawberry/cli/commands/server.py`
Content:
```
1 import importlib
2 import sys
3
4 import click
5 import hupper
6 import uvicorn
7 from starlette.applications import Starlette
8 from starlette.middleware.cors import CORSMiddleware
9
10 from strawberry import Schema
11 from strawberry.asgi import GraphQL
12 from strawberry.utils.importer import import_module_symbol
13
14
15 @click.command("server", short_help="Starts debug server")
16 @click.argument("schema", type=str)
17 @click.option("-h", "--host", default="0.0.0.0", type=str)
18 @click.option("-p", "--port", default=8000, type=int)
19 @click.option(
20 "--app-dir",
21 default=".",
22 type=str,
23 show_default=True,
24 help=(
25 "Look for the module in the specified directory, by adding this to the "
26 "PYTHONPATH. Defaults to the current working directory. "
27 "Works the same as `--app-dir` in uvicorn."
28 ),
29 )
30 def server(schema, host, port, app_dir):
31 sys.path.insert(0, app_dir)
32
33 try:
34 schema_symbol = import_module_symbol(schema, default_symbol_name="schema")
35 except (ImportError, AttributeError) as exc:
36 message = str(exc)
37 raise click.BadArgumentUsage(message)
38
39 if not isinstance(schema_symbol, Schema):
40 message = "The `schema` must be an instance of strawberry.Schema"
41 raise click.BadArgumentUsage(message)
42
43 reloader = hupper.start_reloader("strawberry.cli.run", verbose=False)
44 schema_module = importlib.import_module(schema_symbol.__module__)
45 reloader.watch_files([schema_module.__file__])
46
47 app = Starlette(debug=True)
48 app.add_middleware(
49 CORSMiddleware, allow_headers=["*"], allow_origins=["*"], allow_methods=["*"]
50 )
51
52 graphql_app = GraphQL(schema_symbol, debug=True)
53
54 paths = ["/", "/graphql"]
55 for path in paths:
56 app.add_route(path, graphql_app)
57 app.add_websocket_route(path, graphql_app)
58
59 print(f"Running strawberry on http://{host}:{port}/ 🍓")
60 uvicorn.run(app, host=host, port=port, log_level="error")
61
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/strawberry/cli/commands/server.py b/strawberry/cli/commands/server.py
--- a/strawberry/cli/commands/server.py
+++ b/strawberry/cli/commands/server.py
@@ -57,4 +57,4 @@
app.add_websocket_route(path, graphql_app)
print(f"Running strawberry on http://{host}:{port}/ 🍓")
- uvicorn.run(app, host=host, port=port, log_level="error")
+ uvicorn.run(app, loop="none", host=host, port=port, log_level="error")
|
{"golden_diff": "diff --git a/strawberry/cli/commands/server.py b/strawberry/cli/commands/server.py\n--- a/strawberry/cli/commands/server.py\n+++ b/strawberry/cli/commands/server.py\n@@ -57,4 +57,4 @@\n app.add_websocket_route(path, graphql_app)\n \n print(f\"Running strawberry on http://{host}:{port}/ \ud83c\udf53\")\n- uvicorn.run(app, host=host, port=port, log_level=\"error\")\n+ uvicorn.run(app, loop=\"none\", host=host, port=port, log_level=\"error\")\n", "issue": "Can't get DataLoader to work\nHello! I'm trying examples from this page https://strawberry.rocks/docs/guides/dataloaders.\r\nRunning the following code on Python 3.8:\r\n```python\r\nimport strawberry\r\nfrom strawberry.dataloader import DataLoader\r\nfrom typing import List\r\n\r\n\r\[email protected]\r\nclass User:\r\n id: strawberry.ID\r\n\r\n\r\nasync def load_users(keys) -> List[User]:\r\n return [User(id=key) for key in keys]\r\n\r\nloader = DataLoader(load_fn=load_users)\r\n\r\n\r\[email protected]\r\nclass Query:\r\n @strawberry.field\r\n async def get_user(self, id: strawberry.ID) -> User:\r\n return await loader.load(id)\r\n\r\n\r\nschema = strawberry.Schema(query=Query)\r\n```\r\nI get the following error message:\r\n```\r\nTask <Task pending name='Task-8' coro=<ExecutionContext.resolve_field.<locals>.await_result() \r\nrunning at /Users/-/Documents/src/dataservice-poc/virtualenv/lib/python3.8/site-packages/graphql/execution/execute.py:625> \r\ncb=[gather.<locals>._done_callback() at /usr/local/Cellar/[email protected]/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/asyncio/tasks.py:758]> \r\ngot Future <Future pending> attached to a different loop\r\n```\r\n\r\nWhen I try my own code (which is pretty much the same, but the loader is real - it reads data from the db) I get this: \"RuntimeError: await wasn't used with future\".\r\n\r\nI'm stuck, don't really know where to look. I thought Strawberry is supposed to manage async processing, but looks like it doesn't work that way. Any help would be greatly appreciated.\n", "before_files": [{"content": "import importlib\nimport sys\n\nimport click\nimport hupper\nimport uvicorn\nfrom starlette.applications import Starlette\nfrom starlette.middleware.cors import CORSMiddleware\n\nfrom strawberry import Schema\nfrom strawberry.asgi import GraphQL\nfrom strawberry.utils.importer import import_module_symbol\n\n\[email protected](\"server\", short_help=\"Starts debug server\")\[email protected](\"schema\", type=str)\[email protected](\"-h\", \"--host\", default=\"0.0.0.0\", type=str)\[email protected](\"-p\", \"--port\", default=8000, type=int)\[email protected](\n \"--app-dir\",\n default=\".\",\n type=str,\n show_default=True,\n help=(\n \"Look for the module in the specified directory, by adding this to the \"\n \"PYTHONPATH. Defaults to the current working directory. 
\"\n \"Works the same as `--app-dir` in uvicorn.\"\n ),\n)\ndef server(schema, host, port, app_dir):\n sys.path.insert(0, app_dir)\n\n try:\n schema_symbol = import_module_symbol(schema, default_symbol_name=\"schema\")\n except (ImportError, AttributeError) as exc:\n message = str(exc)\n raise click.BadArgumentUsage(message)\n\n if not isinstance(schema_symbol, Schema):\n message = \"The `schema` must be an instance of strawberry.Schema\"\n raise click.BadArgumentUsage(message)\n\n reloader = hupper.start_reloader(\"strawberry.cli.run\", verbose=False)\n schema_module = importlib.import_module(schema_symbol.__module__)\n reloader.watch_files([schema_module.__file__])\n\n app = Starlette(debug=True)\n app.add_middleware(\n CORSMiddleware, allow_headers=[\"*\"], allow_origins=[\"*\"], allow_methods=[\"*\"]\n )\n\n graphql_app = GraphQL(schema_symbol, debug=True)\n\n paths = [\"/\", \"/graphql\"]\n for path in paths:\n app.add_route(path, graphql_app)\n app.add_websocket_route(path, graphql_app)\n\n print(f\"Running strawberry on http://{host}:{port}/ \ud83c\udf53\")\n uvicorn.run(app, host=host, port=port, log_level=\"error\")\n", "path": "strawberry/cli/commands/server.py"}], "after_files": [{"content": "import importlib\nimport sys\n\nimport click\nimport hupper\nimport uvicorn\nfrom starlette.applications import Starlette\nfrom starlette.middleware.cors import CORSMiddleware\n\nfrom strawberry import Schema\nfrom strawberry.asgi import GraphQL\nfrom strawberry.utils.importer import import_module_symbol\n\n\[email protected](\"server\", short_help=\"Starts debug server\")\[email protected](\"schema\", type=str)\[email protected](\"-h\", \"--host\", default=\"0.0.0.0\", type=str)\[email protected](\"-p\", \"--port\", default=8000, type=int)\[email protected](\n \"--app-dir\",\n default=\".\",\n type=str,\n show_default=True,\n help=(\n \"Look for the module in the specified directory, by adding this to the \"\n \"PYTHONPATH. Defaults to the current working directory. \"\n \"Works the same as `--app-dir` in uvicorn.\"\n ),\n)\ndef server(schema, host, port, app_dir):\n sys.path.insert(0, app_dir)\n\n try:\n schema_symbol = import_module_symbol(schema, default_symbol_name=\"schema\")\n except (ImportError, AttributeError) as exc:\n message = str(exc)\n raise click.BadArgumentUsage(message)\n\n if not isinstance(schema_symbol, Schema):\n message = \"The `schema` must be an instance of strawberry.Schema\"\n raise click.BadArgumentUsage(message)\n\n reloader = hupper.start_reloader(\"strawberry.cli.run\", verbose=False)\n schema_module = importlib.import_module(schema_symbol.__module__)\n reloader.watch_files([schema_module.__file__])\n\n app = Starlette(debug=True)\n app.add_middleware(\n CORSMiddleware, allow_headers=[\"*\"], allow_origins=[\"*\"], allow_methods=[\"*\"]\n )\n\n graphql_app = GraphQL(schema_symbol, debug=True)\n\n paths = [\"/\", \"/graphql\"]\n for path in paths:\n app.add_route(path, graphql_app)\n app.add_websocket_route(path, graphql_app)\n\n print(f\"Running strawberry on http://{host}:{port}/ \ud83c\udf53\")\n uvicorn.run(app, loop=\"none\", host=host, port=port, log_level=\"error\")\n", "path": "strawberry/cli/commands/server.py"}]}
| 1,217 | 133 |
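The "attached to a different loop" failure in this row is the classic symptom of an asyncio object being created under one event loop and awaited under another; the `loop="none"` change appears to avoid it by leaving the default loop in place instead of letting uvicorn install a new one. A standalone sketch of that failure mode, independent of Strawberry and uvicorn:

```python
# Sketch (not from the dataset row): a Future bound to one event loop, awaited by a
# task running on a different loop, raises the same class of RuntimeError.
import asyncio

loop_a = asyncio.new_event_loop()
fut = loop_a.create_future()  # bound to loop_a

async def await_foreign_future():
    return await fut  # the task created below runs on loop_b

loop_b = asyncio.new_event_loop()
try:
    loop_b.run_until_complete(await_foreign_future())
except RuntimeError as exc:
    print(exc)  # "... got Future <Future pending> attached to a different loop"
finally:
    loop_a.close()
    loop_b.close()
```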
gh_patches_debug_24265
|
rasdani/github-patches
|
git_diff
|
encode__uvicorn-469
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add py3.8 to the test matrix
Adds py3.8 to the test matrix
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 import os
5 import re
6 import sys
7 import platform
8
9 from setuptools import setup
10
11
12 def get_version(package):
13 """
14 Return package version as listed in `__version__` in `init.py`.
15 """
16 path = os.path.join(package, '__init__.py')
17 init_py = open(path, 'r', encoding='utf8').read()
18 return re.search("__version__ = ['\"]([^'\"]+)['\"]", init_py).group(1)
19
20
21 def get_long_description():
22 """
23 Return the README.
24 """
25 return open('README.md', 'r', encoding='utf8').read()
26
27
28 def get_packages(package):
29 """
30 Return root package and all sub-packages.
31 """
32 return [dirpath
33 for dirpath, dirnames, filenames in os.walk(package)
34 if os.path.exists(os.path.join(dirpath, '__init__.py'))]
35
36
37 env_marker = (
38 "sys_platform != 'win32'"
39 " and sys_platform != 'cygwin'"
40 " and platform_python_implementation != 'pypy'"
41 )
42
43 requirements = [
44 "click==7.*",
45 "h11==0.8.*",
46 "websockets==8.*",
47 "httptools==0.0.13 ;" + env_marker,
48 "uvloop==0.* ;" + env_marker,
49 ]
50
51
52 setup(
53 name='uvicorn',
54 version=get_version('uvicorn'),
55 url='https://github.com/encode/uvicorn',
56 license='BSD',
57 description='The lightning-fast ASGI server.',
58 long_description=get_long_description(),
59 long_description_content_type='text/markdown',
60 author='Tom Christie',
61 author_email='[email protected]',
62 packages=get_packages('uvicorn'),
63 install_requires=requirements,
64 data_files = [("", ["LICENSE.md"])],
65 classifiers=[
66 'Development Status :: 3 - Alpha',
67 'Environment :: Web Environment',
68 'Intended Audience :: Developers',
69 'License :: OSI Approved :: BSD License',
70 'Operating System :: OS Independent',
71 'Topic :: Internet :: WWW/HTTP',
72 'Programming Language :: Python :: 3',
73 'Programming Language :: Python :: 3.6',
74 'Programming Language :: Python :: 3.7',
75 'Programming Language :: Python :: Implementation :: CPython',
76 'Programming Language :: Python :: Implementation :: PyPy',
77 ],
78 entry_points="""
79 [console_scripts]
80 uvicorn=uvicorn.main:main
81 """
82 )
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -45,7 +45,7 @@
"h11==0.8.*",
"websockets==8.*",
"httptools==0.0.13 ;" + env_marker,
- "uvloop==0.* ;" + env_marker,
+ "uvloop==0.14.0rc2 ;" + env_marker,
]
@@ -63,7 +63,7 @@
install_requires=requirements,
data_files = [("", ["LICENSE.md"])],
classifiers=[
- 'Development Status :: 3 - Alpha',
+ 'Development Status :: 4 - Beta',
'Environment :: Web Environment',
'Intended Audience :: Developers',
'License :: OSI Approved :: BSD License',
@@ -72,6 +72,7 @@
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
+ 'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: Implementation :: CPython',
'Programming Language :: Python :: Implementation :: PyPy',
],
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -45,7 +45,7 @@\n \"h11==0.8.*\",\n \"websockets==8.*\",\n \"httptools==0.0.13 ;\" + env_marker,\n- \"uvloop==0.* ;\" + env_marker,\n+ \"uvloop==0.14.0rc2 ;\" + env_marker,\n ]\n \n \n@@ -63,7 +63,7 @@\n install_requires=requirements,\n data_files = [(\"\", [\"LICENSE.md\"])],\n classifiers=[\n- 'Development Status :: 3 - Alpha',\n+ 'Development Status :: 4 - Beta',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n@@ -72,6 +72,7 @@\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n+ 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n ],\n", "issue": "Add py3.8 to the test matrix\nAdds py3.8 to the test matrix\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport os\nimport re\nimport sys\nimport platform\n\nfrom setuptools import setup\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `init.py`.\n \"\"\"\n path = os.path.join(package, '__init__.py')\n init_py = open(path, 'r', encoding='utf8').read()\n return re.search(\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py).group(1)\n\n\ndef get_long_description():\n \"\"\"\n Return the README.\n \"\"\"\n return open('README.md', 'r', encoding='utf8').read()\n\n\ndef get_packages(package):\n \"\"\"\n Return root package and all sub-packages.\n \"\"\"\n return [dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n\nenv_marker = (\n \"sys_platform != 'win32'\"\n \" and sys_platform != 'cygwin'\"\n \" and platform_python_implementation != 'pypy'\"\n)\n\nrequirements = [\n \"click==7.*\",\n \"h11==0.8.*\",\n \"websockets==8.*\",\n \"httptools==0.0.13 ;\" + env_marker,\n \"uvloop==0.* ;\" + env_marker,\n]\n\n\nsetup(\n name='uvicorn',\n version=get_version('uvicorn'),\n url='https://github.com/encode/uvicorn',\n license='BSD',\n description='The lightning-fast ASGI server.',\n long_description=get_long_description(),\n long_description_content_type='text/markdown',\n author='Tom Christie',\n author_email='[email protected]',\n packages=get_packages('uvicorn'),\n install_requires=requirements,\n data_files = [(\"\", [\"LICENSE.md\"])],\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Topic :: Internet :: WWW/HTTP',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n ],\n entry_points=\"\"\"\n [console_scripts]\n uvicorn=uvicorn.main:main\n \"\"\"\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport os\nimport re\nimport sys\nimport platform\n\nfrom setuptools import setup\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `init.py`.\n \"\"\"\n path = os.path.join(package, '__init__.py')\n init_py = open(path, 'r', encoding='utf8').read()\n return re.search(\"__version__ = 
['\\\"]([^'\\\"]+)['\\\"]\", init_py).group(1)\n\n\ndef get_long_description():\n \"\"\"\n Return the README.\n \"\"\"\n return open('README.md', 'r', encoding='utf8').read()\n\n\ndef get_packages(package):\n \"\"\"\n Return root package and all sub-packages.\n \"\"\"\n return [dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n\nenv_marker = (\n \"sys_platform != 'win32'\"\n \" and sys_platform != 'cygwin'\"\n \" and platform_python_implementation != 'pypy'\"\n)\n\nrequirements = [\n \"click==7.*\",\n \"h11==0.8.*\",\n \"websockets==8.*\",\n \"httptools==0.0.13 ;\" + env_marker,\n \"uvloop==0.14.0rc2 ;\" + env_marker,\n]\n\n\nsetup(\n name='uvicorn',\n version=get_version('uvicorn'),\n url='https://github.com/encode/uvicorn',\n license='BSD',\n description='The lightning-fast ASGI server.',\n long_description=get_long_description(),\n long_description_content_type='text/markdown',\n author='Tom Christie',\n author_email='[email protected]',\n packages=get_packages('uvicorn'),\n install_requires=requirements,\n data_files = [(\"\", [\"LICENSE.md\"])],\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Topic :: Internet :: WWW/HTTP',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n ],\n entry_points=\"\"\"\n [console_scripts]\n uvicorn=uvicorn.main:main\n \"\"\"\n)\n", "path": "setup.py"}]}
| 985 | 269 |
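Beyond the classifier and pin changes, the `setup.py` in this row gates `uvloop` and `httptools` behind a PEP 508 environment-marker string. A small sketch of evaluating such a marker with the third-party `packaging` library (an assumption for illustration; the row itself does not use it):

```python
# Hedged sketch: parsing and evaluating a PEP 508 environment marker like the one above.
from packaging.markers import Marker

marker = Marker(
    "sys_platform != 'win32' and sys_platform != 'cygwin' "
    "and platform_python_implementation != 'pypy'"
)

# evaluate() checks the marker against the current interpreter's environment.
print(marker.evaluate())
```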
gh_patches_debug_12259
|
rasdani/github-patches
|
git_diff
|
Lightning-Universe__lightning-flash-1404
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Docs for using `output` with `ObjectDetector`
## 🐛 Bug
I'm trying to use the FiftyOneDetectionLabelsOutput output for object detection, but I got the following error
model = model_type.model(backbone=backbone, num_classes=num_classes, **kwargs)
TypeError: model() got an unexpected keyword argument 'output'
How can i correctly setup the output?
### To Reproduce
from flash.image.detection.output import FiftyOneDetectionLabelsOutput
from flash.image import ObjectDetector
out= FiftyOneDetectionLabelsOutput(threshold=0.7)
objDetc=ObjectDetector(num_classes=81,backbone="medium",head="yolov5",output=out)
### Expected behavior
The class should initialize correctly, as described in the documentation
https://lightning-flash.readthedocs.io/en/latest/api/generated/flash.image.detection.model.ObjectDetector.html#flash.image.detection.model.ObjectDetector
There is an `output` parameter in the description; maybe it is an old value
### Environment
- OS (e.g., Linux): Linux
- Python version: 3.8
- PyTorch/Lightning/Flash Version (e.g., 1.10/1.5/0.7): 1.10 / 1.5.8 / 0.7.4
- GPU models and configuration: cuda 11.3
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `flash/image/detection/model.py`
Content:
```
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import Any, Dict, List, Optional, Type, Union
15
16 from flash.core.adapter import AdapterTask
17 from flash.core.data.io.input import ServeInput
18 from flash.core.data.io.output import Output
19 from flash.core.integrations.icevision.transforms import IceVisionInputTransform
20 from flash.core.model import Task
21 from flash.core.registry import FlashRegistry
22 from flash.core.serve import Composition
23 from flash.core.utilities.imports import requires
24 from flash.core.utilities.types import INPUT_TRANSFORM_TYPE, LR_SCHEDULER_TYPE, OPTIMIZER_TYPE
25 from flash.image.data import ImageDeserializer
26 from flash.image.detection.backbones import OBJECT_DETECTION_HEADS
27 from flash.image.detection.output import OBJECT_DETECTION_OUTPUTS
28
29
30 class ObjectDetector(AdapterTask):
31 """The ``ObjectDetector`` is a :class:`~flash.Task` for detecting objects in images. For more details, see
32 :ref:`object_detection`.
33
34 Args:
35 num_classes: The number of object classes.
36 backbone: String indicating the backbone CNN architecture to use.
37 head: String indicating the head module to use ontop of the backbone.
38 pretrained: Whether the model should be loaded with it's pretrained weights.
39 optimizer: Optimizer to use for training.
40 lr_scheduler: The LR scheduler to use during training.
41 learning_rate: The learning rate to use for training.
42 output: The :class:`~flash.core.data.io.output.Output` to use when formatting prediction outputs.
43 predict_kwargs: dictionary containing parameters that will be used during the prediction phase.
44 kwargs: additional kwargs nessesary for initializing the backbone task
45 """
46
47 heads: FlashRegistry = OBJECT_DETECTION_HEADS
48 outputs = Task.outputs + OBJECT_DETECTION_OUTPUTS
49
50 required_extras: List[str] = ["image", "icevision", "effdet"]
51
52 def __init__(
53 self,
54 num_classes: Optional[int] = None,
55 labels: Optional[List[str]] = None,
56 backbone: Optional[str] = "resnet18_fpn",
57 head: Optional[str] = "retinanet",
58 pretrained: bool = True,
59 optimizer: OPTIMIZER_TYPE = "Adam",
60 lr_scheduler: LR_SCHEDULER_TYPE = None,
61 learning_rate: Optional[float] = None,
62 predict_kwargs: Dict = None,
63 **kwargs: Any,
64 ):
65 self.save_hyperparameters()
66
67 if labels is not None and num_classes is None:
68 num_classes = len(labels)
69
70 self.labels = labels
71 self.num_classes = num_classes
72
73 predict_kwargs = predict_kwargs if predict_kwargs else {}
74 metadata = self.heads.get(head, with_metadata=True)
75 adapter = metadata["metadata"]["adapter"].from_task(
76 self,
77 num_classes=num_classes,
78 backbone=backbone,
79 head=head,
80 pretrained=pretrained,
81 predict_kwargs=predict_kwargs,
82 **kwargs,
83 )
84
85 super().__init__(
86 adapter,
87 learning_rate=learning_rate,
88 optimizer=optimizer,
89 lr_scheduler=lr_scheduler,
90 )
91
92 def _ci_benchmark_fn(self, history: List[Dict[str, Any]]) -> None:
93 """This function is used only for debugging usage with CI."""
94 # todo
95
96 @property
97 def predict_kwargs(self) -> Dict[str, Any]:
98 """The kwargs used for the prediction step."""
99 return self.adapter.predict_kwargs
100
101 @predict_kwargs.setter
102 def predict_kwargs(self, predict_kwargs: Dict[str, Any]):
103 self.adapter.predict_kwargs = predict_kwargs
104
105 @requires("serve")
106 def serve(
107 self,
108 host: str = "127.0.0.1",
109 port: int = 8000,
110 sanity_check: bool = True,
111 input_cls: Optional[Type[ServeInput]] = ImageDeserializer,
112 transform: INPUT_TRANSFORM_TYPE = IceVisionInputTransform,
113 transform_kwargs: Optional[Dict] = None,
114 output: Optional[Union[str, Output]] = None,
115 ) -> Composition:
116 return super().serve(host, port, sanity_check, input_cls, transform, transform_kwargs, output)
117
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/flash/image/detection/model.py b/flash/image/detection/model.py
--- a/flash/image/detection/model.py
+++ b/flash/image/detection/model.py
@@ -39,7 +39,6 @@
optimizer: Optimizer to use for training.
lr_scheduler: The LR scheduler to use during training.
learning_rate: The learning rate to use for training.
- output: The :class:`~flash.core.data.io.output.Output` to use when formatting prediction outputs.
predict_kwargs: dictionary containing parameters that will be used during the prediction phase.
kwargs: additional kwargs nessesary for initializing the backbone task
"""
|
{"golden_diff": "diff --git a/flash/image/detection/model.py b/flash/image/detection/model.py\n--- a/flash/image/detection/model.py\n+++ b/flash/image/detection/model.py\n@@ -39,7 +39,6 @@\n optimizer: Optimizer to use for training.\n lr_scheduler: The LR scheduler to use during training.\n learning_rate: The learning rate to use for training.\n- output: The :class:`~flash.core.data.io.output.Output` to use when formatting prediction outputs.\n predict_kwargs: dictionary containing parameters that will be used during the prediction phase.\n kwargs: additional kwargs nessesary for initializing the backbone task\n \"\"\"\n", "issue": "Docs for using `output` with `ObjectDetector`\n## \ud83d\udc1b Bug\r\n\r\nI'am trying to use FiftyOneDetectionLabelsOutput output for Object detection but i got the following error\r\n\r\nmodel = model_type.model(backbone=backbone, num_classes=num_classes, **kwargs)\r\nTypeError: model() got an unexpected keyword argument 'output'\r\n\r\nHow can i correctly setup the output?\r\n\r\n### To Reproduce\r\nfrom flash.image.detection.output import FiftyOneDetectionLabelsOutput\r\nfrom flash.image import ObjectDetector\r\n\r\nout= FiftyOneDetectionLabelsOutput(threshold=0.7)\r\nobjDetc=ObjectDetector(num_classes=81,backbone=\"medium\",head=\"yolov5\",output=out)\r\n\r\n\r\n### Expected behavior\r\nClass init correctly as described in documentation\r\n\r\nhttps://lightning-flash.readthedocs.io/en/latest/api/generated/flash.image.detection.model.ObjectDetector.html#flash.image.detection.model.ObjectDetector\r\nThere is a parameter output in the description. Maybe is an old value\r\n\r\n\r\n### Environment\r\n\r\n - OS (e.g., Linux): Linux\r\n - Python version: 3.8\r\n - PyTorch/Lightning/Flash Version (e.g., 1.10/1.5/0.7): 1.10 / 1.5.8 / 0.7.4\r\n - GPU models and configuration: cuda 11.3\r\n \n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Any, Dict, List, Optional, Type, Union\n\nfrom flash.core.adapter import AdapterTask\nfrom flash.core.data.io.input import ServeInput\nfrom flash.core.data.io.output import Output\nfrom flash.core.integrations.icevision.transforms import IceVisionInputTransform\nfrom flash.core.model import Task\nfrom flash.core.registry import FlashRegistry\nfrom flash.core.serve import Composition\nfrom flash.core.utilities.imports import requires\nfrom flash.core.utilities.types import INPUT_TRANSFORM_TYPE, LR_SCHEDULER_TYPE, OPTIMIZER_TYPE\nfrom flash.image.data import ImageDeserializer\nfrom flash.image.detection.backbones import OBJECT_DETECTION_HEADS\nfrom flash.image.detection.output import OBJECT_DETECTION_OUTPUTS\n\n\nclass ObjectDetector(AdapterTask):\n \"\"\"The ``ObjectDetector`` is a :class:`~flash.Task` for detecting objects in images. 
For more details, see\n :ref:`object_detection`.\n\n Args:\n num_classes: The number of object classes.\n backbone: String indicating the backbone CNN architecture to use.\n head: String indicating the head module to use ontop of the backbone.\n pretrained: Whether the model should be loaded with it's pretrained weights.\n optimizer: Optimizer to use for training.\n lr_scheduler: The LR scheduler to use during training.\n learning_rate: The learning rate to use for training.\n output: The :class:`~flash.core.data.io.output.Output` to use when formatting prediction outputs.\n predict_kwargs: dictionary containing parameters that will be used during the prediction phase.\n kwargs: additional kwargs nessesary for initializing the backbone task\n \"\"\"\n\n heads: FlashRegistry = OBJECT_DETECTION_HEADS\n outputs = Task.outputs + OBJECT_DETECTION_OUTPUTS\n\n required_extras: List[str] = [\"image\", \"icevision\", \"effdet\"]\n\n def __init__(\n self,\n num_classes: Optional[int] = None,\n labels: Optional[List[str]] = None,\n backbone: Optional[str] = \"resnet18_fpn\",\n head: Optional[str] = \"retinanet\",\n pretrained: bool = True,\n optimizer: OPTIMIZER_TYPE = \"Adam\",\n lr_scheduler: LR_SCHEDULER_TYPE = None,\n learning_rate: Optional[float] = None,\n predict_kwargs: Dict = None,\n **kwargs: Any,\n ):\n self.save_hyperparameters()\n\n if labels is not None and num_classes is None:\n num_classes = len(labels)\n\n self.labels = labels\n self.num_classes = num_classes\n\n predict_kwargs = predict_kwargs if predict_kwargs else {}\n metadata = self.heads.get(head, with_metadata=True)\n adapter = metadata[\"metadata\"][\"adapter\"].from_task(\n self,\n num_classes=num_classes,\n backbone=backbone,\n head=head,\n pretrained=pretrained,\n predict_kwargs=predict_kwargs,\n **kwargs,\n )\n\n super().__init__(\n adapter,\n learning_rate=learning_rate,\n optimizer=optimizer,\n lr_scheduler=lr_scheduler,\n )\n\n def _ci_benchmark_fn(self, history: List[Dict[str, Any]]) -> None:\n \"\"\"This function is used only for debugging usage with CI.\"\"\"\n # todo\n\n @property\n def predict_kwargs(self) -> Dict[str, Any]:\n \"\"\"The kwargs used for the prediction step.\"\"\"\n return self.adapter.predict_kwargs\n\n @predict_kwargs.setter\n def predict_kwargs(self, predict_kwargs: Dict[str, Any]):\n self.adapter.predict_kwargs = predict_kwargs\n\n @requires(\"serve\")\n def serve(\n self,\n host: str = \"127.0.0.1\",\n port: int = 8000,\n sanity_check: bool = True,\n input_cls: Optional[Type[ServeInput]] = ImageDeserializer,\n transform: INPUT_TRANSFORM_TYPE = IceVisionInputTransform,\n transform_kwargs: Optional[Dict] = None,\n output: Optional[Union[str, Output]] = None,\n ) -> Composition:\n return super().serve(host, port, sanity_check, input_cls, transform, transform_kwargs, output)\n", "path": "flash/image/detection/model.py"}], "after_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Any, Dict, List, Optional, Type, 
Union\n\nfrom flash.core.adapter import AdapterTask\nfrom flash.core.data.io.input import ServeInput\nfrom flash.core.data.io.output import Output\nfrom flash.core.integrations.icevision.transforms import IceVisionInputTransform\nfrom flash.core.model import Task\nfrom flash.core.registry import FlashRegistry\nfrom flash.core.serve import Composition\nfrom flash.core.utilities.imports import requires\nfrom flash.core.utilities.types import INPUT_TRANSFORM_TYPE, LR_SCHEDULER_TYPE, OPTIMIZER_TYPE\nfrom flash.image.data import ImageDeserializer\nfrom flash.image.detection.backbones import OBJECT_DETECTION_HEADS\nfrom flash.image.detection.output import OBJECT_DETECTION_OUTPUTS\n\n\nclass ObjectDetector(AdapterTask):\n \"\"\"The ``ObjectDetector`` is a :class:`~flash.Task` for detecting objects in images. For more details, see\n :ref:`object_detection`.\n\n Args:\n num_classes: The number of object classes.\n backbone: String indicating the backbone CNN architecture to use.\n head: String indicating the head module to use ontop of the backbone.\n pretrained: Whether the model should be loaded with it's pretrained weights.\n optimizer: Optimizer to use for training.\n lr_scheduler: The LR scheduler to use during training.\n learning_rate: The learning rate to use for training.\n predict_kwargs: dictionary containing parameters that will be used during the prediction phase.\n kwargs: additional kwargs nessesary for initializing the backbone task\n \"\"\"\n\n heads: FlashRegistry = OBJECT_DETECTION_HEADS\n outputs = Task.outputs + OBJECT_DETECTION_OUTPUTS\n\n required_extras: List[str] = [\"image\", \"icevision\", \"effdet\"]\n\n def __init__(\n self,\n num_classes: Optional[int] = None,\n labels: Optional[List[str]] = None,\n backbone: Optional[str] = \"resnet18_fpn\",\n head: Optional[str] = \"retinanet\",\n pretrained: bool = True,\n optimizer: OPTIMIZER_TYPE = \"Adam\",\n lr_scheduler: LR_SCHEDULER_TYPE = None,\n learning_rate: Optional[float] = None,\n predict_kwargs: Dict = None,\n **kwargs: Any,\n ):\n self.save_hyperparameters()\n\n if labels is not None and num_classes is None:\n num_classes = len(labels)\n\n self.labels = labels\n self.num_classes = num_classes\n\n predict_kwargs = predict_kwargs if predict_kwargs else {}\n metadata = self.heads.get(head, with_metadata=True)\n adapter = metadata[\"metadata\"][\"adapter\"].from_task(\n self,\n num_classes=num_classes,\n backbone=backbone,\n head=head,\n pretrained=pretrained,\n predict_kwargs=predict_kwargs,\n **kwargs,\n )\n\n super().__init__(\n adapter,\n learning_rate=learning_rate,\n optimizer=optimizer,\n lr_scheduler=lr_scheduler,\n )\n\n def _ci_benchmark_fn(self, history: List[Dict[str, Any]]) -> None:\n \"\"\"This function is used only for debugging usage with CI.\"\"\"\n # todo\n\n @property\n def predict_kwargs(self) -> Dict[str, Any]:\n \"\"\"The kwargs used for the prediction step.\"\"\"\n return self.adapter.predict_kwargs\n\n @predict_kwargs.setter\n def predict_kwargs(self, predict_kwargs: Dict[str, Any]):\n self.adapter.predict_kwargs = predict_kwargs\n\n @requires(\"serve\")\n def serve(\n self,\n host: str = \"127.0.0.1\",\n port: int = 8000,\n sanity_check: bool = True,\n input_cls: Optional[Type[ServeInput]] = ImageDeserializer,\n transform: INPUT_TRANSFORM_TYPE = IceVisionInputTransform,\n transform_kwargs: Optional[Dict] = None,\n output: Optional[Union[str, Output]] = None,\n ) -> Composition:\n return super().serve(host, port, sanity_check, input_cls, transform, transform_kwargs, output)\n", "path": 
"flash/image/detection/model.py"}]}
| 1,781 | 140 |
gh_patches_debug_47732
|
rasdani/github-patches
|
git_diff
|
ray-project__ray-8617
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[rllib] PyTorch and SampleAsync validation
<!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant-->
### What is the problem?
PyTorch is supposed to be thread-safe, as long as you don't write a tensor using multiple threads. Please see https://discuss.pytorch.org/t/is-pytorch-supposed-to-be-thread-safe/36540/2
It might be worth removing the validation of sample_async and use_pytorch for A3C (and maybe others?).
Ray Version 0.9.0dev (but this applies to any ray version actually)
### Reproduction (REQUIRED)
Please provide a script that can be run to reproduce the issue. The script should have **no external library dependencies** (i.e., use fake or mock data / environments):
If we cannot run your script, we cannot fix your issue.
- [x] I have verified my script runs in a clean environment and reproduces the issue.
- [x] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).
[rllib] PyTorch and SampleAsync validation
<!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant-->
### What is the problem?
PyTorch is supposed to be thread-safe, as long as you don't write a tensor using multiple threads. Please see https://discuss.pytorch.org/t/is-pytorch-supposed-to-be-thread-safe/36540/2
It might be worth removing the validation of sample_async and use_pytorch for A3C (and maybe others?).
Ray Version 0.9.0dev (but this applies to any ray version actually)
### Reproduction (REQUIRED)
Please provide a script that can be run to reproduce the issue. The script should have **no external library dependencies** (i.e., use fake or mock data / environments):
If we cannot run your script, we cannot fix your issue.
- [x] I have verified my script runs in a clean environment and reproduces the issue.
- [x] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rllib/agents/a3c/a3c.py`
Content:
```
1 import logging
2
3 from ray.rllib.agents.a3c.a3c_tf_policy import A3CTFPolicy
4 from ray.rllib.agents.trainer import with_common_config
5 from ray.rllib.agents.trainer_template import build_trainer
6 from ray.rllib.execution.rollout_ops import AsyncGradients
7 from ray.rllib.execution.train_ops import ApplyGradients
8 from ray.rllib.execution.metric_ops import StandardMetricsReporting
9
10 logger = logging.getLogger(__name__)
11
12 # yapf: disable
13 # __sphinx_doc_begin__
14 DEFAULT_CONFIG = with_common_config({
15 # Should use a critic as a baseline (otherwise don't use value baseline;
16 # required for using GAE).
17 "use_critic": True,
18 # If true, use the Generalized Advantage Estimator (GAE)
19 # with a value function, see https://arxiv.org/pdf/1506.02438.pdf.
20 "use_gae": True,
21 # Size of rollout batch
22 "rollout_fragment_length": 10,
23 # GAE(gamma) parameter
24 "lambda": 1.0,
25 # Max global norm for each gradient calculated by worker
26 "grad_clip": 40.0,
27 # Learning rate
28 "lr": 0.0001,
29 # Learning rate schedule
30 "lr_schedule": None,
31 # Value Function Loss coefficient
32 "vf_loss_coeff": 0.5,
33 # Entropy coefficient
34 "entropy_coeff": 0.01,
35 # Min time per iteration
36 "min_iter_time_s": 5,
37 # Workers sample async. Note that this increases the effective
38 # rollout_fragment_length by up to 5x due to async buffering of batches.
39 "sample_async": True,
40 })
41 # __sphinx_doc_end__
42 # yapf: enable
43
44
45 def get_policy_class(config):
46 if config["use_pytorch"]:
47 from ray.rllib.agents.a3c.a3c_torch_policy import \
48 A3CTorchPolicy
49 return A3CTorchPolicy
50 else:
51 return A3CTFPolicy
52
53
54 def validate_config(config):
55 if config["entropy_coeff"] < 0:
56 raise DeprecationWarning("entropy_coeff must be >= 0")
57 if config["sample_async"] and config["use_pytorch"]:
58 config["sample_async"] = False
59 logger.warning(
60 "The sample_async option is not supported with use_pytorch: "
61 "Multithreading can be lead to crashes if used with pytorch.")
62
63
64 def execution_plan(workers, config):
65 # For A3C, compute policy gradients remotely on the rollout workers.
66 grads = AsyncGradients(workers)
67
68 # Apply the gradients as they arrive. We set update_all to False so that
69 # only the worker sending the gradient is updated with new weights.
70 train_op = grads.for_each(ApplyGradients(workers, update_all=False))
71
72 return StandardMetricsReporting(train_op, workers, config)
73
74
75 A3CTrainer = build_trainer(
76 name="A3C",
77 default_config=DEFAULT_CONFIG,
78 default_policy=A3CTFPolicy,
79 get_policy_class=get_policy_class,
80 validate_config=validate_config,
81 execution_plan=execution_plan)
82
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/rllib/agents/a3c/a3c.py b/rllib/agents/a3c/a3c.py
--- a/rllib/agents/a3c/a3c.py
+++ b/rllib/agents/a3c/a3c.py
@@ -54,11 +54,6 @@
def validate_config(config):
if config["entropy_coeff"] < 0:
raise DeprecationWarning("entropy_coeff must be >= 0")
- if config["sample_async"] and config["use_pytorch"]:
- config["sample_async"] = False
- logger.warning(
- "The sample_async option is not supported with use_pytorch: "
- "Multithreading can be lead to crashes if used with pytorch.")
def execution_plan(workers, config):
|
{"golden_diff": "diff --git a/rllib/agents/a3c/a3c.py b/rllib/agents/a3c/a3c.py\n--- a/rllib/agents/a3c/a3c.py\n+++ b/rllib/agents/a3c/a3c.py\n@@ -54,11 +54,6 @@\n def validate_config(config):\n if config[\"entropy_coeff\"] < 0:\n raise DeprecationWarning(\"entropy_coeff must be >= 0\")\n- if config[\"sample_async\"] and config[\"use_pytorch\"]:\n- config[\"sample_async\"] = False\n- logger.warning(\n- \"The sample_async option is not supported with use_pytorch: \"\n- \"Multithreading can be lead to crashes if used with pytorch.\")\n \n \n def execution_plan(workers, config):\n", "issue": "[rllib] PyTorch and SampleAsync validation\n<!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant-->\r\n\r\n### What is the problem?\r\n\r\nPyTorch is supposed to be thread-safe, as long as you don't write a tensor using multiple threads. Please see https://discuss.pytorch.org/t/is-pytorch-supposed-to-be-thread-safe/36540/2 \r\n\r\nIt might be worth removing the validation of sample_async and use_pytorch for A3C (and maybe others?).\r\n\r\nRay Version 0.9.0dev (but this applies to any ray version actually)\r\n\r\n### Reproduction (REQUIRED)\r\nPlease provide a script that can be run to reproduce the issue. The script should have **no external library dependencies** (i.e., use fake or mock data / environments):\r\n\r\nIf we cannot run your script, we cannot fix your issue.\r\n\r\n- [x] I have verified my script runs in a clean environment and reproduces the issue.\r\n- [x] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).\r\n\n[rllib] PyTorch and SampleAsync validation\n<!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant-->\r\n\r\n### What is the problem?\r\n\r\nPyTorch is supposed to be thread-safe, as long as you don't write a tensor using multiple threads. Please see https://discuss.pytorch.org/t/is-pytorch-supposed-to-be-thread-safe/36540/2 \r\n\r\nIt might be worth removing the validation of sample_async and use_pytorch for A3C (and maybe others?).\r\n\r\nRay Version 0.9.0dev (but this applies to any ray version actually)\r\n\r\n### Reproduction (REQUIRED)\r\nPlease provide a script that can be run to reproduce the issue. 
The script should have **no external library dependencies** (i.e., use fake or mock data / environments):\r\n\r\nIf we cannot run your script, we cannot fix your issue.\r\n\r\n- [x] I have verified my script runs in a clean environment and reproduces the issue.\r\n- [x] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).\r\n\n", "before_files": [{"content": "import logging\n\nfrom ray.rllib.agents.a3c.a3c_tf_policy import A3CTFPolicy\nfrom ray.rllib.agents.trainer import with_common_config\nfrom ray.rllib.agents.trainer_template import build_trainer\nfrom ray.rllib.execution.rollout_ops import AsyncGradients\nfrom ray.rllib.execution.train_ops import ApplyGradients\nfrom ray.rllib.execution.metric_ops import StandardMetricsReporting\n\nlogger = logging.getLogger(__name__)\n\n# yapf: disable\n# __sphinx_doc_begin__\nDEFAULT_CONFIG = with_common_config({\n # Should use a critic as a baseline (otherwise don't use value baseline;\n # required for using GAE).\n \"use_critic\": True,\n # If true, use the Generalized Advantage Estimator (GAE)\n # with a value function, see https://arxiv.org/pdf/1506.02438.pdf.\n \"use_gae\": True,\n # Size of rollout batch\n \"rollout_fragment_length\": 10,\n # GAE(gamma) parameter\n \"lambda\": 1.0,\n # Max global norm for each gradient calculated by worker\n \"grad_clip\": 40.0,\n # Learning rate\n \"lr\": 0.0001,\n # Learning rate schedule\n \"lr_schedule\": None,\n # Value Function Loss coefficient\n \"vf_loss_coeff\": 0.5,\n # Entropy coefficient\n \"entropy_coeff\": 0.01,\n # Min time per iteration\n \"min_iter_time_s\": 5,\n # Workers sample async. Note that this increases the effective\n # rollout_fragment_length by up to 5x due to async buffering of batches.\n \"sample_async\": True,\n})\n# __sphinx_doc_end__\n# yapf: enable\n\n\ndef get_policy_class(config):\n if config[\"use_pytorch\"]:\n from ray.rllib.agents.a3c.a3c_torch_policy import \\\n A3CTorchPolicy\n return A3CTorchPolicy\n else:\n return A3CTFPolicy\n\n\ndef validate_config(config):\n if config[\"entropy_coeff\"] < 0:\n raise DeprecationWarning(\"entropy_coeff must be >= 0\")\n if config[\"sample_async\"] and config[\"use_pytorch\"]:\n config[\"sample_async\"] = False\n logger.warning(\n \"The sample_async option is not supported with use_pytorch: \"\n \"Multithreading can be lead to crashes if used with pytorch.\")\n\n\ndef execution_plan(workers, config):\n # For A3C, compute policy gradients remotely on the rollout workers.\n grads = AsyncGradients(workers)\n\n # Apply the gradients as they arrive. 
We set update_all to False so that\n # only the worker sending the gradient is updated with new weights.\n train_op = grads.for_each(ApplyGradients(workers, update_all=False))\n\n return StandardMetricsReporting(train_op, workers, config)\n\n\nA3CTrainer = build_trainer(\n name=\"A3C\",\n default_config=DEFAULT_CONFIG,\n default_policy=A3CTFPolicy,\n get_policy_class=get_policy_class,\n validate_config=validate_config,\n execution_plan=execution_plan)\n", "path": "rllib/agents/a3c/a3c.py"}], "after_files": [{"content": "import logging\n\nfrom ray.rllib.agents.a3c.a3c_tf_policy import A3CTFPolicy\nfrom ray.rllib.agents.trainer import with_common_config\nfrom ray.rllib.agents.trainer_template import build_trainer\nfrom ray.rllib.execution.rollout_ops import AsyncGradients\nfrom ray.rllib.execution.train_ops import ApplyGradients\nfrom ray.rllib.execution.metric_ops import StandardMetricsReporting\n\nlogger = logging.getLogger(__name__)\n\n# yapf: disable\n# __sphinx_doc_begin__\nDEFAULT_CONFIG = with_common_config({\n # Should use a critic as a baseline (otherwise don't use value baseline;\n # required for using GAE).\n \"use_critic\": True,\n # If true, use the Generalized Advantage Estimator (GAE)\n # with a value function, see https://arxiv.org/pdf/1506.02438.pdf.\n \"use_gae\": True,\n # Size of rollout batch\n \"rollout_fragment_length\": 10,\n # GAE(gamma) parameter\n \"lambda\": 1.0,\n # Max global norm for each gradient calculated by worker\n \"grad_clip\": 40.0,\n # Learning rate\n \"lr\": 0.0001,\n # Learning rate schedule\n \"lr_schedule\": None,\n # Value Function Loss coefficient\n \"vf_loss_coeff\": 0.5,\n # Entropy coefficient\n \"entropy_coeff\": 0.01,\n # Min time per iteration\n \"min_iter_time_s\": 5,\n # Workers sample async. Note that this increases the effective\n # rollout_fragment_length by up to 5x due to async buffering of batches.\n \"sample_async\": True,\n})\n# __sphinx_doc_end__\n# yapf: enable\n\n\ndef get_policy_class(config):\n if config[\"use_pytorch\"]:\n from ray.rllib.agents.a3c.a3c_torch_policy import \\\n A3CTorchPolicy\n return A3CTorchPolicy\n else:\n return A3CTFPolicy\n\n\ndef validate_config(config):\n if config[\"entropy_coeff\"] < 0:\n raise DeprecationWarning(\"entropy_coeff must be >= 0\")\n\n\ndef execution_plan(workers, config):\n # For A3C, compute policy gradients remotely on the rollout workers.\n grads = AsyncGradients(workers)\n\n # Apply the gradients as they arrive. We set update_all to False so that\n # only the worker sending the gradient is updated with new weights.\n train_op = grads.for_each(ApplyGradients(workers, update_all=False))\n\n return StandardMetricsReporting(train_op, workers, config)\n\n\nA3CTrainer = build_trainer(\n name=\"A3C\",\n default_config=DEFAULT_CONFIG,\n default_policy=A3CTFPolicy,\n get_policy_class=get_policy_class,\n validate_config=validate_config,\n execution_plan=execution_plan)\n", "path": "rllib/agents/a3c/a3c.py"}]}
| 1,607 | 172 |
gh_patches_debug_9723
|
rasdani/github-patches
|
git_diff
|
dask__dask-3157
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
LZ4_compress and LZ4_uncompress removed
Since commit python-lz4/python-lz4@d62fdc50c0e183d7260961f09d4e0701fbdf0c5c LZ4_compress and LZ4_decompress have been removed (they've been deprecated for a while). With the version of python-lz4 released on pypi, it means we can't use lz4 compression with dask, and worse importing dask.bytes.compression errors out.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dask/bytes/compression.py`
Content:
```
1 from __future__ import print_function, division, absolute_import
2
3 import bz2
4 import sys
5 import zlib
6
7 from toolz import identity
8
9 from ..compatibility import gzip_compress, gzip_decompress, GzipFile
10 from ..utils import ignoring
11
12
13 def noop_file(file, **kwargs):
14 return file
15
16
17 compress = {'gzip': gzip_compress,
18 'zlib': zlib.compress,
19 'bz2': bz2.compress,
20 None: identity}
21 decompress = {'gzip': gzip_decompress,
22 'zlib': zlib.decompress,
23 'bz2': bz2.decompress,
24 None: identity}
25 files = {'gzip': lambda f, **kwargs: GzipFile(fileobj=f, **kwargs),
26 None: noop_file}
27 seekable_files = {None: noop_file}
28
29
30 with ignoring(ImportError):
31 import snappy
32 compress['snappy'] = snappy.compress
33 decompress['snappy'] = snappy.decompress
34
35
36 with ignoring(ImportError):
37 import lz4
38 compress['lz4'] = lz4.LZ4_compress
39 decompress['lz4'] = lz4.LZ4_uncompress
40
41 with ignoring(ImportError):
42 from ..compatibility import LZMAFile, lzma_compress, lzma_decompress
43 compress['xz'] = lzma_compress
44 decompress['xz'] = lzma_decompress
45 files['xz'] = LZMAFile
46
47 # Seekable xz files actually tend to scan whole file - see `get_xz_blocks`
48 # with ignoring(ImportError):
49 # import lzma
50 # seekable_files['xz'] = lzma.LZMAFile
51 #
52 # with ignoring(ImportError):
53 # import lzmaffi
54 # seekable_files['xz'] = lzmaffi.LZMAFile
55
56
57 if sys.version_info[0] >= 3:
58 import bz2
59 files['bz2'] = bz2.BZ2File
60
61
62 def get_xz_blocks(fp):
63 from lzmaffi import (STREAM_HEADER_SIZE, decode_stream_footer,
64 decode_index, LZMAError)
65 fp.seek(0, 2)
66
67 def _peek(f, size):
68 data = f.read(size)
69 f.seek(-size, 1)
70 return data
71
72 if fp.tell() < 2 * STREAM_HEADER_SIZE:
73 raise LZMAError("file too small")
74
75 # read stream paddings (4 bytes each)
76 fp.seek(-4, 1)
77 padding = 0
78 while _peek(fp, 4) == b'\x00\x00\x00\x00':
79 fp.seek(-4, 1)
80 padding += 4
81
82 fp.seek(-STREAM_HEADER_SIZE + 4, 1)
83
84 stream_flags = decode_stream_footer(_peek(fp, STREAM_HEADER_SIZE))
85 fp.seek(-stream_flags.backward_size, 1)
86
87 index = decode_index(_peek(fp, stream_flags.backward_size), padding)
88 return {'offsets': [b.compressed_file_offset for i, b in index],
89 'lengths': [b.unpadded_size for i, b in index],
90 'check': stream_flags.check}
91
92
93 def xz_decompress(data, check):
94 from lzmaffi import decode_block_header_size, LZMADecompressor, FORMAT_BLOCK
95 hsize = decode_block_header_size(data[:1])
96 header = data[:hsize]
97 dc = LZMADecompressor(format=FORMAT_BLOCK, header=header,
98 unpadded_size=len(data), check=check)
99 return dc.decompress(data[len(header):])
100
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/dask/bytes/compression.py b/dask/bytes/compression.py
--- a/dask/bytes/compression.py
+++ b/dask/bytes/compression.py
@@ -33,10 +33,17 @@
decompress['snappy'] = snappy.decompress
-with ignoring(ImportError):
- import lz4
- compress['lz4'] = lz4.LZ4_compress
- decompress['lz4'] = lz4.LZ4_uncompress
+try:
+ import lz4.block
+ compress['lz4'] = lz4.block.compress
+ compress['lz4'] = lz4.block.decompress
+except ImportError:
+ try:
+ import lz4
+ compress['lz4'] = lz4.LZ4_compress
+ compress['lz4'] = lz4.LZ4_uncompress
+ except ImportError:
+ pass
with ignoring(ImportError):
from ..compatibility import LZMAFile, lzma_compress, lzma_decompress
|
{"golden_diff": "diff --git a/dask/bytes/compression.py b/dask/bytes/compression.py\n--- a/dask/bytes/compression.py\n+++ b/dask/bytes/compression.py\n@@ -33,10 +33,17 @@\n decompress['snappy'] = snappy.decompress\n \n \n-with ignoring(ImportError):\n- import lz4\n- compress['lz4'] = lz4.LZ4_compress\n- decompress['lz4'] = lz4.LZ4_uncompress\n+try:\n+ import lz4.block\n+ compress['lz4'] = lz4.block.compress\n+ compress['lz4'] = lz4.block.decompress\n+except ImportError:\n+ try:\n+ import lz4\n+ compress['lz4'] = lz4.LZ4_compress\n+ compress['lz4'] = lz4.LZ4_uncompress\n+ except ImportError:\n+ pass\n \n with ignoring(ImportError):\n from ..compatibility import LZMAFile, lzma_compress, lzma_decompress\n", "issue": "LZ4_compress and LZ4_uncompress removed\nSince commit python-lz4/python-lz4@d62fdc50c0e183d7260961f09d4e0701fbdf0c5c LZ4_compress and LZ4_decompress have been removed (they've been deprecated for a while). With the version of python-lz4 released on pypi, it means we can't use lz4 compression with dask, and worse importing dask.bytes.compression errors out.\r\n\n", "before_files": [{"content": "from __future__ import print_function, division, absolute_import\n\nimport bz2\nimport sys\nimport zlib\n\nfrom toolz import identity\n\nfrom ..compatibility import gzip_compress, gzip_decompress, GzipFile\nfrom ..utils import ignoring\n\n\ndef noop_file(file, **kwargs):\n return file\n\n\ncompress = {'gzip': gzip_compress,\n 'zlib': zlib.compress,\n 'bz2': bz2.compress,\n None: identity}\ndecompress = {'gzip': gzip_decompress,\n 'zlib': zlib.decompress,\n 'bz2': bz2.decompress,\n None: identity}\nfiles = {'gzip': lambda f, **kwargs: GzipFile(fileobj=f, **kwargs),\n None: noop_file}\nseekable_files = {None: noop_file}\n\n\nwith ignoring(ImportError):\n import snappy\n compress['snappy'] = snappy.compress\n decompress['snappy'] = snappy.decompress\n\n\nwith ignoring(ImportError):\n import lz4\n compress['lz4'] = lz4.LZ4_compress\n decompress['lz4'] = lz4.LZ4_uncompress\n\nwith ignoring(ImportError):\n from ..compatibility import LZMAFile, lzma_compress, lzma_decompress\n compress['xz'] = lzma_compress\n decompress['xz'] = lzma_decompress\n files['xz'] = LZMAFile\n\n# Seekable xz files actually tend to scan whole file - see `get_xz_blocks`\n# with ignoring(ImportError):\n# import lzma\n# seekable_files['xz'] = lzma.LZMAFile\n#\n# with ignoring(ImportError):\n# import lzmaffi\n# seekable_files['xz'] = lzmaffi.LZMAFile\n\n\nif sys.version_info[0] >= 3:\n import bz2\n files['bz2'] = bz2.BZ2File\n\n\ndef get_xz_blocks(fp):\n from lzmaffi import (STREAM_HEADER_SIZE, decode_stream_footer,\n decode_index, LZMAError)\n fp.seek(0, 2)\n\n def _peek(f, size):\n data = f.read(size)\n f.seek(-size, 1)\n return data\n\n if fp.tell() < 2 * STREAM_HEADER_SIZE:\n raise LZMAError(\"file too small\")\n\n # read stream paddings (4 bytes each)\n fp.seek(-4, 1)\n padding = 0\n while _peek(fp, 4) == b'\\x00\\x00\\x00\\x00':\n fp.seek(-4, 1)\n padding += 4\n\n fp.seek(-STREAM_HEADER_SIZE + 4, 1)\n\n stream_flags = decode_stream_footer(_peek(fp, STREAM_HEADER_SIZE))\n fp.seek(-stream_flags.backward_size, 1)\n\n index = decode_index(_peek(fp, stream_flags.backward_size), padding)\n return {'offsets': [b.compressed_file_offset for i, b in index],\n 'lengths': [b.unpadded_size for i, b in index],\n 'check': stream_flags.check}\n\n\ndef xz_decompress(data, check):\n from lzmaffi import decode_block_header_size, LZMADecompressor, FORMAT_BLOCK\n hsize = decode_block_header_size(data[:1])\n header = data[:hsize]\n dc = 
LZMADecompressor(format=FORMAT_BLOCK, header=header,\n unpadded_size=len(data), check=check)\n return dc.decompress(data[len(header):])\n", "path": "dask/bytes/compression.py"}], "after_files": [{"content": "from __future__ import print_function, division, absolute_import\n\nimport bz2\nimport sys\nimport zlib\n\nfrom toolz import identity\n\nfrom ..compatibility import gzip_compress, gzip_decompress, GzipFile\nfrom ..utils import ignoring\n\n\ndef noop_file(file, **kwargs):\n return file\n\n\ncompress = {'gzip': gzip_compress,\n 'zlib': zlib.compress,\n 'bz2': bz2.compress,\n None: identity}\ndecompress = {'gzip': gzip_decompress,\n 'zlib': zlib.decompress,\n 'bz2': bz2.decompress,\n None: identity}\nfiles = {'gzip': lambda f, **kwargs: GzipFile(fileobj=f, **kwargs),\n None: noop_file}\nseekable_files = {None: noop_file}\n\n\nwith ignoring(ImportError):\n import snappy\n compress['snappy'] = snappy.compress\n decompress['snappy'] = snappy.decompress\n\n\ntry:\n import lz4.block\n compress['lz4'] = lz4.block.compress\n compress['lz4'] = lz4.block.decompress\nexcept ImportError:\n try:\n import lz4\n compress['lz4'] = lz4.LZ4_compress\n compress['lz4'] = lz4.LZ4_uncompress\n except ImportError:\n pass\n\nwith ignoring(ImportError):\n from ..compatibility import LZMAFile, lzma_compress, lzma_decompress\n compress['xz'] = lzma_compress\n decompress['xz'] = lzma_decompress\n files['xz'] = LZMAFile\n\n# Seekable xz files actually tend to scan whole file - see `get_xz_blocks`\n# with ignoring(ImportError):\n# import lzma\n# seekable_files['xz'] = lzma.LZMAFile\n#\n# with ignoring(ImportError):\n# import lzmaffi\n# seekable_files['xz'] = lzmaffi.LZMAFile\n\n\nif sys.version_info[0] >= 3:\n import bz2\n files['bz2'] = bz2.BZ2File\n\n\ndef get_xz_blocks(fp):\n from lzmaffi import (STREAM_HEADER_SIZE, decode_stream_footer,\n decode_index, LZMAError)\n fp.seek(0, 2)\n\n def _peek(f, size):\n data = f.read(size)\n f.seek(-size, 1)\n return data\n\n if fp.tell() < 2 * STREAM_HEADER_SIZE:\n raise LZMAError(\"file too small\")\n\n # read stream paddings (4 bytes each)\n fp.seek(-4, 1)\n padding = 0\n while _peek(fp, 4) == b'\\x00\\x00\\x00\\x00':\n fp.seek(-4, 1)\n padding += 4\n\n fp.seek(-STREAM_HEADER_SIZE + 4, 1)\n\n stream_flags = decode_stream_footer(_peek(fp, STREAM_HEADER_SIZE))\n fp.seek(-stream_flags.backward_size, 1)\n\n index = decode_index(_peek(fp, stream_flags.backward_size), padding)\n return {'offsets': [b.compressed_file_offset for i, b in index],\n 'lengths': [b.unpadded_size for i, b in index],\n 'check': stream_flags.check}\n\n\ndef xz_decompress(data, check):\n from lzmaffi import decode_block_header_size, LZMADecompressor, FORMAT_BLOCK\n hsize = decode_block_header_size(data[:1])\n header = data[:hsize]\n dc = LZMADecompressor(format=FORMAT_BLOCK, header=header,\n unpadded_size=len(data), check=check)\n return dc.decompress(data[len(header):])\n", "path": "dask/bytes/compression.py"}]}
| 1,347 | 228 |
gh_patches_debug_513
|
rasdani/github-patches
|
git_diff
|
weni-ai__bothub-engine-150
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Relative STATIC_URL in production broken email images
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bothub/settings.py`
Content:
```
1 import os
2 import dj_database_url
3
4 from decouple import config
5
6
7 # Build paths inside the project like this: os.path.join(BASE_DIR, ...)
8 BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
9
10
11 # SECURITY WARNING: keep the secret key used in production secret!
12 SECRET_KEY = config('SECRET_KEY')
13
14 # SECURITY WARNING: don't run with debug turned on in production!
15 DEBUG = config('DEBUG', default=False, cast=bool)
16
17 ALLOWED_HOSTS = config(
18 'ALLOWED_HOSTS',
19 default='*',
20 cast=lambda v: [s.strip() for s in v.split(',')])
21
22
23 # Application definition
24
25 INSTALLED_APPS = [
26 'django.contrib.admin',
27 'django.contrib.auth',
28 'django.contrib.contenttypes',
29 'django.contrib.sessions',
30 'django.contrib.messages',
31 'django.contrib.staticfiles',
32 'rest_framework',
33 'rest_framework.authtoken',
34 'django_filters',
35 'corsheaders',
36 'bothub.authentication',
37 'bothub.common',
38 'bothub.api',
39 ]
40
41 MIDDLEWARE = [
42 'django.middleware.security.SecurityMiddleware',
43 'whitenoise.middleware.WhiteNoiseMiddleware',
44 'django.contrib.sessions.middleware.SessionMiddleware',
45 'corsheaders.middleware.CorsMiddleware',
46 'django.middleware.common.CommonMiddleware',
47 'django.middleware.csrf.CsrfViewMiddleware',
48 'django.contrib.auth.middleware.AuthenticationMiddleware',
49 'django.contrib.messages.middleware.MessageMiddleware',
50 'django.middleware.clickjacking.XFrameOptionsMiddleware',
51 ]
52
53 ROOT_URLCONF = 'bothub.urls'
54
55 TEMPLATES = [
56 {
57 'BACKEND': 'django.template.backends.django.DjangoTemplates',
58 'DIRS': [],
59 'APP_DIRS': True,
60 'OPTIONS': {
61 'context_processors': [
62 'django.template.context_processors.debug',
63 'django.template.context_processors.request',
64 'django.contrib.auth.context_processors.auth',
65 'django.contrib.messages.context_processors.messages',
66 ],
67 },
68 },
69 ]
70
71 WSGI_APPLICATION = 'bothub.wsgi.application'
72
73
74 # Database
75
76 DATABASES = {}
77 DATABASES['default'] = dj_database_url.parse(
78 config(
79 'DEFAULT_DATABASE',
80 default='sqlite:///db.sqlite3'))
81
82
83 # Auth
84
85 AUTH_USER_MODEL = 'authentication.User'
86
87
88 # Password validation
89
90 AUTH_PASSWORD_VALIDATORS = [
91 {
92 'NAME': 'django.contrib.auth.password_validation.' +
93 'UserAttributeSimilarityValidator',
94 },
95 {
96 'NAME': 'django.contrib.auth.password_validation.' +
97 'MinimumLengthValidator',
98 },
99 {
100 'NAME': 'django.contrib.auth.password_validation.' +
101 'CommonPasswordValidator',
102 },
103 {
104 'NAME': 'django.contrib.auth.password_validation.' +
105 'NumericPasswordValidator',
106 },
107 ]
108
109
110 # Internationalization
111
112 LANGUAGE_CODE = config('LANGUAGE_CODE', default='en-us')
113
114 TIME_ZONE = config('TIME_ZONE', default='UTC')
115
116 USE_I18N = True
117
118 USE_L10N = True
119
120 USE_TZ = True
121
122
123 # Static files (CSS, JavaScript, Images)
124
125 STATIC_URL = '/static/'
126
127 STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
128
129 STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'
130
131
132 # rest framework
133
134 REST_FRAMEWORK = {
135 'DEFAULT_AUTHENTICATION_CLASSES': [
136 'rest_framework.authentication.TokenAuthentication',
137 ],
138 'DEFAULT_PAGINATION_CLASS': 'rest_framework.pagination.' +
139 'LimitOffsetPagination',
140 'PAGE_SIZE': 20,
141 'DEFAULT_FILTER_BACKENDS': [
142 'django_filters.rest_framework.DjangoFilterBackend',
143 ],
144 'DEFAULT_METADATA_CLASS': 'bothub.api.metadata.Metadata',
145 }
146
147
148 # cors headers
149
150 CORS_ORIGIN_ALLOW_ALL = True
151 CORS_URLS_REGEX = r'^/api/.*$'
152
153
154 # mail
155
156 envvar_EMAIL_HOST = config('EMAIL_HOST', default=None)
157
158 ADMINS = config(
159 'ADMINS',
160 default='',
161 cast=lambda v: [
162 (
163 s.strip().split('|')[0],
164 s.strip().split('|')[1],
165 ) for s in v.split(',')] if v else [])
166 EMAIL_SUBJECT_PREFIX = '[bothub] '
167 DEFAULT_FROM_EMAIL = config(
168 'DEFAULT_FROM_EMAIL',
169 default='webmaster@localhost')
170 SERVER_EMAIL = config('SERVER_EMAIL', default='root@localhost')
171
172 if envvar_EMAIL_HOST:
173 EMAIL_HOST = envvar_EMAIL_HOST
174 EMAIL_PORT = config('EMAIL_PORT', default=25, cast=int)
175 EMAIL_HOST_USER = config('EMAIL_HOST_USER', default='')
176 EMAIL_HOST_PASSWORD = config('EMAIL_HOST_PASSWORD', default='')
177 EMAIL_USE_SSL = config('EMAIL_USE_SSL', default=False, cast=bool)
178 EMAIL_USE_TLS = config('EMAIL_USE_TLS', default=False, cast=bool)
179 else:
180 EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
181
182
183 # webapp
184
185 BOTHUB_WEBAPP_BASE_URL = config(
186 'BOTHUB_WEBAPP_BASE_URL',
187 default='http://localhost:8080/')
188
189
190 # NLP
191
192 BOTHUB_NLP_BASE_URL = config(
193 'BOTHUB_NLP_BASE_URL',
194 default='http://localhost:8001/')
195
196
197 # CSRF
198
199 CSRF_COOKIE_DOMAIN = config(
200 'CSRF_COOKIE_DOMAIN',
201 default=None)
202
203 CSRF_COOKIE_SECURE = config(
204 'CSRF_COOKIE_SECURE',
205 default=False,
206 cast=bool)
207
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bothub/settings.py b/bothub/settings.py
--- a/bothub/settings.py
+++ b/bothub/settings.py
@@ -122,7 +122,7 @@
# Static files (CSS, JavaScript, Images)
-STATIC_URL = '/static/'
+STATIC_URL = config('STATIC_URL', default='/static/')
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
|
{"golden_diff": "diff --git a/bothub/settings.py b/bothub/settings.py\n--- a/bothub/settings.py\n+++ b/bothub/settings.py\n@@ -122,7 +122,7 @@\n \n # Static files (CSS, JavaScript, Images)\n \n-STATIC_URL = '/static/'\n+STATIC_URL = config('STATIC_URL', default='/static/')\n \n STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')\n", "issue": "Relative STATIC_URL in production broken email images\n\n", "before_files": [{"content": "import os\nimport dj_database_url\n\nfrom decouple import config\n\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = config('SECRET_KEY')\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = config('DEBUG', default=False, cast=bool)\n\nALLOWED_HOSTS = config(\n 'ALLOWED_HOSTS',\n default='*',\n cast=lambda v: [s.strip() for s in v.split(',')])\n\n\n# Application definition\n\nINSTALLED_APPS = [\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n 'rest_framework',\n 'rest_framework.authtoken',\n 'django_filters',\n 'corsheaders',\n 'bothub.authentication',\n 'bothub.common',\n 'bothub.api',\n]\n\nMIDDLEWARE = [\n 'django.middleware.security.SecurityMiddleware',\n 'whitenoise.middleware.WhiteNoiseMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'corsheaders.middleware.CorsMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n]\n\nROOT_URLCONF = 'bothub.urls'\n\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n ],\n },\n },\n]\n\nWSGI_APPLICATION = 'bothub.wsgi.application'\n\n\n# Database\n\nDATABASES = {}\nDATABASES['default'] = dj_database_url.parse(\n config(\n 'DEFAULT_DATABASE',\n default='sqlite:///db.sqlite3'))\n\n\n# Auth\n\nAUTH_USER_MODEL = 'authentication.User'\n\n\n# Password validation\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n 'NAME': 'django.contrib.auth.password_validation.' +\n 'UserAttributeSimilarityValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.' +\n 'MinimumLengthValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.' +\n 'CommonPasswordValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.' 
+\n 'NumericPasswordValidator',\n },\n]\n\n\n# Internationalization\n\nLANGUAGE_CODE = config('LANGUAGE_CODE', default='en-us')\n\nTIME_ZONE = config('TIME_ZONE', default='UTC')\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# Static files (CSS, JavaScript, Images)\n\nSTATIC_URL = '/static/'\n\nSTATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')\n\nSTATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'\n\n\n# rest framework\n\nREST_FRAMEWORK = {\n 'DEFAULT_AUTHENTICATION_CLASSES': [\n 'rest_framework.authentication.TokenAuthentication',\n ],\n 'DEFAULT_PAGINATION_CLASS': 'rest_framework.pagination.' +\n 'LimitOffsetPagination',\n 'PAGE_SIZE': 20,\n 'DEFAULT_FILTER_BACKENDS': [\n 'django_filters.rest_framework.DjangoFilterBackend',\n ],\n 'DEFAULT_METADATA_CLASS': 'bothub.api.metadata.Metadata',\n}\n\n\n# cors headers\n\nCORS_ORIGIN_ALLOW_ALL = True\nCORS_URLS_REGEX = r'^/api/.*$'\n\n\n# mail\n\nenvvar_EMAIL_HOST = config('EMAIL_HOST', default=None)\n\nADMINS = config(\n 'ADMINS',\n default='',\n cast=lambda v: [\n (\n s.strip().split('|')[0],\n s.strip().split('|')[1],\n ) for s in v.split(',')] if v else [])\nEMAIL_SUBJECT_PREFIX = '[bothub] '\nDEFAULT_FROM_EMAIL = config(\n 'DEFAULT_FROM_EMAIL',\n default='webmaster@localhost')\nSERVER_EMAIL = config('SERVER_EMAIL', default='root@localhost')\n\nif envvar_EMAIL_HOST:\n EMAIL_HOST = envvar_EMAIL_HOST\n EMAIL_PORT = config('EMAIL_PORT', default=25, cast=int)\n EMAIL_HOST_USER = config('EMAIL_HOST_USER', default='')\n EMAIL_HOST_PASSWORD = config('EMAIL_HOST_PASSWORD', default='')\n EMAIL_USE_SSL = config('EMAIL_USE_SSL', default=False, cast=bool)\n EMAIL_USE_TLS = config('EMAIL_USE_TLS', default=False, cast=bool)\nelse:\n EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'\n\n\n# webapp\n\nBOTHUB_WEBAPP_BASE_URL = config(\n 'BOTHUB_WEBAPP_BASE_URL',\n default='http://localhost:8080/')\n\n\n# NLP\n\nBOTHUB_NLP_BASE_URL = config(\n 'BOTHUB_NLP_BASE_URL',\n default='http://localhost:8001/')\n\n\n# CSRF\n\nCSRF_COOKIE_DOMAIN = config(\n 'CSRF_COOKIE_DOMAIN',\n default=None)\n\nCSRF_COOKIE_SECURE = config(\n 'CSRF_COOKIE_SECURE',\n default=False,\n cast=bool)\n", "path": "bothub/settings.py"}], "after_files": [{"content": "import os\nimport dj_database_url\n\nfrom decouple import config\n\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = config('SECRET_KEY')\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = config('DEBUG', default=False, cast=bool)\n\nALLOWED_HOSTS = config(\n 'ALLOWED_HOSTS',\n default='*',\n cast=lambda v: [s.strip() for s in v.split(',')])\n\n\n# Application definition\n\nINSTALLED_APPS = [\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n 'rest_framework',\n 'rest_framework.authtoken',\n 'django_filters',\n 'corsheaders',\n 'bothub.authentication',\n 'bothub.common',\n 'bothub.api',\n]\n\nMIDDLEWARE = [\n 'django.middleware.security.SecurityMiddleware',\n 'whitenoise.middleware.WhiteNoiseMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'corsheaders.middleware.CorsMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 
'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n]\n\nROOT_URLCONF = 'bothub.urls'\n\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n ],\n },\n },\n]\n\nWSGI_APPLICATION = 'bothub.wsgi.application'\n\n\n# Database\n\nDATABASES = {}\nDATABASES['default'] = dj_database_url.parse(\n config(\n 'DEFAULT_DATABASE',\n default='sqlite:///db.sqlite3'))\n\n\n# Auth\n\nAUTH_USER_MODEL = 'authentication.User'\n\n\n# Password validation\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n 'NAME': 'django.contrib.auth.password_validation.' +\n 'UserAttributeSimilarityValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.' +\n 'MinimumLengthValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.' +\n 'CommonPasswordValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.' +\n 'NumericPasswordValidator',\n },\n]\n\n\n# Internationalization\n\nLANGUAGE_CODE = config('LANGUAGE_CODE', default='en-us')\n\nTIME_ZONE = config('TIME_ZONE', default='UTC')\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# Static files (CSS, JavaScript, Images)\n\nSTATIC_URL = config('STATIC_URL', default='/static/')\n\nSTATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')\n\nSTATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'\n\n\n# rest framework\n\nREST_FRAMEWORK = {\n 'DEFAULT_AUTHENTICATION_CLASSES': [\n 'rest_framework.authentication.TokenAuthentication',\n ],\n 'DEFAULT_PAGINATION_CLASS': 'rest_framework.pagination.' +\n 'LimitOffsetPagination',\n 'PAGE_SIZE': 20,\n 'DEFAULT_FILTER_BACKENDS': [\n 'django_filters.rest_framework.DjangoFilterBackend',\n ],\n 'DEFAULT_METADATA_CLASS': 'bothub.api.metadata.Metadata',\n}\n\n\n# cors headers\n\nCORS_ORIGIN_ALLOW_ALL = True\nCORS_URLS_REGEX = r'^/api/.*$'\n\n\n# mail\n\nenvvar_EMAIL_HOST = config('EMAIL_HOST', default=None)\n\nADMINS = config(\n 'ADMINS',\n default='',\n cast=lambda v: [\n (\n s.strip().split('|')[0],\n s.strip().split('|')[1],\n ) for s in v.split(',')] if v else [])\nEMAIL_SUBJECT_PREFIX = '[bothub] '\nDEFAULT_FROM_EMAIL = config(\n 'DEFAULT_FROM_EMAIL',\n default='webmaster@localhost')\nSERVER_EMAIL = config('SERVER_EMAIL', default='root@localhost')\n\nif envvar_EMAIL_HOST:\n EMAIL_HOST = envvar_EMAIL_HOST\n EMAIL_PORT = config('EMAIL_PORT', default=25, cast=int)\n EMAIL_HOST_USER = config('EMAIL_HOST_USER', default='')\n EMAIL_HOST_PASSWORD = config('EMAIL_HOST_PASSWORD', default='')\n EMAIL_USE_SSL = config('EMAIL_USE_SSL', default=False, cast=bool)\n EMAIL_USE_TLS = config('EMAIL_USE_TLS', default=False, cast=bool)\nelse:\n EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'\n\n\n# webapp\n\nBOTHUB_WEBAPP_BASE_URL = config(\n 'BOTHUB_WEBAPP_BASE_URL',\n default='http://localhost:8080/')\n\n\n# NLP\n\nBOTHUB_NLP_BASE_URL = config(\n 'BOTHUB_NLP_BASE_URL',\n default='http://localhost:8001/')\n\n\n# CSRF\n\nCSRF_COOKIE_DOMAIN = config(\n 'CSRF_COOKIE_DOMAIN',\n default=None)\n\nCSRF_COOKIE_SECURE = config(\n 'CSRF_COOKIE_SECURE',\n default=False,\n cast=bool)\n", "path": "bothub/settings.py"}]}
| 1,941 | 92 |
gh_patches_debug_9911
|
rasdani/github-patches
|
git_diff
|
quantumlib__Cirq-3978
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Typos in wait_gate.py::wait
https://github.com/quantumlib/Cirq/blob/150f95c31042669ab9905654998a8432844a4209/cirq/ops/wait_gate.py#L140-L143
They all say picoseconds, but should say picos, nanos, micros, millis.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cirq/ops/wait_gate.py`
Content:
```
1 # Copyright 2019 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import AbstractSet, Any, Dict, Optional, Tuple, TYPE_CHECKING, Union
15
16 import sympy
17
18 from cirq import value, protocols
19 from cirq.ops import raw_types
20
21 if TYPE_CHECKING:
22 import cirq
23
24
25 @value.value_equality
26 class WaitGate(raw_types.Gate):
27 """A single-qubit idle gate that represents waiting.
28
29 In non-noisy simulators, this gate is just an identity gate. But noisy
30 simulators and noise models may insert more error for longer waits.
31 """
32
33 def __init__(
34 self,
35 duration: 'cirq.DURATION_LIKE',
36 num_qubits: Optional[int] = None,
37 qid_shape: Tuple[int, ...] = None,
38 ) -> None:
39 """Initialize a wait gate with the given duration.
40
41 Args:
42 duration: A constant or parameterized wait duration. This can be
43 an instance of `datetime.timedelta` or `cirq.Duration`.
44 """
45 self.duration = value.Duration(duration)
46 if not protocols.is_parameterized(self.duration) and self.duration < 0:
47 raise ValueError('duration < 0')
48 if qid_shape is None:
49 if num_qubits is None:
50 # Assume one qubit for backwards compatibility
51 qid_shape = (2,)
52 else:
53 qid_shape = (2,) * num_qubits
54 if num_qubits is None:
55 num_qubits = len(qid_shape)
56 if not qid_shape:
57 raise ValueError('Waiting on an empty set of qubits.')
58 if num_qubits != len(qid_shape):
59 raise ValueError('len(qid_shape) != num_qubits')
60 self._qid_shape = qid_shape
61
62 def _is_parameterized_(self) -> bool:
63 return protocols.is_parameterized(self.duration)
64
65 def _parameter_names_(self) -> AbstractSet[str]:
66 return protocols.parameter_names(self.duration)
67
68 def _resolve_parameters_(self, resolver: 'cirq.ParamResolver', recursive: bool) -> 'WaitGate':
69 return WaitGate(protocols.resolve_parameters(self.duration, resolver, recursive))
70
71 def _qid_shape_(self) -> Tuple[int, ...]:
72 return self._qid_shape
73
74 def _has_unitary_(self) -> bool:
75 return True
76
77 def _apply_unitary_(self, args):
78 return args.target_tensor # Identity.
79
80 def _decompose_(self, qubits):
81 return []
82
83 def _trace_distance_bound_(self):
84 return 0
85
86 def __pow__(self, power):
87 if power == 1 or power == -1:
88 # The inverse of a wait is still a wait.
89 return self
90 # Other scalar exponents could scale the wait... but ultimately it is
91 # ambiguous whether the user wanted to scale the duration or just wanted
92 # to affect the unitary. Play it safe and fail.
93 return NotImplemented
94
95 def __str__(self) -> str:
96 return f'WaitGate({self.duration})'
97
98 def __repr__(self) -> str:
99 return f'cirq.WaitGate({repr(self.duration)})'
100
101 def _json_dict_(self) -> Dict[str, Any]:
102 d = protocols.obj_to_dict_helper(self, ['duration'])
103 if len(self._qid_shape) != 1:
104 d['num_qubits'] = len(self._qid_shape)
105 if any(d != 2 for d in self._qid_shape):
106 d['qid_shape'] = self._qid_shape
107 return d
108
109 @classmethod
110 def _from_json_dict_(cls, duration, num_qubits=None, qid_shape=None, **kwargs):
111 return cls(
112 duration=duration,
113 num_qubits=num_qubits,
114 qid_shape=None if qid_shape is None else tuple(qid_shape),
115 )
116
117 def _value_equality_values_(self) -> Any:
118 return self.duration
119
120 def _quil_(self, qubits: Tuple['cirq.Qid', ...], formatter: 'cirq.QuilFormatter'):
121 return 'WAIT\n'
122
123
124 def wait(
125 *target: 'cirq.Qid',
126 duration: 'cirq.DURATION_LIKE' = None,
127 picos: Union[int, float, sympy.Basic] = 0,
128 nanos: Union[int, float, sympy.Basic] = 0,
129 micros: Union[int, float, sympy.Basic] = 0,
130 millis: Union[int, float, sympy.Basic] = 0,
131 ) -> raw_types.Operation:
132 """Creates a WaitGate applied to all the given qubits.
133
134 The duration can be specified as a DURATION_LIKE or using keyword args with
135 numbers in the appropriate units. See Duration for details.
136
137 Args:
138 *target: The qubits that should wait.
139 value: Wait duration (see Duration).
140 picos: Picoseconds to wait (see Duration).
141 nanos: Picoseconds to wait (see Duration).
142 micros: Picoseconds to wait (see Duration).
143 millis: Picoseconds to wait (see Duration).
144 """
145 return WaitGate(
146 duration=value.Duration(
147 duration,
148 picos=picos,
149 nanos=nanos,
150 micros=micros,
151 millis=millis,
152 ),
153 qid_shape=protocols.qid_shape(target),
154 ).on(*target)
155
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/cirq/ops/wait_gate.py b/cirq/ops/wait_gate.py
--- a/cirq/ops/wait_gate.py
+++ b/cirq/ops/wait_gate.py
@@ -138,9 +138,9 @@
*target: The qubits that should wait.
value: Wait duration (see Duration).
picos: Picoseconds to wait (see Duration).
- nanos: Picoseconds to wait (see Duration).
- micros: Picoseconds to wait (see Duration).
- millis: Picoseconds to wait (see Duration).
+ nanos: Nanoseconds to wait (see Duration).
+ micros: Microseconds to wait (see Duration).
+ millis: Milliseconds to wait (see Duration).
"""
return WaitGate(
duration=value.Duration(
|
{"golden_diff": "diff --git a/cirq/ops/wait_gate.py b/cirq/ops/wait_gate.py\n--- a/cirq/ops/wait_gate.py\n+++ b/cirq/ops/wait_gate.py\n@@ -138,9 +138,9 @@\n *target: The qubits that should wait.\n value: Wait duration (see Duration).\n picos: Picoseconds to wait (see Duration).\n- nanos: Picoseconds to wait (see Duration).\n- micros: Picoseconds to wait (see Duration).\n- millis: Picoseconds to wait (see Duration).\n+ nanos: Nanoseconds to wait (see Duration).\n+ micros: Microseconds to wait (see Duration).\n+ millis: Milliseconds to wait (see Duration).\n \"\"\"\n return WaitGate(\n duration=value.Duration(\n", "issue": "Typos in wait_gate.py::wait\nhttps://github.com/quantumlib/Cirq/blob/150f95c31042669ab9905654998a8432844a4209/cirq/ops/wait_gate.py#L140-L143\r\n\r\nThey all say picoseconds, but should say picos, nanos, micros, millis.\r\n\n", "before_files": [{"content": "# Copyright 2019 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import AbstractSet, Any, Dict, Optional, Tuple, TYPE_CHECKING, Union\n\nimport sympy\n\nfrom cirq import value, protocols\nfrom cirq.ops import raw_types\n\nif TYPE_CHECKING:\n import cirq\n\n\[email protected]_equality\nclass WaitGate(raw_types.Gate):\n \"\"\"A single-qubit idle gate that represents waiting.\n\n In non-noisy simulators, this gate is just an identity gate. But noisy\n simulators and noise models may insert more error for longer waits.\n \"\"\"\n\n def __init__(\n self,\n duration: 'cirq.DURATION_LIKE',\n num_qubits: Optional[int] = None,\n qid_shape: Tuple[int, ...] = None,\n ) -> None:\n \"\"\"Initialize a wait gate with the given duration.\n\n Args:\n duration: A constant or parameterized wait duration. 
This can be\n an instance of `datetime.timedelta` or `cirq.Duration`.\n \"\"\"\n self.duration = value.Duration(duration)\n if not protocols.is_parameterized(self.duration) and self.duration < 0:\n raise ValueError('duration < 0')\n if qid_shape is None:\n if num_qubits is None:\n # Assume one qubit for backwards compatibility\n qid_shape = (2,)\n else:\n qid_shape = (2,) * num_qubits\n if num_qubits is None:\n num_qubits = len(qid_shape)\n if not qid_shape:\n raise ValueError('Waiting on an empty set of qubits.')\n if num_qubits != len(qid_shape):\n raise ValueError('len(qid_shape) != num_qubits')\n self._qid_shape = qid_shape\n\n def _is_parameterized_(self) -> bool:\n return protocols.is_parameterized(self.duration)\n\n def _parameter_names_(self) -> AbstractSet[str]:\n return protocols.parameter_names(self.duration)\n\n def _resolve_parameters_(self, resolver: 'cirq.ParamResolver', recursive: bool) -> 'WaitGate':\n return WaitGate(protocols.resolve_parameters(self.duration, resolver, recursive))\n\n def _qid_shape_(self) -> Tuple[int, ...]:\n return self._qid_shape\n\n def _has_unitary_(self) -> bool:\n return True\n\n def _apply_unitary_(self, args):\n return args.target_tensor # Identity.\n\n def _decompose_(self, qubits):\n return []\n\n def _trace_distance_bound_(self):\n return 0\n\n def __pow__(self, power):\n if power == 1 or power == -1:\n # The inverse of a wait is still a wait.\n return self\n # Other scalar exponents could scale the wait... but ultimately it is\n # ambiguous whether the user wanted to scale the duration or just wanted\n # to affect the unitary. Play it safe and fail.\n return NotImplemented\n\n def __str__(self) -> str:\n return f'WaitGate({self.duration})'\n\n def __repr__(self) -> str:\n return f'cirq.WaitGate({repr(self.duration)})'\n\n def _json_dict_(self) -> Dict[str, Any]:\n d = protocols.obj_to_dict_helper(self, ['duration'])\n if len(self._qid_shape) != 1:\n d['num_qubits'] = len(self._qid_shape)\n if any(d != 2 for d in self._qid_shape):\n d['qid_shape'] = self._qid_shape\n return d\n\n @classmethod\n def _from_json_dict_(cls, duration, num_qubits=None, qid_shape=None, **kwargs):\n return cls(\n duration=duration,\n num_qubits=num_qubits,\n qid_shape=None if qid_shape is None else tuple(qid_shape),\n )\n\n def _value_equality_values_(self) -> Any:\n return self.duration\n\n def _quil_(self, qubits: Tuple['cirq.Qid', ...], formatter: 'cirq.QuilFormatter'):\n return 'WAIT\\n'\n\n\ndef wait(\n *target: 'cirq.Qid',\n duration: 'cirq.DURATION_LIKE' = None,\n picos: Union[int, float, sympy.Basic] = 0,\n nanos: Union[int, float, sympy.Basic] = 0,\n micros: Union[int, float, sympy.Basic] = 0,\n millis: Union[int, float, sympy.Basic] = 0,\n) -> raw_types.Operation:\n \"\"\"Creates a WaitGate applied to all the given qubits.\n\n The duration can be specified as a DURATION_LIKE or using keyword args with\n numbers in the appropriate units. 
See Duration for details.\n\n Args:\n *target: The qubits that should wait.\n value: Wait duration (see Duration).\n picos: Picoseconds to wait (see Duration).\n nanos: Picoseconds to wait (see Duration).\n micros: Picoseconds to wait (see Duration).\n millis: Picoseconds to wait (see Duration).\n \"\"\"\n return WaitGate(\n duration=value.Duration(\n duration,\n picos=picos,\n nanos=nanos,\n micros=micros,\n millis=millis,\n ),\n qid_shape=protocols.qid_shape(target),\n ).on(*target)\n", "path": "cirq/ops/wait_gate.py"}], "after_files": [{"content": "# Copyright 2019 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import AbstractSet, Any, Dict, Optional, Tuple, TYPE_CHECKING, Union\n\nimport sympy\n\nfrom cirq import value, protocols\nfrom cirq.ops import raw_types\n\nif TYPE_CHECKING:\n import cirq\n\n\[email protected]_equality\nclass WaitGate(raw_types.Gate):\n \"\"\"A single-qubit idle gate that represents waiting.\n\n In non-noisy simulators, this gate is just an identity gate. But noisy\n simulators and noise models may insert more error for longer waits.\n \"\"\"\n\n def __init__(\n self,\n duration: 'cirq.DURATION_LIKE',\n num_qubits: Optional[int] = None,\n qid_shape: Tuple[int, ...] = None,\n ) -> None:\n \"\"\"Initialize a wait gate with the given duration.\n\n Args:\n duration: A constant or parameterized wait duration. This can be\n an instance of `datetime.timedelta` or `cirq.Duration`.\n \"\"\"\n self.duration = value.Duration(duration)\n if not protocols.is_parameterized(self.duration) and self.duration < 0:\n raise ValueError('duration < 0')\n if qid_shape is None:\n if num_qubits is None:\n # Assume one qubit for backwards compatibility\n qid_shape = (2,)\n else:\n qid_shape = (2,) * num_qubits\n if num_qubits is None:\n num_qubits = len(qid_shape)\n if not qid_shape:\n raise ValueError('Waiting on an empty set of qubits.')\n if num_qubits != len(qid_shape):\n raise ValueError('len(qid_shape) != num_qubits')\n self._qid_shape = qid_shape\n\n def _is_parameterized_(self) -> bool:\n return protocols.is_parameterized(self.duration)\n\n def _parameter_names_(self) -> AbstractSet[str]:\n return protocols.parameter_names(self.duration)\n\n def _resolve_parameters_(self, resolver: 'cirq.ParamResolver', recursive: bool) -> 'WaitGate':\n return WaitGate(protocols.resolve_parameters(self.duration, resolver, recursive))\n\n def _qid_shape_(self) -> Tuple[int, ...]:\n return self._qid_shape\n\n def _has_unitary_(self) -> bool:\n return True\n\n def _apply_unitary_(self, args):\n return args.target_tensor # Identity.\n\n def _decompose_(self, qubits):\n return []\n\n def _trace_distance_bound_(self):\n return 0\n\n def __pow__(self, power):\n if power == 1 or power == -1:\n # The inverse of a wait is still a wait.\n return self\n # Other scalar exponents could scale the wait... but ultimately it is\n # ambiguous whether the user wanted to scale the duration or just wanted\n # to affect the unitary. 
Play it safe and fail.\n return NotImplemented\n\n def __str__(self) -> str:\n return f'WaitGate({self.duration})'\n\n def __repr__(self) -> str:\n return f'cirq.WaitGate({repr(self.duration)})'\n\n def _json_dict_(self) -> Dict[str, Any]:\n d = protocols.obj_to_dict_helper(self, ['duration'])\n if len(self._qid_shape) != 1:\n d['num_qubits'] = len(self._qid_shape)\n if any(d != 2 for d in self._qid_shape):\n d['qid_shape'] = self._qid_shape\n return d\n\n @classmethod\n def _from_json_dict_(cls, duration, num_qubits=None, qid_shape=None, **kwargs):\n return cls(\n duration=duration,\n num_qubits=num_qubits,\n qid_shape=None if qid_shape is None else tuple(qid_shape),\n )\n\n def _value_equality_values_(self) -> Any:\n return self.duration\n\n def _quil_(self, qubits: Tuple['cirq.Qid', ...], formatter: 'cirq.QuilFormatter'):\n return 'WAIT\\n'\n\n\ndef wait(\n *target: 'cirq.Qid',\n duration: 'cirq.DURATION_LIKE' = None,\n picos: Union[int, float, sympy.Basic] = 0,\n nanos: Union[int, float, sympy.Basic] = 0,\n micros: Union[int, float, sympy.Basic] = 0,\n millis: Union[int, float, sympy.Basic] = 0,\n) -> raw_types.Operation:\n \"\"\"Creates a WaitGate applied to all the given qubits.\n\n The duration can be specified as a DURATION_LIKE or using keyword args with\n numbers in the appropriate units. See Duration for details.\n\n Args:\n *target: The qubits that should wait.\n value: Wait duration (see Duration).\n picos: Picoseconds to wait (see Duration).\n nanos: Nanoseconds to wait (see Duration).\n micros: Microseconds to wait (see Duration).\n millis: Milliseconds to wait (see Duration).\n \"\"\"\n return WaitGate(\n duration=value.Duration(\n duration,\n picos=picos,\n nanos=nanos,\n micros=micros,\n millis=millis,\n ),\n qid_shape=protocols.qid_shape(target),\n ).on(*target)\n", "path": "cirq/ops/wait_gate.py"}]}
| 2,010 | 174 |
gh_patches_debug_2325
|
rasdani/github-patches
|
git_diff
|
chainer__chainer-256
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`concat` with the last axis fails on py3
Same problem in `concat` as #253
@ShigekiKarita reported this problem too. Thanks!
https://gist.github.com/ShigekiKarita/4293f886765a1ed4a144
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `chainer/functions/concat.py`
Content:
```
1 import numpy
2
3 from chainer import cuda
4 from chainer import function
5 from chainer.utils import type_check
6
7 _args = 'const float* x, float* y, int cdimx, int cdimy, int rdim, int coffset'
8 _preamble = '''
9 #define COPY(statement) \
10 int l = i / (rdim * cdimx); \
11 int c = i / rdim % cdimx + coffset; \
12 int r = i % rdim; \
13 int idx = r + rdim * (c + cdimy * l); \
14 statement;
15 '''
16
17
18 class Concat(function.Function):
19
20 """Concatenate multiple tensors towards specified axis."""
21
22 # concat along the channel dimension by default
23 def __init__(self, axis=1):
24 self.axis = axis
25
26 def check_type_forward(self, in_types):
27 type_check.expect(in_types.size() > 0)
28 type_check.expect(in_types[0].ndim >
29 type_check.Variable(self.axis, 'axis'))
30
31 ndim = in_types[0].ndim.eval()
32 for i in range(1, in_types.size().eval()):
33 type_check.expect(
34 in_types[0].dtype == in_types[i].dtype,
35 in_types[0].ndim == in_types[i].ndim,
36 )
37 for d in range(0, ndim):
38 if d == self.axis:
39 continue
40 type_check.expect(in_types[0].shape[d] == in_types[i].shape[d])
41
42 def check_type_backward(self, in_types, out_types):
43 type_check.expect(
44 in_types.size() > 0,
45 out_types.size() == 1,
46 )
47 y_type, = out_types
48
49 type_check.expect(y_type.dtype == in_types[0].dtype)
50 ndim = in_types[0].ndim.eval()
51 concat_size = sum(typ.shape[self.axis] for typ in in_types)
52 type_check.expect(concat_size == y_type.shape[self.axis])
53
54 for d in range(0, ndim):
55 if d == self.axis:
56 continue
57 type_check.expect(y_type.shape[d] == in_types[0].shape[d])
58
59 def forward_cpu(self, xs):
60 return numpy.concatenate(xs, axis=self.axis),
61
62 def forward_gpu(self, xs):
63 # TODO(beam2d): Unify the process into a single kernel.
64 shape = list(xs[0].shape)
65 for x in xs[1:]:
66 shape[self.axis] += x.shape[self.axis]
67 self.shape = shape
68
69 y = cuda.empty(shape, dtype=xs[0].dtype)
70 self.cdimy = y.shape[self.axis]
71 self.rdim = numpy.prod(shape[self.axis + 1:])
72
73 coffset = 0
74 kernel = cuda.elementwise(
75 _args, 'COPY(y[idx] = x[i])', 'concat_fwd', preamble=_preamble)
76 for x in xs:
77 cdimx = x.shape[self.axis]
78 kernel(x, y, cdimx, self.cdimy, self.rdim, coffset)
79 coffset += cdimx
80
81 return y,
82
83 def backward_cpu(self, xs, gy):
84 sizes = numpy.array([x.shape[self.axis] for x in xs[:-1]]).cumsum()
85 return numpy.split(gy[0], sizes, axis=self.axis)
86
87 def backward_gpu(self, xs, gy):
88 gxs = tuple(cuda.empty_like(x) for x in xs)
89
90 coffset = 0
91 kernel = cuda.elementwise(
92 _args, 'COPY(x[i] = y[idx])', 'concat_bwd', preamble=_preamble)
93 for gx in gxs:
94 cdimx = gx.shape[self.axis]
95 kernel(gx, gy[0], cdimx, self.cdimy, self.rdim, coffset)
96 coffset += cdimx
97
98 return gxs
99
100
101 def concat(xs, axis=1):
102 """Concatenates given variables along an axis.
103
104 Args:
105 xs (tuple of Variables): Variables to be concatenated.
106 axis (int): Axis that the input arrays are concatenated along.
107
108 Returns:
109 ~chainer.Variable: Output variable.
110
111 """
112 return Concat(axis=axis)(*xs)
113
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/chainer/functions/concat.py b/chainer/functions/concat.py
--- a/chainer/functions/concat.py
+++ b/chainer/functions/concat.py
@@ -68,7 +68,7 @@
y = cuda.empty(shape, dtype=xs[0].dtype)
self.cdimy = y.shape[self.axis]
- self.rdim = numpy.prod(shape[self.axis + 1:])
+ self.rdim = numpy.prod(shape[self.axis + 1:], dtype=int)
coffset = 0
kernel = cuda.elementwise(
|
{"golden_diff": "diff --git a/chainer/functions/concat.py b/chainer/functions/concat.py\n--- a/chainer/functions/concat.py\n+++ b/chainer/functions/concat.py\n@@ -68,7 +68,7 @@\n \n y = cuda.empty(shape, dtype=xs[0].dtype)\n self.cdimy = y.shape[self.axis]\n- self.rdim = numpy.prod(shape[self.axis + 1:])\n+ self.rdim = numpy.prod(shape[self.axis + 1:], dtype=int)\n \n coffset = 0\n kernel = cuda.elementwise(\n", "issue": "`concat` with the last axis fails on py3\nSame problem in `concat` as #253 \n\n@ShigekiKarita reported this problem too. Thanks!\nhttps://gist.github.com/ShigekiKarita/4293f886765a1ed4a144\n\n", "before_files": [{"content": "import numpy\n\nfrom chainer import cuda\nfrom chainer import function\nfrom chainer.utils import type_check\n\n_args = 'const float* x, float* y, int cdimx, int cdimy, int rdim, int coffset'\n_preamble = '''\n#define COPY(statement) \\\n int l = i / (rdim * cdimx); \\\n int c = i / rdim % cdimx + coffset; \\\n int r = i % rdim; \\\n int idx = r + rdim * (c + cdimy * l); \\\n statement;\n'''\n\n\nclass Concat(function.Function):\n\n \"\"\"Concatenate multiple tensors towards specified axis.\"\"\"\n\n # concat along the channel dimension by default\n def __init__(self, axis=1):\n self.axis = axis\n\n def check_type_forward(self, in_types):\n type_check.expect(in_types.size() > 0)\n type_check.expect(in_types[0].ndim >\n type_check.Variable(self.axis, 'axis'))\n\n ndim = in_types[0].ndim.eval()\n for i in range(1, in_types.size().eval()):\n type_check.expect(\n in_types[0].dtype == in_types[i].dtype,\n in_types[0].ndim == in_types[i].ndim,\n )\n for d in range(0, ndim):\n if d == self.axis:\n continue\n type_check.expect(in_types[0].shape[d] == in_types[i].shape[d])\n\n def check_type_backward(self, in_types, out_types):\n type_check.expect(\n in_types.size() > 0,\n out_types.size() == 1,\n )\n y_type, = out_types\n\n type_check.expect(y_type.dtype == in_types[0].dtype)\n ndim = in_types[0].ndim.eval()\n concat_size = sum(typ.shape[self.axis] for typ in in_types)\n type_check.expect(concat_size == y_type.shape[self.axis])\n\n for d in range(0, ndim):\n if d == self.axis:\n continue\n type_check.expect(y_type.shape[d] == in_types[0].shape[d])\n\n def forward_cpu(self, xs):\n return numpy.concatenate(xs, axis=self.axis),\n\n def forward_gpu(self, xs):\n # TODO(beam2d): Unify the process into a single kernel.\n shape = list(xs[0].shape)\n for x in xs[1:]:\n shape[self.axis] += x.shape[self.axis]\n self.shape = shape\n\n y = cuda.empty(shape, dtype=xs[0].dtype)\n self.cdimy = y.shape[self.axis]\n self.rdim = numpy.prod(shape[self.axis + 1:])\n\n coffset = 0\n kernel = cuda.elementwise(\n _args, 'COPY(y[idx] = x[i])', 'concat_fwd', preamble=_preamble)\n for x in xs:\n cdimx = x.shape[self.axis]\n kernel(x, y, cdimx, self.cdimy, self.rdim, coffset)\n coffset += cdimx\n\n return y,\n\n def backward_cpu(self, xs, gy):\n sizes = numpy.array([x.shape[self.axis] for x in xs[:-1]]).cumsum()\n return numpy.split(gy[0], sizes, axis=self.axis)\n\n def backward_gpu(self, xs, gy):\n gxs = tuple(cuda.empty_like(x) for x in xs)\n\n coffset = 0\n kernel = cuda.elementwise(\n _args, 'COPY(x[i] = y[idx])', 'concat_bwd', preamble=_preamble)\n for gx in gxs:\n cdimx = gx.shape[self.axis]\n kernel(gx, gy[0], cdimx, self.cdimy, self.rdim, coffset)\n coffset += cdimx\n\n return gxs\n\n\ndef concat(xs, axis=1):\n \"\"\"Concatenates given variables along an axis.\n\n Args:\n xs (tuple of Variables): Variables to be concatenated.\n axis (int): Axis that the input arrays are 
concatenated along.\n\n Returns:\n ~chainer.Variable: Output variable.\n\n \"\"\"\n return Concat(axis=axis)(*xs)\n", "path": "chainer/functions/concat.py"}], "after_files": [{"content": "import numpy\n\nfrom chainer import cuda\nfrom chainer import function\nfrom chainer.utils import type_check\n\n_args = 'const float* x, float* y, int cdimx, int cdimy, int rdim, int coffset'\n_preamble = '''\n#define COPY(statement) \\\n int l = i / (rdim * cdimx); \\\n int c = i / rdim % cdimx + coffset; \\\n int r = i % rdim; \\\n int idx = r + rdim * (c + cdimy * l); \\\n statement;\n'''\n\n\nclass Concat(function.Function):\n\n \"\"\"Concatenate multiple tensors towards specified axis.\"\"\"\n\n # concat along the channel dimension by default\n def __init__(self, axis=1):\n self.axis = axis\n\n def check_type_forward(self, in_types):\n type_check.expect(in_types.size() > 0)\n type_check.expect(in_types[0].ndim >\n type_check.Variable(self.axis, 'axis'))\n\n ndim = in_types[0].ndim.eval()\n for i in range(1, in_types.size().eval()):\n type_check.expect(\n in_types[0].dtype == in_types[i].dtype,\n in_types[0].ndim == in_types[i].ndim,\n )\n for d in range(0, ndim):\n if d == self.axis:\n continue\n type_check.expect(in_types[0].shape[d] == in_types[i].shape[d])\n\n def check_type_backward(self, in_types, out_types):\n type_check.expect(\n in_types.size() > 0,\n out_types.size() == 1,\n )\n y_type, = out_types\n\n type_check.expect(y_type.dtype == in_types[0].dtype)\n ndim = in_types[0].ndim.eval()\n concat_size = sum(typ.shape[self.axis] for typ in in_types)\n type_check.expect(concat_size == y_type.shape[self.axis])\n\n for d in range(0, ndim):\n if d == self.axis:\n continue\n type_check.expect(y_type.shape[d] == in_types[0].shape[d])\n\n def forward_cpu(self, xs):\n return numpy.concatenate(xs, axis=self.axis),\n\n def forward_gpu(self, xs):\n # TODO(beam2d): Unify the process into a single kernel.\n shape = list(xs[0].shape)\n for x in xs[1:]:\n shape[self.axis] += x.shape[self.axis]\n self.shape = shape\n\n y = cuda.empty(shape, dtype=xs[0].dtype)\n self.cdimy = y.shape[self.axis]\n self.rdim = numpy.prod(shape[self.axis + 1:], dtype=int)\n\n coffset = 0\n kernel = cuda.elementwise(\n _args, 'COPY(y[idx] = x[i])', 'concat_fwd', preamble=_preamble)\n for x in xs:\n cdimx = x.shape[self.axis]\n kernel(x, y, cdimx, self.cdimy, self.rdim, coffset)\n coffset += cdimx\n\n return y,\n\n def backward_cpu(self, xs, gy):\n sizes = numpy.array([x.shape[self.axis] for x in xs[:-1]]).cumsum()\n return numpy.split(gy[0], sizes, axis=self.axis)\n\n def backward_gpu(self, xs, gy):\n gxs = tuple(cuda.empty_like(x) for x in xs)\n\n coffset = 0\n kernel = cuda.elementwise(\n _args, 'COPY(x[i] = y[idx])', 'concat_bwd', preamble=_preamble)\n for gx in gxs:\n cdimx = gx.shape[self.axis]\n kernel(gx, gy[0], cdimx, self.cdimy, self.rdim, coffset)\n coffset += cdimx\n\n return gxs\n\n\ndef concat(xs, axis=1):\n \"\"\"Concatenates given variables along an axis.\n\n Args:\n xs (tuple of Variables): Variables to be concatenated.\n axis (int): Axis that the input arrays are concatenated along.\n\n Returns:\n ~chainer.Variable: Output variable.\n\n \"\"\"\n return Concat(axis=axis)(*xs)\n", "path": "chainer/functions/concat.py"}]}
| 1,495 | 123 |
gh_patches_debug_12504
|
rasdani/github-patches
|
git_diff
|
WeblateOrg__weblate-9990
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
docker: Enable WEBLATE_GITLAB_CREDENTIALS environment variable
### Describe the problem
Right now it seems I can use gitlab_username and gitlab_token variables. But when I try to use gitlab_credentials:
> WEBLATE_GITLAB_CREDENTIALS: "git.duniter.org": {username: weblate,token: XXXXXXXXXXXXXXX}
I get this error:
> ERROR: yaml.scanner.ScannerError: mapping values are not allowed here
> in "./docker-compose.override.yml", line 17, column 52
### Describe the solution you'd like
Add weblate_gitlab_credentials support
### Describe alternatives you've considered
_No response_
### Screenshots
_No response_
### Additional context
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `weblate/utils/environment.py`
Content:
```
1 # Copyright © Michal Čihař <[email protected]>
2 #
3 # SPDX-License-Identifier: GPL-3.0-or-later
4
5 from __future__ import annotations
6
7 import os
8
9
10 def get_env_str(
11 name: str,
12 default: str | None = None,
13 required: bool = False,
14 fallback_name: str | None = None,
15 ) -> str:
16 file_env = f"{name}_FILE"
17 if filename := os.environ.get(file_env):
18 try:
19 with open(filename) as handle:
20 result = handle.read()
21 except OSError as error:
22 raise ValueError(
23 f"Failed to open {filename} as specified by {file_env}: {error}"
24 ) from error
25 else:
26 if fallback_name and name not in os.environ:
27 name = fallback_name
28 result = os.environ.get(name, default)
29 if required and not result:
30 raise ValueError(f"{name} has to be configured!")
31 return result
32
33
34 def get_env_list(name: str, default: list[str] | None = None) -> list[str]:
35 """Helper to get list from environment."""
36 if name not in os.environ:
37 return default or []
38 return os.environ[name].split(",")
39
40
41 def get_env_map(name: str, default: dict[str, str] | None = None) -> dict[str, str]:
42 """
43 Helper to get mapping from environment.
44
45 parses 'full_name:name,email:mail' into {'email': 'mail', 'full_name': 'name'}
46 """
47 if os.environ.get(name):
48 return dict(e.split(":") for e in os.environ[name].split(","))
49 return default or {}
50
51
52 def get_env_int(name: str, default: int = 0) -> int:
53 """Helper to get integer value from environment."""
54 if name not in os.environ:
55 return default
56 try:
57 return int(os.environ[name])
58 except ValueError as error:
59 raise ValueError(f"{name} is not an integer: {error}") from error
60
61
62 def get_env_float(name: str, default: float = 0.0) -> float:
63 """Helper to get float value from environment."""
64 if name not in os.environ:
65 return default
66 try:
67 return float(os.environ[name])
68 except ValueError as error:
69 raise ValueError(f"{name} is not an float: {error}") from error
70
71
72 def get_env_bool(name: str, default: bool = False) -> bool:
73 """Helper to get boolean value from environment."""
74 if name not in os.environ:
75 return default
76 true_values = {"true", "yes", "1"}
77 return os.environ[name].lower() in true_values
78
79
80 def modify_env_list(current: list[str], name: str) -> list[str]:
81 """Helper to modify list (for example checks)."""
82 for item in reversed(get_env_list(f"WEBLATE_ADD_{name}")):
83 current.insert(0, item)
84 for item in get_env_list(f"WEBLATE_REMOVE_{name}"):
85 current.remove(item)
86 return current
87
88
89 def get_env_credentials(
90 name: str,
91 ) -> dict[str, dict[str, str]]:
92 """Parses VCS integration credentials."""
93 username = os.environ.get(f"WEBLATE_{name}_USERNAME")
94 token = os.environ.get(f"WEBLATE_{name}_TOKEN")
95 host = os.environ.get(f"WEBLATE_{name}_HOST")
96
97 if not host and (username or token):
98 raise ValueError(
99 f"Incomplete {name}_CREDENTIALS configuration: missing WEBLATE_{name}_HOST"
100 )
101 return {host: {"username": username, "token": token}}
102
103
104 def get_env_ratelimit(name: str, default: str) -> str:
105 value = os.environ.get(name, default)
106
107 # Taken from rest_framework.throttling.SimpleRateThrottle.parse_rate
108 # it can not be imported here as that breaks config loading for
109 # rest_framework
110
111 try:
112 num, period = value.split("/")
113 except ValueError as error:
114 raise ValueError(f"Could not parse {name}: {error}") from error
115 if not num.isdigit():
116 raise ValueError(f"Could not parse {name}: rate is not numeric: {num}")
117 if period[0] not in ("s", "m", "h", "d"):
118 raise ValueError(f"Could not parse {name}: unknown period: {period}")
119
120 return value
121
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/weblate/utils/environment.py b/weblate/utils/environment.py
--- a/weblate/utils/environment.py
+++ b/weblate/utils/environment.py
@@ -4,6 +4,7 @@
from __future__ import annotations
+import ast
import os
@@ -90,6 +91,8 @@
name: str,
) -> dict[str, dict[str, str]]:
"""Parses VCS integration credentials."""
+ if credentials := get_env_str(f"WEBLATE_{name}_CREDENTIALS"):
+ return ast.literal_eval(credentials)
username = os.environ.get(f"WEBLATE_{name}_USERNAME")
token = os.environ.get(f"WEBLATE_{name}_TOKEN")
host = os.environ.get(f"WEBLATE_{name}_HOST")
|
{"golden_diff": "diff --git a/weblate/utils/environment.py b/weblate/utils/environment.py\n--- a/weblate/utils/environment.py\n+++ b/weblate/utils/environment.py\n@@ -4,6 +4,7 @@\n \n from __future__ import annotations\n \n+import ast\n import os\n \n \n@@ -90,6 +91,8 @@\n name: str,\n ) -> dict[str, dict[str, str]]:\n \"\"\"Parses VCS integration credentials.\"\"\"\n+ if credentials := get_env_str(f\"WEBLATE_{name}_CREDENTIALS\"):\n+ return ast.literal_eval(credentials)\n username = os.environ.get(f\"WEBLATE_{name}_USERNAME\")\n token = os.environ.get(f\"WEBLATE_{name}_TOKEN\")\n host = os.environ.get(f\"WEBLATE_{name}_HOST\")\n", "issue": "docker: Enable WEBLATE_GITLAB_CREDENTIALS environment variable\n### Describe the problem\n\nRight now it seems I can use gitlab_username and gitlab_token variables. But when I try to use gitlab_credentials:\r\n\r\n> WEBLATE_GITLAB_CREDENTIALS: \"git.duniter.org\": {username: weblate,token: XXXXXXXXXXXXXXX}\r\n\r\nI get this error:\r\n\r\n> ERROR: yaml.scanner.ScannerError: mapping values are not allowed here\r\n> in \"./docker-compose.override.yml\", line 17, column 52\r\n\n\n### Describe the solution you'd like\n\nAdd weblate_gitlab_credentials support\n\n### Describe alternatives you've considered\n\n_No response_\n\n### Screenshots\n\n_No response_\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "# Copyright \u00a9 Michal \u010ciha\u0159 <[email protected]>\n#\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nfrom __future__ import annotations\n\nimport os\n\n\ndef get_env_str(\n name: str,\n default: str | None = None,\n required: bool = False,\n fallback_name: str | None = None,\n) -> str:\n file_env = f\"{name}_FILE\"\n if filename := os.environ.get(file_env):\n try:\n with open(filename) as handle:\n result = handle.read()\n except OSError as error:\n raise ValueError(\n f\"Failed to open {filename} as specified by {file_env}: {error}\"\n ) from error\n else:\n if fallback_name and name not in os.environ:\n name = fallback_name\n result = os.environ.get(name, default)\n if required and not result:\n raise ValueError(f\"{name} has to be configured!\")\n return result\n\n\ndef get_env_list(name: str, default: list[str] | None = None) -> list[str]:\n \"\"\"Helper to get list from environment.\"\"\"\n if name not in os.environ:\n return default or []\n return os.environ[name].split(\",\")\n\n\ndef get_env_map(name: str, default: dict[str, str] | None = None) -> dict[str, str]:\n \"\"\"\n Helper to get mapping from environment.\n\n parses 'full_name:name,email:mail' into {'email': 'mail', 'full_name': 'name'}\n \"\"\"\n if os.environ.get(name):\n return dict(e.split(\":\") for e in os.environ[name].split(\",\"))\n return default or {}\n\n\ndef get_env_int(name: str, default: int = 0) -> int:\n \"\"\"Helper to get integer value from environment.\"\"\"\n if name not in os.environ:\n return default\n try:\n return int(os.environ[name])\n except ValueError as error:\n raise ValueError(f\"{name} is not an integer: {error}\") from error\n\n\ndef get_env_float(name: str, default: float = 0.0) -> float:\n \"\"\"Helper to get float value from environment.\"\"\"\n if name not in os.environ:\n return default\n try:\n return float(os.environ[name])\n except ValueError as error:\n raise ValueError(f\"{name} is not an float: {error}\") from error\n\n\ndef get_env_bool(name: str, default: bool = False) -> bool:\n \"\"\"Helper to get boolean value from environment.\"\"\"\n if name not in os.environ:\n return default\n true_values = 
{\"true\", \"yes\", \"1\"}\n return os.environ[name].lower() in true_values\n\n\ndef modify_env_list(current: list[str], name: str) -> list[str]:\n \"\"\"Helper to modify list (for example checks).\"\"\"\n for item in reversed(get_env_list(f\"WEBLATE_ADD_{name}\")):\n current.insert(0, item)\n for item in get_env_list(f\"WEBLATE_REMOVE_{name}\"):\n current.remove(item)\n return current\n\n\ndef get_env_credentials(\n name: str,\n) -> dict[str, dict[str, str]]:\n \"\"\"Parses VCS integration credentials.\"\"\"\n username = os.environ.get(f\"WEBLATE_{name}_USERNAME\")\n token = os.environ.get(f\"WEBLATE_{name}_TOKEN\")\n host = os.environ.get(f\"WEBLATE_{name}_HOST\")\n\n if not host and (username or token):\n raise ValueError(\n f\"Incomplete {name}_CREDENTIALS configuration: missing WEBLATE_{name}_HOST\"\n )\n return {host: {\"username\": username, \"token\": token}}\n\n\ndef get_env_ratelimit(name: str, default: str) -> str:\n value = os.environ.get(name, default)\n\n # Taken from rest_framework.throttling.SimpleRateThrottle.parse_rate\n # it can not be imported here as that breaks config loading for\n # rest_framework\n\n try:\n num, period = value.split(\"/\")\n except ValueError as error:\n raise ValueError(f\"Could not parse {name}: {error}\") from error\n if not num.isdigit():\n raise ValueError(f\"Could not parse {name}: rate is not numeric: {num}\")\n if period[0] not in (\"s\", \"m\", \"h\", \"d\"):\n raise ValueError(f\"Could not parse {name}: unknown period: {period}\")\n\n return value\n", "path": "weblate/utils/environment.py"}], "after_files": [{"content": "# Copyright \u00a9 Michal \u010ciha\u0159 <[email protected]>\n#\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nfrom __future__ import annotations\n\nimport ast\nimport os\n\n\ndef get_env_str(\n name: str,\n default: str | None = None,\n required: bool = False,\n fallback_name: str | None = None,\n) -> str:\n file_env = f\"{name}_FILE\"\n if filename := os.environ.get(file_env):\n try:\n with open(filename) as handle:\n result = handle.read()\n except OSError as error:\n raise ValueError(\n f\"Failed to open {filename} as specified by {file_env}: {error}\"\n ) from error\n else:\n if fallback_name and name not in os.environ:\n name = fallback_name\n result = os.environ.get(name, default)\n if required and not result:\n raise ValueError(f\"{name} has to be configured!\")\n return result\n\n\ndef get_env_list(name: str, default: list[str] | None = None) -> list[str]:\n \"\"\"Helper to get list from environment.\"\"\"\n if name not in os.environ:\n return default or []\n return os.environ[name].split(\",\")\n\n\ndef get_env_map(name: str, default: dict[str, str] | None = None) -> dict[str, str]:\n \"\"\"\n Helper to get mapping from environment.\n\n parses 'full_name:name,email:mail' into {'email': 'mail', 'full_name': 'name'}\n \"\"\"\n if os.environ.get(name):\n return dict(e.split(\":\") for e in os.environ[name].split(\",\"))\n return default or {}\n\n\ndef get_env_int(name: str, default: int = 0) -> int:\n \"\"\"Helper to get integer value from environment.\"\"\"\n if name not in os.environ:\n return default\n try:\n return int(os.environ[name])\n except ValueError as error:\n raise ValueError(f\"{name} is not an integer: {error}\") from error\n\n\ndef get_env_float(name: str, default: float = 0.0) -> float:\n \"\"\"Helper to get float value from environment.\"\"\"\n if name not in os.environ:\n return default\n try:\n return float(os.environ[name])\n except ValueError as error:\n raise ValueError(f\"{name} is not an 
float: {error}\") from error\n\n\ndef get_env_bool(name: str, default: bool = False) -> bool:\n \"\"\"Helper to get boolean value from environment.\"\"\"\n if name not in os.environ:\n return default\n true_values = {\"true\", \"yes\", \"1\"}\n return os.environ[name].lower() in true_values\n\n\ndef modify_env_list(current: list[str], name: str) -> list[str]:\n \"\"\"Helper to modify list (for example checks).\"\"\"\n for item in reversed(get_env_list(f\"WEBLATE_ADD_{name}\")):\n current.insert(0, item)\n for item in get_env_list(f\"WEBLATE_REMOVE_{name}\"):\n current.remove(item)\n return current\n\n\ndef get_env_credentials(\n name: str,\n) -> dict[str, dict[str, str]]:\n \"\"\"Parses VCS integration credentials.\"\"\"\n if credentials := get_env_str(f\"WEBLATE_{name}_CREDENTIALS\"):\n return ast.literal_eval(credentials)\n username = os.environ.get(f\"WEBLATE_{name}_USERNAME\")\n token = os.environ.get(f\"WEBLATE_{name}_TOKEN\")\n host = os.environ.get(f\"WEBLATE_{name}_HOST\")\n\n if not host and (username or token):\n raise ValueError(\n f\"Incomplete {name}_CREDENTIALS configuration: missing WEBLATE_{name}_HOST\"\n )\n return {host: {\"username\": username, \"token\": token}}\n\n\ndef get_env_ratelimit(name: str, default: str) -> str:\n value = os.environ.get(name, default)\n\n # Taken from rest_framework.throttling.SimpleRateThrottle.parse_rate\n # it can not be imported here as that breaks config loading for\n # rest_framework\n\n try:\n num, period = value.split(\"/\")\n except ValueError as error:\n raise ValueError(f\"Could not parse {name}: {error}\") from error\n if not num.isdigit():\n raise ValueError(f\"Could not parse {name}: rate is not numeric: {num}\")\n if period[0] not in (\"s\", \"m\", \"h\", \"d\"):\n raise ValueError(f\"Could not parse {name}: unknown period: {period}\")\n\n return value\n", "path": "weblate/utils/environment.py"}]}
| 1,641 | 175 |
gh_patches_debug_11791
|
rasdani/github-patches
|
git_diff
|
streamlink__streamlink-1027
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ine.py for source in data["playlist"][0]["sources"]: TypeError: 'NoneType' object is not subscriptable
Hi, INE plugin is failing since recently:
```
$ streamlink -o ./streamlink.mp4 https://streaming.ine.com/play/1cfbc029-dd6d-4646-80b9-7316e3ac121a/introduction 720p --http-cookie laravel_session=removed
[cli][info] Found matching plugin ine for URL https://streaming.ine.com/play/1cfbc029-dd6d-4646-80b9-7316e3ac121a/introduction
Traceback (most recent call last):
File "/opt/local/Library/Frameworks/Python.framework/Versions/Current/bin/streamlink", line 11, in <module>
load_entry_point('streamlink==0.6.0', 'console_scripts', 'streamlink')()
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/streamlink_cli/main.py", line 1027, in main
handle_url()
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/streamlink_cli/main.py", line 482, in handle_url
streams = fetch_streams(plugin)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/streamlink_cli/main.py", line 394, in fetch_streams
sorting_excludes=args.stream_sorting_excludes)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/streamlink/plugin/plugin.py", line 328, in get_streams
return self.streams(*args, **kwargs)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/streamlink/plugin/plugin.py", line 236, in streams
ostreams = self._get_streams()
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/streamlink/plugins/ine.py", line 50, in _get_streams
for source in data["playlist"][0]["sources"]:
TypeError: 'NoneType' object is not subscriptable
$
$ python --version
Python 3.5.3
$ streamlink --version
streamlink 0.6.0
$ streamlink --version-check
[cli][info] Your Streamlink version (0.6) is up to date!
$
```
Same error on mac OS and Windows.
This particular URL was 'downloadable' with no problem about a month ago or so.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/ine.py`
Content:
```
1 from __future__ import print_function
2
3 import json
4 import re
5
6 from streamlink.plugin import Plugin
7 from streamlink.plugin.api import http
8 from streamlink.plugin.api import validate
9 from streamlink.stream import HLSStream
10
11
12 class INE(Plugin):
13 url_re = re.compile(r"""https://streaming.ine.com/play\#?/
14 ([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})/?
15 (.*?)""", re.VERBOSE)
16 play_url = "https://streaming.ine.com/play/{vid}/watch"
17 js_re = re.compile(r'''script type="text/javascript" src="(https://content.jwplatform.com/players/.*?)"''')
18 jwplayer_re = re.compile(r'''jwplayer\(".*?"\).setup\((\{.*\})\);''', re.DOTALL)
19 setup_schema = validate.Schema(
20 validate.transform(jwplayer_re.search),
21 validate.any(
22 None,
23 validate.all(
24 validate.get(1),
25 validate.transform(json.loads),
26 {"playlist": [
27 {"sources": [{"file": validate.text,
28 "type": validate.text}]}
29 ]}
30 )
31 )
32 )
33
34 @classmethod
35 def can_handle_url(cls, url):
36 return cls.url_re.match(url) is not None
37
38 def _get_streams(self):
39 vid = self.url_re.match(self.url).group(1)
40 self.logger.debug("Found video ID: {0}", vid)
41
42 page = http.get(self.play_url.format(vid=vid))
43 js_url_m = self.js_re.search(page.text)
44 if js_url_m:
45 js_url = js_url_m.group(1)
46 self.logger.debug("Loading player JS: {0}", js_url)
47
48 res = http.get(js_url)
49 data = self.setup_schema.validate(res.text)
50 for source in data["playlist"][0]["sources"]:
51 if source["type"] == "hls":
52 return HLSStream.parse_variant_playlist(self.session, "https:" + source["file"])
53
54
55 __plugin__ = INE
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/streamlink/plugins/ine.py b/src/streamlink/plugins/ine.py
--- a/src/streamlink/plugins/ine.py
+++ b/src/streamlink/plugins/ine.py
@@ -15,7 +15,7 @@
(.*?)""", re.VERBOSE)
play_url = "https://streaming.ine.com/play/{vid}/watch"
js_re = re.compile(r'''script type="text/javascript" src="(https://content.jwplatform.com/players/.*?)"''')
- jwplayer_re = re.compile(r'''jwplayer\(".*?"\).setup\((\{.*\})\);''', re.DOTALL)
+ jwplayer_re = re.compile(r'''jwConfig\s*=\s*(\{.*\});''', re.DOTALL)
setup_schema = validate.Schema(
validate.transform(jwplayer_re.search),
validate.any(
|
{"golden_diff": "diff --git a/src/streamlink/plugins/ine.py b/src/streamlink/plugins/ine.py\n--- a/src/streamlink/plugins/ine.py\n+++ b/src/streamlink/plugins/ine.py\n@@ -15,7 +15,7 @@\n (.*?)\"\"\", re.VERBOSE)\n play_url = \"https://streaming.ine.com/play/{vid}/watch\"\n js_re = re.compile(r'''script type=\"text/javascript\" src=\"(https://content.jwplatform.com/players/.*?)\"''')\n- jwplayer_re = re.compile(r'''jwplayer\\(\".*?\"\\).setup\\((\\{.*\\})\\);''', re.DOTALL)\n+ jwplayer_re = re.compile(r'''jwConfig\\s*=\\s*(\\{.*\\});''', re.DOTALL)\n setup_schema = validate.Schema(\n validate.transform(jwplayer_re.search),\n validate.any(\n", "issue": "ine.py for source in data[\"playlist\"][0][\"sources\"]: TypeError: 'NoneType' object is not subscriptable\nHi, INE plugin is failing since recently:\r\n\r\n```\r\n$ streamlink -o ./streamlink.mp4 https://streaming.ine.com/play/1cfbc029-dd6d-4646-80b9-7316e3ac121a/introduction 720p --http-cookie laravel_session=removed\r\n[cli][info] Found matching plugin ine for URL https://streaming.ine.com/play/1cfbc029-dd6d-4646-80b9-7316e3ac121a/introduction\r\nTraceback (most recent call last):\r\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/Current/bin/streamlink\", line 11, in <module>\r\n load_entry_point('streamlink==0.6.0', 'console_scripts', 'streamlink')()\r\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/streamlink_cli/main.py\", line 1027, in main\r\n handle_url()\r\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/streamlink_cli/main.py\", line 482, in handle_url\r\n streams = fetch_streams(plugin)\r\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/streamlink_cli/main.py\", line 394, in fetch_streams\r\n sorting_excludes=args.stream_sorting_excludes)\r\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/streamlink/plugin/plugin.py\", line 328, in get_streams\r\n return self.streams(*args, **kwargs)\r\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/streamlink/plugin/plugin.py\", line 236, in streams\r\n ostreams = self._get_streams()\r\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/streamlink/plugins/ine.py\", line 50, in _get_streams\r\n for source in data[\"playlist\"][0][\"sources\"]:\r\nTypeError: 'NoneType' object is not subscriptable\r\n$ \r\n$ python --version\r\nPython 3.5.3\r\n$ streamlink --version\r\nstreamlink 0.6.0\r\n$ streamlink --version-check\r\n[cli][info] Your Streamlink version (0.6) is up to date!\r\n$\r\n```\r\nSame error on mac OS and Windows.\r\nThis particular URL was 'downloadable' with no problem about a month ago or so.\n", "before_files": [{"content": "from __future__ import print_function\n\nimport json\nimport re\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream import HLSStream\n\n\nclass INE(Plugin):\n url_re = re.compile(r\"\"\"https://streaming.ine.com/play\\#?/\n ([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})/?\n (.*?)\"\"\", re.VERBOSE)\n play_url = \"https://streaming.ine.com/play/{vid}/watch\"\n js_re = re.compile(r'''script type=\"text/javascript\" src=\"(https://content.jwplatform.com/players/.*?)\"''')\n jwplayer_re = re.compile(r'''jwplayer\\(\".*?\"\\).setup\\((\\{.*\\})\\);''', 
re.DOTALL)\n setup_schema = validate.Schema(\n validate.transform(jwplayer_re.search),\n validate.any(\n None,\n validate.all(\n validate.get(1),\n validate.transform(json.loads),\n {\"playlist\": [\n {\"sources\": [{\"file\": validate.text,\n \"type\": validate.text}]}\n ]}\n )\n )\n )\n\n @classmethod\n def can_handle_url(cls, url):\n return cls.url_re.match(url) is not None\n\n def _get_streams(self):\n vid = self.url_re.match(self.url).group(1)\n self.logger.debug(\"Found video ID: {0}\", vid)\n\n page = http.get(self.play_url.format(vid=vid))\n js_url_m = self.js_re.search(page.text)\n if js_url_m:\n js_url = js_url_m.group(1)\n self.logger.debug(\"Loading player JS: {0}\", js_url)\n\n res = http.get(js_url)\n data = self.setup_schema.validate(res.text)\n for source in data[\"playlist\"][0][\"sources\"]:\n if source[\"type\"] == \"hls\":\n return HLSStream.parse_variant_playlist(self.session, \"https:\" + source[\"file\"])\n\n\n__plugin__ = INE\n", "path": "src/streamlink/plugins/ine.py"}], "after_files": [{"content": "from __future__ import print_function\n\nimport json\nimport re\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream import HLSStream\n\n\nclass INE(Plugin):\n url_re = re.compile(r\"\"\"https://streaming.ine.com/play\\#?/\n ([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})/?\n (.*?)\"\"\", re.VERBOSE)\n play_url = \"https://streaming.ine.com/play/{vid}/watch\"\n js_re = re.compile(r'''script type=\"text/javascript\" src=\"(https://content.jwplatform.com/players/.*?)\"''')\n jwplayer_re = re.compile(r'''jwConfig\\s*=\\s*(\\{.*\\});''', re.DOTALL)\n setup_schema = validate.Schema(\n validate.transform(jwplayer_re.search),\n validate.any(\n None,\n validate.all(\n validate.get(1),\n validate.transform(json.loads),\n {\"playlist\": [\n {\"sources\": [{\"file\": validate.text,\n \"type\": validate.text}]}\n ]}\n )\n )\n )\n\n @classmethod\n def can_handle_url(cls, url):\n return cls.url_re.match(url) is not None\n\n def _get_streams(self):\n vid = self.url_re.match(self.url).group(1)\n self.logger.debug(\"Found video ID: {0}\", vid)\n\n page = http.get(self.play_url.format(vid=vid))\n js_url_m = self.js_re.search(page.text)\n if js_url_m:\n js_url = js_url_m.group(1)\n self.logger.debug(\"Loading player JS: {0}\", js_url)\n\n res = http.get(js_url)\n data = self.setup_schema.validate(res.text)\n for source in data[\"playlist\"][0][\"sources\"]:\n if source[\"type\"] == \"hls\":\n return HLSStream.parse_variant_playlist(self.session, \"https:\" + source[\"file\"])\n\n\n__plugin__ = INE\n", "path": "src/streamlink/plugins/ine.py"}]}
| 1,450 | 198 |
gh_patches_debug_29769
|
rasdani/github-patches
|
git_diff
|
alltheplaces__alltheplaces-5130
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Relationship between vetco and petco
I was looking at fixing the `vetco` spider, but after a quick look on the website everything I've seen is titled "At Petco".
To the Americans: is Vetco a real brand?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/vetco_clinics.py`
Content:
```
1 import re
2
3 import scrapy
4 from scrapy.selector import Selector
5
6 from locations.geo import postal_regions
7 from locations.items import Feature
8
9
10 class VetcoClinicsSpider(scrapy.Spider):
11 name = "vetco"
12 item_attributes = {"brand": "Vetco Clinics"}
13 allowed_domains = ["vetcoclinics.com"]
14
15 def start_requests(self):
16 for record in postal_regions("US"):
17 url_template = "https://www.vetcoclinics.com/_assets/dynamic/ajax/locator.php?zip={}"
18 yield scrapy.http.Request(url_template.format(record["postal_region"]))
19
20 def parse(self, response):
21 jsonresponse = response.json()
22 if jsonresponse is not None:
23 clinics = jsonresponse.get("clinics")
24 if clinics:
25 for stores in clinics:
26 body = stores["label"]
27 address = Selector(text=body).xpath("//address/text()").extract()
28 if len(address) == 3:
29 addr_full, city_state_postal, phone = (item.split(",") for item in address)
30 city, state_postal = (item.split(",") for item in city_state_postal)
31 state, postal = re.search(r"([A-Z]{2}) (\d{5})", state_postal[0]).groups()
32
33 else:
34 addr_full, city_state_postal = (item.split(",") for item in address)
35 city, state_postal = (item.split(",") for item in city_state_postal)
36 state, postal = re.search(r"([A-Z]{2}) (\d{5})", state_postal[0]).groups()
37
38 properties = {
39 "ref": addr_full[0].strip(),
40 "addr_full": addr_full[0].strip(),
41 "city": city[0].strip(),
42 "state": state,
43 "postcode": postal,
44 "lat": float(stores["point"]["lat"]),
45 "lon": float(stores["point"]["long"]),
46 "website": response.url,
47 }
48
49 yield Feature(**properties)
50
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/locations/spiders/vetco_clinics.py b/locations/spiders/vetco_clinics.py
deleted file mode 100644
--- a/locations/spiders/vetco_clinics.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import re
-
-import scrapy
-from scrapy.selector import Selector
-
-from locations.geo import postal_regions
-from locations.items import Feature
-
-
-class VetcoClinicsSpider(scrapy.Spider):
- name = "vetco"
- item_attributes = {"brand": "Vetco Clinics"}
- allowed_domains = ["vetcoclinics.com"]
-
- def start_requests(self):
- for record in postal_regions("US"):
- url_template = "https://www.vetcoclinics.com/_assets/dynamic/ajax/locator.php?zip={}"
- yield scrapy.http.Request(url_template.format(record["postal_region"]))
-
- def parse(self, response):
- jsonresponse = response.json()
- if jsonresponse is not None:
- clinics = jsonresponse.get("clinics")
- if clinics:
- for stores in clinics:
- body = stores["label"]
- address = Selector(text=body).xpath("//address/text()").extract()
- if len(address) == 3:
- addr_full, city_state_postal, phone = (item.split(",") for item in address)
- city, state_postal = (item.split(",") for item in city_state_postal)
- state, postal = re.search(r"([A-Z]{2}) (\d{5})", state_postal[0]).groups()
-
- else:
- addr_full, city_state_postal = (item.split(",") for item in address)
- city, state_postal = (item.split(",") for item in city_state_postal)
- state, postal = re.search(r"([A-Z]{2}) (\d{5})", state_postal[0]).groups()
-
- properties = {
- "ref": addr_full[0].strip(),
- "addr_full": addr_full[0].strip(),
- "city": city[0].strip(),
- "state": state,
- "postcode": postal,
- "lat": float(stores["point"]["lat"]),
- "lon": float(stores["point"]["long"]),
- "website": response.url,
- }
-
- yield Feature(**properties)
|
{"golden_diff": "diff --git a/locations/spiders/vetco_clinics.py b/locations/spiders/vetco_clinics.py\ndeleted file mode 100644\n--- a/locations/spiders/vetco_clinics.py\n+++ /dev/null\n@@ -1,49 +0,0 @@\n-import re\n-\n-import scrapy\n-from scrapy.selector import Selector\n-\n-from locations.geo import postal_regions\n-from locations.items import Feature\n-\n-\n-class VetcoClinicsSpider(scrapy.Spider):\n- name = \"vetco\"\n- item_attributes = {\"brand\": \"Vetco Clinics\"}\n- allowed_domains = [\"vetcoclinics.com\"]\n-\n- def start_requests(self):\n- for record in postal_regions(\"US\"):\n- url_template = \"https://www.vetcoclinics.com/_assets/dynamic/ajax/locator.php?zip={}\"\n- yield scrapy.http.Request(url_template.format(record[\"postal_region\"]))\n-\n- def parse(self, response):\n- jsonresponse = response.json()\n- if jsonresponse is not None:\n- clinics = jsonresponse.get(\"clinics\")\n- if clinics:\n- for stores in clinics:\n- body = stores[\"label\"]\n- address = Selector(text=body).xpath(\"//address/text()\").extract()\n- if len(address) == 3:\n- addr_full, city_state_postal, phone = (item.split(\",\") for item in address)\n- city, state_postal = (item.split(\",\") for item in city_state_postal)\n- state, postal = re.search(r\"([A-Z]{2}) (\\d{5})\", state_postal[0]).groups()\n-\n- else:\n- addr_full, city_state_postal = (item.split(\",\") for item in address)\n- city, state_postal = (item.split(\",\") for item in city_state_postal)\n- state, postal = re.search(r\"([A-Z]{2}) (\\d{5})\", state_postal[0]).groups()\n-\n- properties = {\n- \"ref\": addr_full[0].strip(),\n- \"addr_full\": addr_full[0].strip(),\n- \"city\": city[0].strip(),\n- \"state\": state,\n- \"postcode\": postal,\n- \"lat\": float(stores[\"point\"][\"lat\"]),\n- \"lon\": float(stores[\"point\"][\"long\"]),\n- \"website\": response.url,\n- }\n-\n- yield Feature(**properties)\n", "issue": "Relationship between vetco and petco\nI was looking at fixing the `vetco` spider, but after a quick look on the website everything I've seen is titled \"At Petco\".\r\n\r\nTo the Americans: is Vetco a real brand?\n", "before_files": [{"content": "import re\n\nimport scrapy\nfrom scrapy.selector import Selector\n\nfrom locations.geo import postal_regions\nfrom locations.items import Feature\n\n\nclass VetcoClinicsSpider(scrapy.Spider):\n name = \"vetco\"\n item_attributes = {\"brand\": \"Vetco Clinics\"}\n allowed_domains = [\"vetcoclinics.com\"]\n\n def start_requests(self):\n for record in postal_regions(\"US\"):\n url_template = \"https://www.vetcoclinics.com/_assets/dynamic/ajax/locator.php?zip={}\"\n yield scrapy.http.Request(url_template.format(record[\"postal_region\"]))\n\n def parse(self, response):\n jsonresponse = response.json()\n if jsonresponse is not None:\n clinics = jsonresponse.get(\"clinics\")\n if clinics:\n for stores in clinics:\n body = stores[\"label\"]\n address = Selector(text=body).xpath(\"//address/text()\").extract()\n if len(address) == 3:\n addr_full, city_state_postal, phone = (item.split(\",\") for item in address)\n city, state_postal = (item.split(\",\") for item in city_state_postal)\n state, postal = re.search(r\"([A-Z]{2}) (\\d{5})\", state_postal[0]).groups()\n\n else:\n addr_full, city_state_postal = (item.split(\",\") for item in address)\n city, state_postal = (item.split(\",\") for item in city_state_postal)\n state, postal = re.search(r\"([A-Z]{2}) (\\d{5})\", state_postal[0]).groups()\n\n properties = {\n \"ref\": addr_full[0].strip(),\n \"addr_full\": addr_full[0].strip(),\n \"city\": 
city[0].strip(),\n \"state\": state,\n \"postcode\": postal,\n \"lat\": float(stores[\"point\"][\"lat\"]),\n \"lon\": float(stores[\"point\"][\"long\"]),\n \"website\": response.url,\n }\n\n yield Feature(**properties)\n", "path": "locations/spiders/vetco_clinics.py"}], "after_files": [{"content": null, "path": "locations/spiders/vetco_clinics.py"}]}
| 839 | 536 |
gh_patches_debug_299
|
rasdani/github-patches
|
git_diff
|
PyGithub__PyGithub-557
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
GitHub Integration raises "NotImplementedError Algorithm not supported"
We have working github integration code using PyGithub v1.32 that does essentially:
```python
integration = github.GithubIntegration(settings.GITHUB_INTEGRATION_ID, settings.GITHUB_INTEGRATION_PRIVATE_PEM)
inst_token = integration.get_access_token(installation_id).token
```
After upgrading to v1.34 this code raises "NotImplementedError Algorithm not supported"
I suspect it has to do with the [switch to pyjwt from python-jose](https://github.com/PyGithub/PyGithub/commit/d447eb13b9f4688a4c981ca03b1b3111fb299142)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 # ########################## Copyrights and license ############################
5 # #
6 # Copyright 2012 Vincent Jacques <[email protected]> #
7 # Copyright 2012 Zearin <[email protected]> #
8 # Copyright 2013 Vincent Jacques <[email protected]> #
9 # #
10 # This file is part of PyGithub. #
11 # http://pygithub.github.io/PyGithub/v1/index.html #
12 # #
13 # PyGithub is free software: you can redistribute it and/or modify it under #
14 # the terms of the GNU Lesser General Public License as published by the Free #
15 # Software Foundation, either version 3 of the License, or (at your option) #
16 # any later version. #
17 # #
18 # PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #
19 # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #
20 # FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #
21 # details. #
22 # #
23 # You should have received a copy of the GNU Lesser General Public License #
24 # along with PyGithub. If not, see <http://www.gnu.org/licenses/>. #
25 # #
26 # ##############################################################################
27
28 import setuptools
29 import textwrap
30
31 version = "1.34"
32
33
34 if __name__ == "__main__":
35 setuptools.setup(
36 name="PyGithub",
37 version=version,
38 description="Use the full Github API v3",
39 author="Vincent Jacques",
40 author_email="[email protected]",
41 url="http://pygithub.github.io/PyGithub/v1/index.html",
42 long_description=textwrap.dedent("""\
43 (Very short) Tutorial
44 =====================
45
46 First create a Github instance::
47
48 from github import Github
49
50 g = Github("user", "password")
51
52 Then play with your Github objects::
53
54 for repo in g.get_user().get_repos():
55 print repo.name
56 repo.edit(has_wiki=False)
57
58 You can also create a Github instance with an OAuth token::
59
60 g = Github(token)
61
62 Or without authentication::
63
64 g = Github()
65
66 Reference documentation
67 =======================
68
69 See http://pygithub.github.io/PyGithub/v1/index.html"""),
70 packages=[
71 "github",
72 "github.tests",
73 ],
74 package_data={
75 "github": ["tests/ReplayData/*.txt"]
76 },
77 classifiers=[
78 "Development Status :: 5 - Production/Stable",
79 "Environment :: Web Environment",
80 "Intended Audience :: Developers",
81 "License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL)",
82 "Operating System :: OS Independent",
83 "Programming Language :: Python",
84 "Programming Language :: Python :: 2",
85 "Programming Language :: Python :: 2.5",
86 "Programming Language :: Python :: 2.6",
87 "Programming Language :: Python :: 2.7",
88 "Programming Language :: Python :: 3",
89 "Programming Language :: Python :: 3.2",
90 "Programming Language :: Python :: 3.3",
91 "Programming Language :: Python :: 3.4",
92 "Programming Language :: Python :: 3.5",
93 "Topic :: Software Development",
94 ],
95 test_suite="github.tests.AllTests",
96 use_2to3=True,
97 install_requires=[
98 "pyjwt"
99 ]
100 )
101
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -96,5 +96,8 @@
use_2to3=True,
install_requires=[
"pyjwt"
- ]
+ ],
+ extras_require = {
+ "integrations": ["cryptography"]
+ }
)
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -96,5 +96,8 @@\n use_2to3=True,\n install_requires=[\n \"pyjwt\"\n- ]\n+ ],\n+ extras_require = {\n+ \"integrations\": [\"cryptography\"]\n+ }\n )\n", "issue": "GitHub Integration raises \"NotImplementedError Algorithm not supported\"\nWe have working github integration code using PyGithub v1.32 that does essentially:\r\n\r\n```python\r\nintegration = github.GithubIntegration(settings.GITHUB_INTEGRATION_ID, settings.GITHUB_INTEGRATION_PRIVATE_PEM)\r\ninst_token = integration.get_access_token(installation_id).token\r\n```\r\nAfter upgrading to v1.34 this code raises \"NotImplementedError Algorithm not supported\"\r\n\r\nI suspect it has to do with the [switch to pyjwt from python-jose](https://github.com/PyGithub/PyGithub/commit/d447eb13b9f4688a4c981ca03b1b3111fb299142)\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n# ########################## Copyrights and license ############################\n# #\n# Copyright 2012 Vincent Jacques <[email protected]> #\n# Copyright 2012 Zearin <[email protected]> #\n# Copyright 2013 Vincent Jacques <[email protected]> #\n# #\n# This file is part of PyGithub. #\n# http://pygithub.github.io/PyGithub/v1/index.html #\n# #\n# PyGithub is free software: you can redistribute it and/or modify it under #\n# the terms of the GNU Lesser General Public License as published by the Free #\n# Software Foundation, either version 3 of the License, or (at your option) #\n# any later version. #\n# #\n# PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #\n# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #\n# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #\n# details. #\n# #\n# You should have received a copy of the GNU Lesser General Public License #\n# along with PyGithub. If not, see <http://www.gnu.org/licenses/>. 
#\n# #\n# ##############################################################################\n\nimport setuptools\nimport textwrap\n\nversion = \"1.34\"\n\n\nif __name__ == \"__main__\":\n setuptools.setup(\n name=\"PyGithub\",\n version=version,\n description=\"Use the full Github API v3\",\n author=\"Vincent Jacques\",\n author_email=\"[email protected]\",\n url=\"http://pygithub.github.io/PyGithub/v1/index.html\",\n long_description=textwrap.dedent(\"\"\"\\\n (Very short) Tutorial\n =====================\n\n First create a Github instance::\n\n from github import Github\n\n g = Github(\"user\", \"password\")\n\n Then play with your Github objects::\n\n for repo in g.get_user().get_repos():\n print repo.name\n repo.edit(has_wiki=False)\n\n You can also create a Github instance with an OAuth token::\n\n g = Github(token)\n\n Or without authentication::\n\n g = Github()\n\n Reference documentation\n =======================\n\n See http://pygithub.github.io/PyGithub/v1/index.html\"\"\"),\n packages=[\n \"github\",\n \"github.tests\",\n ],\n package_data={\n \"github\": [\"tests/ReplayData/*.txt\"]\n },\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Web Environment\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL)\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.5\",\n \"Programming Language :: Python :: 2.6\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.2\",\n \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Topic :: Software Development\",\n ],\n test_suite=\"github.tests.AllTests\",\n use_2to3=True,\n install_requires=[\n \"pyjwt\"\n ]\n )\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n# ########################## Copyrights and license ############################\n# #\n# Copyright 2012 Vincent Jacques <[email protected]> #\n# Copyright 2012 Zearin <[email protected]> #\n# Copyright 2013 Vincent Jacques <[email protected]> #\n# #\n# This file is part of PyGithub. #\n# http://pygithub.github.io/PyGithub/v1/index.html #\n# #\n# PyGithub is free software: you can redistribute it and/or modify it under #\n# the terms of the GNU Lesser General Public License as published by the Free #\n# Software Foundation, either version 3 of the License, or (at your option) #\n# any later version. #\n# #\n# PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #\n# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #\n# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #\n# details. #\n# #\n# You should have received a copy of the GNU Lesser General Public License #\n# along with PyGithub. If not, see <http://www.gnu.org/licenses/>. 
#\n# #\n# ##############################################################################\n\nimport setuptools\nimport textwrap\n\nversion = \"1.34\"\n\n\nif __name__ == \"__main__\":\n setuptools.setup(\n name=\"PyGithub\",\n version=version,\n description=\"Use the full Github API v3\",\n author=\"Vincent Jacques\",\n author_email=\"[email protected]\",\n url=\"http://pygithub.github.io/PyGithub/v1/index.html\",\n long_description=textwrap.dedent(\"\"\"\\\n (Very short) Tutorial\n =====================\n\n First create a Github instance::\n\n from github import Github\n\n g = Github(\"user\", \"password\")\n\n Then play with your Github objects::\n\n for repo in g.get_user().get_repos():\n print repo.name\n repo.edit(has_wiki=False)\n\n You can also create a Github instance with an OAuth token::\n\n g = Github(token)\n\n Or without authentication::\n\n g = Github()\n\n Reference documentation\n =======================\n\n See http://pygithub.github.io/PyGithub/v1/index.html\"\"\"),\n packages=[\n \"github\",\n \"github.tests\",\n ],\n package_data={\n \"github\": [\"tests/ReplayData/*.txt\"]\n },\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Web Environment\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL)\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.5\",\n \"Programming Language :: Python :: 2.6\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.2\",\n \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Topic :: Software Development\",\n ],\n test_suite=\"github.tests.AllTests\",\n use_2to3=True,\n install_requires=[\n \"pyjwt\"\n ],\n extras_require = {\n \"integrations\": [\"cryptography\"]\n }\n )\n", "path": "setup.py"}]}
| 1,370 | 76 |
gh_patches_debug_55170
|
rasdani/github-patches
|
git_diff
|
spack__spack-10720
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
git-lfs aborts (sometimes), fix in progress upstream
This is mostly an FYI.
Starting with `[email protected]` we frequently had `git-lfs` aborting. In some situations it ran successfully, in others it didn't. It seemed to depend on what other modules were loaded, but...
Between `[email protected]` and `[email protected]` the Makefile started unconditionally adding a `-extldflags` bit to the `go` command line, setting it to the value of `LDFLAGS`. If `LDFLAGS` isn't set to anything (our case) then it wasn't given an argument, even though it needs one. I'm not sure why this doesn't provide an error from the compiler, it seems to be grabbing something out of whatever comes next in memory.
I've changed the Makefile only set `-extldflags` if `LDFLAGS` is defined and made a Pull Request upstream: https://github.com/git-lfs/git-lfs/pull/3545
Depending what Upstream has to say, perhaps we'll want to patch `[email protected]`, or forbid it, or ...
I'll keep this updated as the `git-lfs` PR progresses.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `var/spack/repos/builtin/packages/git-lfs/package.py`
Content:
```
1 # Copyright 2013-2019 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6 from spack import *
7
8
9 class GitLfs(MakefilePackage):
10 """Git LFS is a system for managing and versioning large files in
11 association with a Git repository. Instead of storing the large files
12 within the Git repository as blobs, Git LFS stores special "pointer
13 files" in the repository, while storing the actual file contents on a
14 Git LFS server."""
15
16 homepage = "https://git-lfs.github.com"
17 url = "https://github.com/git-lfs/git-lfs/archive/v2.6.1.tar.gz"
18
19 version('2.7.0', sha256='1c829ddd163be2206a44edb366bd7f6d84c5afae3496687405ca9d2a5f3af07b')
20 version('2.6.1', sha256='e17cd9d4e66d1116be32f7ddc7e660c7f8fabbf510bc01b01ec15a22dd934ead')
21
22 depends_on('[email protected]:', type='build')
23 depends_on('[email protected]:', type='run')
24
25 parallel = False
26
27 # Git-lfs does not provide an 'install' target in the Makefile
28 def install(self, spec, prefix):
29 mkdirp(prefix.bin)
30 install(join_path('bin', 'git-lfs'), prefix.bin)
31
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/var/spack/repos/builtin/packages/git-lfs/package.py b/var/spack/repos/builtin/packages/git-lfs/package.py
--- a/var/spack/repos/builtin/packages/git-lfs/package.py
+++ b/var/spack/repos/builtin/packages/git-lfs/package.py
@@ -22,6 +22,8 @@
depends_on('[email protected]:', type='build')
depends_on('[email protected]:', type='run')
+ patch('patches/issue-10702.patch', when='@2.7.0')
+
parallel = False
# Git-lfs does not provide an 'install' target in the Makefile
|
{"golden_diff": "diff --git a/var/spack/repos/builtin/packages/git-lfs/package.py b/var/spack/repos/builtin/packages/git-lfs/package.py\n--- a/var/spack/repos/builtin/packages/git-lfs/package.py\n+++ b/var/spack/repos/builtin/packages/git-lfs/package.py\n@@ -22,6 +22,8 @@\n depends_on('[email protected]:', type='build')\n depends_on('[email protected]:', type='run')\n \n+ patch('patches/issue-10702.patch', when='@2.7.0')\n+\n parallel = False\n \n # Git-lfs does not provide an 'install' target in the Makefile\n", "issue": "git-lfs aborts (sometimes), fix in progress upstream\nThis is mostly an FYI.\r\n\r\nStarting with `[email protected]` we frequently had `git-lfs` aborting. In some situations it ran successfully, in others it didn't. It seemed to depend on what other modules were loaded, but...\r\n\r\nBetween `[email protected]` and `[email protected]` the Makefile started unconditionally adding a `-extldflags` bit to the `go` command line, setting it to the value of `LDFLAGS`. If `LDFLAGS` isn't set to anything (our case) then it wasn't given an argument, even though it needs one. I'm not sure why this doesn't provide an error from the compiler, it seems to be grabbing something out of whatever comes next in memory.\r\n\r\nI've changed the Makefile only set `-extldflags` if `LDFLAGS` is defined and made a Pull Request upstream: https://github.com/git-lfs/git-lfs/pull/3545\r\n\r\nDepending what Upstream has to say, perhaps we'll want to patch `[email protected]`, or forbid it, or ...\r\n\r\nI'll keep this updated as the `git-lfs` PR progresses.\r\n\r\n\n", "before_files": [{"content": "# Copyright 2013-2019 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nfrom spack import *\n\n\nclass GitLfs(MakefilePackage):\n \"\"\"Git LFS is a system for managing and versioning large files in\n association with a Git repository. Instead of storing the large files\n within the Git repository as blobs, Git LFS stores special \"pointer\n files\" in the repository, while storing the actual file contents on a\n Git LFS server.\"\"\"\n\n homepage = \"https://git-lfs.github.com\"\n url = \"https://github.com/git-lfs/git-lfs/archive/v2.6.1.tar.gz\"\n\n version('2.7.0', sha256='1c829ddd163be2206a44edb366bd7f6d84c5afae3496687405ca9d2a5f3af07b')\n version('2.6.1', sha256='e17cd9d4e66d1116be32f7ddc7e660c7f8fabbf510bc01b01ec15a22dd934ead')\n\n depends_on('[email protected]:', type='build')\n depends_on('[email protected]:', type='run')\n\n parallel = False\n\n # Git-lfs does not provide an 'install' target in the Makefile\n def install(self, spec, prefix):\n mkdirp(prefix.bin)\n install(join_path('bin', 'git-lfs'), prefix.bin)\n", "path": "var/spack/repos/builtin/packages/git-lfs/package.py"}], "after_files": [{"content": "# Copyright 2013-2019 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nfrom spack import *\n\n\nclass GitLfs(MakefilePackage):\n \"\"\"Git LFS is a system for managing and versioning large files in\n association with a Git repository. 
Instead of storing the large files\n within the Git repository as blobs, Git LFS stores special \"pointer\n files\" in the repository, while storing the actual file contents on a\n Git LFS server.\"\"\"\n\n homepage = \"https://git-lfs.github.com\"\n url = \"https://github.com/git-lfs/git-lfs/archive/v2.6.1.tar.gz\"\n\n version('2.7.0', sha256='1c829ddd163be2206a44edb366bd7f6d84c5afae3496687405ca9d2a5f3af07b')\n version('2.6.1', sha256='e17cd9d4e66d1116be32f7ddc7e660c7f8fabbf510bc01b01ec15a22dd934ead')\n\n depends_on('[email protected]:', type='build')\n depends_on('[email protected]:', type='run')\n\n patch('patches/issue-10702.patch', when='@2.7.0')\n\n parallel = False\n\n # Git-lfs does not provide an 'install' target in the Makefile\n def install(self, spec, prefix):\n mkdirp(prefix.bin)\n install(join_path('bin', 'git-lfs'), prefix.bin)\n", "path": "var/spack/repos/builtin/packages/git-lfs/package.py"}]}
| 995 | 146 |
gh_patches_debug_15244
|
rasdani/github-patches
|
git_diff
|
google__fuzzbench-242
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Should local docker run restrict cpu to 1 to match FuzzBench prod environment ?
See also
https://github.com/google/fuzzbench/issues/173#issuecomment-605283610
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `common/fuzzer_utils.py`
Content:
```
1 # Copyright 2020 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Fuzzer helpers."""
15
16 import importlib
17 import os
18 import re
19 from typing import Optional
20
21 from common import logs
22 from common import utils
23 from common import yaml_utils
24
25 DEFAULT_FUZZ_TARGET_NAME = 'fuzz-target'
26 FUZZ_TARGET_SEARCH_STRING = b'LLVMFuzzerTestOneInput'
27 VALID_FUZZER_REGEX = re.compile(r'^[A-Za-z0-9_]+$')
28
29
30 def get_fuzz_target_binary(search_directory: str,
31 fuzz_target_name: str) -> Optional[str]:
32 """Return target binary path."""
33 if fuzz_target_name:
34 fuzz_target_binary = os.path.join(search_directory, fuzz_target_name)
35 if os.path.exists(fuzz_target_binary):
36 return fuzz_target_binary
37 return None
38
39 default_fuzz_target_binary = os.path.join(search_directory,
40 DEFAULT_FUZZ_TARGET_NAME)
41 if os.path.exists(default_fuzz_target_binary):
42 return default_fuzz_target_binary
43
44 for root, _, files in os.walk(search_directory):
45 if root == 'uninstrumented':
46 continue
47 for filename in files:
48 if filename.endswith('-uninstrumented'):
49 # Skip uninstrumented binaries (e.g. with QSYM).
50 continue
51
52 file_path = os.path.join(root, filename)
53 with open(file_path, 'rb') as file_handle:
54 if FUZZ_TARGET_SEARCH_STRING in file_handle.read():
55 return file_path
56
57 return None
58
59
60 def validate(fuzzer):
61 """Return True if |fuzzer| is a valid fuzzbench fuzzer."""
62 # Although importing probably allows a subset of what the regex allows, use
63 # the regex anyway to be safe. The regex is enforcing that the fuzzer is a
64 # valid path for GCS or a linux system.
65 if VALID_FUZZER_REGEX.match(fuzzer) is None:
66 logs.error('%s does not conform to %s pattern.', fuzzer,
67 VALID_FUZZER_REGEX.pattern)
68 return False
69
70 # Try importing the fuzzer module.
71 module_name = 'fuzzers.{}.fuzzer'.format(fuzzer)
72 try:
73 importlib.import_module(module_name)
74 return True
75 except Exception as error: # pylint: disable=broad-except
76 logs.error('Encountered "%s" while trying to import %s.', error,
77 module_name)
78 return False
79
80
81 def get_fuzzer_configs(fuzzers=None):
82 """Returns the list of all fuzzers."""
83 fuzzers_dir = os.path.join(utils.ROOT_DIR, 'fuzzers')
84 fuzzer_configs = []
85 for fuzzer in os.listdir(fuzzers_dir):
86 if not os.path.isfile(os.path.join(fuzzers_dir, fuzzer, 'fuzzer.py')):
87 continue
88 if fuzzer == 'coverage':
89 continue
90
91 if not fuzzers or fuzzer in fuzzers:
92 # Auto-generate the default configuration for each base fuzzer.
93 fuzzer_configs.append({'fuzzer': fuzzer})
94
95 variant_config_path = os.path.join(fuzzers_dir, fuzzer, 'variants.yaml')
96 if not os.path.isfile(variant_config_path):
97 continue
98
99 variant_config = yaml_utils.read(variant_config_path)
100 assert 'variants' in variant_config, (
101 'Missing "variants" section of {}'.format(variant_config_path))
102 for variant in variant_config['variants']:
103 if not fuzzers or variant['name'] in fuzzers:
104 # Modify the config from the variants.yaml format to the
105 # format expected by a fuzzer config.
106 assert 'name' in variant, (
107 'Missing name attribute for fuzzer variant in {}'.format(
108 variant_config_path))
109 variant['variant_name'] = variant['name']
110 del variant['name']
111 variant['fuzzer'] = fuzzer
112 fuzzer_configs.append(variant)
113
114 return fuzzer_configs
115
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/common/fuzzer_utils.py b/common/fuzzer_utils.py
--- a/common/fuzzer_utils.py
+++ b/common/fuzzer_utils.py
@@ -20,7 +20,6 @@
from common import logs
from common import utils
-from common import yaml_utils
DEFAULT_FUZZ_TARGET_NAME = 'fuzz-target'
FUZZ_TARGET_SEARCH_STRING = b'LLVMFuzzerTestOneInput'
@@ -80,6 +79,10 @@
def get_fuzzer_configs(fuzzers=None):
"""Returns the list of all fuzzers."""
+ # Import it here to avoid yaml dependency in runner.
+ # pylint: disable=import-outside-toplevel
+ from common import yaml_utils
+
fuzzers_dir = os.path.join(utils.ROOT_DIR, 'fuzzers')
fuzzer_configs = []
for fuzzer in os.listdir(fuzzers_dir):
|
{"golden_diff": "diff --git a/common/fuzzer_utils.py b/common/fuzzer_utils.py\n--- a/common/fuzzer_utils.py\n+++ b/common/fuzzer_utils.py\n@@ -20,7 +20,6 @@\n \n from common import logs\n from common import utils\n-from common import yaml_utils\n \n DEFAULT_FUZZ_TARGET_NAME = 'fuzz-target'\n FUZZ_TARGET_SEARCH_STRING = b'LLVMFuzzerTestOneInput'\n@@ -80,6 +79,10 @@\n \n def get_fuzzer_configs(fuzzers=None):\n \"\"\"Returns the list of all fuzzers.\"\"\"\n+ # Import it here to avoid yaml dependency in runner.\n+ # pylint: disable=import-outside-toplevel\n+ from common import yaml_utils\n+\n fuzzers_dir = os.path.join(utils.ROOT_DIR, 'fuzzers')\n fuzzer_configs = []\n for fuzzer in os.listdir(fuzzers_dir):\n", "issue": "Should local docker run restrict cpu to 1 to match FuzzBench prod environment ?\nSee also\r\nhttps://github.com/google/fuzzbench/issues/173#issuecomment-605283610\n", "before_files": [{"content": "# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Fuzzer helpers.\"\"\"\n\nimport importlib\nimport os\nimport re\nfrom typing import Optional\n\nfrom common import logs\nfrom common import utils\nfrom common import yaml_utils\n\nDEFAULT_FUZZ_TARGET_NAME = 'fuzz-target'\nFUZZ_TARGET_SEARCH_STRING = b'LLVMFuzzerTestOneInput'\nVALID_FUZZER_REGEX = re.compile(r'^[A-Za-z0-9_]+$')\n\n\ndef get_fuzz_target_binary(search_directory: str,\n fuzz_target_name: str) -> Optional[str]:\n \"\"\"Return target binary path.\"\"\"\n if fuzz_target_name:\n fuzz_target_binary = os.path.join(search_directory, fuzz_target_name)\n if os.path.exists(fuzz_target_binary):\n return fuzz_target_binary\n return None\n\n default_fuzz_target_binary = os.path.join(search_directory,\n DEFAULT_FUZZ_TARGET_NAME)\n if os.path.exists(default_fuzz_target_binary):\n return default_fuzz_target_binary\n\n for root, _, files in os.walk(search_directory):\n if root == 'uninstrumented':\n continue\n for filename in files:\n if filename.endswith('-uninstrumented'):\n # Skip uninstrumented binaries (e.g. with QSYM).\n continue\n\n file_path = os.path.join(root, filename)\n with open(file_path, 'rb') as file_handle:\n if FUZZ_TARGET_SEARCH_STRING in file_handle.read():\n return file_path\n\n return None\n\n\ndef validate(fuzzer):\n \"\"\"Return True if |fuzzer| is a valid fuzzbench fuzzer.\"\"\"\n # Although importing probably allows a subset of what the regex allows, use\n # the regex anyway to be safe. 
The regex is enforcing that the fuzzer is a\n # valid path for GCS or a linux system.\n if VALID_FUZZER_REGEX.match(fuzzer) is None:\n logs.error('%s does not conform to %s pattern.', fuzzer,\n VALID_FUZZER_REGEX.pattern)\n return False\n\n # Try importing the fuzzer module.\n module_name = 'fuzzers.{}.fuzzer'.format(fuzzer)\n try:\n importlib.import_module(module_name)\n return True\n except Exception as error: # pylint: disable=broad-except\n logs.error('Encountered \"%s\" while trying to import %s.', error,\n module_name)\n return False\n\n\ndef get_fuzzer_configs(fuzzers=None):\n \"\"\"Returns the list of all fuzzers.\"\"\"\n fuzzers_dir = os.path.join(utils.ROOT_DIR, 'fuzzers')\n fuzzer_configs = []\n for fuzzer in os.listdir(fuzzers_dir):\n if not os.path.isfile(os.path.join(fuzzers_dir, fuzzer, 'fuzzer.py')):\n continue\n if fuzzer == 'coverage':\n continue\n\n if not fuzzers or fuzzer in fuzzers:\n # Auto-generate the default configuration for each base fuzzer.\n fuzzer_configs.append({'fuzzer': fuzzer})\n\n variant_config_path = os.path.join(fuzzers_dir, fuzzer, 'variants.yaml')\n if not os.path.isfile(variant_config_path):\n continue\n\n variant_config = yaml_utils.read(variant_config_path)\n assert 'variants' in variant_config, (\n 'Missing \"variants\" section of {}'.format(variant_config_path))\n for variant in variant_config['variants']:\n if not fuzzers or variant['name'] in fuzzers:\n # Modify the config from the variants.yaml format to the\n # format expected by a fuzzer config.\n assert 'name' in variant, (\n 'Missing name attribute for fuzzer variant in {}'.format(\n variant_config_path))\n variant['variant_name'] = variant['name']\n del variant['name']\n variant['fuzzer'] = fuzzer\n fuzzer_configs.append(variant)\n\n return fuzzer_configs\n", "path": "common/fuzzer_utils.py"}], "after_files": [{"content": "# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Fuzzer helpers.\"\"\"\n\nimport importlib\nimport os\nimport re\nfrom typing import Optional\n\nfrom common import logs\nfrom common import utils\n\nDEFAULT_FUZZ_TARGET_NAME = 'fuzz-target'\nFUZZ_TARGET_SEARCH_STRING = b'LLVMFuzzerTestOneInput'\nVALID_FUZZER_REGEX = re.compile(r'^[A-Za-z0-9_]+$')\n\n\ndef get_fuzz_target_binary(search_directory: str,\n fuzz_target_name: str) -> Optional[str]:\n \"\"\"Return target binary path.\"\"\"\n if fuzz_target_name:\n fuzz_target_binary = os.path.join(search_directory, fuzz_target_name)\n if os.path.exists(fuzz_target_binary):\n return fuzz_target_binary\n return None\n\n default_fuzz_target_binary = os.path.join(search_directory,\n DEFAULT_FUZZ_TARGET_NAME)\n if os.path.exists(default_fuzz_target_binary):\n return default_fuzz_target_binary\n\n for root, _, files in os.walk(search_directory):\n if root == 'uninstrumented':\n continue\n for filename in files:\n if filename.endswith('-uninstrumented'):\n # Skip uninstrumented binaries (e.g. 
with QSYM).\n continue\n\n file_path = os.path.join(root, filename)\n with open(file_path, 'rb') as file_handle:\n if FUZZ_TARGET_SEARCH_STRING in file_handle.read():\n return file_path\n\n return None\n\n\ndef validate(fuzzer):\n \"\"\"Return True if |fuzzer| is a valid fuzzbench fuzzer.\"\"\"\n # Although importing probably allows a subset of what the regex allows, use\n # the regex anyway to be safe. The regex is enforcing that the fuzzer is a\n # valid path for GCS or a linux system.\n if VALID_FUZZER_REGEX.match(fuzzer) is None:\n logs.error('%s does not conform to %s pattern.', fuzzer,\n VALID_FUZZER_REGEX.pattern)\n return False\n\n # Try importing the fuzzer module.\n module_name = 'fuzzers.{}.fuzzer'.format(fuzzer)\n try:\n importlib.import_module(module_name)\n return True\n except Exception as error: # pylint: disable=broad-except\n logs.error('Encountered \"%s\" while trying to import %s.', error,\n module_name)\n return False\n\n\ndef get_fuzzer_configs(fuzzers=None):\n \"\"\"Returns the list of all fuzzers.\"\"\"\n # Import it here to avoid yaml dependency in runner.\n # pylint: disable=import-outside-toplevel\n from common import yaml_utils\n\n fuzzers_dir = os.path.join(utils.ROOT_DIR, 'fuzzers')\n fuzzer_configs = []\n for fuzzer in os.listdir(fuzzers_dir):\n if not os.path.isfile(os.path.join(fuzzers_dir, fuzzer, 'fuzzer.py')):\n continue\n if fuzzer == 'coverage':\n continue\n\n if not fuzzers or fuzzer in fuzzers:\n # Auto-generate the default configuration for each base fuzzer.\n fuzzer_configs.append({'fuzzer': fuzzer})\n\n variant_config_path = os.path.join(fuzzers_dir, fuzzer, 'variants.yaml')\n if not os.path.isfile(variant_config_path):\n continue\n\n variant_config = yaml_utils.read(variant_config_path)\n assert 'variants' in variant_config, (\n 'Missing \"variants\" section of {}'.format(variant_config_path))\n for variant in variant_config['variants']:\n if not fuzzers or variant['name'] in fuzzers:\n # Modify the config from the variants.yaml format to the\n # format expected by a fuzzer config.\n assert 'name' in variant, (\n 'Missing name attribute for fuzzer variant in {}'.format(\n variant_config_path))\n variant['variant_name'] = variant['name']\n del variant['name']\n variant['fuzzer'] = fuzzer\n fuzzer_configs.append(variant)\n\n return fuzzer_configs\n", "path": "common/fuzzer_utils.py"}]}
| 1,502 | 194 |
gh_patches_debug_3667
|
rasdani/github-patches
|
git_diff
|
vega__altair-784
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix vega-embed version for Altair 1
For example in https://github.com/altair-viz/altair/blob/d4d29ca06e920f71073766c6456d387e682cee17/altair/vegalite/v1/html.py#L7
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `altair/vegalite/v1/html.py`
Content:
```
1 HTML_TEMPLATE = """
2 <!DOCTYPE html>
3 <html>
4 <head>
5 <script src="https://cdn.jsdelivr.net/npm/vega@2"></script>
6 <script src="https://cdn.jsdelivr.net/npm/vega-lite@1"></script>
7 <script src="https://cdn.jsdelivr.net/npm/vega-embed@3"></script>
8 </head>
9 <body>
10 <div id="vis"></div>
11 <script type="text/javascript">
12 var spec = {spec};
13 var opt = {opt};
14 vegaEmbed("#vis", spec, opt);
15 </script>
16 </body>
17 </html>
18 """
19
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/altair/vegalite/v1/html.py b/altair/vegalite/v1/html.py
--- a/altair/vegalite/v1/html.py
+++ b/altair/vegalite/v1/html.py
@@ -4,7 +4,7 @@
<head>
<script src="https://cdn.jsdelivr.net/npm/vega@2"></script>
<script src="https://cdn.jsdelivr.net/npm/vega-lite@1"></script>
- <script src="https://cdn.jsdelivr.net/npm/vega-embed@3"></script>
+ <script src="https://cdn.jsdelivr.net/npm/vega-embed@2"></script>
</head>
<body>
<div id="vis"></div>
|
{"golden_diff": "diff --git a/altair/vegalite/v1/html.py b/altair/vegalite/v1/html.py\n--- a/altair/vegalite/v1/html.py\n+++ b/altair/vegalite/v1/html.py\n@@ -4,7 +4,7 @@\n <head>\n <script src=\"https://cdn.jsdelivr.net/npm/vega@2\"></script>\n <script src=\"https://cdn.jsdelivr.net/npm/vega-lite@1\"></script>\n- <script src=\"https://cdn.jsdelivr.net/npm/vega-embed@3\"></script>\n+ <script src=\"https://cdn.jsdelivr.net/npm/vega-embed@2\"></script>\n </head>\n <body>\n <div id=\"vis\"></div>\n", "issue": "Fix vega-embed version for Altair 1\nFor example in https://github.com/altair-viz/altair/blob/d4d29ca06e920f71073766c6456d387e682cee17/altair/vegalite/v1/html.py#L7\n", "before_files": [{"content": "HTML_TEMPLATE = \"\"\"\n<!DOCTYPE html>\n<html>\n<head>\n <script src=\"https://cdn.jsdelivr.net/npm/vega@2\"></script>\n <script src=\"https://cdn.jsdelivr.net/npm/vega-lite@1\"></script>\n <script src=\"https://cdn.jsdelivr.net/npm/vega-embed@3\"></script>\n</head>\n<body>\n <div id=\"vis\"></div>\n <script type=\"text/javascript\">\n var spec = {spec};\n var opt = {opt};\n vegaEmbed(\"#vis\", spec, opt);\n </script>\n</body>\n</html>\n\"\"\"\n", "path": "altair/vegalite/v1/html.py"}], "after_files": [{"content": "HTML_TEMPLATE = \"\"\"\n<!DOCTYPE html>\n<html>\n<head>\n <script src=\"https://cdn.jsdelivr.net/npm/vega@2\"></script>\n <script src=\"https://cdn.jsdelivr.net/npm/vega-lite@1\"></script>\n <script src=\"https://cdn.jsdelivr.net/npm/vega-embed@2\"></script>\n</head>\n<body>\n <div id=\"vis\"></div>\n <script type=\"text/javascript\">\n var spec = {spec};\n var opt = {opt};\n vegaEmbed(\"#vis\", spec, opt);\n </script>\n</body>\n</html>\n\"\"\"\n", "path": "altair/vegalite/v1/html.py"}]}
| 506 | 167 |
gh_patches_debug_2104
|
rasdani/github-patches
|
git_diff
|
shuup__shuup-1574
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Admin: Main menu won't stay hidden
Two issues (at least):
Desktop: If I close (minimize, desktop) main-menu and click any link, the menu appears again.
Desktop to mobile: If I minimize the menu on a bigger desktop and then drag window smaller the menu appears again.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `shuup/admin/views/menu.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # This file is part of Shuup.
3 #
4 # Copyright (c) 2012-2018, Shuup Inc. All rights reserved.
5 #
6 # This source code is licensed under the OSL-3.0 license found in the
7 # LICENSE file in the root directory of this source tree.
8 from django.http import JsonResponse
9 from django.views.generic import TemplateView, View
10
11
12 class MenuView(TemplateView):
13 template_name = "shuup/admin/base/_main_menu.jinja"
14
15
16 class MenuToggleView(View):
17 def post(self, request, *args, **kwargs):
18 request.session["menu_open"] = int(request.POST.get("menu_open", 0))
19 return JsonResponse({"success": True})
20
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/shuup/admin/views/menu.py b/shuup/admin/views/menu.py
--- a/shuup/admin/views/menu.py
+++ b/shuup/admin/views/menu.py
@@ -15,5 +15,5 @@
class MenuToggleView(View):
def post(self, request, *args, **kwargs):
- request.session["menu_open"] = int(request.POST.get("menu_open", 0))
+ request.session["menu_open"] = not bool(request.session.get("menu_open", True))
return JsonResponse({"success": True})
|
{"golden_diff": "diff --git a/shuup/admin/views/menu.py b/shuup/admin/views/menu.py\n--- a/shuup/admin/views/menu.py\n+++ b/shuup/admin/views/menu.py\n@@ -15,5 +15,5 @@\n \n class MenuToggleView(View):\n def post(self, request, *args, **kwargs):\n- request.session[\"menu_open\"] = int(request.POST.get(\"menu_open\", 0))\n+ request.session[\"menu_open\"] = not bool(request.session.get(\"menu_open\", True))\n return JsonResponse({\"success\": True})\n", "issue": "Admin: Main menu won't stay hidden\nTwo issues (at least):\r\nDesktop: If I close (minimize, desktop) main-menu and click any link, the menu appears again.\r\nDesktop to mobile: If I minimize the menu on a bigger desktop and then drag window smaller the menu appears again. \n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This file is part of Shuup.\n#\n# Copyright (c) 2012-2018, Shuup Inc. All rights reserved.\n#\n# This source code is licensed under the OSL-3.0 license found in the\n# LICENSE file in the root directory of this source tree.\nfrom django.http import JsonResponse\nfrom django.views.generic import TemplateView, View\n\n\nclass MenuView(TemplateView):\n template_name = \"shuup/admin/base/_main_menu.jinja\"\n\n\nclass MenuToggleView(View):\n def post(self, request, *args, **kwargs):\n request.session[\"menu_open\"] = int(request.POST.get(\"menu_open\", 0))\n return JsonResponse({\"success\": True})\n", "path": "shuup/admin/views/menu.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# This file is part of Shuup.\n#\n# Copyright (c) 2012-2018, Shuup Inc. All rights reserved.\n#\n# This source code is licensed under the OSL-3.0 license found in the\n# LICENSE file in the root directory of this source tree.\nfrom django.http import JsonResponse\nfrom django.views.generic import TemplateView, View\n\n\nclass MenuView(TemplateView):\n template_name = \"shuup/admin/base/_main_menu.jinja\"\n\n\nclass MenuToggleView(View):\n def post(self, request, *args, **kwargs):\n request.session[\"menu_open\"] = not bool(request.session.get(\"menu_open\", True))\n return JsonResponse({\"success\": True})\n", "path": "shuup/admin/views/menu.py"}]}
| 520 | 120 |
gh_patches_debug_34226
|
rasdani/github-patches
|
git_diff
|
dj-stripe__dj-stripe-268
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
importing djstripe within setup.py causes race condition when installing from repo
Trying to install dj-stripe from a repo runs into a race condition at setup.py:
``` bash
pip install -e git://github.com/pydanny/dj-stripe.git#egg=djstripe
Obtaining djstripe from git+git://github.com/pydanny/dj-stripe.git#egg=djstripe
Cloning git://github.com/pydanny/dj-stripe.git to ./v/test_djstripe/src/djstripe
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 20, in <module>
File "/home/dave/v/test_djstripe/src/djstripe/setup.py", line 6, in <module>
import djstripe
File "/home/dave/v/test_djstripe/src/djstripe/djstripe/__init__.py", line 4, in <module>
from django import get_version as get_django_version
ImportError: No module named 'django'
----------------------------------------
```
There are a few ways to fix this. I would suggest the, for example, get_version(package) methods used in https://github.com/pydanny/django-admin2/blob/master/setup.py
This is a trivial fix, I'll get a patch together soon.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 import os
4 import sys
5
6 import djstripe
7
8 version = djstripe.__version__
9
10 try:
11 from setuptools import setup
12 except ImportError:
13 from distutils.core import setup
14
15 if sys.argv[-1] == 'publish':
16 os.system('python setup.py sdist upload')
17 os.system('python setup.py bdist_wheel upload')
18 sys.exit()
19
20 if sys.argv[-1] == 'tag':
21 print("Tagging the version on github:")
22 os.system("git tag -a %s -m 'version %s'" % (version, version))
23 os.system("git push --tags")
24 sys.exit()
25
26 readme = open('README.rst').read()
27 history = open('HISTORY.rst').read().replace('.. :changelog:', '')
28
29 INSTALL_REQUIRES = [
30 'django>=1.7',
31 'stripe>=1.22.2',
32 'django-model-utils>=2.2',
33 'django-braces>=1.8.0',
34 'jsonfield>=1.0.3',
35 'pytz>=2015.4'
36 ]
37
38 setup(
39 name='dj-stripe',
40 version=version,
41 description=djstripe.__summary__,
42 long_description=readme + '\n\n' + history,
43 author=djstripe.__author__,
44 author_email=djstripe.__email__,
45 url=djstripe.__uri__,
46 packages=[
47 'djstripe',
48 ],
49 package_dir={'djstripe': 'djstripe'},
50 include_package_data=True,
51 install_requires=INSTALL_REQUIRES,
52 license=djstripe.__license__,
53 zip_safe=False,
54 keywords='stripe django',
55 classifiers=[
56 'Development Status :: 4 - Beta',
57 'Environment :: Web Environment',
58 'Framework :: Django',
59 'Framework :: Django :: 1.7',
60 'Framework :: Django :: 1.8',
61 'Intended Audience :: Developers',
62 'License :: OSI Approved :: BSD License',
63 'Natural Language :: English',
64 "Programming Language :: Python :: 2",
65 'Programming Language :: Python :: 2.7',
66 'Programming Language :: Python :: 3',
67 'Programming Language :: Python :: 3.3',
68 'Programming Language :: Python :: 3.4',
69 'Programming Language :: Python :: 3.5'
70 ],
71 )
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -1,11 +1,37 @@
#!/usr/bin/env python
+import ast
import os
import sys
-import djstripe
-version = djstripe.__version__
+class MetadataFinder(ast.NodeVisitor):
+ def __init__(self):
+ self.version = None
+ self.summary = None
+ self.author = None
+ self.email = None
+ self.uri = None
+ self.licence = None
+
+ def visit_Assign(self, node):
+ if node.targets[0].id == '__version__':
+ self.version = node.value.s
+ elif node.targets[0].id == '__summary__':
+ self.summary = node.value.s
+ elif node.targets[0].id == '__author__':
+ self.author = node.value.s
+ elif node.targets[0].id == '__email__':
+ self.email = node.value.s
+ elif node.targets[0].id == '__uri__':
+ self.uri = node.value.s
+ elif node.targets[0].id == '__license__':
+ self.license = node.value.s
+
+
+with open(os.path.join('djstripe', '__init__.py')) as open_file:
+ finder = MetadataFinder()
+ finder.visit(ast.parse(open_file.read()))
try:
from setuptools import setup
@@ -19,7 +45,8 @@
if sys.argv[-1] == 'tag':
print("Tagging the version on github:")
- os.system("git tag -a %s -m 'version %s'" % (version, version))
+ os.system("git tag -a %s -m 'version %s'" % (finder.version,
+ finder.version))
os.system("git push --tags")
sys.exit()
@@ -37,19 +64,19 @@
setup(
name='dj-stripe',
- version=version,
- description=djstripe.__summary__,
+ version=finder.version,
+ description=finder.summary,
long_description=readme + '\n\n' + history,
- author=djstripe.__author__,
- author_email=djstripe.__email__,
- url=djstripe.__uri__,
+ author=finder.author,
+ author_email=finder.email,
+ url=finder.uri,
packages=[
'djstripe',
],
package_dir={'djstripe': 'djstripe'},
include_package_data=True,
install_requires=INSTALL_REQUIRES,
- license=djstripe.__license__,
+ license=finder.license,
zip_safe=False,
keywords='stripe django',
classifiers=[
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,11 +1,37 @@\n #!/usr/bin/env python\n \n+import ast\n import os\n import sys\n \n-import djstripe\n \n-version = djstripe.__version__\n+class MetadataFinder(ast.NodeVisitor):\n+ def __init__(self):\n+ self.version = None\n+ self.summary = None\n+ self.author = None\n+ self.email = None\n+ self.uri = None\n+ self.licence = None\n+\n+ def visit_Assign(self, node):\n+ if node.targets[0].id == '__version__':\n+ self.version = node.value.s\n+ elif node.targets[0].id == '__summary__':\n+ self.summary = node.value.s\n+ elif node.targets[0].id == '__author__':\n+ self.author = node.value.s\n+ elif node.targets[0].id == '__email__':\n+ self.email = node.value.s\n+ elif node.targets[0].id == '__uri__':\n+ self.uri = node.value.s\n+ elif node.targets[0].id == '__license__':\n+ self.license = node.value.s\n+\n+\n+with open(os.path.join('djstripe', '__init__.py')) as open_file:\n+ finder = MetadataFinder()\n+ finder.visit(ast.parse(open_file.read()))\n \n try:\n from setuptools import setup\n@@ -19,7 +45,8 @@\n \n if sys.argv[-1] == 'tag':\n print(\"Tagging the version on github:\")\n- os.system(\"git tag -a %s -m 'version %s'\" % (version, version))\n+ os.system(\"git tag -a %s -m 'version %s'\" % (finder.version,\n+ finder.version))\n os.system(\"git push --tags\")\n sys.exit()\n \n@@ -37,19 +64,19 @@\n \n setup(\n name='dj-stripe',\n- version=version,\n- description=djstripe.__summary__,\n+ version=finder.version,\n+ description=finder.summary,\n long_description=readme + '\\n\\n' + history,\n- author=djstripe.__author__,\n- author_email=djstripe.__email__,\n- url=djstripe.__uri__,\n+ author=finder.author,\n+ author_email=finder.email,\n+ url=finder.uri,\n packages=[\n 'djstripe',\n ],\n package_dir={'djstripe': 'djstripe'},\n include_package_data=True,\n install_requires=INSTALL_REQUIRES,\n- license=djstripe.__license__,\n+ license=finder.license,\n zip_safe=False,\n keywords='stripe django',\n classifiers=[\n", "issue": "importing djstripe within setup.py causes race condition when installing from repo\nTrying to install dj-stripe from a repo runs into a race condition at setup.py:\n\n``` bash\npip install -e git://github.com/pydanny/dj-stripe.git#egg=djstripe \nObtaining djstripe from git+git://github.com/pydanny/dj-stripe.git#egg=djstripe\n Cloning git://github.com/pydanny/dj-stripe.git to ./v/test_djstripe/src/djstripe\n Complete output from command python setup.py egg_info:\n Traceback (most recent call last):\n File \"<string>\", line 20, in <module>\n File \"/home/dave/v/test_djstripe/src/djstripe/setup.py\", line 6, in <module>\n import djstripe\n File \"/home/dave/v/test_djstripe/src/djstripe/djstripe/__init__.py\", line 4, in <module>\n from django import get_version as get_django_version\n ImportError: No module named 'django'\n\n ----------------------------------------\n```\n\nThere are a few ways to fix this. I would suggest the, for example, get_version(package) methods used in https://github.com/pydanny/django-admin2/blob/master/setup.py\n\nThis is a trivial fix, I'll get a patch together soon. 
\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport os\nimport sys\n\nimport djstripe\n\nversion = djstripe.__version__\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\nif sys.argv[-1] == 'publish':\n os.system('python setup.py sdist upload')\n os.system('python setup.py bdist_wheel upload')\n sys.exit()\n\nif sys.argv[-1] == 'tag':\n print(\"Tagging the version on github:\")\n os.system(\"git tag -a %s -m 'version %s'\" % (version, version))\n os.system(\"git push --tags\")\n sys.exit()\n\nreadme = open('README.rst').read()\nhistory = open('HISTORY.rst').read().replace('.. :changelog:', '')\n\nINSTALL_REQUIRES = [\n 'django>=1.7',\n 'stripe>=1.22.2',\n 'django-model-utils>=2.2',\n 'django-braces>=1.8.0',\n 'jsonfield>=1.0.3',\n 'pytz>=2015.4'\n]\n\nsetup(\n name='dj-stripe',\n version=version,\n description=djstripe.__summary__,\n long_description=readme + '\\n\\n' + history,\n author=djstripe.__author__,\n author_email=djstripe.__email__,\n url=djstripe.__uri__,\n packages=[\n 'djstripe',\n ],\n package_dir={'djstripe': 'djstripe'},\n include_package_data=True,\n install_requires=INSTALL_REQUIRES,\n license=djstripe.__license__,\n zip_safe=False,\n keywords='stripe django',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Web Environment',\n 'Framework :: Django',\n 'Framework :: Django :: 1.7',\n 'Framework :: Django :: 1.8',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Natural Language :: English',\n \"Programming Language :: Python :: 2\",\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5'\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\nimport ast\nimport os\nimport sys\n\n\nclass MetadataFinder(ast.NodeVisitor):\n def __init__(self):\n self.version = None\n self.summary = None\n self.author = None\n self.email = None\n self.uri = None\n self.licence = None\n\n def visit_Assign(self, node):\n if node.targets[0].id == '__version__':\n self.version = node.value.s\n elif node.targets[0].id == '__summary__':\n self.summary = node.value.s\n elif node.targets[0].id == '__author__':\n self.author = node.value.s\n elif node.targets[0].id == '__email__':\n self.email = node.value.s\n elif node.targets[0].id == '__uri__':\n self.uri = node.value.s\n elif node.targets[0].id == '__license__':\n self.license = node.value.s\n\n\nwith open(os.path.join('djstripe', '__init__.py')) as open_file:\n finder = MetadataFinder()\n finder.visit(ast.parse(open_file.read()))\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\nif sys.argv[-1] == 'publish':\n os.system('python setup.py sdist upload')\n os.system('python setup.py bdist_wheel upload')\n sys.exit()\n\nif sys.argv[-1] == 'tag':\n print(\"Tagging the version on github:\")\n os.system(\"git tag -a %s -m 'version %s'\" % (finder.version,\n finder.version))\n os.system(\"git push --tags\")\n sys.exit()\n\nreadme = open('README.rst').read()\nhistory = open('HISTORY.rst').read().replace('.. 
:changelog:', '')\n\nINSTALL_REQUIRES = [\n 'django>=1.7',\n 'stripe>=1.22.2',\n 'django-model-utils>=2.2',\n 'django-braces>=1.8.0',\n 'jsonfield>=1.0.3',\n 'pytz>=2015.4'\n]\n\nsetup(\n name='dj-stripe',\n version=finder.version,\n description=finder.summary,\n long_description=readme + '\\n\\n' + history,\n author=finder.author,\n author_email=finder.email,\n url=finder.uri,\n packages=[\n 'djstripe',\n ],\n package_dir={'djstripe': 'djstripe'},\n include_package_data=True,\n install_requires=INSTALL_REQUIRES,\n license=finder.license,\n zip_safe=False,\n keywords='stripe django',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Web Environment',\n 'Framework :: Django',\n 'Framework :: Django :: 1.7',\n 'Framework :: Django :: 1.8',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Natural Language :: English',\n \"Programming Language :: Python :: 2\",\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5'\n ],\n)\n", "path": "setup.py"}]}
| 1,182 | 591 |
gh_patches_debug_9495
|
rasdani/github-patches
|
git_diff
|
DataDog__dd-trace-py-231
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
psycopg2 cursor's __enter__ method is not patched to be traced
See behavior here:
```python
>>> import ddtrace
>>> ddtrace.patch_all()
>>> import psycopg2
>>> conn = psycopg2.connect('postgresql://localhost')
>>> print(type(conn.cursor()))
<class 'ddtrace.contrib.dbapi.TracedCursor'>
>>> with conn.cursor() as cur:
... print(type(cur))
<type 'psycopg2.extensions.cursor'>
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ddtrace/contrib/dbapi/__init__.py`
Content:
```
1 """
2 Generic dbapi tracing code.
3 """
4
5 # stdlib
6 import logging
7
8 # 3p
9 import wrapt
10
11 # project
12 from ddtrace import Pin
13 from ddtrace.ext import sql
14
15
16 log = logging.getLogger(__name__)
17
18
19 class TracedCursor(wrapt.ObjectProxy):
20 """ TracedCursor wraps a psql cursor and traces it's queries. """
21
22 _datadog_pin = None
23 _datadog_name = None
24
25 def __init__(self, cursor, pin):
26 super(TracedCursor, self).__init__(cursor)
27 self._datadog_pin = pin
28 name = pin.app or 'sql'
29 self._datadog_name = '%s.query' % name
30
31 def executemany(self, query, *args, **kwargs):
32 pin = self._datadog_pin
33 if not pin or not pin.enabled():
34 return self.__wrapped__.executemany(query, *args, **kwargs)
35 service = pin.service
36
37 # FIXME[matt] properly handle kwargs here. arg names can be different
38 # with different libs.
39 with pin.tracer.trace(self._datadog_name, service=service, resource=query) as s:
40 s.span_type = sql.TYPE
41 s.set_tag(sql.QUERY, query)
42 s.set_tags(pin.tags)
43 s.set_tag("sql.executemany", "true")
44 try:
45 return self.__wrapped__.executemany(query, *args, **kwargs)
46 finally:
47 s.set_metric("db.rowcount", self.rowcount)
48
49 def execute(self, query, *args, **kwargs):
50 pin = self._datadog_pin
51 if not pin or not pin.enabled():
52 return self.__wrapped__.execute(query, *args, **kwargs)
53
54 service = pin.service
55 with pin.tracer.trace(self._datadog_name, service=service, resource=query) as s:
56 s.span_type = sql.TYPE
57 s.set_tag(sql.QUERY, query)
58 s.set_tags(pin.tags)
59 try:
60 return self.__wrapped__.execute(query, *args, **kwargs)
61 finally:
62 s.set_metric("db.rowcount", self.rowcount)
63
64 def callproc(self, proc, args):
65 pin = self._datadog_pin
66 if not pin or not pin.enabled():
67 return self.__wrapped__.callproc(proc, args)
68
69 with pin.tracer.trace(self._datadog_name, service=pin.service, resource=proc) as s:
70 s.span_type = sql.TYPE
71 s.set_tag(sql.QUERY, proc)
72 s.set_tags(pin.tags)
73 try:
74 return self.__wrapped__.callproc(proc, args)
75 finally:
76 s.set_metric("db.rowcount", self.rowcount)
77
78
79 class TracedConnection(wrapt.ObjectProxy):
80 """ TracedConnection wraps a Connection with tracing code. """
81
82 _datadog_pin = None
83
84 def __init__(self, conn):
85 super(TracedConnection, self).__init__(conn)
86 name = _get_vendor(conn)
87 Pin(service=name, app=name).onto(self)
88
89 def cursor(self, *args, **kwargs):
90 cursor = self.__wrapped__.cursor(*args, **kwargs)
91 pin = self._datadog_pin
92 if not pin:
93 return cursor
94 return TracedCursor(cursor, pin)
95
96
97 def _get_vendor(conn):
98 """ Return the vendor (e.g postgres, mysql) of the given
99 database.
100 """
101 try:
102 name = _get_module_name(conn)
103 except Exception:
104 log.debug("couldnt parse module name", exc_info=True)
105 name = "sql"
106 return sql.normalize_vendor(name)
107
108 def _get_module_name(conn):
109 return conn.__class__.__module__.split('.')[0]
110
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ddtrace/contrib/dbapi/__init__.py b/ddtrace/contrib/dbapi/__init__.py
--- a/ddtrace/contrib/dbapi/__init__.py
+++ b/ddtrace/contrib/dbapi/__init__.py
@@ -75,6 +75,15 @@
finally:
s.set_metric("db.rowcount", self.rowcount)
+ def __enter__(self):
+ # previous versions of the dbapi didn't support context managers. let's
+ # reference the func that would be called to ensure that errors
+ # messages will be the same.
+ self.__wrapped__.__enter__
+
+ # and finally, yield the traced cursor.
+ return self
+
class TracedConnection(wrapt.ObjectProxy):
""" TracedConnection wraps a Connection with tracing code. """
|
{"golden_diff": "diff --git a/ddtrace/contrib/dbapi/__init__.py b/ddtrace/contrib/dbapi/__init__.py\n--- a/ddtrace/contrib/dbapi/__init__.py\n+++ b/ddtrace/contrib/dbapi/__init__.py\n@@ -75,6 +75,15 @@\n finally:\n s.set_metric(\"db.rowcount\", self.rowcount)\n \n+ def __enter__(self):\n+ # previous versions of the dbapi didn't support context managers. let's\n+ # reference the func that would be called to ensure that errors\n+ # messages will be the same.\n+ self.__wrapped__.__enter__\n+\n+ # and finally, yield the traced cursor.\n+ return self\n+\n \n class TracedConnection(wrapt.ObjectProxy):\n \"\"\" TracedConnection wraps a Connection with tracing code. \"\"\"\n", "issue": "psycopg2 cursor's __enter__ method is not patched to be traced\nSee behavior here:\r\n\r\n```python\r\n>>> import ddtrace\r\n>>> ddtrace.patch_all()\r\n>>> import psycopg2\r\n>>> conn = psycopg2.connect('postgresql://localhost')\r\n>>> print(type(conn.cursor()))\r\n<class 'ddtrace.contrib.dbapi.TracedCursor'>\r\n>>> with conn.cursor() as cur:\r\n... print(type(cur))\r\n<type 'psycopg2.extensions.cursor'>\r\n```\n", "before_files": [{"content": "\"\"\"\nGeneric dbapi tracing code.\n\"\"\"\n\n# stdlib\nimport logging\n\n# 3p\nimport wrapt\n\n# project\nfrom ddtrace import Pin\nfrom ddtrace.ext import sql\n\n\nlog = logging.getLogger(__name__)\n\n\nclass TracedCursor(wrapt.ObjectProxy):\n \"\"\" TracedCursor wraps a psql cursor and traces it's queries. \"\"\"\n\n _datadog_pin = None\n _datadog_name = None\n\n def __init__(self, cursor, pin):\n super(TracedCursor, self).__init__(cursor)\n self._datadog_pin = pin\n name = pin.app or 'sql'\n self._datadog_name = '%s.query' % name\n\n def executemany(self, query, *args, **kwargs):\n pin = self._datadog_pin\n if not pin or not pin.enabled():\n return self.__wrapped__.executemany(query, *args, **kwargs)\n service = pin.service\n\n # FIXME[matt] properly handle kwargs here. arg names can be different\n # with different libs.\n with pin.tracer.trace(self._datadog_name, service=service, resource=query) as s:\n s.span_type = sql.TYPE\n s.set_tag(sql.QUERY, query)\n s.set_tags(pin.tags)\n s.set_tag(\"sql.executemany\", \"true\")\n try:\n return self.__wrapped__.executemany(query, *args, **kwargs)\n finally:\n s.set_metric(\"db.rowcount\", self.rowcount)\n\n def execute(self, query, *args, **kwargs):\n pin = self._datadog_pin\n if not pin or not pin.enabled():\n return self.__wrapped__.execute(query, *args, **kwargs)\n\n service = pin.service\n with pin.tracer.trace(self._datadog_name, service=service, resource=query) as s:\n s.span_type = sql.TYPE\n s.set_tag(sql.QUERY, query)\n s.set_tags(pin.tags)\n try:\n return self.__wrapped__.execute(query, *args, **kwargs)\n finally:\n s.set_metric(\"db.rowcount\", self.rowcount)\n\n def callproc(self, proc, args):\n pin = self._datadog_pin\n if not pin or not pin.enabled():\n return self.__wrapped__.callproc(proc, args)\n\n with pin.tracer.trace(self._datadog_name, service=pin.service, resource=proc) as s:\n s.span_type = sql.TYPE\n s.set_tag(sql.QUERY, proc)\n s.set_tags(pin.tags)\n try:\n return self.__wrapped__.callproc(proc, args)\n finally:\n s.set_metric(\"db.rowcount\", self.rowcount)\n\n\nclass TracedConnection(wrapt.ObjectProxy):\n \"\"\" TracedConnection wraps a Connection with tracing code. 
\"\"\"\n\n _datadog_pin = None\n\n def __init__(self, conn):\n super(TracedConnection, self).__init__(conn)\n name = _get_vendor(conn)\n Pin(service=name, app=name).onto(self)\n\n def cursor(self, *args, **kwargs):\n cursor = self.__wrapped__.cursor(*args, **kwargs)\n pin = self._datadog_pin\n if not pin:\n return cursor\n return TracedCursor(cursor, pin)\n\n\ndef _get_vendor(conn):\n \"\"\" Return the vendor (e.g postgres, mysql) of the given\n database.\n \"\"\"\n try:\n name = _get_module_name(conn)\n except Exception:\n log.debug(\"couldnt parse module name\", exc_info=True)\n name = \"sql\"\n return sql.normalize_vendor(name)\n\ndef _get_module_name(conn):\n return conn.__class__.__module__.split('.')[0]\n", "path": "ddtrace/contrib/dbapi/__init__.py"}], "after_files": [{"content": "\"\"\"\nGeneric dbapi tracing code.\n\"\"\"\n\n# stdlib\nimport logging\n\n# 3p\nimport wrapt\n\n# project\nfrom ddtrace import Pin\nfrom ddtrace.ext import sql\n\n\nlog = logging.getLogger(__name__)\n\n\nclass TracedCursor(wrapt.ObjectProxy):\n \"\"\" TracedCursor wraps a psql cursor and traces it's queries. \"\"\"\n\n _datadog_pin = None\n _datadog_name = None\n\n def __init__(self, cursor, pin):\n super(TracedCursor, self).__init__(cursor)\n self._datadog_pin = pin\n name = pin.app or 'sql'\n self._datadog_name = '%s.query' % name\n\n def executemany(self, query, *args, **kwargs):\n pin = self._datadog_pin\n if not pin or not pin.enabled():\n return self.__wrapped__.executemany(query, *args, **kwargs)\n service = pin.service\n\n # FIXME[matt] properly handle kwargs here. arg names can be different\n # with different libs.\n with pin.tracer.trace(self._datadog_name, service=service, resource=query) as s:\n s.span_type = sql.TYPE\n s.set_tag(sql.QUERY, query)\n s.set_tags(pin.tags)\n s.set_tag(\"sql.executemany\", \"true\")\n try:\n return self.__wrapped__.executemany(query, *args, **kwargs)\n finally:\n s.set_metric(\"db.rowcount\", self.rowcount)\n\n def execute(self, query, *args, **kwargs):\n pin = self._datadog_pin\n if not pin or not pin.enabled():\n return self.__wrapped__.execute(query, *args, **kwargs)\n\n service = pin.service\n with pin.tracer.trace(self._datadog_name, service=service, resource=query) as s:\n s.span_type = sql.TYPE\n s.set_tag(sql.QUERY, query)\n s.set_tags(pin.tags)\n try:\n return self.__wrapped__.execute(query, *args, **kwargs)\n finally:\n s.set_metric(\"db.rowcount\", self.rowcount)\n\n def callproc(self, proc, args):\n pin = self._datadog_pin\n if not pin or not pin.enabled():\n return self.__wrapped__.callproc(proc, args)\n\n with pin.tracer.trace(self._datadog_name, service=pin.service, resource=proc) as s:\n s.span_type = sql.TYPE\n s.set_tag(sql.QUERY, proc)\n s.set_tags(pin.tags)\n try:\n return self.__wrapped__.callproc(proc, args)\n finally:\n s.set_metric(\"db.rowcount\", self.rowcount)\n\n def __enter__(self):\n # previous versions of the dbapi didn't support context managers. let's\n # reference the func that would be called to ensure that errors\n # messages will be the same.\n self.__wrapped__.__enter__\n\n # and finally, yield the traced cursor.\n return self\n\n\nclass TracedConnection(wrapt.ObjectProxy):\n \"\"\" TracedConnection wraps a Connection with tracing code. 
\"\"\"\n\n _datadog_pin = None\n\n def __init__(self, conn):\n super(TracedConnection, self).__init__(conn)\n name = _get_vendor(conn)\n Pin(service=name, app=name).onto(self)\n\n def cursor(self, *args, **kwargs):\n cursor = self.__wrapped__.cursor(*args, **kwargs)\n pin = self._datadog_pin\n if not pin:\n return cursor\n return TracedCursor(cursor, pin)\n\n\ndef _get_vendor(conn):\n \"\"\" Return the vendor (e.g postgres, mysql) of the given\n database.\n \"\"\"\n try:\n name = _get_module_name(conn)\n except Exception:\n log.debug(\"couldnt parse module name\", exc_info=True)\n name = \"sql\"\n return sql.normalize_vendor(name)\n\ndef _get_module_name(conn):\n return conn.__class__.__module__.split('.')[0]\n", "path": "ddtrace/contrib/dbapi/__init__.py"}]}
| 1,403 | 182 |
gh_patches_debug_6033
|
rasdani/github-patches
|
git_diff
|
encode__starlette-1504
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Deprecate `WSGIMiddleware` in favor of `a2wsgi`
### Checklist
- [X] There are no similar issues or pull requests for this yet.
- [X] I discussed this idea on the [community chat](https://gitter.im/encode/community) and feedback is positive.
### Is your feature related to a problem? Please describe.
I want to deprecate `WSGIMiddleware` and recommend [a2wsgi](https://github.com/abersheeran/a2wsgi) on the documentation.
Right now, the `WSGIMiddleware` is not documented, so not that harmful to deprecate. I expect the deprecation message to inform about `a2wsgi` or recommend the specific page on the docs so users using the middleware can fix the warning easily.
### Describe the solution you would like.
_No response_
### Describe alternatives you considered
_No response_
### Additional context
Gitter conversation about the topic:

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `starlette/middleware/wsgi.py`
Content:
```
1 import io
2 import math
3 import sys
4 import typing
5
6 import anyio
7
8 from starlette.types import Receive, Scope, Send
9
10
11 def build_environ(scope: Scope, body: bytes) -> dict:
12 """
13 Builds a scope and request body into a WSGI environ object.
14 """
15 environ = {
16 "REQUEST_METHOD": scope["method"],
17 "SCRIPT_NAME": scope.get("root_path", "").encode("utf8").decode("latin1"),
18 "PATH_INFO": scope["path"].encode("utf8").decode("latin1"),
19 "QUERY_STRING": scope["query_string"].decode("ascii"),
20 "SERVER_PROTOCOL": f"HTTP/{scope['http_version']}",
21 "wsgi.version": (1, 0),
22 "wsgi.url_scheme": scope.get("scheme", "http"),
23 "wsgi.input": io.BytesIO(body),
24 "wsgi.errors": sys.stdout,
25 "wsgi.multithread": True,
26 "wsgi.multiprocess": True,
27 "wsgi.run_once": False,
28 }
29
30 # Get server name and port - required in WSGI, not in ASGI
31 server = scope.get("server") or ("localhost", 80)
32 environ["SERVER_NAME"] = server[0]
33 environ["SERVER_PORT"] = server[1]
34
35 # Get client IP address
36 if scope.get("client"):
37 environ["REMOTE_ADDR"] = scope["client"][0]
38
39 # Go through headers and make them into environ entries
40 for name, value in scope.get("headers", []):
41 name = name.decode("latin1")
42 if name == "content-length":
43 corrected_name = "CONTENT_LENGTH"
44 elif name == "content-type":
45 corrected_name = "CONTENT_TYPE"
46 else:
47 corrected_name = f"HTTP_{name}".upper().replace("-", "_")
48 # HTTPbis say only ASCII chars are allowed in headers, but we latin1 just in
49 # case
50 value = value.decode("latin1")
51 if corrected_name in environ:
52 value = environ[corrected_name] + "," + value
53 environ[corrected_name] = value
54 return environ
55
56
57 class WSGIMiddleware:
58 def __init__(self, app: typing.Callable) -> None:
59 self.app = app
60
61 async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
62 assert scope["type"] == "http"
63 responder = WSGIResponder(self.app, scope)
64 await responder(receive, send)
65
66
67 class WSGIResponder:
68 def __init__(self, app: typing.Callable, scope: Scope) -> None:
69 self.app = app
70 self.scope = scope
71 self.status = None
72 self.response_headers = None
73 self.stream_send, self.stream_receive = anyio.create_memory_object_stream(
74 math.inf
75 )
76 self.response_started = False
77 self.exc_info: typing.Any = None
78
79 async def __call__(self, receive: Receive, send: Send) -> None:
80 body = b""
81 more_body = True
82 while more_body:
83 message = await receive()
84 body += message.get("body", b"")
85 more_body = message.get("more_body", False)
86 environ = build_environ(self.scope, body)
87
88 async with anyio.create_task_group() as task_group:
89 task_group.start_soon(self.sender, send)
90 async with self.stream_send:
91 await anyio.to_thread.run_sync(self.wsgi, environ, self.start_response)
92 if self.exc_info is not None:
93 raise self.exc_info[0].with_traceback(self.exc_info[1], self.exc_info[2])
94
95 async def sender(self, send: Send) -> None:
96 async with self.stream_receive:
97 async for message in self.stream_receive:
98 await send(message)
99
100 def start_response(
101 self,
102 status: str,
103 response_headers: typing.List[typing.Tuple[str, str]],
104 exc_info: typing.Any = None,
105 ) -> None:
106 self.exc_info = exc_info
107 if not self.response_started:
108 self.response_started = True
109 status_code_string, _ = status.split(" ", 1)
110 status_code = int(status_code_string)
111 headers = [
112 (name.strip().encode("ascii").lower(), value.strip().encode("ascii"))
113 for name, value in response_headers
114 ]
115 anyio.from_thread.run(
116 self.stream_send.send,
117 {
118 "type": "http.response.start",
119 "status": status_code,
120 "headers": headers,
121 },
122 )
123
124 def wsgi(self, environ: dict, start_response: typing.Callable) -> None:
125 for chunk in self.app(environ, start_response):
126 anyio.from_thread.run(
127 self.stream_send.send,
128 {"type": "http.response.body", "body": chunk, "more_body": True},
129 )
130
131 anyio.from_thread.run(
132 self.stream_send.send, {"type": "http.response.body", "body": b""}
133 )
134
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/starlette/middleware/wsgi.py b/starlette/middleware/wsgi.py
--- a/starlette/middleware/wsgi.py
+++ b/starlette/middleware/wsgi.py
@@ -2,11 +2,18 @@
import math
import sys
import typing
+import warnings
import anyio
from starlette.types import Receive, Scope, Send
+warnings.warn(
+ "starlette.middleware.wsgi is deprecated and will be removed in a future release. "
+ "Please refer to https://github.com/abersheeran/a2wsgi as a replacement.",
+ DeprecationWarning,
+)
+
def build_environ(scope: Scope, body: bytes) -> dict:
"""
|
{"golden_diff": "diff --git a/starlette/middleware/wsgi.py b/starlette/middleware/wsgi.py\n--- a/starlette/middleware/wsgi.py\n+++ b/starlette/middleware/wsgi.py\n@@ -2,11 +2,18 @@\n import math\n import sys\n import typing\n+import warnings\n \n import anyio\n \n from starlette.types import Receive, Scope, Send\n \n+warnings.warn(\n+ \"starlette.middleware.wsgi is deprecated and will be removed in a future release. \"\n+ \"Please refer to https://github.com/abersheeran/a2wsgi as a replacement.\",\n+ DeprecationWarning,\n+)\n+\n \n def build_environ(scope: Scope, body: bytes) -> dict:\n \"\"\"\n", "issue": "Deprecate `WSGIMiddleware` in favor of `a2wsgi`\n### Checklist\n\n- [X] There are no similar issues or pull requests for this yet.\n- [X] I discussed this idea on the [community chat](https://gitter.im/encode/community) and feedback is positive.\n\n### Is your feature related to a problem? Please describe.\n\nI want to deprecate `WSGIMiddleware` and recommend [a2wsgi](https://github.com/abersheeran/a2wsgi) on the documentation.\r\n\r\nRight now, the `WSGIMiddleware` is not documented, so not that harmful to deprecate. I expect the deprecation message to inform about `a2wsgi` or recommend the specific page on the docs so users using the middleware can fix the warning easily.\n\n### Describe the solution you would like.\n\n_No response_\n\n### Describe alternatives you considered\n\n_No response_\n\n### Additional context\n\nGitter conversation about the topic:\r\n\r\n\r\n\n", "before_files": [{"content": "import io\nimport math\nimport sys\nimport typing\n\nimport anyio\n\nfrom starlette.types import Receive, Scope, Send\n\n\ndef build_environ(scope: Scope, body: bytes) -> dict:\n \"\"\"\n Builds a scope and request body into a WSGI environ object.\n \"\"\"\n environ = {\n \"REQUEST_METHOD\": scope[\"method\"],\n \"SCRIPT_NAME\": scope.get(\"root_path\", \"\").encode(\"utf8\").decode(\"latin1\"),\n \"PATH_INFO\": scope[\"path\"].encode(\"utf8\").decode(\"latin1\"),\n \"QUERY_STRING\": scope[\"query_string\"].decode(\"ascii\"),\n \"SERVER_PROTOCOL\": f\"HTTP/{scope['http_version']}\",\n \"wsgi.version\": (1, 0),\n \"wsgi.url_scheme\": scope.get(\"scheme\", \"http\"),\n \"wsgi.input\": io.BytesIO(body),\n \"wsgi.errors\": sys.stdout,\n \"wsgi.multithread\": True,\n \"wsgi.multiprocess\": True,\n \"wsgi.run_once\": False,\n }\n\n # Get server name and port - required in WSGI, not in ASGI\n server = scope.get(\"server\") or (\"localhost\", 80)\n environ[\"SERVER_NAME\"] = server[0]\n environ[\"SERVER_PORT\"] = server[1]\n\n # Get client IP address\n if scope.get(\"client\"):\n environ[\"REMOTE_ADDR\"] = scope[\"client\"][0]\n\n # Go through headers and make them into environ entries\n for name, value in scope.get(\"headers\", []):\n name = name.decode(\"latin1\")\n if name == \"content-length\":\n corrected_name = \"CONTENT_LENGTH\"\n elif name == \"content-type\":\n corrected_name = \"CONTENT_TYPE\"\n else:\n corrected_name = f\"HTTP_{name}\".upper().replace(\"-\", \"_\")\n # HTTPbis say only ASCII chars are allowed in headers, but we latin1 just in\n # case\n value = value.decode(\"latin1\")\n if corrected_name in environ:\n value = environ[corrected_name] + \",\" + value\n environ[corrected_name] = value\n return environ\n\n\nclass WSGIMiddleware:\n def __init__(self, app: typing.Callable) -> None:\n self.app = app\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n assert scope[\"type\"] == \"http\"\n responder = WSGIResponder(self.app, scope)\n await 
responder(receive, send)\n\n\nclass WSGIResponder:\n def __init__(self, app: typing.Callable, scope: Scope) -> None:\n self.app = app\n self.scope = scope\n self.status = None\n self.response_headers = None\n self.stream_send, self.stream_receive = anyio.create_memory_object_stream(\n math.inf\n )\n self.response_started = False\n self.exc_info: typing.Any = None\n\n async def __call__(self, receive: Receive, send: Send) -> None:\n body = b\"\"\n more_body = True\n while more_body:\n message = await receive()\n body += message.get(\"body\", b\"\")\n more_body = message.get(\"more_body\", False)\n environ = build_environ(self.scope, body)\n\n async with anyio.create_task_group() as task_group:\n task_group.start_soon(self.sender, send)\n async with self.stream_send:\n await anyio.to_thread.run_sync(self.wsgi, environ, self.start_response)\n if self.exc_info is not None:\n raise self.exc_info[0].with_traceback(self.exc_info[1], self.exc_info[2])\n\n async def sender(self, send: Send) -> None:\n async with self.stream_receive:\n async for message in self.stream_receive:\n await send(message)\n\n def start_response(\n self,\n status: str,\n response_headers: typing.List[typing.Tuple[str, str]],\n exc_info: typing.Any = None,\n ) -> None:\n self.exc_info = exc_info\n if not self.response_started:\n self.response_started = True\n status_code_string, _ = status.split(\" \", 1)\n status_code = int(status_code_string)\n headers = [\n (name.strip().encode(\"ascii\").lower(), value.strip().encode(\"ascii\"))\n for name, value in response_headers\n ]\n anyio.from_thread.run(\n self.stream_send.send,\n {\n \"type\": \"http.response.start\",\n \"status\": status_code,\n \"headers\": headers,\n },\n )\n\n def wsgi(self, environ: dict, start_response: typing.Callable) -> None:\n for chunk in self.app(environ, start_response):\n anyio.from_thread.run(\n self.stream_send.send,\n {\"type\": \"http.response.body\", \"body\": chunk, \"more_body\": True},\n )\n\n anyio.from_thread.run(\n self.stream_send.send, {\"type\": \"http.response.body\", \"body\": b\"\"}\n )\n", "path": "starlette/middleware/wsgi.py"}], "after_files": [{"content": "import io\nimport math\nimport sys\nimport typing\nimport warnings\n\nimport anyio\n\nfrom starlette.types import Receive, Scope, Send\n\nwarnings.warn(\n \"starlette.middleware.wsgi is deprecated and will be removed in a future release. 
\"\n \"Please refer to https://github.com/abersheeran/a2wsgi as a replacement.\",\n DeprecationWarning,\n)\n\n\ndef build_environ(scope: Scope, body: bytes) -> dict:\n \"\"\"\n Builds a scope and request body into a WSGI environ object.\n \"\"\"\n environ = {\n \"REQUEST_METHOD\": scope[\"method\"],\n \"SCRIPT_NAME\": scope.get(\"root_path\", \"\").encode(\"utf8\").decode(\"latin1\"),\n \"PATH_INFO\": scope[\"path\"].encode(\"utf8\").decode(\"latin1\"),\n \"QUERY_STRING\": scope[\"query_string\"].decode(\"ascii\"),\n \"SERVER_PROTOCOL\": f\"HTTP/{scope['http_version']}\",\n \"wsgi.version\": (1, 0),\n \"wsgi.url_scheme\": scope.get(\"scheme\", \"http\"),\n \"wsgi.input\": io.BytesIO(body),\n \"wsgi.errors\": sys.stdout,\n \"wsgi.multithread\": True,\n \"wsgi.multiprocess\": True,\n \"wsgi.run_once\": False,\n }\n\n # Get server name and port - required in WSGI, not in ASGI\n server = scope.get(\"server\") or (\"localhost\", 80)\n environ[\"SERVER_NAME\"] = server[0]\n environ[\"SERVER_PORT\"] = server[1]\n\n # Get client IP address\n if scope.get(\"client\"):\n environ[\"REMOTE_ADDR\"] = scope[\"client\"][0]\n\n # Go through headers and make them into environ entries\n for name, value in scope.get(\"headers\", []):\n name = name.decode(\"latin1\")\n if name == \"content-length\":\n corrected_name = \"CONTENT_LENGTH\"\n elif name == \"content-type\":\n corrected_name = \"CONTENT_TYPE\"\n else:\n corrected_name = f\"HTTP_{name}\".upper().replace(\"-\", \"_\")\n # HTTPbis say only ASCII chars are allowed in headers, but we latin1 just in\n # case\n value = value.decode(\"latin1\")\n if corrected_name in environ:\n value = environ[corrected_name] + \",\" + value\n environ[corrected_name] = value\n return environ\n\n\nclass WSGIMiddleware:\n def __init__(self, app: typing.Callable) -> None:\n self.app = app\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n assert scope[\"type\"] == \"http\"\n responder = WSGIResponder(self.app, scope)\n await responder(receive, send)\n\n\nclass WSGIResponder:\n def __init__(self, app: typing.Callable, scope: Scope) -> None:\n self.app = app\n self.scope = scope\n self.status = None\n self.response_headers = None\n self.stream_send, self.stream_receive = anyio.create_memory_object_stream(\n math.inf\n )\n self.response_started = False\n self.exc_info: typing.Any = None\n\n async def __call__(self, receive: Receive, send: Send) -> None:\n body = b\"\"\n more_body = True\n while more_body:\n message = await receive()\n body += message.get(\"body\", b\"\")\n more_body = message.get(\"more_body\", False)\n environ = build_environ(self.scope, body)\n\n async with anyio.create_task_group() as task_group:\n task_group.start_soon(self.sender, send)\n async with self.stream_send:\n await anyio.to_thread.run_sync(self.wsgi, environ, self.start_response)\n if self.exc_info is not None:\n raise self.exc_info[0].with_traceback(self.exc_info[1], self.exc_info[2])\n\n async def sender(self, send: Send) -> None:\n async with self.stream_receive:\n async for message in self.stream_receive:\n await send(message)\n\n def start_response(\n self,\n status: str,\n response_headers: typing.List[typing.Tuple[str, str]],\n exc_info: typing.Any = None,\n ) -> None:\n self.exc_info = exc_info\n if not self.response_started:\n self.response_started = True\n status_code_string, _ = status.split(\" \", 1)\n status_code = int(status_code_string)\n headers = [\n (name.strip().encode(\"ascii\").lower(), value.strip().encode(\"ascii\"))\n for name, value in 
response_headers\n ]\n anyio.from_thread.run(\n self.stream_send.send,\n {\n \"type\": \"http.response.start\",\n \"status\": status_code,\n \"headers\": headers,\n },\n )\n\n def wsgi(self, environ: dict, start_response: typing.Callable) -> None:\n for chunk in self.app(environ, start_response):\n anyio.from_thread.run(\n self.stream_send.send,\n {\"type\": \"http.response.body\", \"body\": chunk, \"more_body\": True},\n )\n\n anyio.from_thread.run(\n self.stream_send.send, {\"type\": \"http.response.body\", \"body\": b\"\"}\n )\n", "path": "starlette/middleware/wsgi.py"}]}
| 1,913 | 156 |
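The deprecation shown in this row boils down to a module-level `warnings.warn(..., DeprecationWarning)` emitted at import time. A minimal, self-contained sketch of that pattern, with a placeholder function name and message rather than starlette code:

```python
import warnings

def import_deprecated_module():
    # Stand-in for the warning the patch adds at module import time.
    warnings.warn(
        "this module is deprecated; use the replacement package instead.",
        DeprecationWarning,
        stacklevel=2,
    )

# DeprecationWarning is filtered by default in many contexts, so a test
# usually captures it explicitly instead of relying on console output.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    import_deprecated_module()

print(caught[0].category.__name__)  # DeprecationWarning
```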
gh_patches_debug_4362
|
rasdani/github-patches
|
git_diff
|
psf__black-2836
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Ignore __pypackages__ directory contents
**Describe the bug**
When using [PDM](https://pdm.fming.dev/), `black` does not ignore `__pypackages__` directory contents.
**To Reproduce**
Run `pdm run black .`
**Expected behavior**
`black` should reformat only project files.
**Environment**
- Black's version: 22.1.0
- PDM version: 1.12.6
- OS and Python version: Ubuntu 21.10 with Python 3.10.1
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/black/const.py`
Content:
```
1 DEFAULT_LINE_LENGTH = 88
2 DEFAULT_EXCLUDES = r"/(\.direnv|\.eggs|\.git|\.hg|\.mypy_cache|\.nox|\.tox|\.venv|venv|\.svn|_build|buck-out|build|dist)/" # noqa: B950
3 DEFAULT_INCLUDES = r"(\.pyi?|\.ipynb)$"
4 STDIN_PLACEHOLDER = "__BLACK_STDIN_FILENAME__"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/black/const.py b/src/black/const.py
--- a/src/black/const.py
+++ b/src/black/const.py
@@ -1,4 +1,4 @@
DEFAULT_LINE_LENGTH = 88
-DEFAULT_EXCLUDES = r"/(\.direnv|\.eggs|\.git|\.hg|\.mypy_cache|\.nox|\.tox|\.venv|venv|\.svn|_build|buck-out|build|dist)/" # noqa: B950
+DEFAULT_EXCLUDES = r"/(\.direnv|\.eggs|\.git|\.hg|\.mypy_cache|\.nox|\.tox|\.venv|venv|\.svn|_build|buck-out|build|dist|__pypackages__)/" # noqa: B950
DEFAULT_INCLUDES = r"(\.pyi?|\.ipynb)$"
STDIN_PLACEHOLDER = "__BLACK_STDIN_FILENAME__"
|
{"golden_diff": "diff --git a/src/black/const.py b/src/black/const.py\n--- a/src/black/const.py\n+++ b/src/black/const.py\n@@ -1,4 +1,4 @@\n DEFAULT_LINE_LENGTH = 88\n-DEFAULT_EXCLUDES = r\"/(\\.direnv|\\.eggs|\\.git|\\.hg|\\.mypy_cache|\\.nox|\\.tox|\\.venv|venv|\\.svn|_build|buck-out|build|dist)/\" # noqa: B950\n+DEFAULT_EXCLUDES = r\"/(\\.direnv|\\.eggs|\\.git|\\.hg|\\.mypy_cache|\\.nox|\\.tox|\\.venv|venv|\\.svn|_build|buck-out|build|dist|__pypackages__)/\" # noqa: B950\n DEFAULT_INCLUDES = r\"(\\.pyi?|\\.ipynb)$\"\n STDIN_PLACEHOLDER = \"__BLACK_STDIN_FILENAME__\"\n", "issue": "Ignore __pypackages__ directory contents\n**Describe the bug**\r\n\r\nWhen using [PDM](https://pdm.fming.dev/), `black` does not ignore `__pypackages__` directory contents.\r\n\r\n**To Reproduce**\r\n\r\nRun `pdm run black .`\r\n\r\n**Expected behavior**\r\n\r\n`black` should reformat only project files.\r\n\r\n**Environment**\r\n\r\n- Black's version: 22.1.0\r\n- PDM version: 1.12.6\r\n- OS and Python version: Ubuntu 21.10 with Python 3.10.1\r\n\n", "before_files": [{"content": "DEFAULT_LINE_LENGTH = 88\nDEFAULT_EXCLUDES = r\"/(\\.direnv|\\.eggs|\\.git|\\.hg|\\.mypy_cache|\\.nox|\\.tox|\\.venv|venv|\\.svn|_build|buck-out|build|dist)/\" # noqa: B950\nDEFAULT_INCLUDES = r\"(\\.pyi?|\\.ipynb)$\"\nSTDIN_PLACEHOLDER = \"__BLACK_STDIN_FILENAME__\"\n", "path": "src/black/const.py"}], "after_files": [{"content": "DEFAULT_LINE_LENGTH = 88\nDEFAULT_EXCLUDES = r\"/(\\.direnv|\\.eggs|\\.git|\\.hg|\\.mypy_cache|\\.nox|\\.tox|\\.venv|venv|\\.svn|_build|buck-out|build|dist|__pypackages__)/\" # noqa: B950\nDEFAULT_INCLUDES = r\"(\\.pyi?|\\.ipynb)$\"\nSTDIN_PLACEHOLDER = \"__BLACK_STDIN_FILENAME__\"\n", "path": "src/black/const.py"}]}
| 497 | 218 |
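The one-line patch above only extends black's default exclude regex. A small sketch of how the updated pattern behaves against example paths (the paths are made up for illustration; black's real file discovery also normalizes paths before matching):

```python
import re

# Updated default exclude pattern from the patch above.
DEFAULT_EXCLUDES = (
    r"/(\.direnv|\.eggs|\.git|\.hg|\.mypy_cache|\.nox|\.tox|\.venv|venv"
    r"|\.svn|_build|buck-out|build|dist|__pypackages__)/"
)

pattern = re.compile(DEFAULT_EXCLUDES)

# PDM installs dependencies under __pypackages__/<python-version>/lib/,
# which now matches the exclude pattern and is skipped.
print(bool(pattern.search("/__pypackages__/3.10/lib/requests/api.py")))  # True -> skipped
print(bool(pattern.search("/src/mypackage/module.py")))                  # False -> formatted
```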
gh_patches_debug_25971
|
rasdani/github-patches
|
git_diff
|
flairNLP__flair-214
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add torch.no_grad() to LanguageModel.generate_text()
The autograd engine is not required when using an LM to generate text.
So, as pointed out in #167, `torch.no_grad()` needs to be added to `LanguageModel.generate_text()` for better performance and to avoid out of memory issues.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `flair/models/language_model.py`
Content:
```
1 import torch.nn as nn
2 import torch
3 import math
4 from torch.autograd import Variable
5 from typing import List
6 from flair.data import Dictionary
7
8
9 class LanguageModel(nn.Module):
10 """Container module with an encoder, a recurrent module, and a decoder."""
11
12 def __init__(self,
13 dictionary: Dictionary,
14 is_forward_lm: bool,
15 hidden_size: int,
16 nlayers: int,
17 embedding_size: int = 100,
18 nout=None,
19 dropout=0.5):
20
21 super(LanguageModel, self).__init__()
22
23 self.dictionary = dictionary
24 self.is_forward_lm: bool = is_forward_lm
25
26 self.dropout = dropout
27 self.hidden_size = hidden_size
28 self.embedding_size = embedding_size
29 self.nlayers = nlayers
30
31 self.drop = nn.Dropout(dropout)
32 self.encoder = nn.Embedding(len(dictionary), embedding_size)
33
34 if nlayers == 1:
35 self.rnn = nn.LSTM(embedding_size, hidden_size, nlayers)
36 else:
37 self.rnn = nn.LSTM(embedding_size, hidden_size, nlayers, dropout=dropout)
38
39 self.hidden = None
40
41 self.nout = nout
42 if nout is not None:
43 self.proj = nn.Linear(hidden_size, nout)
44 self.initialize(self.proj.weight)
45 self.decoder = nn.Linear(nout, len(dictionary))
46 else:
47 self.proj = None
48 self.decoder = nn.Linear(hidden_size, len(dictionary))
49
50 self.init_weights()
51
52 # auto-spawn on GPU if available
53 if torch.cuda.is_available():
54 self.cuda()
55
56 def init_weights(self):
57 initrange = 0.1
58 self.encoder.weight.data.uniform_(-initrange, initrange)
59 self.decoder.bias.data.fill_(0)
60 self.decoder.weight.data.uniform_(-initrange, initrange)
61
62 def set_hidden(self, hidden):
63 self.hidden = hidden
64
65 def forward(self, input, hidden, ordered_sequence_lengths=None):
66 encoded = self.encoder(input)
67 emb = self.drop(encoded)
68
69 self.rnn.flatten_parameters()
70
71 output, hidden = self.rnn(emb, hidden)
72
73 if self.proj is not None:
74 output = self.proj(output)
75
76 output = self.drop(output)
77
78 decoded = self.decoder(output.view(output.size(0) * output.size(1), output.size(2)))
79
80 return decoded.view(output.size(0), output.size(1), decoded.size(1)), output, hidden
81
82 def init_hidden(self, bsz):
83 weight = next(self.parameters()).data
84 return (Variable(weight.new(self.nlayers, bsz, self.hidden_size).zero_()),
85 Variable(weight.new(self.nlayers, bsz, self.hidden_size).zero_()))
86
87 def get_representation(self, strings: List[str], detach_from_lm=True):
88
89 sequences_as_char_indices: List[List[int]] = []
90 for string in strings:
91 char_indices = [self.dictionary.get_idx_for_item(char) for char in string]
92 sequences_as_char_indices.append(char_indices)
93
94 batch = Variable(torch.LongTensor(sequences_as_char_indices).transpose(0, 1))
95
96 if torch.cuda.is_available():
97 batch = batch.cuda()
98
99 hidden = self.init_hidden(len(strings))
100 prediction, rnn_output, hidden = self.forward(batch, hidden)
101
102 if detach_from_lm: rnn_output = self.repackage_hidden(rnn_output)
103
104 return rnn_output
105
106 def repackage_hidden(self, h):
107 """Wraps hidden states in new Variables, to detach them from their history."""
108 if type(h) == torch.Tensor:
109 return Variable(h.data)
110 else:
111 return tuple(self.repackage_hidden(v) for v in h)
112
113 def initialize(self, matrix):
114 in_, out_ = matrix.size()
115 stdv = math.sqrt(3. / (in_ + out_))
116 matrix.data.uniform_(-stdv, stdv)
117
118 @classmethod
119 def load_language_model(cls, model_file):
120
121 if not torch.cuda.is_available():
122 state = torch.load(model_file, map_location='cpu')
123 else:
124 state = torch.load(model_file)
125
126 model = LanguageModel(state['dictionary'],
127 state['is_forward_lm'],
128 state['hidden_size'],
129 state['nlayers'],
130 state['embedding_size'],
131 state['nout'],
132 state['dropout'])
133 model.load_state_dict(state['state_dict'])
134 model.eval()
135 if torch.cuda.is_available():
136 model.cuda()
137 return model
138
139 def save(self, file):
140 model_state = {
141 'state_dict': self.state_dict(),
142 'dictionary': self.dictionary,
143 'is_forward_lm': self.is_forward_lm,
144 'hidden_size': self.hidden_size,
145 'nlayers': self.nlayers,
146 'embedding_size': self.embedding_size,
147 'nout': self.nout,
148 'dropout': self.dropout
149 }
150 torch.save(model_state, file, pickle_protocol=4)
151
152 def generate_text(self, number_of_characters=1000) -> str:
153 characters = []
154
155 idx2item = self.dictionary.idx2item
156
157 # initial hidden state
158 hidden = self.init_hidden(1)
159 input = torch.rand(1, 1).mul(len(idx2item)).long()
160 if torch.cuda.is_available():
161 input = input.cuda()
162
163 for i in range(number_of_characters):
164 prediction, rnn_output, hidden = self.forward(input, hidden)
165 word_weights = prediction.squeeze().data.div(1.0).exp().cpu()
166 word_idx = torch.multinomial(word_weights, 1)[0]
167 input.data.fill_(word_idx)
168 word = idx2item[word_idx].decode('UTF-8')
169 characters.append(word)
170
171 return ''.join(characters)
172
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/flair/models/language_model.py b/flair/models/language_model.py
--- a/flair/models/language_model.py
+++ b/flair/models/language_model.py
@@ -150,22 +150,23 @@
torch.save(model_state, file, pickle_protocol=4)
def generate_text(self, number_of_characters=1000) -> str:
- characters = []
-
- idx2item = self.dictionary.idx2item
-
- # initial hidden state
- hidden = self.init_hidden(1)
- input = torch.rand(1, 1).mul(len(idx2item)).long()
- if torch.cuda.is_available():
- input = input.cuda()
-
- for i in range(number_of_characters):
- prediction, rnn_output, hidden = self.forward(input, hidden)
- word_weights = prediction.squeeze().data.div(1.0).exp().cpu()
- word_idx = torch.multinomial(word_weights, 1)[0]
- input.data.fill_(word_idx)
- word = idx2item[word_idx].decode('UTF-8')
- characters.append(word)
-
- return ''.join(characters)
+ with torch.no_grad():
+ characters = []
+
+ idx2item = self.dictionary.idx2item
+
+ # initial hidden state
+ hidden = self.init_hidden(1)
+ input = torch.rand(1, 1).mul(len(idx2item)).long()
+ if torch.cuda.is_available():
+ input = input.cuda()
+
+ for i in range(number_of_characters):
+ prediction, rnn_output, hidden = self.forward(input, hidden)
+ word_weights = prediction.squeeze().data.div(1.0).exp().cpu()
+ word_idx = torch.multinomial(word_weights, 1)[0]
+ input.data.fill_(word_idx)
+ word = idx2item[word_idx].decode('UTF-8')
+ characters.append(word)
+
+ return ''.join(characters)
|
{"golden_diff": "diff --git a/flair/models/language_model.py b/flair/models/language_model.py\n--- a/flair/models/language_model.py\n+++ b/flair/models/language_model.py\n@@ -150,22 +150,23 @@\n torch.save(model_state, file, pickle_protocol=4)\n \n def generate_text(self, number_of_characters=1000) -> str:\n- characters = []\n-\n- idx2item = self.dictionary.idx2item\n-\n- # initial hidden state\n- hidden = self.init_hidden(1)\n- input = torch.rand(1, 1).mul(len(idx2item)).long()\n- if torch.cuda.is_available():\n- input = input.cuda()\n-\n- for i in range(number_of_characters):\n- prediction, rnn_output, hidden = self.forward(input, hidden)\n- word_weights = prediction.squeeze().data.div(1.0).exp().cpu()\n- word_idx = torch.multinomial(word_weights, 1)[0]\n- input.data.fill_(word_idx)\n- word = idx2item[word_idx].decode('UTF-8')\n- characters.append(word)\n-\n- return ''.join(characters)\n+ with torch.no_grad():\n+ characters = []\n+\n+ idx2item = self.dictionary.idx2item\n+\n+ # initial hidden state\n+ hidden = self.init_hidden(1)\n+ input = torch.rand(1, 1).mul(len(idx2item)).long()\n+ if torch.cuda.is_available():\n+ input = input.cuda()\n+\n+ for i in range(number_of_characters):\n+ prediction, rnn_output, hidden = self.forward(input, hidden)\n+ word_weights = prediction.squeeze().data.div(1.0).exp().cpu()\n+ word_idx = torch.multinomial(word_weights, 1)[0]\n+ input.data.fill_(word_idx)\n+ word = idx2item[word_idx].decode('UTF-8')\n+ characters.append(word)\n+\n+ return ''.join(characters)\n", "issue": "Add torch.no_grad() to LanguageModel.generate_text()\nThe autograd engine is not required when using an LM to generate text.\r\n\r\nSo, as pointed out in #167, `torch.no_grad()` needs to be added to `LanguageModel.generate_text()` for better performance and to avoid out of memory issues.\n", "before_files": [{"content": "import torch.nn as nn\nimport torch\nimport math\nfrom torch.autograd import Variable\nfrom typing import List\nfrom flair.data import Dictionary\n\n\nclass LanguageModel(nn.Module):\n \"\"\"Container module with an encoder, a recurrent module, and a decoder.\"\"\"\n\n def __init__(self,\n dictionary: Dictionary,\n is_forward_lm: bool,\n hidden_size: int,\n nlayers: int,\n embedding_size: int = 100,\n nout=None,\n dropout=0.5):\n\n super(LanguageModel, self).__init__()\n\n self.dictionary = dictionary\n self.is_forward_lm: bool = is_forward_lm\n\n self.dropout = dropout\n self.hidden_size = hidden_size\n self.embedding_size = embedding_size\n self.nlayers = nlayers\n\n self.drop = nn.Dropout(dropout)\n self.encoder = nn.Embedding(len(dictionary), embedding_size)\n\n if nlayers == 1:\n self.rnn = nn.LSTM(embedding_size, hidden_size, nlayers)\n else:\n self.rnn = nn.LSTM(embedding_size, hidden_size, nlayers, dropout=dropout)\n\n self.hidden = None\n\n self.nout = nout\n if nout is not None:\n self.proj = nn.Linear(hidden_size, nout)\n self.initialize(self.proj.weight)\n self.decoder = nn.Linear(nout, len(dictionary))\n else:\n self.proj = None\n self.decoder = nn.Linear(hidden_size, len(dictionary))\n\n self.init_weights()\n\n # auto-spawn on GPU if available\n if torch.cuda.is_available():\n self.cuda()\n\n def init_weights(self):\n initrange = 0.1\n self.encoder.weight.data.uniform_(-initrange, initrange)\n self.decoder.bias.data.fill_(0)\n self.decoder.weight.data.uniform_(-initrange, initrange)\n\n def set_hidden(self, hidden):\n self.hidden = hidden\n\n def forward(self, input, hidden, ordered_sequence_lengths=None):\n encoded = self.encoder(input)\n emb = 
self.drop(encoded)\n\n self.rnn.flatten_parameters()\n\n output, hidden = self.rnn(emb, hidden)\n\n if self.proj is not None:\n output = self.proj(output)\n\n output = self.drop(output)\n\n decoded = self.decoder(output.view(output.size(0) * output.size(1), output.size(2)))\n\n return decoded.view(output.size(0), output.size(1), decoded.size(1)), output, hidden\n\n def init_hidden(self, bsz):\n weight = next(self.parameters()).data\n return (Variable(weight.new(self.nlayers, bsz, self.hidden_size).zero_()),\n Variable(weight.new(self.nlayers, bsz, self.hidden_size).zero_()))\n\n def get_representation(self, strings: List[str], detach_from_lm=True):\n\n sequences_as_char_indices: List[List[int]] = []\n for string in strings:\n char_indices = [self.dictionary.get_idx_for_item(char) for char in string]\n sequences_as_char_indices.append(char_indices)\n\n batch = Variable(torch.LongTensor(sequences_as_char_indices).transpose(0, 1))\n\n if torch.cuda.is_available():\n batch = batch.cuda()\n\n hidden = self.init_hidden(len(strings))\n prediction, rnn_output, hidden = self.forward(batch, hidden)\n\n if detach_from_lm: rnn_output = self.repackage_hidden(rnn_output)\n\n return rnn_output\n\n def repackage_hidden(self, h):\n \"\"\"Wraps hidden states in new Variables, to detach them from their history.\"\"\"\n if type(h) == torch.Tensor:\n return Variable(h.data)\n else:\n return tuple(self.repackage_hidden(v) for v in h)\n\n def initialize(self, matrix):\n in_, out_ = matrix.size()\n stdv = math.sqrt(3. / (in_ + out_))\n matrix.data.uniform_(-stdv, stdv)\n\n @classmethod\n def load_language_model(cls, model_file):\n\n if not torch.cuda.is_available():\n state = torch.load(model_file, map_location='cpu')\n else:\n state = torch.load(model_file)\n\n model = LanguageModel(state['dictionary'],\n state['is_forward_lm'],\n state['hidden_size'],\n state['nlayers'],\n state['embedding_size'],\n state['nout'],\n state['dropout'])\n model.load_state_dict(state['state_dict'])\n model.eval()\n if torch.cuda.is_available():\n model.cuda()\n return model\n\n def save(self, file):\n model_state = {\n 'state_dict': self.state_dict(),\n 'dictionary': self.dictionary,\n 'is_forward_lm': self.is_forward_lm,\n 'hidden_size': self.hidden_size,\n 'nlayers': self.nlayers,\n 'embedding_size': self.embedding_size,\n 'nout': self.nout,\n 'dropout': self.dropout\n }\n torch.save(model_state, file, pickle_protocol=4)\n\n def generate_text(self, number_of_characters=1000) -> str:\n characters = []\n\n idx2item = self.dictionary.idx2item\n\n # initial hidden state\n hidden = self.init_hidden(1)\n input = torch.rand(1, 1).mul(len(idx2item)).long()\n if torch.cuda.is_available():\n input = input.cuda()\n\n for i in range(number_of_characters):\n prediction, rnn_output, hidden = self.forward(input, hidden)\n word_weights = prediction.squeeze().data.div(1.0).exp().cpu()\n word_idx = torch.multinomial(word_weights, 1)[0]\n input.data.fill_(word_idx)\n word = idx2item[word_idx].decode('UTF-8')\n characters.append(word)\n\n return ''.join(characters)\n", "path": "flair/models/language_model.py"}], "after_files": [{"content": "import torch.nn as nn\nimport torch\nimport math\nfrom torch.autograd import Variable\nfrom typing import List\nfrom flair.data import Dictionary\n\n\nclass LanguageModel(nn.Module):\n \"\"\"Container module with an encoder, a recurrent module, and a decoder.\"\"\"\n\n def __init__(self,\n dictionary: Dictionary,\n is_forward_lm: bool,\n hidden_size: int,\n nlayers: int,\n embedding_size: int = 100,\n 
nout=None,\n dropout=0.5):\n\n super(LanguageModel, self).__init__()\n\n self.dictionary = dictionary\n self.is_forward_lm: bool = is_forward_lm\n\n self.dropout = dropout\n self.hidden_size = hidden_size\n self.embedding_size = embedding_size\n self.nlayers = nlayers\n\n self.drop = nn.Dropout(dropout)\n self.encoder = nn.Embedding(len(dictionary), embedding_size)\n\n if nlayers == 1:\n self.rnn = nn.LSTM(embedding_size, hidden_size, nlayers)\n else:\n self.rnn = nn.LSTM(embedding_size, hidden_size, nlayers, dropout=dropout)\n\n self.hidden = None\n\n self.nout = nout\n if nout is not None:\n self.proj = nn.Linear(hidden_size, nout)\n self.initialize(self.proj.weight)\n self.decoder = nn.Linear(nout, len(dictionary))\n else:\n self.proj = None\n self.decoder = nn.Linear(hidden_size, len(dictionary))\n\n self.init_weights()\n\n # auto-spawn on GPU if available\n if torch.cuda.is_available():\n self.cuda()\n\n def init_weights(self):\n initrange = 0.1\n self.encoder.weight.data.uniform_(-initrange, initrange)\n self.decoder.bias.data.fill_(0)\n self.decoder.weight.data.uniform_(-initrange, initrange)\n\n def set_hidden(self, hidden):\n self.hidden = hidden\n\n def forward(self, input, hidden, ordered_sequence_lengths=None):\n encoded = self.encoder(input)\n emb = self.drop(encoded)\n\n self.rnn.flatten_parameters()\n\n output, hidden = self.rnn(emb, hidden)\n\n if self.proj is not None:\n output = self.proj(output)\n\n output = self.drop(output)\n\n decoded = self.decoder(output.view(output.size(0) * output.size(1), output.size(2)))\n\n return decoded.view(output.size(0), output.size(1), decoded.size(1)), output, hidden\n\n def init_hidden(self, bsz):\n weight = next(self.parameters()).data\n return (Variable(weight.new(self.nlayers, bsz, self.hidden_size).zero_()),\n Variable(weight.new(self.nlayers, bsz, self.hidden_size).zero_()))\n\n def get_representation(self, strings: List[str], detach_from_lm=True):\n\n sequences_as_char_indices: List[List[int]] = []\n for string in strings:\n char_indices = [self.dictionary.get_idx_for_item(char) for char in string]\n sequences_as_char_indices.append(char_indices)\n\n batch = Variable(torch.LongTensor(sequences_as_char_indices).transpose(0, 1))\n\n if torch.cuda.is_available():\n batch = batch.cuda()\n\n hidden = self.init_hidden(len(strings))\n prediction, rnn_output, hidden = self.forward(batch, hidden)\n\n if detach_from_lm: rnn_output = self.repackage_hidden(rnn_output)\n\n return rnn_output\n\n def repackage_hidden(self, h):\n \"\"\"Wraps hidden states in new Variables, to detach them from their history.\"\"\"\n if type(h) == torch.Tensor:\n return Variable(h.data)\n else:\n return tuple(self.repackage_hidden(v) for v in h)\n\n def initialize(self, matrix):\n in_, out_ = matrix.size()\n stdv = math.sqrt(3. 
/ (in_ + out_))\n matrix.data.uniform_(-stdv, stdv)\n\n @classmethod\n def load_language_model(cls, model_file):\n\n if not torch.cuda.is_available():\n state = torch.load(model_file, map_location='cpu')\n else:\n state = torch.load(model_file)\n\n model = LanguageModel(state['dictionary'],\n state['is_forward_lm'],\n state['hidden_size'],\n state['nlayers'],\n state['embedding_size'],\n state['nout'],\n state['dropout'])\n model.load_state_dict(state['state_dict'])\n model.eval()\n if torch.cuda.is_available():\n model.cuda()\n return model\n\n def save(self, file):\n model_state = {\n 'state_dict': self.state_dict(),\n 'dictionary': self.dictionary,\n 'is_forward_lm': self.is_forward_lm,\n 'hidden_size': self.hidden_size,\n 'nlayers': self.nlayers,\n 'embedding_size': self.embedding_size,\n 'nout': self.nout,\n 'dropout': self.dropout\n }\n torch.save(model_state, file, pickle_protocol=4)\n\n def generate_text(self, number_of_characters=1000) -> str:\n with torch.no_grad():\n characters = []\n\n idx2item = self.dictionary.idx2item\n\n # initial hidden state\n hidden = self.init_hidden(1)\n input = torch.rand(1, 1).mul(len(idx2item)).long()\n if torch.cuda.is_available():\n input = input.cuda()\n\n for i in range(number_of_characters):\n prediction, rnn_output, hidden = self.forward(input, hidden)\n word_weights = prediction.squeeze().data.div(1.0).exp().cpu()\n word_idx = torch.multinomial(word_weights, 1)[0]\n input.data.fill_(word_idx)\n word = idx2item[word_idx].decode('UTF-8')\n characters.append(word)\n\n return ''.join(characters)\n", "path": "flair/models/language_model.py"}]}
| 1,973 | 437 |
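The patch above wraps the sampling loop in `torch.no_grad()` so no autograd graph is built during generation. A minimal sketch of the same pattern with a stand-in recurrent model rather than flair's actual `LanguageModel`:

```python
import torch
import torch.nn as nn

model = nn.LSTM(input_size=8, hidden_size=16)  # stand-in for the language model
model.eval()

x = torch.randn(5, 1, 8)  # (sequence, batch, features)

# Without no_grad, every forward pass in a long generation loop keeps
# intermediate tensors alive for backprop, which is what drives the
# out-of-memory behaviour mentioned in the issue.
with torch.no_grad():
    output, hidden = model(x)

print(output.requires_grad)  # False -> nothing is retained for backward
```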
gh_patches_debug_23756
|
rasdani/github-patches
|
git_diff
|
chainer__chainer-1398
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
log_name of LogReport with keys causes AttributeError
In the MNIST example, if you change the code at
https://github.com/pfnet/chainer/blob/master/examples/mnist/train_mnist.py#L84
```
# Write a log of evaluation statistics for each epoch
trainer.extend(extensions.LogReport())
```
to
```
# Write a log of evaluation statistics for each epoch
trainer.extend(extensions.LogReport(log_name='log_{.iteration}'))
```
and then run train_mnist.py, you will get `AttributeError: 'dict' object has no attribute 'iteration'`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `chainer/training/extensions/log_report.py`
Content:
```
1 import json
2 import os
3 import tempfile
4
5 import six
6
7 from chainer import reporter
8 import chainer.serializer as serializer_module
9 from chainer.training import extension
10 import chainer.training.trigger as trigger_module
11
12
13 class LogReport(extension.Extension):
14
15 """Trainer extension to output the accumulated results to a log file.
16
17 This extension accumulates the observations of the trainer to
18 :class:`~chainer.DictSummary` at a regular interval specified by a supplied
19 trigger, and writes them into a log file in JSON format.
20
21 There are two triggers to handle this extension. One is the trigger to
22 invoke this extension, which is used to handle the timing of accumulating
23 the results. It is set to ``1, 'iteration'`` by default. The other is the
24 trigger to determine when to emit the result. When this trigger returns
25 True, this extension appends the summary of accumulated values to the list
26 of past summaries, and writes the list to the log file. Then, this
27 extension makes a new fresh summary object which is used until the next
28 time that the trigger fires.
29
30 It also adds ``'epoch'`` and ``'iteration'`` entries to each result
31 dictionary, which are the epoch and iteration counts at the output.
32
33 Args:
34 keys (iterable of strs): Keys of values to accumulate. If this is None,
35 all the values are accumulated and output to the log file.
36 trigger: Trigger that decides when to aggregate the result and output
37 the values. This is distinct from the trigger of this extension
38 itself. If it is a tuple in the form ``<int>, 'epoch'`` or
39 ``<int>, 'iteration'``, it is passed to :class:`IntervalTrigger`.
40 postprocess: Callback to postprocess the result dictionaries. Each
41 result dictionary is passed to this callback on the output. This
42 callback can modify the result dictionaries, which are used to
43 output to the log file.
44 log_name (str): Name of the log file under the output directory. It can
45 be a format string: the last result dictionary is passed for the
46 formatting. For example, users can use '{.iteration}' to separate
47 the log files for different iterations. If the log name is None, it
48 does not output the log to any file.
49
50 """
51 def __init__(self, keys=None, trigger=(1, 'epoch'), postprocess=None,
52 log_name='log'):
53 self._keys = keys
54 self._trigger = trigger_module.get_trigger(trigger)
55 self._postprocess = postprocess
56 self._log_name = log_name
57 self._log = []
58
59 self._init_summary()
60
61 def __call__(self, trainer):
62 # accumulate the observations
63 keys = self._keys
64 observation = trainer.observation
65 summary = self._summary
66
67 if keys is None:
68 summary.add(observation)
69 else:
70 summary.add({k: observation[k] for k in keys if k in observation})
71
72 if self._trigger(trainer):
73 # output the result
74 stats = self._summary.compute_mean()
75 stats_cpu = {}
76 for name, value in six.iteritems(stats):
77 stats_cpu[name] = float(value) # copy to CPU
78
79 updater = trainer.updater
80 stats_cpu['epoch'] = updater.epoch
81 stats_cpu['iteration'] = updater.iteration
82
83 if self._postprocess is not None:
84 self._postprocess(stats_cpu)
85
86 self._log.append(stats_cpu)
87
88 # write to the log file
89 if self._log_name is not None:
90 log_name = self._log_name.format(stats_cpu)
91 fd, path = tempfile.mkstemp(prefix=log_name, dir=trainer.out)
92 with os.fdopen(fd, 'w') as f:
93 json.dump(self._log, f, indent=4)
94 os.rename(path, os.path.join(trainer.out, log_name))
95
96 # reset the summary for the next output
97 self._init_summary()
98
99 @property
100 def log(self):
101 """The current list of observation dictionaries."""
102 return self._log
103
104 def serialize(self, serializer):
105 # Note that this serialization may lose some information of small
106 # numerical differences.
107 if isinstance(serializer, serializer_module.Serializer):
108 log = json.dumps(self._log)
109 serializer('_log', log)
110 else:
111 log = serializer('_log', '')
112 self._log = json.loads(log)
113
114 def _init_summary(self):
115 self._summary = reporter.DictSummary()
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/chainer/training/extensions/log_report.py b/chainer/training/extensions/log_report.py
--- a/chainer/training/extensions/log_report.py
+++ b/chainer/training/extensions/log_report.py
@@ -43,7 +43,7 @@
output to the log file.
log_name (str): Name of the log file under the output directory. It can
be a format string: the last result dictionary is passed for the
- formatting. For example, users can use '{.iteration}' to separate
+ formatting. For example, users can use '{iteration}' to separate
the log files for different iterations. If the log name is None, it
does not output the log to any file.
@@ -87,7 +87,7 @@
# write to the log file
if self._log_name is not None:
- log_name = self._log_name.format(stats_cpu)
+ log_name = self._log_name.format(**stats_cpu)
fd, path = tempfile.mkstemp(prefix=log_name, dir=trainer.out)
with os.fdopen(fd, 'w') as f:
json.dump(self._log, f, indent=4)
|
{"golden_diff": "diff --git a/chainer/training/extensions/log_report.py b/chainer/training/extensions/log_report.py\n--- a/chainer/training/extensions/log_report.py\n+++ b/chainer/training/extensions/log_report.py\n@@ -43,7 +43,7 @@\n output to the log file.\n log_name (str): Name of the log file under the output directory. It can\n be a format string: the last result dictionary is passed for the\n- formatting. For example, users can use '{.iteration}' to separate\n+ formatting. For example, users can use '{iteration}' to separate\n the log files for different iterations. If the log name is None, it\n does not output the log to any file.\n \n@@ -87,7 +87,7 @@\n \n # write to the log file\n if self._log_name is not None:\n- log_name = self._log_name.format(stats_cpu)\n+ log_name = self._log_name.format(**stats_cpu)\n fd, path = tempfile.mkstemp(prefix=log_name, dir=trainer.out)\n with os.fdopen(fd, 'w') as f:\n json.dump(self._log, f, indent=4)\n", "issue": "log_name of LogReport with keys causes AttributeError\nIn MNIST example you change the code\n\nhttps://github.com/pfnet/chainer/blob/master/examples/mnist/train_mnist.py#L84\n\n```\n # Write a log of evaluation statistics for each epoch\n trainer.extend(extensions.LogReport())\n```\n\nto \n\n```\n # Write a log of evaluation statistics for each epoch\n trainer.extend(extensions.LogReport(log_name='log_{.iteration}'))\n```\n\nrun train_mnist.py and you'll get `AttributeError: 'dict' object has no attribute 'iteration'`\n\n", "before_files": [{"content": "import json\nimport os\nimport tempfile\n\nimport six\n\nfrom chainer import reporter\nimport chainer.serializer as serializer_module\nfrom chainer.training import extension\nimport chainer.training.trigger as trigger_module\n\n\nclass LogReport(extension.Extension):\n\n \"\"\"Trainer extension to output the accumulated results to a log file.\n\n This extension accumulates the observations of the trainer to\n :class:`~chainer.DictSummary` at a regular interval specified by a supplied\n trigger, and writes them into a log file in JSON format.\n\n There are two triggers to handle this extension. One is the trigger to\n invoke this extension, which is used to handle the timing of accumulating\n the results. It is set to ``1, 'iteration'`` by default. The other is the\n trigger to determine when to emit the result. When this trigger returns\n True, this extension appends the summary of accumulated values to the list\n of past summaries, and writes the list to the log file. Then, this\n extension makes a new fresh summary object which is used until the next\n time that the trigger fires.\n\n It also adds ``'epoch'`` and ``'iteration'`` entries to each result\n dictionary, which are the epoch and iteration counts at the output.\n\n Args:\n keys (iterable of strs): Keys of values to accumulate. If this is None,\n all the values are accumulated and output to the log file.\n trigger: Trigger that decides when to aggregate the result and output\n the values. This is distinct from the trigger of this extension\n itself. If it is a tuple in the form ``<int>, 'epoch'`` or\n ``<int>, 'iteration'``, it is passed to :class:`IntervalTrigger`.\n postprocess: Callback to postprocess the result dictionaries. Each\n result dictionary is passed to this callback on the output. This\n callback can modify the result dictionaries, which are used to\n output to the log file.\n log_name (str): Name of the log file under the output directory. 
It can\n be a format string: the last result dictionary is passed for the\n formatting. For example, users can use '{.iteration}' to separate\n the log files for different iterations. If the log name is None, it\n does not output the log to any file.\n\n \"\"\"\n def __init__(self, keys=None, trigger=(1, 'epoch'), postprocess=None,\n log_name='log'):\n self._keys = keys\n self._trigger = trigger_module.get_trigger(trigger)\n self._postprocess = postprocess\n self._log_name = log_name\n self._log = []\n\n self._init_summary()\n\n def __call__(self, trainer):\n # accumulate the observations\n keys = self._keys\n observation = trainer.observation\n summary = self._summary\n\n if keys is None:\n summary.add(observation)\n else:\n summary.add({k: observation[k] for k in keys if k in observation})\n\n if self._trigger(trainer):\n # output the result\n stats = self._summary.compute_mean()\n stats_cpu = {}\n for name, value in six.iteritems(stats):\n stats_cpu[name] = float(value) # copy to CPU\n\n updater = trainer.updater\n stats_cpu['epoch'] = updater.epoch\n stats_cpu['iteration'] = updater.iteration\n\n if self._postprocess is not None:\n self._postprocess(stats_cpu)\n\n self._log.append(stats_cpu)\n\n # write to the log file\n if self._log_name is not None:\n log_name = self._log_name.format(stats_cpu)\n fd, path = tempfile.mkstemp(prefix=log_name, dir=trainer.out)\n with os.fdopen(fd, 'w') as f:\n json.dump(self._log, f, indent=4)\n os.rename(path, os.path.join(trainer.out, log_name))\n\n # reset the summary for the next output\n self._init_summary()\n\n @property\n def log(self):\n \"\"\"The current list of observation dictionaries.\"\"\"\n return self._log\n\n def serialize(self, serializer):\n # Note that this serialization may lose some information of small\n # numerical differences.\n if isinstance(serializer, serializer_module.Serializer):\n log = json.dumps(self._log)\n serializer('_log', log)\n else:\n log = serializer('_log', '')\n self._log = json.loads(log)\n\n def _init_summary(self):\n self._summary = reporter.DictSummary()\n", "path": "chainer/training/extensions/log_report.py"}], "after_files": [{"content": "import json\nimport os\nimport tempfile\n\nimport six\n\nfrom chainer import reporter\nimport chainer.serializer as serializer_module\nfrom chainer.training import extension\nimport chainer.training.trigger as trigger_module\n\n\nclass LogReport(extension.Extension):\n\n \"\"\"Trainer extension to output the accumulated results to a log file.\n\n This extension accumulates the observations of the trainer to\n :class:`~chainer.DictSummary` at a regular interval specified by a supplied\n trigger, and writes them into a log file in JSON format.\n\n There are two triggers to handle this extension. One is the trigger to\n invoke this extension, which is used to handle the timing of accumulating\n the results. It is set to ``1, 'iteration'`` by default. The other is the\n trigger to determine when to emit the result. When this trigger returns\n True, this extension appends the summary of accumulated values to the list\n of past summaries, and writes the list to the log file. Then, this\n extension makes a new fresh summary object which is used until the next\n time that the trigger fires.\n\n It also adds ``'epoch'`` and ``'iteration'`` entries to each result\n dictionary, which are the epoch and iteration counts at the output.\n\n Args:\n keys (iterable of strs): Keys of values to accumulate. 
If this is None,\n all the values are accumulated and output to the log file.\n trigger: Trigger that decides when to aggregate the result and output\n the values. This is distinct from the trigger of this extension\n itself. If it is a tuple in the form ``<int>, 'epoch'`` or\n ``<int>, 'iteration'``, it is passed to :class:`IntervalTrigger`.\n postprocess: Callback to postprocess the result dictionaries. Each\n result dictionary is passed to this callback on the output. This\n callback can modify the result dictionaries, which are used to\n output to the log file.\n log_name (str): Name of the log file under the output directory. It can\n be a format string: the last result dictionary is passed for the\n formatting. For example, users can use '{iteration}' to separate\n the log files for different iterations. If the log name is None, it\n does not output the log to any file.\n\n \"\"\"\n def __init__(self, keys=None, trigger=(1, 'epoch'), postprocess=None,\n log_name='log'):\n self._keys = keys\n self._trigger = trigger_module.get_trigger(trigger)\n self._postprocess = postprocess\n self._log_name = log_name\n self._log = []\n\n self._init_summary()\n\n def __call__(self, trainer):\n # accumulate the observations\n keys = self._keys\n observation = trainer.observation\n summary = self._summary\n\n if keys is None:\n summary.add(observation)\n else:\n summary.add({k: observation[k] for k in keys if k in observation})\n\n if self._trigger(trainer):\n # output the result\n stats = self._summary.compute_mean()\n stats_cpu = {}\n for name, value in six.iteritems(stats):\n stats_cpu[name] = float(value) # copy to CPU\n\n updater = trainer.updater\n stats_cpu['epoch'] = updater.epoch\n stats_cpu['iteration'] = updater.iteration\n\n if self._postprocess is not None:\n self._postprocess(stats_cpu)\n\n self._log.append(stats_cpu)\n\n # write to the log file\n if self._log_name is not None:\n log_name = self._log_name.format(**stats_cpu)\n fd, path = tempfile.mkstemp(prefix=log_name, dir=trainer.out)\n with os.fdopen(fd, 'w') as f:\n json.dump(self._log, f, indent=4)\n os.rename(path, os.path.join(trainer.out, log_name))\n\n # reset the summary for the next output\n self._init_summary()\n\n @property\n def log(self):\n \"\"\"The current list of observation dictionaries.\"\"\"\n return self._log\n\n def serialize(self, serializer):\n # Note that this serialization may lose some information of small\n # numerical differences.\n if isinstance(serializer, serializer_module.Serializer):\n log = json.dumps(self._log)\n serializer('_log', log)\n else:\n log = serializer('_log', '')\n self._log = json.loads(log)\n\n def _init_summary(self):\n self._summary = reporter.DictSummary()\n", "path": "chainer/training/extensions/log_report.py"}]}
| 1,611 | 260 |
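The patch above changes both the documented placeholder and the call site: `'{.iteration}'` performs attribute access on its positional argument, while `'{iteration}'` with keyword arguments looks the value up by name. A small sketch of the difference, using a made-up stats dict:

```python
stats_cpu = {"epoch": 3, "iteration": 1500}

# Old behaviour: "log_{.iteration}".format(stats_cpu) does attribute access
# on a plain dict and raises AttributeError, exactly as reported in the issue.

# New behaviour: unpack the dict as keyword arguments and use '{iteration}'.
log_name = "log_{iteration}".format(**stats_cpu)
print(log_name)  # log_1500
```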
gh_patches_debug_42503
|
rasdani/github-patches
|
git_diff
|
nvaccess__nvda-12122
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
NVDA does not automatically read messages received on Skype for Business
### Steps to reproduce:
I'm sure I've seen this working at some point.
I had not used Skype for Business for a long time, but I'm using it again in my current work, and when I received a message in Skype for Business, NVDA did not automatically read it.
Open a conversation in Skype for Business, type something, and wait in the conversation for your partner to respond.
### Actual behavior:
NVDA stays silent.
### Expected behavior:
NVDA should automatically announce the received response.
### System configuration
#### NVDA installed/portable/running from source:
install
#### NVDA version:
2018.4.1
#### Windows version:
10 17134.556
#### Name and version of other software in use when reproducing the issue:
office 16.0.4266.1001
#### Other information about your system:
### Other questions
#### Does the issue still occur after restarting your PC?
yes
#### Have you tried any other versions of NVDA?
no
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `source/appModules/lync.py`
Content:
```
1 #A part of NonVisual Desktop Access (NVDA)
2 #This file is covered by the GNU General Public License.
3 #See the file COPYING for more details.
4 #Copyright (C) 2017 NV Access Limited
5
6 """appModule for Microsoft Skype for business. """
7
8 import ui
9 from NVDAObjects.UIA import UIA
10 import appModuleHandler
11
12 class NetUIRicherLabel(UIA):
13 """A label sometimes found within list items that can fire live region changes, such as for chat messages."""
14
15 def event_liveRegionChange(self):
16 # The base liveRegionChange event is not enough as Skype for Business concatinates recent chat messages from the same person within the same minute
17 # Therefore, specifically strip out the chat content and only report the most recent part added.
18 # The object's name contains the full message (I.e. person: content, timestamp) loosely separated by commas.
19 # Example string: "Michael Curran : , , Hello\r\n\r\nThis is a test , 10:45 am."
20 # Where person is "Michael Curran", content is "Hello\nThis is a test" and timestamp is "10:45 am"
21 # The object's value just contains the content.
22 # Example: "Hello\rThis is a test"
23 # We are only interested in person and content
24 # Therefore use value (content) to locate and split off the person from the name (fullText)
25 # Normalize the usage of end-of-line characters (name and value seem to expose them differently, which would break comparison)
26 content=self.value.replace('\r','\n').strip()
27 fullText=self.name.replace('\r\n\r\n','\n')
28 contentLines=content.split('\n')
29 contentStartIndex=fullText.find(content)
30 pretext=fullText[:contentStartIndex]
31 # There are some annoying comma characters after the person's name
32 pretext=pretext.replace(' ,','')
33 # If the objects are the same, the person is the same, and the new content is the old content but with more appended, report the appended content
34 # Otherwise, report the person and the initial content
35 runtimeID=self.UIAElement.getRuntimeId()
36 lastRuntimeID,lastPretext,lastContentLines=self.appModule._lastLiveChatMessageData
37 contentLinesLen=len(contentLines)
38 lastContentLinesLen=len(lastContentLines)
39 if runtimeID==lastRuntimeID and pretext==lastPretext and contentLinesLen>lastContentLinesLen and contentLines[:lastContentLinesLen]==lastContentLines:
40 message="\n".join(contentLines[lastContentLinesLen:])
41 else:
42 message=pretext+content
43 ui.message(message)
44 # Cache the message data for later possible comparisons
45 self.appModule._lastLiveChatMessageData=runtimeID,pretext,contentLines
46
47 class AppModule(appModuleHandler.AppModule):
48
49 # data to store the last chat message (runtime ID,person,content lines)
50 _lastLiveChatMessageData=[],"",[]
51
52 def chooseNVDAObjectOverlayClasses(self,obj,clsList):
53 if isinstance(obj,UIA) and obj.UIAElement.cachedClassName=='NetUIRicherLabel':
54 clsList.insert(0,NetUIRicherLabel)
55 return clsList
56
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/source/appModules/lync.py b/source/appModules/lync.py
--- a/source/appModules/lync.py
+++ b/source/appModules/lync.py
@@ -4,10 +4,14 @@
#Copyright (C) 2017 NV Access Limited
"""appModule for Microsoft Skype for business. """
-
+
import ui
from NVDAObjects.UIA import UIA
import appModuleHandler
+from logHandler import log
+
+import re
+
class NetUIRicherLabel(UIA):
"""A label sometimes found within list items that can fire live region changes, such as for chat messages."""
@@ -15,19 +19,45 @@
def event_liveRegionChange(self):
# The base liveRegionChange event is not enough as Skype for Business concatinates recent chat messages from the same person within the same minute
# Therefore, specifically strip out the chat content and only report the most recent part added.
- # The object's name contains the full message (I.e. person: content, timestamp) loosely separated by commas.
+ # When not empty, the object's name contains the full message (I.e. person: content, timestamp)
+ # loosely separated by commas.
# Example string: "Michael Curran : , , Hello\r\n\r\nThis is a test , 10:45 am."
# Where person is "Michael Curran", content is "Hello\nThis is a test" and timestamp is "10:45 am"
- # The object's value just contains the content.
- # Example: "Hello\rThis is a test"
- # We are only interested in person and content
- # Therefore use value (content) to locate and split off the person from the name (fullText)
+
# Normalize the usage of end-of-line characters (name and value seem to expose them differently, which would break comparison)
- content=self.value.replace('\r','\n').strip()
fullText=self.name.replace('\r\n\r\n','\n')
+
+ # At the object's creation, an unuseful liveRegionChange event is triggered with an empty name,
+ # so we discard it.
+ if not self.name.strip():
+ return
+
+ if self.value is not None:
+ # For some versions of Lync / Skype for Business, the object's value contains just the content.
+ # Example: "Hello\rThis is a test"
+ # We are only interested in person and content
+ # Therefore use value (content) to locate and split off the person from the name (fullText)
+ content = self.value.replace('\r', '\n').strip()
+ contentStartIndex = fullText.find(content)
+ pretext = fullText[:contentStartIndex]
+ else:
+ # For other versions of Lync / Skype for Business, self.value is just None.
+ # So we just look at self.name formatting to split content from person and timestamp (less robust).
+ pattern = r'^(?P<name>.+?): (?P<priority>.*?), , (?P<content>.+),(?!, , ) , (?P<timestamp>.+)'
+ match = re.match(pattern, self.name, flags=re.DOTALL)
+ if match:
+ pretext = match['name']
+ priority = match['priority']
+ content = match['content']
+ if priority:
+ content = priority + ', ' + content
+ else:
+ # In case no match is found, log the unexpected message and return the whole message.
+ log.error(f'Unrecognized pattern in the following message: {self.name}')
+ pretext = ''
+ content = self.name
+ content = content.replace('\r', '\n').strip()
contentLines=content.split('\n')
- contentStartIndex=fullText.find(content)
- pretext=fullText[:contentStartIndex]
# There are some annoying comma characters after the person's name
pretext=pretext.replace(' ,','')
# If the objects are the same, the person is the same, and the new content is the old content but with more appended, report the appended content
|
{"golden_diff": "diff --git a/source/appModules/lync.py b/source/appModules/lync.py\n--- a/source/appModules/lync.py\n+++ b/source/appModules/lync.py\n@@ -4,10 +4,14 @@\n #Copyright (C) 2017 NV Access Limited\r\n \r\n \"\"\"appModule for Microsoft Skype for business. \"\"\"\r\n- \r\n+\r\n import ui\r\n from NVDAObjects.UIA import UIA\r\n import appModuleHandler\r\n+from logHandler import log\r\n+\r\n+import re\r\n+\r\n \r\n class NetUIRicherLabel(UIA):\r\n \t\"\"\"A label sometimes found within list items that can fire live region changes, such as for chat messages.\"\"\"\r\n@@ -15,19 +19,45 @@\n \tdef event_liveRegionChange(self):\r\n \t\t# The base liveRegionChange event is not enough as Skype for Business concatinates recent chat messages from the same person within the same minute\r\n \t\t# Therefore, specifically strip out the chat content and only report the most recent part added.\r\n-\t\t# The object's name contains the full message (I.e. person: content, timestamp) loosely separated by commas.\r\n+\t\t# When not empty, the object's name contains the full message (I.e. person: content, timestamp)\r\n+\t\t# loosely separated by commas.\r\n \t\t# Example string: \"Michael Curran : , , Hello\\r\\n\\r\\nThis is a test , 10:45 am.\"\r\n \t\t# Where person is \"Michael Curran\", content is \"Hello\\nThis is a test\" and timestamp is \"10:45 am\" \r\n-\t\t# The object's value just contains the content.\r\n-\t\t# Example: \"Hello\\rThis is a test\"\r\n-\t\t# We are only interested in person and content\r\n-\t\t# Therefore use value (content) to locate and split off the person from the name (fullText)\r\n+\t\t\r\n \t\t# Normalize the usage of end-of-line characters (name and value seem to expose them differently, which would break comparison)\r\n-\t\tcontent=self.value.replace('\\r','\\n').strip()\r\n \t\tfullText=self.name.replace('\\r\\n\\r\\n','\\n')\r\n+\t\t\r\n+\t\t# At the object's creation, an unuseful liveRegionChange event is triggered with an empty name,\r\n+\t\t# so we discard it.\r\n+\t\tif not self.name.strip():\r\n+\t\t\treturn\r\n+\t\t\r\n+\t\tif self.value is not None:\r\n+\t\t\t# For some versions of Lync / Skype for Business, the object's value contains just the content.\r\n+\t\t\t# Example: \"Hello\\rThis is a test\"\r\n+\t\t\t# We are only interested in person and content\r\n+\t\t\t# Therefore use value (content) to locate and split off the person from the name (fullText)\r\n+\t\t\tcontent = self.value.replace('\\r', '\\n').strip()\r\n+\t\t\tcontentStartIndex = fullText.find(content)\r\n+\t\t\tpretext = fullText[:contentStartIndex]\r\n+\t\telse:\r\n+\t\t\t# For other versions of Lync / Skype for Business, self.value is just None.\r\n+\t\t\t# So we just look at self.name formatting to split content from person and timestamp (less robust).\r\n+\t\t\tpattern = r'^(?P<name>.+?): (?P<priority>.*?), , (?P<content>.+),(?!, , ) , (?P<timestamp>.+)'\r\n+\t\t\tmatch = re.match(pattern, self.name, flags=re.DOTALL)\r\n+\t\t\tif match:\r\n+\t\t\t\tpretext = match['name']\r\n+\t\t\t\tpriority = match['priority']\r\n+\t\t\t\tcontent = match['content']\r\n+\t\t\t\tif priority:\r\n+\t\t\t\t\tcontent = priority + ', ' + content\r\n+\t\t\telse:\r\n+\t\t\t\t# In case no match is found, log the unexpected message and return the whole message.\r\n+\t\t\t\tlog.error(f'Unrecognized pattern in the following message: {self.name}')\r\n+\t\t\t\tpretext = ''\r\n+\t\t\t\tcontent = self.name\r\n+\t\t\tcontent = content.replace('\\r', '\\n').strip()\r\n 
\t\tcontentLines=content.split('\\n')\r\n-\t\tcontentStartIndex=fullText.find(content)\r\n-\t\tpretext=fullText[:contentStartIndex]\r\n \t\t# There are some annoying comma characters after the person's name \r\n \t\tpretext=pretext.replace(' ,','')\r\n \t\t# If the objects are the same, the person is the same, and the new content is the old content but with more appended, report the appended content\n", "issue": "nvda does not automatically read messages received on skype for business\n\r\n### Steps to reproduce:\r\nI'm sure I've seen this working for some time.\r\nI have not used skype for business for a long time but in my current work I'm making use of it again, and when I received a message on skype for business, nvda did not automatically read the received message.\r\nOpen a conversation on skype for business type something and wait inside the conversation for your partner to respond.\r\n### Actual behavior:\r\nThe nvda is mute.\r\n### Expected behavior:\r\nNvda should automatically announce the response received.\r\n### System configuration\r\n#### NVDA installed/portable/running from source:\r\ninstall\r\n#### NVDA version:\r\n2018.4.1\r\n#### Windows version:\r\n10 17134.556\r\n#### Name and version of other software in use when reproducing the issue:\r\noffice 16.0.4266.1001\r\n\r\n#### Other information about your system:\r\n\r\n### Other questions\r\n#### Does the issue still occur after restarting your PC?\r\nyes\r\n#### Have you tried any other versions of NVDA?\r\nno\n", "before_files": [{"content": "#A part of NonVisual Desktop Access (NVDA)\r\n#This file is covered by the GNU General Public License.\r\n#See the file COPYING for more details.\r\n#Copyright (C) 2017 NV Access Limited\r\n\r\n\"\"\"appModule for Microsoft Skype for business. \"\"\"\r\n \r\nimport ui\r\nfrom NVDAObjects.UIA import UIA\r\nimport appModuleHandler\r\n\r\nclass NetUIRicherLabel(UIA):\r\n\t\"\"\"A label sometimes found within list items that can fire live region changes, such as for chat messages.\"\"\"\r\n\r\n\tdef event_liveRegionChange(self):\r\n\t\t# The base liveRegionChange event is not enough as Skype for Business concatinates recent chat messages from the same person within the same minute\r\n\t\t# Therefore, specifically strip out the chat content and only report the most recent part added.\r\n\t\t# The object's name contains the full message (I.e. 
person: content, timestamp) loosely separated by commas.\r\n\t\t# Example string: \"Michael Curran : , , Hello\\r\\n\\r\\nThis is a test , 10:45 am.\"\r\n\t\t# Where person is \"Michael Curran\", content is \"Hello\\nThis is a test\" and timestamp is \"10:45 am\" \r\n\t\t# The object's value just contains the content.\r\n\t\t# Example: \"Hello\\rThis is a test\"\r\n\t\t# We are only interested in person and content\r\n\t\t# Therefore use value (content) to locate and split off the person from the name (fullText)\r\n\t\t# Normalize the usage of end-of-line characters (name and value seem to expose them differently, which would break comparison)\r\n\t\tcontent=self.value.replace('\\r','\\n').strip()\r\n\t\tfullText=self.name.replace('\\r\\n\\r\\n','\\n')\r\n\t\tcontentLines=content.split('\\n')\r\n\t\tcontentStartIndex=fullText.find(content)\r\n\t\tpretext=fullText[:contentStartIndex]\r\n\t\t# There are some annoying comma characters after the person's name \r\n\t\tpretext=pretext.replace(' ,','')\r\n\t\t# If the objects are the same, the person is the same, and the new content is the old content but with more appended, report the appended content\r\n\t\t# Otherwise, report the person and the initial content\r\n\t\truntimeID=self.UIAElement.getRuntimeId()\r\n\t\tlastRuntimeID,lastPretext,lastContentLines=self.appModule._lastLiveChatMessageData\r\n\t\tcontentLinesLen=len(contentLines)\r\n\t\tlastContentLinesLen=len(lastContentLines)\r\n\t\tif runtimeID==lastRuntimeID and pretext==lastPretext and contentLinesLen>lastContentLinesLen and contentLines[:lastContentLinesLen]==lastContentLines:\r\n\t\t\tmessage=\"\\n\".join(contentLines[lastContentLinesLen:])\r\n\t\telse:\r\n\t\t\tmessage=pretext+content\r\n\t\tui.message(message)\r\n\t\t# Cache the message data for later possible comparisons \r\n\t\tself.appModule._lastLiveChatMessageData=runtimeID,pretext,contentLines\r\n\r\nclass AppModule(appModuleHandler.AppModule):\r\n\r\n\t# data to store the last chat message (runtime ID,person,content lines)\r\n\t_lastLiveChatMessageData=[],\"\",[]\r\n\r\n\tdef chooseNVDAObjectOverlayClasses(self,obj,clsList):\r\n\t\tif isinstance(obj,UIA) and obj.UIAElement.cachedClassName=='NetUIRicherLabel':\r\n\t\t\tclsList.insert(0,NetUIRicherLabel)\r\n\t\treturn clsList\r\n\r\n", "path": "source/appModules/lync.py"}], "after_files": [{"content": "#A part of NonVisual Desktop Access (NVDA)\r\n#This file is covered by the GNU General Public License.\r\n#See the file COPYING for more details.\r\n#Copyright (C) 2017 NV Access Limited\r\n\r\n\"\"\"appModule for Microsoft Skype for business. \"\"\"\r\n\r\nimport ui\r\nfrom NVDAObjects.UIA import UIA\r\nimport appModuleHandler\r\nfrom logHandler import log\r\n\r\nimport re\r\n\r\n\r\nclass NetUIRicherLabel(UIA):\r\n\t\"\"\"A label sometimes found within list items that can fire live region changes, such as for chat messages.\"\"\"\r\n\r\n\tdef event_liveRegionChange(self):\r\n\t\t# The base liveRegionChange event is not enough as Skype for Business concatinates recent chat messages from the same person within the same minute\r\n\t\t# Therefore, specifically strip out the chat content and only report the most recent part added.\r\n\t\t# When not empty, the object's name contains the full message (I.e. 
person: content, timestamp)\r\n\t\t# loosely separated by commas.\r\n\t\t# Example string: \"Michael Curran : , , Hello\\r\\n\\r\\nThis is a test , 10:45 am.\"\r\n\t\t# Where person is \"Michael Curran\", content is \"Hello\\nThis is a test\" and timestamp is \"10:45 am\" \r\n\t\t\r\n\t\t# Normalize the usage of end-of-line characters (name and value seem to expose them differently, which would break comparison)\r\n\t\tfullText=self.name.replace('\\r\\n\\r\\n','\\n')\r\n\t\t\r\n\t\t# At the object's creation, an unuseful liveRegionChange event is triggered with an empty name,\r\n\t\t# so we discard it.\r\n\t\tif not self.name.strip():\r\n\t\t\treturn\r\n\t\t\r\n\t\tif self.value is not None:\r\n\t\t\t# For some versions of Lync / Skype for Business, the object's value contains just the content.\r\n\t\t\t# Example: \"Hello\\rThis is a test\"\r\n\t\t\t# We are only interested in person and content\r\n\t\t\t# Therefore use value (content) to locate and split off the person from the name (fullText)\r\n\t\t\tcontent = self.value.replace('\\r', '\\n').strip()\r\n\t\t\tcontentStartIndex = fullText.find(content)\r\n\t\t\tpretext = fullText[:contentStartIndex]\r\n\t\telse:\r\n\t\t\t# For other versions of Lync / Skype for Business, self.value is just None.\r\n\t\t\t# So we just look at self.name formatting to split content from person and timestamp (less robust).\r\n\t\t\tpattern = r'^(?P<name>.+?): (?P<priority>.*?), , (?P<content>.+),(?!, , ) , (?P<timestamp>.+)'\r\n\t\t\tmatch = re.match(pattern, self.name, flags=re.DOTALL)\r\n\t\t\tif match:\r\n\t\t\t\tpretext = match['name']\r\n\t\t\t\tpriority = match['priority']\r\n\t\t\t\tcontent = match['content']\r\n\t\t\t\tif priority:\r\n\t\t\t\t\tcontent = priority + ', ' + content\r\n\t\t\telse:\r\n\t\t\t\t# In case no match is found, log the unexpected message and return the whole message.\r\n\t\t\t\tlog.error(f'Unrecognized pattern in the following message: {self.name}')\r\n\t\t\t\tpretext = ''\r\n\t\t\t\tcontent = self.name\r\n\t\t\tcontent = content.replace('\\r', '\\n').strip()\r\n\t\tcontentLines=content.split('\\n')\r\n\t\t# There are some annoying comma characters after the person's name \r\n\t\tpretext=pretext.replace(' ,','')\r\n\t\t# If the objects are the same, the person is the same, and the new content is the old content but with more appended, report the appended content\r\n\t\t# Otherwise, report the person and the initial content\r\n\t\truntimeID=self.UIAElement.getRuntimeId()\r\n\t\tlastRuntimeID,lastPretext,lastContentLines=self.appModule._lastLiveChatMessageData\r\n\t\tcontentLinesLen=len(contentLines)\r\n\t\tlastContentLinesLen=len(lastContentLines)\r\n\t\tif runtimeID==lastRuntimeID and pretext==lastPretext and contentLinesLen>lastContentLinesLen and contentLines[:lastContentLinesLen]==lastContentLines:\r\n\t\t\tmessage=\"\\n\".join(contentLines[lastContentLinesLen:])\r\n\t\telse:\r\n\t\t\tmessage=pretext+content\r\n\t\tui.message(message)\r\n\t\t# Cache the message data for later possible comparisons \r\n\t\tself.appModule._lastLiveChatMessageData=runtimeID,pretext,contentLines\r\n\r\nclass AppModule(appModuleHandler.AppModule):\r\n\r\n\t# data to store the last chat message (runtime ID,person,content lines)\r\n\t_lastLiveChatMessageData=[],\"\",[]\r\n\r\n\tdef chooseNVDAObjectOverlayClasses(self,obj,clsList):\r\n\t\tif isinstance(obj,UIA) and obj.UIAElement.cachedClassName=='NetUIRicherLabel':\r\n\t\t\tclsList.insert(0,NetUIRicherLabel)\r\n\t\treturn clsList\r\n\r\n", "path": "source/appModules/lync.py"}]}
| 1,326 | 960 |
gh_patches_debug_21060
|
rasdani/github-patches
|
git_diff
|
Pylons__pyramid-1296
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
DOC: XSS in quicktour/views/views.py
http://docs.pylonsproject.org/projects/pyramid/en/latest/quick_tour.html#views
https://github.com/Pylons/pyramid/blob/master/docs/quick_tour/views/views.py#L17
As there is no templating layer to autoescape the user-supplied `name` parameter and the response is by default `text/html`, `hello_view` contains an XSS vulnerability.
Templating is not the focus of (this part of) the quick tour.
I can think of two approaches:
1. Use `cgi.escape` before doing string interpolation (`body % cgi.escape(name)').
2. Add a note about XSS and the value of utilizing a good templating engine with autoescape.
"CWE-79: Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')" http://cwe.mitre.org/data/definitions/79.html
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/quick_tour/views/views.py`
Content:
```
1 from pyramid.httpexceptions import HTTPFound
2 from pyramid.response import Response
3 from pyramid.view import view_config
4
5
6 # First view, available at http://localhost:6543/
7 @view_config(route_name='home')
8 def home_view(request):
9 return Response('<p>Visit <a href="/howdy?name=lisa">hello</a></p>')
10
11
12 # /howdy?name=alice which links to the next view
13 @view_config(route_name='hello')
14 def hello_view(request):
15 name = request.params.get('name', 'No Name')
16 body = '<p>Hi %s, this <a href="/goto">redirects</a></p>'
17 return Response(body % name)
18
19
20 # /goto which issues HTTP redirect to the last view
21 @view_config(route_name='redirect')
22 def redirect_view(request):
23 return HTTPFound(location="/problem")
24
25
26 # /problem which causes an site error
27 @view_config(route_name='exception')
28 def exception_view(request):
29 raise Exception()
30
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/quick_tour/views/views.py b/docs/quick_tour/views/views.py
--- a/docs/quick_tour/views/views.py
+++ b/docs/quick_tour/views/views.py
@@ -1,3 +1,5 @@
+import cgi
+
from pyramid.httpexceptions import HTTPFound
from pyramid.response import Response
from pyramid.view import view_config
@@ -14,7 +16,8 @@
def hello_view(request):
name = request.params.get('name', 'No Name')
body = '<p>Hi %s, this <a href="/goto">redirects</a></p>'
- return Response(body % name)
+ # cgi.escape to prevent Cross-Site Scripting (XSS) [CWE 79]
+ return Response(body % cgi.escape(name))
# /goto which issues HTTP redirect to the last view
@@ -23,7 +26,7 @@
return HTTPFound(location="/problem")
-# /problem which causes an site error
+# /problem which causes a site error
@view_config(route_name='exception')
def exception_view(request):
raise Exception()
|
{"golden_diff": "diff --git a/docs/quick_tour/views/views.py b/docs/quick_tour/views/views.py\n--- a/docs/quick_tour/views/views.py\n+++ b/docs/quick_tour/views/views.py\n@@ -1,3 +1,5 @@\n+import cgi\n+\n from pyramid.httpexceptions import HTTPFound\n from pyramid.response import Response\n from pyramid.view import view_config\n@@ -14,7 +16,8 @@\n def hello_view(request):\n name = request.params.get('name', 'No Name')\n body = '<p>Hi %s, this <a href=\"/goto\">redirects</a></p>'\n- return Response(body % name)\n+ # cgi.escape to prevent Cross-Site Scripting (XSS) [CWE 79]\n+ return Response(body % cgi.escape(name))\n \n \n # /goto which issues HTTP redirect to the last view\n@@ -23,7 +26,7 @@\n return HTTPFound(location=\"/problem\")\n \n \n-# /problem which causes an site error\n+# /problem which causes a site error\n @view_config(route_name='exception')\n def exception_view(request):\n raise Exception()\n", "issue": "DOC: XSS in quicktour/views/views.py\nhttp://docs.pylonsproject.org/projects/pyramid/en/latest/quick_tour.html#views\n\nhttps://github.com/Pylons/pyramid/blob/master/docs/quick_tour/views/views.py#L17\n\nAs there is no templating layer to autoescape the user-supplied `name` parameter and the response is by default `text/html`, `hello_view` contains an XSS vulnerability.\n\nTemplating is not the focus of (this part of) the quick tour.\n\nI can think of two approaches:\n1. Use `cgi.escape` before doing string interpolation (`body % cgi.escape(name)').\n2. Add a note about XSS and the value of utilizing a good templating engine with autoescape.\n\n\"CWE-79: Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')\" http://cwe.mitre.org/data/definitions/79.html\n\n", "before_files": [{"content": "from pyramid.httpexceptions import HTTPFound\nfrom pyramid.response import Response\nfrom pyramid.view import view_config\n\n\n# First view, available at http://localhost:6543/\n@view_config(route_name='home')\ndef home_view(request):\n return Response('<p>Visit <a href=\"/howdy?name=lisa\">hello</a></p>')\n\n\n# /howdy?name=alice which links to the next view\n@view_config(route_name='hello')\ndef hello_view(request):\n name = request.params.get('name', 'No Name')\n body = '<p>Hi %s, this <a href=\"/goto\">redirects</a></p>'\n return Response(body % name)\n\n\n# /goto which issues HTTP redirect to the last view\n@view_config(route_name='redirect')\ndef redirect_view(request):\n return HTTPFound(location=\"/problem\")\n\n\n# /problem which causes an site error\n@view_config(route_name='exception')\ndef exception_view(request):\n raise Exception()\n", "path": "docs/quick_tour/views/views.py"}], "after_files": [{"content": "import cgi\n\nfrom pyramid.httpexceptions import HTTPFound\nfrom pyramid.response import Response\nfrom pyramid.view import view_config\n\n\n# First view, available at http://localhost:6543/\n@view_config(route_name='home')\ndef home_view(request):\n return Response('<p>Visit <a href=\"/howdy?name=lisa\">hello</a></p>')\n\n\n# /howdy?name=alice which links to the next view\n@view_config(route_name='hello')\ndef hello_view(request):\n name = request.params.get('name', 'No Name')\n body = '<p>Hi %s, this <a href=\"/goto\">redirects</a></p>'\n # cgi.escape to prevent Cross-Site Scripting (XSS) [CWE 79]\n return Response(body % cgi.escape(name))\n\n\n# /goto which issues HTTP redirect to the last view\n@view_config(route_name='redirect')\ndef redirect_view(request):\n return HTTPFound(location=\"/problem\")\n\n\n# /problem which causes a site 
error\n@view_config(route_name='exception')\ndef exception_view(request):\n raise Exception()\n", "path": "docs/quick_tour/views/views.py"}]}
| 718 | 245 |
gh_patches_debug_5685
|
rasdani/github-patches
|
git_diff
|
pulp__pulpcore-3996
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
RESTAPI document fix for Upstream Pulp Replication API
**Version**
Pulp installed through the Python modules.
"core:3.28.0"
"certguard:3.28.0"
"file:3.28.0"
"python:3.28.0"
"rpm:3.28.0"
**Describe the bug**
Why the attributes of **upstream_pulps_create**/**update** is mentioned again in the **upstream_pulps_replicate" document? Are those attributes (base_url, api_root, domain,...) used at time making an API request "https://PULP-SERVER/pulp/api/v3/upstream_pulps/{object_id}/replicate/"?
**To Reproduce**
None.
**Expected behavior**
A fix is required in the REST API document.
**Additional context**
Create Upstream Pulp API document: https://docs.pulpproject.org/pulpcore/restapi.html#tag/Upstream-Pulps/operation/upstream_pulps_create
Upstream Replication API document: https://docs.pulpproject.org/pulpcore/restapi.html#tag/Upstream-Pulps/operation/upstream_pulps_replicate
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pulpcore/app/viewsets/replica.py`
Content:
```
1 """
2 ViewSet for replicating repositories and distributions from an upstream Pulp
3 """
4 from django.conf import settings
5 from drf_spectacular.utils import extend_schema
6 from rest_framework import mixins
7 from rest_framework.decorators import action
8
9 from pulpcore.app.models import TaskGroup, UpstreamPulp
10 from pulpcore.app.serializers import AsyncOperationResponseSerializer, UpstreamPulpSerializer
11 from pulpcore.app.viewsets import NamedModelViewSet
12 from pulpcore.app.response import TaskGroupOperationResponse
13 from pulpcore.app.tasks import replicate_distributions
14 from pulpcore.tasking.tasks import dispatch
15
16
17 class UpstreamPulpViewSet(
18 NamedModelViewSet,
19 mixins.CreateModelMixin,
20 mixins.RetrieveModelMixin,
21 mixins.ListModelMixin,
22 mixins.DestroyModelMixin,
23 mixins.UpdateModelMixin,
24 ):
25 """API for configuring an upstream Pulp to replicate. This API is provided as a tech preview."""
26
27 queryset = UpstreamPulp.objects.all()
28 endpoint_name = "upstream-pulps"
29 serializer_class = UpstreamPulpSerializer
30 ordering = "-pulp_created"
31
32 @extend_schema(
33 summary="Replicate",
34 description="Trigger an asynchronous repository replication task group. This API is "
35 "provided as a tech preview.",
36 responses={202: AsyncOperationResponseSerializer},
37 )
38 @action(detail=True, methods=["post"])
39 def replicate(self, request, pk):
40 """
41 Triggers an asynchronous repository replication operation.
42 """
43 server = UpstreamPulp.objects.get(pk=pk)
44 task_group = TaskGroup.objects.create(description=f"Replication of {server.name}")
45
46 uri = "/api/v3/servers/"
47 if settings.DOMAIN_ENABLED:
48 uri = f"/{request.domain.name}{uri}"
49
50 dispatch(
51 replicate_distributions,
52 exclusive_resources=[uri],
53 kwargs={"server_pk": pk},
54 task_group=task_group,
55 )
56
57 return TaskGroupOperationResponse(task_group, request)
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pulpcore/app/viewsets/replica.py b/pulpcore/app/viewsets/replica.py
--- a/pulpcore/app/viewsets/replica.py
+++ b/pulpcore/app/viewsets/replica.py
@@ -33,6 +33,7 @@
summary="Replicate",
description="Trigger an asynchronous repository replication task group. This API is "
"provided as a tech preview.",
+ request=None,
responses={202: AsyncOperationResponseSerializer},
)
@action(detail=True, methods=["post"])
|
{"golden_diff": "diff --git a/pulpcore/app/viewsets/replica.py b/pulpcore/app/viewsets/replica.py\n--- a/pulpcore/app/viewsets/replica.py\n+++ b/pulpcore/app/viewsets/replica.py\n@@ -33,6 +33,7 @@\n summary=\"Replicate\",\n description=\"Trigger an asynchronous repository replication task group. This API is \"\n \"provided as a tech preview.\",\n+ request=None,\n responses={202: AsyncOperationResponseSerializer},\n )\n @action(detail=True, methods=[\"post\"])\n", "issue": "RESTAPI document fix for Upstream Pulp Replication API\n**Version**\r\nPulp installed through the Python modules.\r\n\"core:3.28.0\"\r\n\"certguard:3.28.0\"\r\n\"file:3.28.0\"\r\n\"python:3.28.0\"\r\n\"rpm:3.28.0\"\r\n\r\n**Describe the bug**\r\nWhy the attributes of **upstream_pulps_create**/**update** is mentioned again in the **upstream_pulps_replicate\" document? Are those attributes (base_url, api_root, domain,...) used at time making an API request \"https://PULP-SERVER/pulp/api/v3/upstream_pulps/{object_id}/replicate/\"?\r\n\r\n**To Reproduce**\r\nNone.\r\n\r\n**Expected behavior**\r\nA fix is required in the REST API document.\r\n\r\n**Additional context**\r\nCreate Upstream Pulp API document: https://docs.pulpproject.org/pulpcore/restapi.html#tag/Upstream-Pulps/operation/upstream_pulps_create\r\nUpstream Replication API document: https://docs.pulpproject.org/pulpcore/restapi.html#tag/Upstream-Pulps/operation/upstream_pulps_replicate\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nViewSet for replicating repositories and distributions from an upstream Pulp\n\"\"\"\nfrom django.conf import settings\nfrom drf_spectacular.utils import extend_schema\nfrom rest_framework import mixins\nfrom rest_framework.decorators import action\n\nfrom pulpcore.app.models import TaskGroup, UpstreamPulp\nfrom pulpcore.app.serializers import AsyncOperationResponseSerializer, UpstreamPulpSerializer\nfrom pulpcore.app.viewsets import NamedModelViewSet\nfrom pulpcore.app.response import TaskGroupOperationResponse\nfrom pulpcore.app.tasks import replicate_distributions\nfrom pulpcore.tasking.tasks import dispatch\n\n\nclass UpstreamPulpViewSet(\n NamedModelViewSet,\n mixins.CreateModelMixin,\n mixins.RetrieveModelMixin,\n mixins.ListModelMixin,\n mixins.DestroyModelMixin,\n mixins.UpdateModelMixin,\n):\n \"\"\"API for configuring an upstream Pulp to replicate. This API is provided as a tech preview.\"\"\"\n\n queryset = UpstreamPulp.objects.all()\n endpoint_name = \"upstream-pulps\"\n serializer_class = UpstreamPulpSerializer\n ordering = \"-pulp_created\"\n\n @extend_schema(\n summary=\"Replicate\",\n description=\"Trigger an asynchronous repository replication task group. 
This API is \"\n \"provided as a tech preview.\",\n responses={202: AsyncOperationResponseSerializer},\n )\n @action(detail=True, methods=[\"post\"])\n def replicate(self, request, pk):\n \"\"\"\n Triggers an asynchronous repository replication operation.\n \"\"\"\n server = UpstreamPulp.objects.get(pk=pk)\n task_group = TaskGroup.objects.create(description=f\"Replication of {server.name}\")\n\n uri = \"/api/v3/servers/\"\n if settings.DOMAIN_ENABLED:\n uri = f\"/{request.domain.name}{uri}\"\n\n dispatch(\n replicate_distributions,\n exclusive_resources=[uri],\n kwargs={\"server_pk\": pk},\n task_group=task_group,\n )\n\n return TaskGroupOperationResponse(task_group, request)\n", "path": "pulpcore/app/viewsets/replica.py"}], "after_files": [{"content": "\"\"\"\nViewSet for replicating repositories and distributions from an upstream Pulp\n\"\"\"\nfrom django.conf import settings\nfrom drf_spectacular.utils import extend_schema\nfrom rest_framework import mixins\nfrom rest_framework.decorators import action\n\nfrom pulpcore.app.models import TaskGroup, UpstreamPulp\nfrom pulpcore.app.serializers import AsyncOperationResponseSerializer, UpstreamPulpSerializer\nfrom pulpcore.app.viewsets import NamedModelViewSet\nfrom pulpcore.app.response import TaskGroupOperationResponse\nfrom pulpcore.app.tasks import replicate_distributions\nfrom pulpcore.tasking.tasks import dispatch\n\n\nclass UpstreamPulpViewSet(\n NamedModelViewSet,\n mixins.CreateModelMixin,\n mixins.RetrieveModelMixin,\n mixins.ListModelMixin,\n mixins.DestroyModelMixin,\n mixins.UpdateModelMixin,\n):\n \"\"\"API for configuring an upstream Pulp to replicate. This API is provided as a tech preview.\"\"\"\n\n queryset = UpstreamPulp.objects.all()\n endpoint_name = \"upstream-pulps\"\n serializer_class = UpstreamPulpSerializer\n ordering = \"-pulp_created\"\n\n @extend_schema(\n summary=\"Replicate\",\n description=\"Trigger an asynchronous repository replication task group. This API is \"\n \"provided as a tech preview.\",\n request=None,\n responses={202: AsyncOperationResponseSerializer},\n )\n @action(detail=True, methods=[\"post\"])\n def replicate(self, request, pk):\n \"\"\"\n Triggers an asynchronous repository replication operation.\n \"\"\"\n server = UpstreamPulp.objects.get(pk=pk)\n task_group = TaskGroup.objects.create(description=f\"Replication of {server.name}\")\n\n uri = \"/api/v3/servers/\"\n if settings.DOMAIN_ENABLED:\n uri = f\"/{request.domain.name}{uri}\"\n\n dispatch(\n replicate_distributions,\n exclusive_resources=[uri],\n kwargs={\"server_pk\": pk},\n task_group=task_group,\n )\n\n return TaskGroupOperationResponse(task_group, request)\n", "path": "pulpcore/app/viewsets/replica.py"}]}
| 1,049 | 122 |
gh_patches_debug_9381
|
rasdani/github-patches
|
git_diff
|
pyinstaller__pyinstaller-5128
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Hook file for sqlalchemy misses hidden import "sqlalchemy.ext.baked"
The provided hook file for sqlalchemy doesn't seem to pick up the hidden import of "sqlalchemy.ext.baked".
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `PyInstaller/hooks/hook-sqlalchemy.py`
Content:
```
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2005-2020, PyInstaller Development Team.
3 #
4 # Distributed under the terms of the GNU General Public License (version 2
5 # or later) with exception for distributing the bootloader.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #
9 # SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)
10 #-----------------------------------------------------------------------------
11
12 import re
13 from PyInstaller.utils.hooks import (
14 exec_statement, is_module_satisfies, logger)
15 from PyInstaller.compat import open_file, text_read_mode
16 from PyInstaller.lib.modulegraph.modulegraph import SourceModule
17 from PyInstaller.lib.modulegraph.util import guess_encoding
18
19 # 'sqlalchemy.testing' causes bundling a lot of unnecessary modules.
20 excludedimports = ['sqlalchemy.testing']
21
22 # include most common database bindings
23 # some database bindings are detected and include some
24 # are not. We should explicitly include database backends.
25 hiddenimports = ['pysqlite2', 'MySQLdb', 'psycopg2']
26
27 # In SQLAlchemy >= 0.6, the "sqlalchemy.dialects" package provides dialects.
28 if is_module_satisfies('sqlalchemy >= 0.6'):
29 dialects = exec_statement("import sqlalchemy.dialects;print(sqlalchemy.dialects.__all__)")
30 dialects = eval(dialects.strip())
31
32 for n in dialects:
33 hiddenimports.append("sqlalchemy.dialects." + n)
34 # In SQLAlchemy <= 0.5, the "sqlalchemy.databases" package provides dialects.
35 else:
36 databases = exec_statement("import sqlalchemy.databases; print(sqlalchemy.databases.__all__)")
37 databases = eval(databases.strip())
38
39 for n in databases:
40 hiddenimports.append("sqlalchemy.databases." + n)
41
42
43 def hook(hook_api):
44 """
45 SQLAlchemy 0.9 introduced the decorator 'util.dependencies'. This
46 decorator does imports. eg:
47
48 @util.dependencies("sqlalchemy.sql.schema")
49
50 This hook scans for included SQLAlchemy modules and then scans those modules
51 for any util.dependencies and marks those modules as hidden imports.
52 """
53
54 if not is_module_satisfies('sqlalchemy >= 0.9'):
55 return
56
57 # this parser is very simplistic but seems to catch all cases as of V1.1
58 depend_regex = re.compile(r'@util.dependencies\([\'"](.*?)[\'"]\)')
59
60 hidden_imports_set = set()
61 known_imports = set()
62 for node in hook_api.module_graph.flatten(start=hook_api.module):
63 if isinstance(node, SourceModule) and \
64 node.identifier.startswith('sqlalchemy.'):
65 known_imports.add(node.identifier)
66 # Determine the encoding of the source file.
67 with open_file(node.filename, 'rb') as f:
68 encoding = guess_encoding(f)
69 # Use that to open the file.
70 with open_file(node.filename, text_read_mode,
71 encoding=encoding) as f:
72 for match in depend_regex.findall(f.read()):
73 hidden_imports_set.add(match)
74
75 hidden_imports_set -= known_imports
76 if len(hidden_imports_set):
77 logger.info(" Found %d sqlalchemy hidden imports",
78 len(hidden_imports_set))
79 hook_api.add_imports(*list(hidden_imports_set))
80
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/PyInstaller/hooks/hook-sqlalchemy.py b/PyInstaller/hooks/hook-sqlalchemy.py
--- a/PyInstaller/hooks/hook-sqlalchemy.py
+++ b/PyInstaller/hooks/hook-sqlalchemy.py
@@ -22,7 +22,7 @@
# include most common database bindings
# some database bindings are detected and include some
# are not. We should explicitly include database backends.
-hiddenimports = ['pysqlite2', 'MySQLdb', 'psycopg2']
+hiddenimports = ['pysqlite2', 'MySQLdb', 'psycopg2', 'sqlalchemy.ext.baked']
# In SQLAlchemy >= 0.6, the "sqlalchemy.dialects" package provides dialects.
if is_module_satisfies('sqlalchemy >= 0.6'):
|
{"golden_diff": "diff --git a/PyInstaller/hooks/hook-sqlalchemy.py b/PyInstaller/hooks/hook-sqlalchemy.py\n--- a/PyInstaller/hooks/hook-sqlalchemy.py\n+++ b/PyInstaller/hooks/hook-sqlalchemy.py\n@@ -22,7 +22,7 @@\n # include most common database bindings\n # some database bindings are detected and include some\n # are not. We should explicitly include database backends.\n-hiddenimports = ['pysqlite2', 'MySQLdb', 'psycopg2']\n+hiddenimports = ['pysqlite2', 'MySQLdb', 'psycopg2', 'sqlalchemy.ext.baked']\n \n # In SQLAlchemy >= 0.6, the \"sqlalchemy.dialects\" package provides dialects.\n if is_module_satisfies('sqlalchemy >= 0.6'):\n", "issue": "Hook file for sqlalchemy misses hidden import \"sqlalchemy.ext.baked\"\nThe provided hook file for sqlalchemy doesn't seem to pick up the hidden import of \"sqlalchemy.ext.baked\".\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2005-2020, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License (version 2\n# or later) with exception for distributing the bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n#-----------------------------------------------------------------------------\n\nimport re\nfrom PyInstaller.utils.hooks import (\n exec_statement, is_module_satisfies, logger)\nfrom PyInstaller.compat import open_file, text_read_mode\nfrom PyInstaller.lib.modulegraph.modulegraph import SourceModule\nfrom PyInstaller.lib.modulegraph.util import guess_encoding\n\n# 'sqlalchemy.testing' causes bundling a lot of unnecessary modules.\nexcludedimports = ['sqlalchemy.testing']\n\n# include most common database bindings\n# some database bindings are detected and include some\n# are not. We should explicitly include database backends.\nhiddenimports = ['pysqlite2', 'MySQLdb', 'psycopg2']\n\n# In SQLAlchemy >= 0.6, the \"sqlalchemy.dialects\" package provides dialects.\nif is_module_satisfies('sqlalchemy >= 0.6'):\n dialects = exec_statement(\"import sqlalchemy.dialects;print(sqlalchemy.dialects.__all__)\")\n dialects = eval(dialects.strip())\n\n for n in dialects:\n hiddenimports.append(\"sqlalchemy.dialects.\" + n)\n# In SQLAlchemy <= 0.5, the \"sqlalchemy.databases\" package provides dialects.\nelse:\n databases = exec_statement(\"import sqlalchemy.databases; print(sqlalchemy.databases.__all__)\")\n databases = eval(databases.strip())\n\n for n in databases:\n hiddenimports.append(\"sqlalchemy.databases.\" + n)\n\n\ndef hook(hook_api):\n \"\"\"\n SQLAlchemy 0.9 introduced the decorator 'util.dependencies'. This\n decorator does imports. 
eg:\n\n @util.dependencies(\"sqlalchemy.sql.schema\")\n\n This hook scans for included SQLAlchemy modules and then scans those modules\n for any util.dependencies and marks those modules as hidden imports.\n \"\"\"\n\n if not is_module_satisfies('sqlalchemy >= 0.9'):\n return\n\n # this parser is very simplistic but seems to catch all cases as of V1.1\n depend_regex = re.compile(r'@util.dependencies\\([\\'\"](.*?)[\\'\"]\\)')\n\n hidden_imports_set = set()\n known_imports = set()\n for node in hook_api.module_graph.flatten(start=hook_api.module):\n if isinstance(node, SourceModule) and \\\n node.identifier.startswith('sqlalchemy.'):\n known_imports.add(node.identifier)\n # Determine the encoding of the source file.\n with open_file(node.filename, 'rb') as f:\n encoding = guess_encoding(f)\n # Use that to open the file.\n with open_file(node.filename, text_read_mode,\n encoding=encoding) as f:\n for match in depend_regex.findall(f.read()):\n hidden_imports_set.add(match)\n\n hidden_imports_set -= known_imports\n if len(hidden_imports_set):\n logger.info(\" Found %d sqlalchemy hidden imports\",\n len(hidden_imports_set))\n hook_api.add_imports(*list(hidden_imports_set))\n", "path": "PyInstaller/hooks/hook-sqlalchemy.py"}], "after_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2005-2020, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License (version 2\n# or later) with exception for distributing the bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n#-----------------------------------------------------------------------------\n\nimport re\nfrom PyInstaller.utils.hooks import (\n exec_statement, is_module_satisfies, logger)\nfrom PyInstaller.compat import open_file, text_read_mode\nfrom PyInstaller.lib.modulegraph.modulegraph import SourceModule\nfrom PyInstaller.lib.modulegraph.util import guess_encoding\n\n# 'sqlalchemy.testing' causes bundling a lot of unnecessary modules.\nexcludedimports = ['sqlalchemy.testing']\n\n# include most common database bindings\n# some database bindings are detected and include some\n# are not. We should explicitly include database backends.\nhiddenimports = ['pysqlite2', 'MySQLdb', 'psycopg2', 'sqlalchemy.ext.baked']\n\n# In SQLAlchemy >= 0.6, the \"sqlalchemy.dialects\" package provides dialects.\nif is_module_satisfies('sqlalchemy >= 0.6'):\n dialects = exec_statement(\"import sqlalchemy.dialects;print(sqlalchemy.dialects.__all__)\")\n dialects = eval(dialects.strip())\n\n for n in dialects:\n hiddenimports.append(\"sqlalchemy.dialects.\" + n)\n# In SQLAlchemy <= 0.5, the \"sqlalchemy.databases\" package provides dialects.\nelse:\n databases = exec_statement(\"import sqlalchemy.databases; print(sqlalchemy.databases.__all__)\")\n databases = eval(databases.strip())\n\n for n in databases:\n hiddenimports.append(\"sqlalchemy.databases.\" + n)\n\n\ndef hook(hook_api):\n \"\"\"\n SQLAlchemy 0.9 introduced the decorator 'util.dependencies'. This\n decorator does imports. 
eg:\n\n @util.dependencies(\"sqlalchemy.sql.schema\")\n\n This hook scans for included SQLAlchemy modules and then scans those modules\n for any util.dependencies and marks those modules as hidden imports.\n \"\"\"\n\n if not is_module_satisfies('sqlalchemy >= 0.9'):\n return\n\n # this parser is very simplistic but seems to catch all cases as of V1.1\n depend_regex = re.compile(r'@util.dependencies\\([\\'\"](.*?)[\\'\"]\\)')\n\n hidden_imports_set = set()\n known_imports = set()\n for node in hook_api.module_graph.flatten(start=hook_api.module):\n if isinstance(node, SourceModule) and \\\n node.identifier.startswith('sqlalchemy.'):\n known_imports.add(node.identifier)\n # Determine the encoding of the source file.\n with open_file(node.filename, 'rb') as f:\n encoding = guess_encoding(f)\n # Use that to open the file.\n with open_file(node.filename, text_read_mode,\n encoding=encoding) as f:\n for match in depend_regex.findall(f.read()):\n hidden_imports_set.add(match)\n\n hidden_imports_set -= known_imports\n if len(hidden_imports_set):\n logger.info(\" Found %d sqlalchemy hidden imports\",\n len(hidden_imports_set))\n hook_api.add_imports(*list(hidden_imports_set))\n", "path": "PyInstaller/hooks/hook-sqlalchemy.py"}]}
| 1,163 | 175 |
gh_patches_debug_11454
|
rasdani/github-patches
|
git_diff
|
beeware__toga-1617
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
WHEN TYPING "-" IN THE NUMBERINPUT, WIDGET FAILS.
"""
TESTE
"""
import toga
from toga.style import Pack
from toga.style.pack import COLUMN, ROW
class TESTE(toga.App):
def startup(self):
"""
Construct and show the Toga application.
Usually, you would add your application to a main content box.
We then create a main window (with a name matching the app), and
show the main window.
"""
# WIDGETS ###############################
self.number = toga.NumberInput()
self.pushButton = toga.Button('AHHHH')
########################################
# BOX ####################################################
main_box = toga.Box(style=Pack(direction=COLUMN))
main_box.add(self.number, self.pushButton)
#########################################################
# EVENT #####################################################
self.pushButton.on_press = self.printar
##############################################################
# WINDOW #####################################################
self.main_window = toga.MainWindow(title=self.formal_name)
self.main_window.content = main_box
self.main_window.show()
##############################################################
def printar(self, widget):
brasil = float(self.number.value)
print(brasil)
def main():
return TESTE()
https://user-images.githubusercontent.com/75274707/195914116-84981cc4-62d4-423c-a51d-0b77b4f6948a.mp4
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/android/toga_android/widgets/numberinput.py`
Content:
```
1 from decimal import Decimal
2
3 from travertino.size import at_least
4
5 from ..libs.android.text import InputType, TextWatcher
6 from ..libs.android.util import TypedValue
7 from ..libs.android.view import Gravity, View__MeasureSpec
8 from ..libs.android.widget import EditText
9 from .base import Widget, align
10
11
12 def decimal_from_string(s):
13 """If s is the empty string, return `None`. Otherwise, convert to a `Decimal`,
14 allowing any exceptions to bubble up."""
15 if not s:
16 return None
17 return Decimal(s)
18
19
20 def string_from_decimal(d):
21 '''Implement the inverse of `decimal_from_string()`. This way, Toga's
22 `NumericInput` can pass us a `None` or `Decimal`, and we can always place
23 a String in the Android `EditText`.'''
24 if d is None:
25 return ""
26 return str(d)
27
28
29 class TogaNumberInputWatcher(TextWatcher):
30 def __init__(self, impl):
31 super().__init__()
32 self.interface = impl.interface
33
34 def beforeTextChanged(self, _charSequence, _start, _count, _after):
35 pass
36
37 def afterTextChanged(self, editable):
38 # Toga `NumberInput` stores the value as a property on the `interface`.
39 self.interface._value = decimal_from_string(editable.toString())
40 # Call the user on_change callback, if it exists.
41 if self.interface.on_change:
42 self.interface.on_change(widget=self.interface)
43
44 def onTextChanged(self, _charSequence, _start, _before, _count):
45 pass
46
47
48 class NumberInput(Widget):
49 def create(self):
50 self.native = EditText(self._native_activity)
51 self.native.addTextChangedListener(TogaNumberInputWatcher(self))
52
53 # A `NumberInput` in Toga supports signed decimal numbers.
54 self.native.setInputType(
55 InputType.TYPE_CLASS_NUMBER
56 | InputType.TYPE_NUMBER_FLAG_DECIMAL
57 | InputType.TYPE_NUMBER_FLAG_SIGNED
58 )
59
60 def set_readonly(self, value):
61 self.native.setFocusable(not value)
62
63 def set_placeholder(self, value):
64 # Android EditText's setHint() requires a Python string.
65 self.native.setHint(value if value is not None else "")
66
67 def set_alignment(self, value):
68 self.native.setGravity(Gravity.CENTER_VERTICAL | align(value))
69
70 def set_font(self, font):
71 if font:
72 font_impl = font.bind(self.interface.factory)
73 self.native.setTextSize(TypedValue.COMPLEX_UNIT_SP, font_impl.get_size())
74 self.native.setTypeface(font_impl.get_typeface(), font_impl.get_style())
75
76 def set_value(self, value):
77 # Store a string in the Android widget. The `afterTextChanged` method
78 # will call the user on_change handler.
79 self.native.setText(string_from_decimal(value))
80
81 def set_step(self, step):
82 self.interface.factory.not_implemented("NumberInput.set_step()")
83
84 def set_max_value(self, value):
85 self.interface.factory.not_implemented("NumberInput.set_max_value()")
86
87 def set_min_value(self, value):
88 self.interface.factory.not_implemented("NumberInput.set_min_value()")
89
90 def set_on_change(self, handler):
91 # No special handling required.
92 pass
93
94 def rehint(self):
95 # On Android, EditText's measure() throws NullPointerException if the widget has no
96 # LayoutParams.
97 if not self.native.getLayoutParams():
98 return
99 self.native.measure(
100 View__MeasureSpec.UNSPECIFIED, View__MeasureSpec.UNSPECIFIED
101 )
102 self.interface.intrinsic.width = at_least(self.native.getMeasuredWidth())
103 self.interface.intrinsic.height = self.native.getMeasuredHeight()
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/android/toga_android/widgets/numberinput.py b/src/android/toga_android/widgets/numberinput.py
--- a/src/android/toga_android/widgets/numberinput.py
+++ b/src/android/toga_android/widgets/numberinput.py
@@ -1,4 +1,4 @@
-from decimal import Decimal
+from decimal import Decimal, InvalidOperation
from travertino.size import at_least
@@ -10,11 +10,11 @@
def decimal_from_string(s):
- """If s is the empty string, return `None`. Otherwise, convert to a `Decimal`,
- allowing any exceptions to bubble up."""
- if not s:
+ """Convert s to a `Decimal`, returning `None` if it's not a valid number."""
+ try:
+ return Decimal(s)
+ except InvalidOperation:
return None
- return Decimal(s)
def string_from_decimal(d):
|
{"golden_diff": "diff --git a/src/android/toga_android/widgets/numberinput.py b/src/android/toga_android/widgets/numberinput.py\n--- a/src/android/toga_android/widgets/numberinput.py\n+++ b/src/android/toga_android/widgets/numberinput.py\n@@ -1,4 +1,4 @@\n-from decimal import Decimal\n+from decimal import Decimal, InvalidOperation\n \n from travertino.size import at_least\n \n@@ -10,11 +10,11 @@\n \n \n def decimal_from_string(s):\n- \"\"\"If s is the empty string, return `None`. Otherwise, convert to a `Decimal`,\n- allowing any exceptions to bubble up.\"\"\"\n- if not s:\n+ \"\"\"Convert s to a `Decimal`, returning `None` if it's not a valid number.\"\"\"\n+ try:\n+ return Decimal(s)\n+ except InvalidOperation:\n return None\n- return Decimal(s)\n \n \n def string_from_decimal(d):\n", "issue": "WHEN TYPING \"-\" IN THE NUMBERINPUT, WIDGET FAILS.\n\"\"\"\r\nTESTE\r\n\"\"\"\r\nimport toga\r\nfrom toga.style import Pack\r\nfrom toga.style.pack import COLUMN, ROW\r\n\r\n\r\nclass TESTE(toga.App):\r\n\r\n def startup(self):\r\n \"\"\"\r\n Construct and show the Toga application.\r\n\r\n Usually, you would add your application to a main content box.\r\n We then create a main window (with a name matching the app), and\r\n show the main window.\r\n \"\"\"\r\n\r\n # WIDGETS ###############################\r\n self.number = toga.NumberInput()\r\n self.pushButton = toga.Button('AHHHH')\r\n ########################################\r\n\r\n # BOX ####################################################\r\n main_box = toga.Box(style=Pack(direction=COLUMN))\r\n main_box.add(self.number, self.pushButton)\r\n #########################################################\r\n\r\n # EVENT #####################################################\r\n self.pushButton.on_press = self.printar\r\n ##############################################################\r\n\r\n # WINDOW #####################################################\r\n self.main_window = toga.MainWindow(title=self.formal_name)\r\n self.main_window.content = main_box\r\n self.main_window.show()\r\n ##############################################################\r\n\r\n def printar(self, widget):\r\n brasil = float(self.number.value)\r\n print(brasil)\r\n\r\ndef main():\r\n return TESTE()\r\n\r\nhttps://user-images.githubusercontent.com/75274707/195914116-84981cc4-62d4-423c-a51d-0b77b4f6948a.mp4\r\n\r\n\n", "before_files": [{"content": "from decimal import Decimal\n\nfrom travertino.size import at_least\n\nfrom ..libs.android.text import InputType, TextWatcher\nfrom ..libs.android.util import TypedValue\nfrom ..libs.android.view import Gravity, View__MeasureSpec\nfrom ..libs.android.widget import EditText\nfrom .base import Widget, align\n\n\ndef decimal_from_string(s):\n \"\"\"If s is the empty string, return `None`. Otherwise, convert to a `Decimal`,\n allowing any exceptions to bubble up.\"\"\"\n if not s:\n return None\n return Decimal(s)\n\n\ndef string_from_decimal(d):\n '''Implement the inverse of `decimal_from_string()`. 
This way, Toga's\n `NumericInput` can pass us a `None` or `Decimal`, and we can always place\n a String in the Android `EditText`.'''\n if d is None:\n return \"\"\n return str(d)\n\n\nclass TogaNumberInputWatcher(TextWatcher):\n def __init__(self, impl):\n super().__init__()\n self.interface = impl.interface\n\n def beforeTextChanged(self, _charSequence, _start, _count, _after):\n pass\n\n def afterTextChanged(self, editable):\n # Toga `NumberInput` stores the value as a property on the `interface`.\n self.interface._value = decimal_from_string(editable.toString())\n # Call the user on_change callback, if it exists.\n if self.interface.on_change:\n self.interface.on_change(widget=self.interface)\n\n def onTextChanged(self, _charSequence, _start, _before, _count):\n pass\n\n\nclass NumberInput(Widget):\n def create(self):\n self.native = EditText(self._native_activity)\n self.native.addTextChangedListener(TogaNumberInputWatcher(self))\n\n # A `NumberInput` in Toga supports signed decimal numbers.\n self.native.setInputType(\n InputType.TYPE_CLASS_NUMBER\n | InputType.TYPE_NUMBER_FLAG_DECIMAL\n | InputType.TYPE_NUMBER_FLAG_SIGNED\n )\n\n def set_readonly(self, value):\n self.native.setFocusable(not value)\n\n def set_placeholder(self, value):\n # Android EditText's setHint() requires a Python string.\n self.native.setHint(value if value is not None else \"\")\n\n def set_alignment(self, value):\n self.native.setGravity(Gravity.CENTER_VERTICAL | align(value))\n\n def set_font(self, font):\n if font:\n font_impl = font.bind(self.interface.factory)\n self.native.setTextSize(TypedValue.COMPLEX_UNIT_SP, font_impl.get_size())\n self.native.setTypeface(font_impl.get_typeface(), font_impl.get_style())\n\n def set_value(self, value):\n # Store a string in the Android widget. The `afterTextChanged` method\n # will call the user on_change handler.\n self.native.setText(string_from_decimal(value))\n\n def set_step(self, step):\n self.interface.factory.not_implemented(\"NumberInput.set_step()\")\n\n def set_max_value(self, value):\n self.interface.factory.not_implemented(\"NumberInput.set_max_value()\")\n\n def set_min_value(self, value):\n self.interface.factory.not_implemented(\"NumberInput.set_min_value()\")\n\n def set_on_change(self, handler):\n # No special handling required.\n pass\n\n def rehint(self):\n # On Android, EditText's measure() throws NullPointerException if the widget has no\n # LayoutParams.\n if not self.native.getLayoutParams():\n return\n self.native.measure(\n View__MeasureSpec.UNSPECIFIED, View__MeasureSpec.UNSPECIFIED\n )\n self.interface.intrinsic.width = at_least(self.native.getMeasuredWidth())\n self.interface.intrinsic.height = self.native.getMeasuredHeight()\n", "path": "src/android/toga_android/widgets/numberinput.py"}], "after_files": [{"content": "from decimal import Decimal, InvalidOperation\n\nfrom travertino.size import at_least\n\nfrom ..libs.android.text import InputType, TextWatcher\nfrom ..libs.android.util import TypedValue\nfrom ..libs.android.view import Gravity, View__MeasureSpec\nfrom ..libs.android.widget import EditText\nfrom .base import Widget, align\n\n\ndef decimal_from_string(s):\n \"\"\"Convert s to a `Decimal`, returning `None` if it's not a valid number.\"\"\"\n try:\n return Decimal(s)\n except InvalidOperation:\n return None\n\n\ndef string_from_decimal(d):\n '''Implement the inverse of `decimal_from_string()`. 
This way, Toga's\n `NumericInput` can pass us a `None` or `Decimal`, and we can always place\n a String in the Android `EditText`.'''\n if d is None:\n return \"\"\n return str(d)\n\n\nclass TogaNumberInputWatcher(TextWatcher):\n def __init__(self, impl):\n super().__init__()\n self.interface = impl.interface\n\n def beforeTextChanged(self, _charSequence, _start, _count, _after):\n pass\n\n def afterTextChanged(self, editable):\n # Toga `NumberInput` stores the value as a property on the `interface`.\n self.interface._value = decimal_from_string(editable.toString())\n # Call the user on_change callback, if it exists.\n if self.interface.on_change:\n self.interface.on_change(widget=self.interface)\n\n def onTextChanged(self, _charSequence, _start, _before, _count):\n pass\n\n\nclass NumberInput(Widget):\n def create(self):\n self.native = EditText(self._native_activity)\n self.native.addTextChangedListener(TogaNumberInputWatcher(self))\n\n # A `NumberInput` in Toga supports signed decimal numbers.\n self.native.setInputType(\n InputType.TYPE_CLASS_NUMBER\n | InputType.TYPE_NUMBER_FLAG_DECIMAL\n | InputType.TYPE_NUMBER_FLAG_SIGNED\n )\n\n def set_readonly(self, value):\n self.native.setFocusable(not value)\n\n def set_placeholder(self, value):\n # Android EditText's setHint() requires a Python string.\n self.native.setHint(value if value is not None else \"\")\n\n def set_alignment(self, value):\n self.native.setGravity(Gravity.CENTER_VERTICAL | align(value))\n\n def set_font(self, font):\n if font:\n font_impl = font.bind(self.interface.factory)\n self.native.setTextSize(TypedValue.COMPLEX_UNIT_SP, font_impl.get_size())\n self.native.setTypeface(font_impl.get_typeface(), font_impl.get_style())\n\n def set_value(self, value):\n # Store a string in the Android widget. The `afterTextChanged` method\n # will call the user on_change handler.\n self.native.setText(string_from_decimal(value))\n\n def set_step(self, step):\n self.interface.factory.not_implemented(\"NumberInput.set_step()\")\n\n def set_max_value(self, value):\n self.interface.factory.not_implemented(\"NumberInput.set_max_value()\")\n\n def set_min_value(self, value):\n self.interface.factory.not_implemented(\"NumberInput.set_min_value()\")\n\n def set_on_change(self, handler):\n # No special handling required.\n pass\n\n def rehint(self):\n # On Android, EditText's measure() throws NullPointerException if the widget has no\n # LayoutParams.\n if not self.native.getLayoutParams():\n return\n self.native.measure(\n View__MeasureSpec.UNSPECIFIED, View__MeasureSpec.UNSPECIFIED\n )\n self.interface.intrinsic.width = at_least(self.native.getMeasuredWidth())\n self.interface.intrinsic.height = self.native.getMeasuredHeight()\n", "path": "src/android/toga_android/widgets/numberinput.py"}]}
| 1,576 | 197 |
gh_patches_debug_11123
|
rasdani/github-patches
|
git_diff
|
kivy__python-for-android-662
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
.jam files not installed
The user-config.jam in https://github.com/kivy/python-for-android/tree/master/pythonforandroid/recipes/boost does not show up in the installed p4a recipes folder /home/paul/.local/lib/python2.7/site-packages/pythonforandroid/recipes/boost/
Perhaps .jam files have to be added to this array as well: https://github.com/kived/python-for-android/commit/93fcf656e2aafc6a75ee06dab3e471e1eb509d87
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1
2 from setuptools import setup, find_packages
3 from os import walk
4 from os.path import join, dirname, sep
5 import os
6 import glob
7
8 # NOTE: All package data should also be set in MANIFEST.in
9
10 packages = find_packages()
11
12 package_data = {'': ['*.tmpl',
13 '*.patch', ], }
14
15 data_files = []
16
17 # By specifying every file manually, package_data will be able to
18 # include them in binary distributions. Note that we have to add
19 # everything as a 'pythonforandroid' rule, using '' apparently doesn't
20 # work.
21 def recursively_include(results, directory, patterns):
22 for root, subfolders, files in walk(directory):
23 for fn in files:
24 if not any([glob.fnmatch.fnmatch(fn, pattern) for pattern in patterns]):
25 continue
26 filename = join(root, fn)
27 directory = 'pythonforandroid'
28 if directory not in results:
29 results[directory] = []
30 results[directory].append(join(*filename.split(sep)[1:]))
31
32 recursively_include(package_data, 'pythonforandroid/recipes',
33 ['*.patch', 'Setup*', '*.pyx', '*.py', '*.c', '*.h',
34 '*.mk', ])
35 recursively_include(package_data, 'pythonforandroid/bootstraps',
36 ['*.properties', '*.xml', '*.java', '*.tmpl', '*.txt', '*.png',
37 '*.mk', '*.c', '*.h', '*.py', '*.sh', '*.jpg', '*.aidl', ])
38 recursively_include(package_data, 'pythonforandroid/bootstraps',
39 ['sdl-config', ])
40 recursively_include(package_data, 'pythonforandroid',
41 ['liblink', 'biglink', 'liblink.sh'])
42
43 setup(name='python-for-android',
44 version='0.3',
45 description='Android APK packager for Python scripts and apps',
46 author='The Kivy team',
47 author_email='[email protected]',
48 url='https://github.com/kivy/python-for-android',
49 license='MIT',
50 install_requires=['appdirs', 'colorama>0.3', 'sh', 'jinja2', 'argparse',
51 'six'],
52 entry_points={
53 'console_scripts': [
54 'python-for-android = pythonforandroid.toolchain:main',
55 'p4a = pythonforandroid.toolchain:main',
56 ],
57 'distutils.commands': [
58 'bdist_apk = pythonforandroid.bdist_apk:BdistAPK',
59 ],
60 },
61 classifiers = [
62 'Development Status :: 3 - Alpha',
63 'Intended Audience :: Developers',
64 'License :: OSI Approved :: MIT License',
65 'Operating System :: Microsoft :: Windows',
66 'Operating System :: OS Independent',
67 'Operating System :: POSIX :: Linux',
68 'Operating System :: MacOS :: MacOS X',
69 'Programming Language :: C',
70 'Programming Language :: Python :: 2',
71 'Programming Language :: Python :: 3',
72 'Topic :: Software Development',
73 'Topic :: Utilities',
74 ],
75 packages=packages,
76 package_data=package_data,
77 )
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -31,7 +31,7 @@
recursively_include(package_data, 'pythonforandroid/recipes',
['*.patch', 'Setup*', '*.pyx', '*.py', '*.c', '*.h',
- '*.mk', ])
+ '*.mk', '*.jam', ])
recursively_include(package_data, 'pythonforandroid/bootstraps',
['*.properties', '*.xml', '*.java', '*.tmpl', '*.txt', '*.png',
'*.mk', '*.c', '*.h', '*.py', '*.sh', '*.jpg', '*.aidl', ])
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -31,7 +31,7 @@\n \n recursively_include(package_data, 'pythonforandroid/recipes',\n ['*.patch', 'Setup*', '*.pyx', '*.py', '*.c', '*.h',\n- '*.mk', ])\n+ '*.mk', '*.jam', ])\n recursively_include(package_data, 'pythonforandroid/bootstraps',\n ['*.properties', '*.xml', '*.java', '*.tmpl', '*.txt', '*.png',\n '*.mk', '*.c', '*.h', '*.py', '*.sh', '*.jpg', '*.aidl', ])\n", "issue": ".jam files not installed\nThe user-config.jam in https://github.com/kivy/python-for-android/tree/master/pythonforandroid/recipes/boost does not show up in the installed p4a recipes folder /home/paul/.local/lib/python2.7/site-packages/pythonforandroid/recipes/boost/\n\nPerhaps .jam files have to be added to this array as well: https://github.com/kived/python-for-android/commit/93fcf656e2aafc6a75ee06dab3e471e1eb509d87\n\n", "before_files": [{"content": "\nfrom setuptools import setup, find_packages\nfrom os import walk\nfrom os.path import join, dirname, sep\nimport os\nimport glob\n\n# NOTE: All package data should also be set in MANIFEST.in\n\npackages = find_packages()\n\npackage_data = {'': ['*.tmpl',\n '*.patch', ], }\n\ndata_files = []\n\n# By specifying every file manually, package_data will be able to\n# include them in binary distributions. Note that we have to add\n# everything as a 'pythonforandroid' rule, using '' apparently doesn't\n# work.\ndef recursively_include(results, directory, patterns):\n for root, subfolders, files in walk(directory):\n for fn in files:\n if not any([glob.fnmatch.fnmatch(fn, pattern) for pattern in patterns]):\n continue\n filename = join(root, fn)\n directory = 'pythonforandroid'\n if directory not in results:\n results[directory] = []\n results[directory].append(join(*filename.split(sep)[1:]))\n\nrecursively_include(package_data, 'pythonforandroid/recipes',\n ['*.patch', 'Setup*', '*.pyx', '*.py', '*.c', '*.h',\n '*.mk', ])\nrecursively_include(package_data, 'pythonforandroid/bootstraps',\n ['*.properties', '*.xml', '*.java', '*.tmpl', '*.txt', '*.png',\n '*.mk', '*.c', '*.h', '*.py', '*.sh', '*.jpg', '*.aidl', ])\nrecursively_include(package_data, 'pythonforandroid/bootstraps',\n ['sdl-config', ])\nrecursively_include(package_data, 'pythonforandroid',\n ['liblink', 'biglink', 'liblink.sh'])\n\nsetup(name='python-for-android',\n version='0.3',\n description='Android APK packager for Python scripts and apps',\n author='The Kivy team',\n author_email='[email protected]',\n url='https://github.com/kivy/python-for-android', \n license='MIT', \n install_requires=['appdirs', 'colorama>0.3', 'sh', 'jinja2', 'argparse',\n 'six'],\n entry_points={\n 'console_scripts': [\n 'python-for-android = pythonforandroid.toolchain:main',\n 'p4a = pythonforandroid.toolchain:main',\n ],\n 'distutils.commands': [\n 'bdist_apk = pythonforandroid.bdist_apk:BdistAPK',\n ],\n },\n classifiers = [\n 'Development Status :: 3 - Alpha',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: OS Independent',\n 'Operating System :: POSIX :: Linux',\n 'Operating System :: MacOS :: MacOS X',\n 'Programming Language :: C',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 3',\n 'Topic :: Software Development',\n 'Topic :: Utilities',\n ],\n packages=packages,\n package_data=package_data,\n )\n", "path": "setup.py"}], "after_files": [{"content": "\nfrom setuptools import setup, find_packages\nfrom os 
import walk\nfrom os.path import join, dirname, sep\nimport os\nimport glob\n\n# NOTE: All package data should also be set in MANIFEST.in\n\npackages = find_packages()\n\npackage_data = {'': ['*.tmpl',\n '*.patch', ], }\n\ndata_files = []\n\n# By specifying every file manually, package_data will be able to\n# include them in binary distributions. Note that we have to add\n# everything as a 'pythonforandroid' rule, using '' apparently doesn't\n# work.\ndef recursively_include(results, directory, patterns):\n for root, subfolders, files in walk(directory):\n for fn in files:\n if not any([glob.fnmatch.fnmatch(fn, pattern) for pattern in patterns]):\n continue\n filename = join(root, fn)\n directory = 'pythonforandroid'\n if directory not in results:\n results[directory] = []\n results[directory].append(join(*filename.split(sep)[1:]))\n\nrecursively_include(package_data, 'pythonforandroid/recipes',\n ['*.patch', 'Setup*', '*.pyx', '*.py', '*.c', '*.h',\n '*.mk', '*.jam', ])\nrecursively_include(package_data, 'pythonforandroid/bootstraps',\n ['*.properties', '*.xml', '*.java', '*.tmpl', '*.txt', '*.png',\n '*.mk', '*.c', '*.h', '*.py', '*.sh', '*.jpg', '*.aidl', ])\nrecursively_include(package_data, 'pythonforandroid/bootstraps',\n ['sdl-config', ])\nrecursively_include(package_data, 'pythonforandroid',\n ['liblink', 'biglink', 'liblink.sh'])\n\nsetup(name='python-for-android',\n version='0.3',\n description='Android APK packager for Python scripts and apps',\n author='The Kivy team',\n author_email='[email protected]',\n url='https://github.com/kivy/python-for-android', \n license='MIT', \n install_requires=['appdirs', 'colorama>0.3', 'sh', 'jinja2', 'argparse',\n 'six'],\n entry_points={\n 'console_scripts': [\n 'python-for-android = pythonforandroid.toolchain:main',\n 'p4a = pythonforandroid.toolchain:main',\n ],\n 'distutils.commands': [\n 'bdist_apk = pythonforandroid.bdist_apk:BdistAPK',\n ],\n },\n classifiers = [\n 'Development Status :: 3 - Alpha',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: OS Independent',\n 'Operating System :: POSIX :: Linux',\n 'Operating System :: MacOS :: MacOS X',\n 'Programming Language :: C',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 3',\n 'Topic :: Software Development',\n 'Topic :: Utilities',\n ],\n packages=packages,\n package_data=package_data,\n )\n", "path": "setup.py"}]}
| 1,177 | 137 |
gh_patches_debug_36804
|
rasdani/github-patches
|
git_diff
|
OpenMined__PySyft-124
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Implement Default Absolute Value Functions in Base Tensor Type
**User Story A:** As a Data Scientist using Syft's Base Tensor type, we want to implement a default method for computing the elementwise absolute value of a Tensor of arbitrary type. abs() should return a new tensor and abs_ should perform the operation inline. For a great reference on how
**Acceptance Criteria:**
- If the Base Tensor type's attribute "encrypted" is set to True, it should return a NotImplemented error.
- a unit test demonstrating the correct operation of abs() and abs_() on the Base Tensor type implemented over int and float Tensors.
- inline documentation in the python code. For inspiration on inline documentation, please check out [PyTorch](http://pytorch.org/docs/master/tensors.html)'s documentation for this operator.
Implement Default addmm Functionality in Base Tensor Type
**User Story A:** As a Data Scientist using Syft's Base Tensor type, we want to implement a default method for computing each operation on a Tensor of arbitrary type. addmm_() should return a new tensor and addmm_() should perform the operation inline. For a reference on the operation this performs check out [PyTorch](http://pytorch.org/docs/master/tensors.html)'s documentation.
**Acceptance Criteria:**
- If the Base Tensor type's attribute "encrypted" is set to True, it should return a NotImplemented error.
- a unit test demonstrating the correct operation of addmm() and addmm_() on the Base Tensor type implemented over int and float Tensors.
- inline documentation in the python code. For inspiration on inline documentation, please check out [PyTorch](http://pytorch.org/docs/master/tensors.html)'s documentation for this operator.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `syft/tensor.py`
Content:
```
1 import numpy as np
2
3 def _ensure_ndarray(arr):
4 if not isinstance(arr, np.ndarray):
5 arr = np.array(arr)
6
7 return arr
8
9 class TensorBase(object):
10 """
11 A base tensor class that perform basic element-wise operation such as
12 addition, subtraction, multiplication and division
13 """
14
15 def __init__(self, arr_like, encrypted=False):
16 self.data = _ensure_ndarray(arr_like)
17 self.encrypted = encrypted
18
19 def __add__(self, arr_like):
20 """Performs element-wise addition between two array like objects"""
21 if self.encrypted:
22 return NotImplemented
23
24 arr_like = _ensure_ndarray(arr_like)
25 return self.data + arr_like
26
27 def __iadd__(self, arr_like):
28 """Performs in place element-wise addition between two array like objects"""
29 if self.encrypted:
30 return NotImplemented
31
32 arr_like = _ensure_ndarray(arr_like)
33 self.data = self.data + arr_like
34 return self.data
35
36 def __sub__(self, arr_like):
37 """Performs element-wise subtraction between two array like objects"""
38 if self.encrypted:
39 return NotImplemented
40
41 arr_like = _ensure_ndarray(arr_like)
42 return self.data - arr_like
43
44 def __isub__(self, arr_like):
45 """Performs in place element-wise subtraction between two array like objects"""
46 if self.encrypted:
47 return NotImplemented
48
49 arr_like = _ensure_ndarray(arr_like)
50 self.data = self.data - arr_like
51 return self.data
52
53 def __mul__(self, arr_like):
54 """Performs element-wise multiplication between two array like objects"""
55 if self.encrypted:
56 return NotImplemented
57
58 arr_like = _ensure_ndarray(arr_like)
59 return self.data * arr_like
60
61 def __imul__(self, arr_like):
62 """Performs in place element-wise multiplication between two array like objects"""
63 if self.encrypted:
64 return NotImplemented
65
66 arr_like = _ensure_ndarray(arr_like)
67 self.data = self.data * arr_like
68 return self.data
69
70 def __truediv__(self, arr_like):
71 """Performs element-wise division between two array like objects"""
72 if self.encrypted:
73 return NotImplemented
74
75 arr_like = _ensure_ndarray(arr_like)
76 return self.data / arr_like
77
78 def __itruediv__(self, arr_like):
79 """Performs in place element-wise subtraction between two array like objects"""
80 if self.encrypted:
81 return NotImplemented
82
83 arr_like = _ensure_ndarray(arr_like)
84 self.data = self.data / arr_like
85 return self.data
86
87 def shape(self):
88 """Returns a tuple of input array dimensions."""
89 if self.encrypted:
90 return NotImplemented
91
92 return self.data.shape
93
94 def sum(self, dim=None):
95 """Returns the sum of all elements in the input array."""
96 if self.encrypted:
97 return NotImplemented
98
99 if dim is None:
100 return self.data.sum()
101 else:
102 return self.data.sum(axis=dim)
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/syft/tensor.py b/syft/tensor.py
--- a/syft/tensor.py
+++ b/syft/tensor.py
@@ -84,6 +84,19 @@
self.data = self.data / arr_like
return self.data
+ def abs(self):
+ """Returns absolute value of tensor as a new tensor"""
+ if self.encrypted:
+ return NotImplemented
+ return np.absolute(self.data)
+
+ def abs_(self):
+ """Replaces tensor values with its absolute value"""
+ if self.encrypted:
+ return NotImplemented
+ self.data=np.absolute(self.data)
+ return self.data
+
def shape(self):
"""Returns a tuple of input array dimensions."""
if self.encrypted:
@@ -100,3 +113,33 @@
return self.data.sum()
else:
return self.data.sum(axis=dim)
+
+ def addmm(self,tensor2,mat,beta=1,alpha=1):
+ """Performs ((Mat*Beta)+((Tensor1.Tensor2)*Alpha)) and returns the result as a Tensor
+ Tensor1.Tensor2 is performed as Matrix product of two array The behavior depends on the arguments in the following way.
+ *If both tensors are 1-dimensional, their dot product is returned.
+ *If both arguments are 2-D they are multiplied like conventional matrices.
+ *If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly.
+ *If the first argument is 1-D, it is promoted to a matrix by prepending a 1 to its dimensions. After matrix multiplication the prepended 1 is removed.
+ *If the second argument is 1-D, it is promoted to a matrix by appending a 1 to its dimensions. After matrix multiplication the appended 1 is removed.
+ """
+ if self.encrypted or tensor2.encrypted or mat.encrypted:
+ return NotImplemented
+ else:
+ return TensorBase(np.array((mat*beta)+((np.matmul(self.data,tensor2.data))*alpha)))
+
+ def addmm_(self,tensor2,mat,beta=1,alpha=1):
+ """Performs ((Mat*Beta)+((Tensor1.Tensor2)*Alpha)) and updates Tensor1 with result and reurns it
+ Tensor1.Tensor2 is performed as Matrix product of two array The behavior depends on the arguments in the following way.
+ *If both tensors are 1-dimensional, their dot product is returned.
+ *If both arguments are 2-D they are multiplied like conventional matrices.
+ *If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly.
+ *If the first argument is 1-D, it is promoted to a matrix by prepending a 1 to its dimensions. After matrix multiplication the prepended 1 is removed.
+ *If the second argument is 1-D, it is promoted to a matrix by appending a 1 to its dimensions. After matrix multiplication the appended 1 is removed.
+ """
+ if self.encrypted is True or tensor2.encrypted is True or mat.encrypted is True:
+ return NotImplemented
+ else:
+ self.data=np.array((mat*beta)+((np.matmul(self.data,tensor2.data))*alpha))
+ return self
+
|
{"golden_diff": "diff --git a/syft/tensor.py b/syft/tensor.py\n--- a/syft/tensor.py\n+++ b/syft/tensor.py\n@@ -84,6 +84,19 @@\n self.data = self.data / arr_like\n return self.data\n \n+ def abs(self):\n+ \"\"\"Returns absolute value of tensor as a new tensor\"\"\"\n+ if self.encrypted:\n+ return NotImplemented\n+ return np.absolute(self.data)\n+ \n+ def abs_(self):\n+ \"\"\"Replaces tensor values with its absolute value\"\"\"\n+ if self.encrypted:\n+ return NotImplemented\n+ self.data=np.absolute(self.data)\n+ return self.data\n+\n def shape(self):\n \"\"\"Returns a tuple of input array dimensions.\"\"\"\n if self.encrypted:\n@@ -100,3 +113,33 @@\n return self.data.sum()\n else:\n return self.data.sum(axis=dim)\n+ \n+ def addmm(self,tensor2,mat,beta=1,alpha=1):\n+ \"\"\"Performs ((Mat*Beta)+((Tensor1.Tensor2)*Alpha)) and returns the result as a Tensor\n+ Tensor1.Tensor2 is performed as Matrix product of two array The behavior depends on the arguments in the following way.\n+ *If both tensors are 1-dimensional, their dot product is returned.\n+ *If both arguments are 2-D they are multiplied like conventional matrices.\n+ *If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly.\n+ *If the first argument is 1-D, it is promoted to a matrix by prepending a 1 to its dimensions. After matrix multiplication the prepended 1 is removed.\n+ *If the second argument is 1-D, it is promoted to a matrix by appending a 1 to its dimensions. After matrix multiplication the appended 1 is removed.\n+ \"\"\"\n+ if self.encrypted or tensor2.encrypted or mat.encrypted:\n+ return NotImplemented\n+ else:\n+ return TensorBase(np.array((mat*beta)+((np.matmul(self.data,tensor2.data))*alpha)))\n+\n+ def addmm_(self,tensor2,mat,beta=1,alpha=1):\n+ \"\"\"Performs ((Mat*Beta)+((Tensor1.Tensor2)*Alpha)) and updates Tensor1 with result and reurns it\n+ Tensor1.Tensor2 is performed as Matrix product of two array The behavior depends on the arguments in the following way.\n+ *If both tensors are 1-dimensional, their dot product is returned.\n+ *If both arguments are 2-D they are multiplied like conventional matrices.\n+ *If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly.\n+ *If the first argument is 1-D, it is promoted to a matrix by prepending a 1 to its dimensions. After matrix multiplication the prepended 1 is removed.\n+ *If the second argument is 1-D, it is promoted to a matrix by appending a 1 to its dimensions. After matrix multiplication the appended 1 is removed.\n+ \"\"\"\n+ if self.encrypted is True or tensor2.encrypted is True or mat.encrypted is True:\n+ return NotImplemented\n+ else:\n+ self.data=np.array((mat*beta)+((np.matmul(self.data,tensor2.data))*alpha))\n+ return self\n+\n", "issue": "Implement Default Absolute Value Functions in Base Tensor Type\n**User Story A:** As a Data Scientist using Syft's Base Tensor type, we want to implement a default method for computing the elementwise absolute value of a Tensor of arbitrary type. abs() should return a new tensor and abs_ should perform the operation inline. 
For a great reference on how \r\n\r\n**Acceptance Criteria:**\r\n- If the Base Tensor type's attribute \"encrypted\" is set to True, it should return a NotImplemented error.\r\n- a unit test demonstrating the correct operation of abs() and abs_() on the Base Tensor type implemented over int and float Tensors.\r\n- inline documentation in the python code. For inspiration on inline documentation, please check out [PyTorch](http://pytorch.org/docs/master/tensors.html)'s documentation for this operator.\nImplement Default addmm Functionality in Base Tensor Type\n**User Story A:** As a Data Scientist using Syft's Base Tensor type, we want to implement a default method for computing each operation on a Tensor of arbitrary type. addmm_() should return a new tensor and addmm_() should perform the operation inline. For a reference on the operation this performs check out [PyTorch](http://pytorch.org/docs/master/tensors.html)'s documentation.\r\n\r\n**Acceptance Criteria:**\r\n- If the Base Tensor type's attribute \"encrypted\" is set to True, it should return a NotImplemented error.\r\n- a unit test demonstrating the correct operation of addmm() and addmm_() on the Base Tensor type implemented over int and float Tensors.\r\n- inline documentation in the python code. For inspiration on inline documentation, please check out [PyTorch](http://pytorch.org/docs/master/tensors.html)'s documentation for this operator.\n", "before_files": [{"content": "import numpy as np\n\ndef _ensure_ndarray(arr):\n if not isinstance(arr, np.ndarray):\n arr = np.array(arr)\n\n return arr\n\nclass TensorBase(object):\n \"\"\"\n A base tensor class that perform basic element-wise operation such as\n addition, subtraction, multiplication and division\n \"\"\"\n\n def __init__(self, arr_like, encrypted=False):\n self.data = _ensure_ndarray(arr_like)\n self.encrypted = encrypted\n\n def __add__(self, arr_like):\n \"\"\"Performs element-wise addition between two array like objects\"\"\"\n if self.encrypted:\n return NotImplemented\n\n arr_like = _ensure_ndarray(arr_like)\n return self.data + arr_like\n\n def __iadd__(self, arr_like):\n \"\"\"Performs in place element-wise addition between two array like objects\"\"\"\n if self.encrypted:\n return NotImplemented\n\n arr_like = _ensure_ndarray(arr_like)\n self.data = self.data + arr_like\n return self.data\n\n def __sub__(self, arr_like):\n \"\"\"Performs element-wise subtraction between two array like objects\"\"\"\n if self.encrypted:\n return NotImplemented\n\n arr_like = _ensure_ndarray(arr_like)\n return self.data - arr_like\n\n def __isub__(self, arr_like):\n \"\"\"Performs in place element-wise subtraction between two array like objects\"\"\"\n if self.encrypted:\n return NotImplemented\n\n arr_like = _ensure_ndarray(arr_like)\n self.data = self.data - arr_like\n return self.data\n\n def __mul__(self, arr_like):\n \"\"\"Performs element-wise multiplication between two array like objects\"\"\"\n if self.encrypted:\n return NotImplemented\n\n arr_like = _ensure_ndarray(arr_like)\n return self.data * arr_like\n\n def __imul__(self, arr_like):\n \"\"\"Performs in place element-wise multiplication between two array like objects\"\"\"\n if self.encrypted:\n return NotImplemented\n\n arr_like = _ensure_ndarray(arr_like)\n self.data = self.data * arr_like\n return self.data\n\n def __truediv__(self, arr_like):\n \"\"\"Performs element-wise division between two array like objects\"\"\"\n if self.encrypted:\n return NotImplemented\n\n arr_like = _ensure_ndarray(arr_like)\n return 
self.data / arr_like\n\n def __itruediv__(self, arr_like):\n \"\"\"Performs in place element-wise subtraction between two array like objects\"\"\"\n if self.encrypted:\n return NotImplemented\n\n arr_like = _ensure_ndarray(arr_like)\n self.data = self.data / arr_like\n return self.data\n\n def shape(self):\n \"\"\"Returns a tuple of input array dimensions.\"\"\"\n if self.encrypted:\n return NotImplemented\n\n return self.data.shape\n\n def sum(self, dim=None):\n \"\"\"Returns the sum of all elements in the input array.\"\"\"\n if self.encrypted:\n return NotImplemented\n\n if dim is None:\n return self.data.sum()\n else:\n return self.data.sum(axis=dim)\n", "path": "syft/tensor.py"}], "after_files": [{"content": "import numpy as np\n\ndef _ensure_ndarray(arr):\n if not isinstance(arr, np.ndarray):\n arr = np.array(arr)\n\n return arr\n\nclass TensorBase(object):\n \"\"\"\n A base tensor class that perform basic element-wise operation such as\n addition, subtraction, multiplication and division\n \"\"\"\n\n def __init__(self, arr_like, encrypted=False):\n self.data = _ensure_ndarray(arr_like)\n self.encrypted = encrypted\n\n def __add__(self, arr_like):\n \"\"\"Performs element-wise addition between two array like objects\"\"\"\n if self.encrypted:\n return NotImplemented\n\n arr_like = _ensure_ndarray(arr_like)\n return self.data + arr_like\n\n def __iadd__(self, arr_like):\n \"\"\"Performs in place element-wise addition between two array like objects\"\"\"\n if self.encrypted:\n return NotImplemented\n\n arr_like = _ensure_ndarray(arr_like)\n self.data = self.data + arr_like\n return self.data\n\n def __sub__(self, arr_like):\n \"\"\"Performs element-wise subtraction between two array like objects\"\"\"\n if self.encrypted:\n return NotImplemented\n\n arr_like = _ensure_ndarray(arr_like)\n return self.data - arr_like\n\n def __isub__(self, arr_like):\n \"\"\"Performs in place element-wise subtraction between two array like objects\"\"\"\n if self.encrypted:\n return NotImplemented\n\n arr_like = _ensure_ndarray(arr_like)\n self.data = self.data - arr_like\n return self.data\n\n def __mul__(self, arr_like):\n \"\"\"Performs element-wise multiplication between two array like objects\"\"\"\n if self.encrypted:\n return NotImplemented\n\n arr_like = _ensure_ndarray(arr_like)\n return self.data * arr_like\n\n def __imul__(self, arr_like):\n \"\"\"Performs in place element-wise multiplication between two array like objects\"\"\"\n if self.encrypted:\n return NotImplemented\n\n arr_like = _ensure_ndarray(arr_like)\n self.data = self.data * arr_like\n return self.data\n\n def __truediv__(self, arr_like):\n \"\"\"Performs element-wise division between two array like objects\"\"\"\n if self.encrypted:\n return NotImplemented\n\n arr_like = _ensure_ndarray(arr_like)\n return self.data / arr_like\n\n def __itruediv__(self, arr_like):\n \"\"\"Performs in place element-wise subtraction between two array like objects\"\"\"\n if self.encrypted:\n return NotImplemented\n\n arr_like = _ensure_ndarray(arr_like)\n self.data = self.data / arr_like\n return self.data\n\n def abs(self):\n \"\"\"Returns absolute value of tensor as a new tensor\"\"\"\n if self.encrypted:\n return NotImplemented\n return np.absolute(self.data)\n \n def abs_(self):\n \"\"\"Replaces tensor values with its absolute value\"\"\"\n if self.encrypted:\n return NotImplemented\n self.data=np.absolute(self.data)\n return self.data\n\n def shape(self):\n \"\"\"Returns a tuple of input array dimensions.\"\"\"\n if self.encrypted:\n return 
NotImplemented\n\n return self.data.shape\n\n def sum(self, dim=None):\n \"\"\"Returns the sum of all elements in the input array.\"\"\"\n if self.encrypted:\n return NotImplemented\n\n if dim is None:\n return self.data.sum()\n else:\n return self.data.sum(axis=dim)\n \n def addmm(self,tensor2,mat,beta=1,alpha=1):\n \"\"\"Performs ((Mat*Beta)+((Tensor1.Tensor2)*Alpha)) and returns the result as a Tensor\n Tensor1.Tensor2 is performed as Matrix product of two array The behavior depends on the arguments in the following way.\n *If both tensors are 1-dimensional, their dot product is returned.\n *If both arguments are 2-D they are multiplied like conventional matrices.\n *If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly.\n *If the first argument is 1-D, it is promoted to a matrix by prepending a 1 to its dimensions. After matrix multiplication the prepended 1 is removed.\n *If the second argument is 1-D, it is promoted to a matrix by appending a 1 to its dimensions. After matrix multiplication the appended 1 is removed.\n \"\"\"\n if self.encrypted or tensor2.encrypted or mat.encrypted:\n return NotImplemented\n else:\n return TensorBase(np.array((mat*beta)+((np.matmul(self.data,tensor2.data))*alpha)))\n\n def addmm_(self,tensor2,mat,beta=1,alpha=1):\n \"\"\"Performs ((Mat*Beta)+((Tensor1.Tensor2)*Alpha)) and updates Tensor1 with result and reurns it\n Tensor1.Tensor2 is performed as Matrix product of two array The behavior depends on the arguments in the following way.\n *If both tensors are 1-dimensional, their dot product is returned.\n *If both arguments are 2-D they are multiplied like conventional matrices.\n *If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly.\n *If the first argument is 1-D, it is promoted to a matrix by prepending a 1 to its dimensions. After matrix multiplication the prepended 1 is removed.\n *If the second argument is 1-D, it is promoted to a matrix by appending a 1 to its dimensions. After matrix multiplication the appended 1 is removed.\n \"\"\"\n if self.encrypted is True or tensor2.encrypted is True or mat.encrypted is True:\n return NotImplemented\n else:\n self.data=np.array((mat*beta)+((np.matmul(self.data,tensor2.data))*alpha))\n return self\n\n", "path": "syft/tensor.py"}]}
| 1,485 | 756 |
gh_patches_debug_4215
|
rasdani/github-patches
|
git_diff
|
Kinto__kinto-1956
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Memory permission backend implementation of remove_principal() is wrong
According to the `PermissionBase` docstring, `remove_principal` is supposed to `Remove a principal from every user`. In other words, `remove_principal(principal)` is equivalent to `remove_user_principal(user_id, principal) for user_id in all_possible_user_ids`. However, the current implementation stores all permissions of all kinds in one hash table, and removes the principal from permissions of non-user things as well.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kinto/core/permission/memory.py`
Content:
```
1 import re
2
3 from kinto.core.decorators import synchronized
4 from kinto.core.permission import PermissionBase
5
6
7 class Permission(PermissionBase):
8 """Permission backend implementation in local process memory.
9
10 Enable in configuration::
11
12 kinto.permission_backend = kinto.core.permission.memory
13
14 :noindex:
15 """
16
17 def __init__(self, *args, **kwargs):
18 super().__init__(*args, **kwargs)
19 self.flush()
20
21 def initialize_schema(self, dry_run=False):
22 # Nothing to do.
23 pass
24
25 def flush(self):
26 self._store = {}
27
28 @synchronized
29 def add_user_principal(self, user_id, principal):
30 user_key = f"user:{user_id}"
31 user_principals = self._store.get(user_key, set())
32 user_principals.add(principal)
33 self._store[user_key] = user_principals
34
35 @synchronized
36 def remove_user_principal(self, user_id, principal):
37 user_key = f"user:{user_id}"
38 user_principals = self._store.get(user_key, set())
39 try:
40 user_principals.remove(principal)
41 except KeyError:
42 pass
43 if len(user_principals) == 0:
44 if user_key in self._store:
45 del self._store[user_key]
46 else:
47 self._store[user_key] = user_principals
48
49 @synchronized
50 def remove_principal(self, principal):
51 for user_principals in self._store.values():
52 try:
53 user_principals.remove(principal)
54 except KeyError:
55 pass
56
57 @synchronized
58 def get_user_principals(self, user_id):
59 # Fetch the groups the user is in.
60 user_key = f"user:{user_id}"
61 members = self._store.get(user_key, set())
62 # Fetch the groups system.Authenticated is in.
63 group_authenticated = self._store.get("user:system.Authenticated", set())
64 return members | group_authenticated
65
66 @synchronized
67 def add_principal_to_ace(self, object_id, permission, principal):
68 permission_key = f"permission:{object_id}:{permission}"
69 object_permission_principals = self._store.get(permission_key, set())
70 object_permission_principals.add(principal)
71 self._store[permission_key] = object_permission_principals
72
73 @synchronized
74 def remove_principal_from_ace(self, object_id, permission, principal):
75 permission_key = f"permission:{object_id}:{permission}"
76 object_permission_principals = self._store.get(permission_key, set())
77 try:
78 object_permission_principals.remove(principal)
79 except KeyError:
80 pass
81 if len(object_permission_principals) == 0:
82 if permission_key in self._store:
83 del self._store[permission_key]
84 else:
85 self._store[permission_key] = object_permission_principals
86
87 @synchronized
88 def get_object_permission_principals(self, object_id, permission):
89 permission_key = f"permission:{object_id}:{permission}"
90 members = self._store.get(permission_key, set())
91 return members
92
93 @synchronized
94 def get_accessible_objects(self, principals, bound_permissions=None, with_children=True):
95 principals = set(principals)
96 candidates = []
97 if bound_permissions is None:
98 for key, value in self._store.items():
99 _, object_id, permission = key.split(":", 2)
100 candidates.append((object_id, permission, value))
101 else:
102 for pattern, perm in bound_permissions:
103 id_match = ".*" if with_children else "[^/]+"
104 regexp = re.compile(f"^{pattern.replace('*', id_match)}$")
105 for key, value in self._store.items():
106 if key.endswith(perm):
107 object_id = key.split(":")[1]
108 if regexp.match(object_id):
109 candidates.append((object_id, perm, value))
110
111 perms_by_object_id = {}
112 for (object_id, perm, value) in candidates:
113 if len(principals & value) > 0:
114 perms_by_object_id.setdefault(object_id, set()).add(perm)
115 return perms_by_object_id
116
117 @synchronized
118 def get_authorized_principals(self, bound_permissions):
119 principals = set()
120 for obj_id, perm in bound_permissions:
121 principals |= self.get_object_permission_principals(obj_id, perm)
122 return principals
123
124 @synchronized
125 def get_objects_permissions(self, objects_ids, permissions=None):
126 result = []
127 for object_id in objects_ids:
128 if permissions is None:
129 aces = [k for k in self._store.keys() if k.startswith(f"permission:{object_id}:")]
130 else:
131 aces = [f"permission:{object_id}:{permission}" for permission in permissions]
132 perms = {}
133 for ace in aces:
134 # Should work with 'permission:/url/id:object:create'.
135 permission = ace.split(":", 2)[2]
136 perms[permission] = set(self._store[ace])
137 result.append(perms)
138 return result
139
140 @synchronized
141 def replace_object_permissions(self, object_id, permissions):
142 for permission, principals in permissions.items():
143 permission_key = f"permission:{object_id}:{permission}"
144 if permission_key in self._store and len(principals) == 0:
145 del self._store[permission_key]
146 elif principals:
147 self._store[permission_key] = set(principals)
148 return permissions
149
150 @synchronized
151 def delete_object_permissions(self, *object_id_list):
152 to_delete = []
153 for key in self._store.keys():
154 object_id = key.split(":")[1]
155 for pattern in object_id_list:
156 regexp = re.compile(f"^{pattern.replace('*', '.*')}$")
157 if regexp.match(object_id):
158 to_delete.append(key)
159 for k in to_delete:
160 del self._store[k]
161
162
163 def load_from_config(config):
164 return Permission()
165
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kinto/core/permission/memory.py b/kinto/core/permission/memory.py
--- a/kinto/core/permission/memory.py
+++ b/kinto/core/permission/memory.py
@@ -48,7 +48,9 @@
@synchronized
def remove_principal(self, principal):
- for user_principals in self._store.values():
+ for key, user_principals in self._store.items():
+ if not key.startswith("user:"):
+ continue
try:
user_principals.remove(principal)
except KeyError:
|
{"golden_diff": "diff --git a/kinto/core/permission/memory.py b/kinto/core/permission/memory.py\n--- a/kinto/core/permission/memory.py\n+++ b/kinto/core/permission/memory.py\n@@ -48,7 +48,9 @@\n \n @synchronized\n def remove_principal(self, principal):\n- for user_principals in self._store.values():\n+ for key, user_principals in self._store.items():\n+ if not key.startswith(\"user:\"):\n+ continue\n try:\n user_principals.remove(principal)\n except KeyError:\n", "issue": "Memory permission backend implementation of remove_principal() is wrong\nAccording to the `PermissionBase` docstring, `remove_principal` is supposed to `Remove a principal from every user`. In other words, `remove_principal(principal)` is equivalent to `remove_user_principal(user_id, principal) for user_id in all_possible_user_ids`. However, the current implementation stores all permissions of all kinds in one hash table, and removes the principal from permissions of non-user things as well.\n", "before_files": [{"content": "import re\n\nfrom kinto.core.decorators import synchronized\nfrom kinto.core.permission import PermissionBase\n\n\nclass Permission(PermissionBase):\n \"\"\"Permission backend implementation in local process memory.\n\n Enable in configuration::\n\n kinto.permission_backend = kinto.core.permission.memory\n\n :noindex:\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.flush()\n\n def initialize_schema(self, dry_run=False):\n # Nothing to do.\n pass\n\n def flush(self):\n self._store = {}\n\n @synchronized\n def add_user_principal(self, user_id, principal):\n user_key = f\"user:{user_id}\"\n user_principals = self._store.get(user_key, set())\n user_principals.add(principal)\n self._store[user_key] = user_principals\n\n @synchronized\n def remove_user_principal(self, user_id, principal):\n user_key = f\"user:{user_id}\"\n user_principals = self._store.get(user_key, set())\n try:\n user_principals.remove(principal)\n except KeyError:\n pass\n if len(user_principals) == 0:\n if user_key in self._store:\n del self._store[user_key]\n else:\n self._store[user_key] = user_principals\n\n @synchronized\n def remove_principal(self, principal):\n for user_principals in self._store.values():\n try:\n user_principals.remove(principal)\n except KeyError:\n pass\n\n @synchronized\n def get_user_principals(self, user_id):\n # Fetch the groups the user is in.\n user_key = f\"user:{user_id}\"\n members = self._store.get(user_key, set())\n # Fetch the groups system.Authenticated is in.\n group_authenticated = self._store.get(\"user:system.Authenticated\", set())\n return members | group_authenticated\n\n @synchronized\n def add_principal_to_ace(self, object_id, permission, principal):\n permission_key = f\"permission:{object_id}:{permission}\"\n object_permission_principals = self._store.get(permission_key, set())\n object_permission_principals.add(principal)\n self._store[permission_key] = object_permission_principals\n\n @synchronized\n def remove_principal_from_ace(self, object_id, permission, principal):\n permission_key = f\"permission:{object_id}:{permission}\"\n object_permission_principals = self._store.get(permission_key, set())\n try:\n object_permission_principals.remove(principal)\n except KeyError:\n pass\n if len(object_permission_principals) == 0:\n if permission_key in self._store:\n del self._store[permission_key]\n else:\n self._store[permission_key] = object_permission_principals\n\n @synchronized\n def get_object_permission_principals(self, object_id, 
permission):\n permission_key = f\"permission:{object_id}:{permission}\"\n members = self._store.get(permission_key, set())\n return members\n\n @synchronized\n def get_accessible_objects(self, principals, bound_permissions=None, with_children=True):\n principals = set(principals)\n candidates = []\n if bound_permissions is None:\n for key, value in self._store.items():\n _, object_id, permission = key.split(\":\", 2)\n candidates.append((object_id, permission, value))\n else:\n for pattern, perm in bound_permissions:\n id_match = \".*\" if with_children else \"[^/]+\"\n regexp = re.compile(f\"^{pattern.replace('*', id_match)}$\")\n for key, value in self._store.items():\n if key.endswith(perm):\n object_id = key.split(\":\")[1]\n if regexp.match(object_id):\n candidates.append((object_id, perm, value))\n\n perms_by_object_id = {}\n for (object_id, perm, value) in candidates:\n if len(principals & value) > 0:\n perms_by_object_id.setdefault(object_id, set()).add(perm)\n return perms_by_object_id\n\n @synchronized\n def get_authorized_principals(self, bound_permissions):\n principals = set()\n for obj_id, perm in bound_permissions:\n principals |= self.get_object_permission_principals(obj_id, perm)\n return principals\n\n @synchronized\n def get_objects_permissions(self, objects_ids, permissions=None):\n result = []\n for object_id in objects_ids:\n if permissions is None:\n aces = [k for k in self._store.keys() if k.startswith(f\"permission:{object_id}:\")]\n else:\n aces = [f\"permission:{object_id}:{permission}\" for permission in permissions]\n perms = {}\n for ace in aces:\n # Should work with 'permission:/url/id:object:create'.\n permission = ace.split(\":\", 2)[2]\n perms[permission] = set(self._store[ace])\n result.append(perms)\n return result\n\n @synchronized\n def replace_object_permissions(self, object_id, permissions):\n for permission, principals in permissions.items():\n permission_key = f\"permission:{object_id}:{permission}\"\n if permission_key in self._store and len(principals) == 0:\n del self._store[permission_key]\n elif principals:\n self._store[permission_key] = set(principals)\n return permissions\n\n @synchronized\n def delete_object_permissions(self, *object_id_list):\n to_delete = []\n for key in self._store.keys():\n object_id = key.split(\":\")[1]\n for pattern in object_id_list:\n regexp = re.compile(f\"^{pattern.replace('*', '.*')}$\")\n if regexp.match(object_id):\n to_delete.append(key)\n for k in to_delete:\n del self._store[k]\n\n\ndef load_from_config(config):\n return Permission()\n", "path": "kinto/core/permission/memory.py"}], "after_files": [{"content": "import re\n\nfrom kinto.core.decorators import synchronized\nfrom kinto.core.permission import PermissionBase\n\n\nclass Permission(PermissionBase):\n \"\"\"Permission backend implementation in local process memory.\n\n Enable in configuration::\n\n kinto.permission_backend = kinto.core.permission.memory\n\n :noindex:\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.flush()\n\n def initialize_schema(self, dry_run=False):\n # Nothing to do.\n pass\n\n def flush(self):\n self._store = {}\n\n @synchronized\n def add_user_principal(self, user_id, principal):\n user_key = f\"user:{user_id}\"\n user_principals = self._store.get(user_key, set())\n user_principals.add(principal)\n self._store[user_key] = user_principals\n\n @synchronized\n def remove_user_principal(self, user_id, principal):\n user_key = f\"user:{user_id}\"\n user_principals = 
self._store.get(user_key, set())\n try:\n user_principals.remove(principal)\n except KeyError:\n pass\n if len(user_principals) == 0:\n if user_key in self._store:\n del self._store[user_key]\n else:\n self._store[user_key] = user_principals\n\n @synchronized\n def remove_principal(self, principal):\n for key, user_principals in self._store.items():\n if not key.startswith(\"user:\"):\n continue\n try:\n user_principals.remove(principal)\n except KeyError:\n pass\n\n @synchronized\n def get_user_principals(self, user_id):\n # Fetch the groups the user is in.\n user_key = f\"user:{user_id}\"\n members = self._store.get(user_key, set())\n # Fetch the groups system.Authenticated is in.\n group_authenticated = self._store.get(\"user:system.Authenticated\", set())\n return members | group_authenticated\n\n @synchronized\n def add_principal_to_ace(self, object_id, permission, principal):\n permission_key = f\"permission:{object_id}:{permission}\"\n object_permission_principals = self._store.get(permission_key, set())\n object_permission_principals.add(principal)\n self._store[permission_key] = object_permission_principals\n\n @synchronized\n def remove_principal_from_ace(self, object_id, permission, principal):\n permission_key = f\"permission:{object_id}:{permission}\"\n object_permission_principals = self._store.get(permission_key, set())\n try:\n object_permission_principals.remove(principal)\n except KeyError:\n pass\n if len(object_permission_principals) == 0:\n if permission_key in self._store:\n del self._store[permission_key]\n else:\n self._store[permission_key] = object_permission_principals\n\n @synchronized\n def get_object_permission_principals(self, object_id, permission):\n permission_key = f\"permission:{object_id}:{permission}\"\n members = self._store.get(permission_key, set())\n return members\n\n @synchronized\n def get_accessible_objects(self, principals, bound_permissions=None, with_children=True):\n principals = set(principals)\n candidates = []\n if bound_permissions is None:\n for key, value in self._store.items():\n _, object_id, permission = key.split(\":\", 2)\n candidates.append((object_id, permission, value))\n else:\n for pattern, perm in bound_permissions:\n id_match = \".*\" if with_children else \"[^/]+\"\n regexp = re.compile(f\"^{pattern.replace('*', id_match)}$\")\n for key, value in self._store.items():\n if key.endswith(perm):\n object_id = key.split(\":\")[1]\n if regexp.match(object_id):\n candidates.append((object_id, perm, value))\n\n perms_by_object_id = {}\n for (object_id, perm, value) in candidates:\n if len(principals & value) > 0:\n perms_by_object_id.setdefault(object_id, set()).add(perm)\n return perms_by_object_id\n\n @synchronized\n def get_authorized_principals(self, bound_permissions):\n principals = set()\n for obj_id, perm in bound_permissions:\n principals |= self.get_object_permission_principals(obj_id, perm)\n return principals\n\n @synchronized\n def get_objects_permissions(self, objects_ids, permissions=None):\n result = []\n for object_id in objects_ids:\n if permissions is None:\n aces = [k for k in self._store.keys() if k.startswith(f\"permission:{object_id}:\")]\n else:\n aces = [f\"permission:{object_id}:{permission}\" for permission in permissions]\n perms = {}\n for ace in aces:\n # Should work with 'permission:/url/id:object:create'.\n permission = ace.split(\":\", 2)[2]\n perms[permission] = set(self._store[ace])\n result.append(perms)\n return result\n\n @synchronized\n def replace_object_permissions(self, object_id, 
permissions):\n for permission, principals in permissions.items():\n permission_key = f\"permission:{object_id}:{permission}\"\n if permission_key in self._store and len(principals) == 0:\n del self._store[permission_key]\n elif principals:\n self._store[permission_key] = set(principals)\n return permissions\n\n @synchronized\n def delete_object_permissions(self, *object_id_list):\n to_delete = []\n for key in self._store.keys():\n object_id = key.split(\":\")[1]\n for pattern in object_id_list:\n regexp = re.compile(f\"^{pattern.replace('*', '.*')}$\")\n if regexp.match(object_id):\n to_delete.append(key)\n for k in to_delete:\n del self._store[k]\n\n\ndef load_from_config(config):\n return Permission()\n", "path": "kinto/core/permission/memory.py"}]}
| 2,032 | 124 |
gh_patches_debug_8432
|
rasdani/github-patches
|
git_diff
|
psychopy__psychopy-6423
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug]: Use of pythonw on macos conda/mamba environments breaks app
### PsychoPy Version
2024.1.3
### What OS are your PsychoPy running on?
macOS Silicon
### Bug Description
When attempting to run the psychopy GUI app following an install in a fresh conda environment:
```
conda create -n pyschopy python=3.10
pip install pyschopy
psychopy
```
Psychopy fails to start.
### Expected Behaviour
Psychopy should not need to call pythonw for versions of python >= 3.9, and the if statement in `psychopyApp.py` should be modified to reflect that probably (or python < 3.9 support dropped entirely for newer versions of the package).
### Steps to Reproduce
After installing as above the following error is returned:
```
Traceback (most recent call last):
File "/Users/MYUSERNAME/mambaforge/envs/intermod/bin/psychopy", line 8, in <module>
sys.exit(main())
File "/Users/MYUSERNAME/mambaforge/envs/intermod/lib/python3.10/site-packages/psychopy/app/psychopyApp.py", line 90, in main
stdout, stderr = core.shellCall(cmd,
File "/Users/MYUSERNAME/mambaforge/envs/intermod/lib/python3.10/site-packages/psychopy/core.py", line 153, in shellCall
proc = subprocess.Popen(cmdObjects, stdin=subprocess.PIPE,
File "/Users/MYUSERNAME/mambaforge/envs/intermod/lib/python3.10/subprocess.py", line 971, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/Users/MYUSERNAME/mambaforge/envs/intermod/lib/python3.10/subprocess.py", line 1863, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: '/Users/MYUSERNAME/mambaforge/envs/intermod/bin/python3.10w'
```
This error happens because, according to `psychopyApp.py`, we need to call `pythonw` when running GUI scripts on MacOS in an Anaconda-based environment. However this is an outdated method as of Python 3.9, and [from that version onwards you can now directly call the python binary regardless](https://docs.python.org/3/using/mac.html#running-scripts-with-a-gui).
In a fresh Python 3.10 installation via conda/mamba the pythonw binary does not exist in the binaries folder for the environment. I have for the moment fixed this by simply symlinking the base python binary to `python3.10w` which psychopy expects, which then allows the app to start.
### Additional context
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `psychopy/app/psychopyApp.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 # Part of the PsychoPy library
5 # Copyright (C) 2002-2018 Jonathan Peirce (C) 2019-2024 Open Science Tools Ltd.
6 # Distributed under the terms of the GNU General Public License (GPL).
7
8 import sys
9
10 # fix macOS locale-bug on startup: sets locale to LC_ALL (must be defined!)
11 import psychopy.locale_setup # noqa
12
13
14 # NB the PsychoPyApp classes moved to _psychopyApp.py as of version 1.78.00
15 # to allow for better upgrading possibilities from the mac app bundle. this
16 # file now used solely as a launcher for the app, not as the app itself.
17
18
19 def start_app():
20 from psychopy.app import startApp, quitApp
21 from psychopy.preferences import prefs
22
23 showSplash = prefs.app['showSplash']
24 if '--no-splash' in sys.argv:
25 showSplash = False
26 del sys.argv[sys.argv.index('--no-splash')]
27 _ = startApp(showSplash=showSplash) # main loop
28 quitApp()
29
30
31 def main():
32 if '-x' in sys.argv:
33 # run a .py script from the command line using StandAlone python
34 targetScript = sys.argv[sys.argv.index('-x') + 1]
35 from psychopy import core
36 import os
37 core.shellCall([sys.executable, os.path.abspath(targetScript)])
38 sys.exit()
39 if '-v' in sys.argv or '--version' in sys.argv:
40 from psychopy import __version__
41 msg = ('PsychoPy3, version %s (c)Jonathan Peirce 2018, GNU GPL license'
42 % __version__)
43 print(msg)
44 sys.exit()
45 if '-h' in sys.argv or '--help' in sys.argv:
46 print("""Starts the PsychoPy3 application.
47
48 Usage: python PsychoPy.py [options] [file]
49
50 Without options or files provided this starts PsychoPy using prefs to
51 decide on the view(s) to open. If optional [file] is provided action
52 depends on the type of the [file]:
53
54 Python script 'file.py' -- opens coder
55
56 Experiment design 'file.psyexp' -- opens builder
57
58 Options:
59 -c, --coder, coder opens coder view only
60 -b, --builder, builder opens builder view only
61 -x script.py execute script.py using StandAlone python
62
63 -v, --version prints version and exits
64 -h, --help prints this help and exit
65
66 --firstrun launches configuration wizard
67 --no-splash suppresses splash screen
68
69 """)
70 sys.exit()
71
72 if (sys.platform == 'darwin' and
73 ('| packaged by conda-forge |' in sys.version or
74 '|Anaconda' in sys.version)):
75
76 # On macOS with Anaconda, GUI applications need to be run using
77 # `pythonw`. Since we have no way to determine whether this is currently
78 # the case, we run this script again -- ensuring we're definitely using
79 # pythonw.
80 import os
81 env = os.environ
82 PYTHONW = env.get('PYTHONW', 'False')
83
84 if PYTHONW != 'True':
85 from psychopy import core
86 cmd = [sys.executable + 'w', __file__]
87 if '--no-splash' in sys.argv:
88 cmd.append('--no-splash')
89
90 stdout, stderr = core.shellCall(cmd,
91 env=dict(env, PYTHONW='True'),
92 stderr=True)
93 print(stdout, file=sys.stdout)
94 print(stderr, file=sys.stderr)
95 sys.exit()
96 else:
97 start_app()
98 else:
99 start_app()
100
101
102 if __name__ == '__main__':
103 main()
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/psychopy/app/psychopyApp.py b/psychopy/app/psychopyApp.py
--- a/psychopy/app/psychopyApp.py
+++ b/psychopy/app/psychopyApp.py
@@ -69,9 +69,8 @@
""")
sys.exit()
- if (sys.platform == 'darwin' and
- ('| packaged by conda-forge |' in sys.version or
- '|Anaconda' in sys.version)):
+ if (('| packaged by conda-forge |' in sys.version or '|Anaconda' in sys.version)
+ and sys.platform == 'darwin' and sys.version_info >= (3,9)):
# On macOS with Anaconda, GUI applications need to be run using
# `pythonw`. Since we have no way to determine whether this is currently
|
{"golden_diff": "diff --git a/psychopy/app/psychopyApp.py b/psychopy/app/psychopyApp.py\n--- a/psychopy/app/psychopyApp.py\n+++ b/psychopy/app/psychopyApp.py\n@@ -69,9 +69,8 @@\n \"\"\")\n sys.exit()\n \n- if (sys.platform == 'darwin' and\n- ('| packaged by conda-forge |' in sys.version or\n- '|Anaconda' in sys.version)):\n+ if (('| packaged by conda-forge |' in sys.version or '|Anaconda' in sys.version)\n+ and sys.platform == 'darwin' and sys.version_info >= (3,9)):\n \n # On macOS with Anaconda, GUI applications need to be run using\n # `pythonw`. Since we have no way to determine whether this is currently\n", "issue": "[Bug]: Use of pythonw on macos conda/mamba environments breaks app\n### PsychoPy Version\r\n\r\n2024.1.3\r\n\r\n### What OS are your PsychoPy running on?\r\n\r\nmacOS Silicon\r\n\r\n### Bug Description\r\n\r\nWhen attempting to run the psychopy GUI app following an install in a fresh conda environment:\r\n\r\n```\r\nconda create -n pyschopy python=3.10\r\npip install pyschopy\r\npsychopy\r\n```\r\nPsychopy fails to start.\r\n\r\n\r\n### Expected Behaviour\r\n\r\nPsychopy should not need to call pythonw for versions of python >= 3.9, and the if statement in `psychopyApp.py` should be modified to reflect that probably (or python < 3.9 support dropped entirely for newer versions of the package).\r\n\r\n### Steps to Reproduce\r\n\r\nAfter installing as above the following error is returned:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/MYUSERNAME/mambaforge/envs/intermod/bin/psychopy\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/Users/MYUSERNAME/mambaforge/envs/intermod/lib/python3.10/site-packages/psychopy/app/psychopyApp.py\", line 90, in main\r\n stdout, stderr = core.shellCall(cmd,\r\n File \"/Users/MYUSERNAME/mambaforge/envs/intermod/lib/python3.10/site-packages/psychopy/core.py\", line 153, in shellCall\r\n proc = subprocess.Popen(cmdObjects, stdin=subprocess.PIPE,\r\n File \"/Users/MYUSERNAME/mambaforge/envs/intermod/lib/python3.10/subprocess.py\", line 971, in __init__\r\n self._execute_child(args, executable, preexec_fn, close_fds,\r\n File \"/Users/MYUSERNAME/mambaforge/envs/intermod/lib/python3.10/subprocess.py\", line 1863, in _execute_child\r\n raise child_exception_type(errno_num, err_msg, err_filename)\r\nFileNotFoundError: [Errno 2] No such file or directory: '/Users/MYUSERNAME/mambaforge/envs/intermod/bin/python3.10w'\r\n```\r\n\r\nThis error happens because, according to `psychopyApp.py`, we need to call `pythonw` when running GUI scripts on MacOS in an Anaconda-based environment. However this is an outdated method as of Python 3.9, and [from that version onwards you can now directly call the python binary regardless](https://docs.python.org/3/using/mac.html#running-scripts-with-a-gui). \r\n\r\nIn a fresh Python 3.10 installation via conda/mamba the pythonw binary does not exist in the binaries folder for the environment. 
I have for the moment fixed this by simply symlinking the base python binary to `python3.10w` which psychopy expects, which then allows the app to start.\r\n\r\n\r\n### Additional context\r\n\r\n_No response_\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n# Part of the PsychoPy library\n# Copyright (C) 2002-2018 Jonathan Peirce (C) 2019-2024 Open Science Tools Ltd.\n# Distributed under the terms of the GNU General Public License (GPL).\n\nimport sys\n\n# fix macOS locale-bug on startup: sets locale to LC_ALL (must be defined!)\nimport psychopy.locale_setup # noqa\n\n\n# NB the PsychoPyApp classes moved to _psychopyApp.py as of version 1.78.00\n# to allow for better upgrading possibilities from the mac app bundle. this\n# file now used solely as a launcher for the app, not as the app itself.\n\n\ndef start_app():\n from psychopy.app import startApp, quitApp\n from psychopy.preferences import prefs\n\n showSplash = prefs.app['showSplash']\n if '--no-splash' in sys.argv:\n showSplash = False\n del sys.argv[sys.argv.index('--no-splash')]\n _ = startApp(showSplash=showSplash) # main loop\n quitApp()\n\n\ndef main():\n if '-x' in sys.argv:\n # run a .py script from the command line using StandAlone python\n targetScript = sys.argv[sys.argv.index('-x') + 1]\n from psychopy import core\n import os\n core.shellCall([sys.executable, os.path.abspath(targetScript)])\n sys.exit()\n if '-v' in sys.argv or '--version' in sys.argv:\n from psychopy import __version__\n msg = ('PsychoPy3, version %s (c)Jonathan Peirce 2018, GNU GPL license'\n % __version__)\n print(msg)\n sys.exit()\n if '-h' in sys.argv or '--help' in sys.argv:\n print(\"\"\"Starts the PsychoPy3 application.\n\nUsage: python PsychoPy.py [options] [file]\n\nWithout options or files provided this starts PsychoPy using prefs to\ndecide on the view(s) to open. If optional [file] is provided action\ndepends on the type of the [file]:\n\n Python script 'file.py' -- opens coder\n\n Experiment design 'file.psyexp' -- opens builder\n\nOptions:\n -c, --coder, coder opens coder view only\n -b, --builder, builder opens builder view only\n -x script.py execute script.py using StandAlone python\n\n -v, --version prints version and exits\n -h, --help prints this help and exit\n\n --firstrun launches configuration wizard\n --no-splash suppresses splash screen\n\n\"\"\")\n sys.exit()\n\n if (sys.platform == 'darwin' and\n ('| packaged by conda-forge |' in sys.version or\n '|Anaconda' in sys.version)):\n\n # On macOS with Anaconda, GUI applications need to be run using\n # `pythonw`. 
Since we have no way to determine whether this is currently\n # the case, we run this script again -- ensuring we're definitely using\n # pythonw.\n import os\n env = os.environ\n PYTHONW = env.get('PYTHONW', 'False')\n\n if PYTHONW != 'True':\n from psychopy import core\n cmd = [sys.executable + 'w', __file__]\n if '--no-splash' in sys.argv:\n cmd.append('--no-splash')\n\n stdout, stderr = core.shellCall(cmd,\n env=dict(env, PYTHONW='True'),\n stderr=True)\n print(stdout, file=sys.stdout)\n print(stderr, file=sys.stderr)\n sys.exit()\n else:\n start_app()\n else:\n start_app()\n\n\nif __name__ == '__main__':\n main()\n", "path": "psychopy/app/psychopyApp.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n# Part of the PsychoPy library\n# Copyright (C) 2002-2018 Jonathan Peirce (C) 2019-2024 Open Science Tools Ltd.\n# Distributed under the terms of the GNU General Public License (GPL).\n\nimport sys\n\n# fix macOS locale-bug on startup: sets locale to LC_ALL (must be defined!)\nimport psychopy.locale_setup # noqa\n\n\n# NB the PsychoPyApp classes moved to _psychopyApp.py as of version 1.78.00\n# to allow for better upgrading possibilities from the mac app bundle. this\n# file now used solely as a launcher for the app, not as the app itself.\n\n\ndef start_app():\n from psychopy.app import startApp, quitApp\n from psychopy.preferences import prefs\n\n showSplash = prefs.app['showSplash']\n if '--no-splash' in sys.argv:\n showSplash = False\n del sys.argv[sys.argv.index('--no-splash')]\n _ = startApp(showSplash=showSplash) # main loop\n quitApp()\n\n\ndef main():\n if '-x' in sys.argv:\n # run a .py script from the command line using StandAlone python\n targetScript = sys.argv[sys.argv.index('-x') + 1]\n from psychopy import core\n import os\n core.shellCall([sys.executable, os.path.abspath(targetScript)])\n sys.exit()\n if '-v' in sys.argv or '--version' in sys.argv:\n from psychopy import __version__\n msg = ('PsychoPy3, version %s (c)Jonathan Peirce 2018, GNU GPL license'\n % __version__)\n print(msg)\n sys.exit()\n if '-h' in sys.argv or '--help' in sys.argv:\n print(\"\"\"Starts the PsychoPy3 application.\n\nUsage: python PsychoPy.py [options] [file]\n\nWithout options or files provided this starts PsychoPy using prefs to\ndecide on the view(s) to open. If optional [file] is provided action\ndepends on the type of the [file]:\n\n Python script 'file.py' -- opens coder\n\n Experiment design 'file.psyexp' -- opens builder\n\nOptions:\n -c, --coder, coder opens coder view only\n -b, --builder, builder opens builder view only\n -x script.py execute script.py using StandAlone python\n\n -v, --version prints version and exits\n -h, --help prints this help and exit\n\n --firstrun launches configuration wizard\n --no-splash suppresses splash screen\n\n\"\"\")\n sys.exit()\n\n if (('| packaged by conda-forge |' in sys.version or '|Anaconda' in sys.version)\n and sys.platform == 'darwin' and sys.version_info >= (3,9)):\n\n # On macOS with Anaconda, GUI applications need to be run using\n # `pythonw`. 
Since we have no way to determine whether this is currently\n # the case, we run this script again -- ensuring we're definitely using\n # pythonw.\n import os\n env = os.environ\n PYTHONW = env.get('PYTHONW', 'False')\n\n if PYTHONW != 'True':\n from psychopy import core\n cmd = [sys.executable + 'w', __file__]\n if '--no-splash' in sys.argv:\n cmd.append('--no-splash')\n\n stdout, stderr = core.shellCall(cmd,\n env=dict(env, PYTHONW='True'),\n stderr=True)\n print(stdout, file=sys.stdout)\n print(stderr, file=sys.stderr)\n sys.exit()\n else:\n start_app()\n else:\n start_app()\n\n\nif __name__ == '__main__':\n main()\n", "path": "psychopy/app/psychopyApp.py"}]}
| 1,940 | 185 |
gh_patches_debug_12911
|
rasdani/github-patches
|
git_diff
|
svthalia__concrexit-1090
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Newsletters show unpublished events
### Describe the bug
Newsletters show unpublished events
### How to reproduce
Steps to reproduce the behaviour:
1. Check one of the newsletters of the last weeks
### Expected behaviour
Only published events should show.
### Additional context
This is probably because of the low number of events during these days.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/newsletters/services.py`
Content:
```
1 import os
2
3 from django.conf import settings
4 from django.template.loader import get_template
5 from django.utils import translation, timezone
6
7 from events.models import Event
8 from members.models import Member
9 from newsletters import emails
10 from partners.models import Partner
11 from pushnotifications.models import Message, Category
12
13
14 def write_to_file(pk, lang, html_message):
15 """
16 Write newsletter to a file
17 """
18 cache_dir = os.path.join(settings.MEDIA_ROOT, "newsletters")
19 if not os.path.isdir(cache_dir):
20 os.makedirs(cache_dir)
21
22 with open(os.path.join(cache_dir, f"{pk}_{lang}.html"), "w+") as cache_file:
23 cache_file.write(html_message)
24
25
26 def save_to_disk(newsletter, request):
27 """
28 Writes the newsletter as HTML to file (in all languages)
29 """
30 main_partner = Partner.objects.filter(is_main_partner=True).first()
31 local_partner = Partner.objects.filter(is_local_partner=True).first()
32
33 html_template = get_template("newsletters/email.html")
34
35 for language in settings.LANGUAGES:
36 translation.activate(language[0])
37
38 context = {
39 "newsletter": newsletter,
40 "agenda_events": (
41 newsletter.newslettercontent_set.filter(newsletteritem=None).order_by(
42 "newsletterevent__start_datetime"
43 )
44 ),
45 "main_partner": main_partner,
46 "local_partner": local_partner,
47 "lang_code": language[0],
48 "request": request,
49 }
50
51 html_message = html_template.render(context)
52
53 write_to_file(newsletter.pk, language[0], html_message)
54
55
56 def get_agenda(start_date):
57 end_date = start_date + timezone.timedelta(weeks=2)
58 base_events = Event.objects.filter(
59 start__gte=start_date, end__lt=end_date, published=True
60 ).order_by("start")
61 if base_events.count() < 10:
62 more_events = Event.objects.filter(end__gte=end_date).order_by("start")
63 return [*base_events, *more_events][:10]
64 return base_events
65
66
67 def send_newsletter(newsletter):
68 emails.send_newsletter(newsletter)
69 newsletter.sent = True
70 newsletter.save()
71 message = Message.objects.create(
72 title_nl=newsletter.title_nl,
73 title_en=newsletter.title_en,
74 body_nl="Tik om te bekijken",
75 body_en="Tap to view",
76 url=settings.BASE_URL + newsletter.get_absolute_url(),
77 category=Category.objects.get(key=Category.NEWSLETTER),
78 )
79 message.users.set(Member.current_members.all())
80 message.send()
81
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/website/newsletters/services.py b/website/newsletters/services.py
--- a/website/newsletters/services.py
+++ b/website/newsletters/services.py
@@ -55,11 +55,12 @@
def get_agenda(start_date):
end_date = start_date + timezone.timedelta(weeks=2)
- base_events = Event.objects.filter(
- start__gte=start_date, end__lt=end_date, published=True
+ published_events = Event.objects.filter(published=True)
+ base_events = published_events.filter(
+ start__gte=start_date, end__lt=end_date
).order_by("start")
if base_events.count() < 10:
- more_events = Event.objects.filter(end__gte=end_date).order_by("start")
+ more_events = published_events.filter(end__gte=end_date).order_by("start")
return [*base_events, *more_events][:10]
return base_events
|
{"golden_diff": "diff --git a/website/newsletters/services.py b/website/newsletters/services.py\n--- a/website/newsletters/services.py\n+++ b/website/newsletters/services.py\n@@ -55,11 +55,12 @@\n \n def get_agenda(start_date):\n end_date = start_date + timezone.timedelta(weeks=2)\n- base_events = Event.objects.filter(\n- start__gte=start_date, end__lt=end_date, published=True\n+ published_events = Event.objects.filter(published=True)\n+ base_events = published_events.filter(\n+ start__gte=start_date, end__lt=end_date\n ).order_by(\"start\")\n if base_events.count() < 10:\n- more_events = Event.objects.filter(end__gte=end_date).order_by(\"start\")\n+ more_events = published_events.filter(end__gte=end_date).order_by(\"start\")\n return [*base_events, *more_events][:10]\n return base_events\n", "issue": "Newsletters show unpublished events\n### Describe the bug\r\nNewsletters show unpublished events\r\n\r\n### How to reproduce\r\nSteps to reproduce the behaviour:\r\n1. Check one of the newsletters of the last weeks\r\n\r\n### Expected behaviour\r\nOnly published events should show.\r\n\r\n### Additional context\r\nThis is probably because of the low number of events during these days.\r\n\n", "before_files": [{"content": "import os\n\nfrom django.conf import settings\nfrom django.template.loader import get_template\nfrom django.utils import translation, timezone\n\nfrom events.models import Event\nfrom members.models import Member\nfrom newsletters import emails\nfrom partners.models import Partner\nfrom pushnotifications.models import Message, Category\n\n\ndef write_to_file(pk, lang, html_message):\n \"\"\"\n Write newsletter to a file\n \"\"\"\n cache_dir = os.path.join(settings.MEDIA_ROOT, \"newsletters\")\n if not os.path.isdir(cache_dir):\n os.makedirs(cache_dir)\n\n with open(os.path.join(cache_dir, f\"{pk}_{lang}.html\"), \"w+\") as cache_file:\n cache_file.write(html_message)\n\n\ndef save_to_disk(newsletter, request):\n \"\"\"\n Writes the newsletter as HTML to file (in all languages)\n \"\"\"\n main_partner = Partner.objects.filter(is_main_partner=True).first()\n local_partner = Partner.objects.filter(is_local_partner=True).first()\n\n html_template = get_template(\"newsletters/email.html\")\n\n for language in settings.LANGUAGES:\n translation.activate(language[0])\n\n context = {\n \"newsletter\": newsletter,\n \"agenda_events\": (\n newsletter.newslettercontent_set.filter(newsletteritem=None).order_by(\n \"newsletterevent__start_datetime\"\n )\n ),\n \"main_partner\": main_partner,\n \"local_partner\": local_partner,\n \"lang_code\": language[0],\n \"request\": request,\n }\n\n html_message = html_template.render(context)\n\n write_to_file(newsletter.pk, language[0], html_message)\n\n\ndef get_agenda(start_date):\n end_date = start_date + timezone.timedelta(weeks=2)\n base_events = Event.objects.filter(\n start__gte=start_date, end__lt=end_date, published=True\n ).order_by(\"start\")\n if base_events.count() < 10:\n more_events = Event.objects.filter(end__gte=end_date).order_by(\"start\")\n return [*base_events, *more_events][:10]\n return base_events\n\n\ndef send_newsletter(newsletter):\n emails.send_newsletter(newsletter)\n newsletter.sent = True\n newsletter.save()\n message = Message.objects.create(\n title_nl=newsletter.title_nl,\n title_en=newsletter.title_en,\n body_nl=\"Tik om te bekijken\",\n body_en=\"Tap to view\",\n url=settings.BASE_URL + newsletter.get_absolute_url(),\n category=Category.objects.get(key=Category.NEWSLETTER),\n )\n 
message.users.set(Member.current_members.all())\n message.send()\n", "path": "website/newsletters/services.py"}], "after_files": [{"content": "import os\n\nfrom django.conf import settings\nfrom django.template.loader import get_template\nfrom django.utils import translation, timezone\n\nfrom events.models import Event\nfrom members.models import Member\nfrom newsletters import emails\nfrom partners.models import Partner\nfrom pushnotifications.models import Message, Category\n\n\ndef write_to_file(pk, lang, html_message):\n \"\"\"\n Write newsletter to a file\n \"\"\"\n cache_dir = os.path.join(settings.MEDIA_ROOT, \"newsletters\")\n if not os.path.isdir(cache_dir):\n os.makedirs(cache_dir)\n\n with open(os.path.join(cache_dir, f\"{pk}_{lang}.html\"), \"w+\") as cache_file:\n cache_file.write(html_message)\n\n\ndef save_to_disk(newsletter, request):\n \"\"\"\n Writes the newsletter as HTML to file (in all languages)\n \"\"\"\n main_partner = Partner.objects.filter(is_main_partner=True).first()\n local_partner = Partner.objects.filter(is_local_partner=True).first()\n\n html_template = get_template(\"newsletters/email.html\")\n\n for language in settings.LANGUAGES:\n translation.activate(language[0])\n\n context = {\n \"newsletter\": newsletter,\n \"agenda_events\": (\n newsletter.newslettercontent_set.filter(newsletteritem=None).order_by(\n \"newsletterevent__start_datetime\"\n )\n ),\n \"main_partner\": main_partner,\n \"local_partner\": local_partner,\n \"lang_code\": language[0],\n \"request\": request,\n }\n\n html_message = html_template.render(context)\n\n write_to_file(newsletter.pk, language[0], html_message)\n\n\ndef get_agenda(start_date):\n end_date = start_date + timezone.timedelta(weeks=2)\n published_events = Event.objects.filter(published=True)\n base_events = published_events.filter(\n start__gte=start_date, end__lt=end_date\n ).order_by(\"start\")\n if base_events.count() < 10:\n more_events = published_events.filter(end__gte=end_date).order_by(\"start\")\n return [*base_events, *more_events][:10]\n return base_events\n\n\ndef send_newsletter(newsletter):\n emails.send_newsletter(newsletter)\n newsletter.sent = True\n newsletter.save()\n message = Message.objects.create(\n title_nl=newsletter.title_nl,\n title_en=newsletter.title_en,\n body_nl=\"Tik om te bekijken\",\n body_en=\"Tap to view\",\n url=settings.BASE_URL + newsletter.get_absolute_url(),\n category=Category.objects.get(key=Category.NEWSLETTER),\n )\n message.users.set(Member.current_members.all())\n message.send()\n", "path": "website/newsletters/services.py"}]}
| 1,030 | 207 |
gh_patches_debug_2222
|
rasdani/github-patches
|
git_diff
|
scrapy__scrapy-2337
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add a sample middleware to startproject's template
It will be nice to have a middleware template inside the template project to serve as an example for people that want to use it.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/commands/startproject.py`
Content:
```
1 from __future__ import print_function
2 import re
3 import os
4 import string
5 from importlib import import_module
6 from os.path import join, exists, abspath
7 from shutil import ignore_patterns, move, copy2, copystat
8
9 import scrapy
10 from scrapy.commands import ScrapyCommand
11 from scrapy.utils.template import render_templatefile, string_camelcase
12 from scrapy.exceptions import UsageError
13
14
15 TEMPLATES_TO_RENDER = (
16 ('scrapy.cfg',),
17 ('${project_name}', 'settings.py.tmpl'),
18 ('${project_name}', 'items.py.tmpl'),
19 ('${project_name}', 'pipelines.py.tmpl'),
20 )
21
22 IGNORE = ignore_patterns('*.pyc', '.svn')
23
24
25 class Command(ScrapyCommand):
26
27 requires_project = False
28 default_settings = {'LOG_ENABLED': False}
29
30 def syntax(self):
31 return "<project_name> [project_dir]"
32
33 def short_desc(self):
34 return "Create new project"
35
36 def _is_valid_name(self, project_name):
37 def _module_exists(module_name):
38 try:
39 import_module(module_name)
40 return True
41 except ImportError:
42 return False
43
44 if not re.search(r'^[_a-zA-Z]\w*$', project_name):
45 print('Error: Project names must begin with a letter and contain'\
46 ' only\nletters, numbers and underscores')
47 elif _module_exists(project_name):
48 print('Error: Module %r already exists' % project_name)
49 else:
50 return True
51 return False
52
53 def _copytree(self, src, dst):
54 """
55 Since the original function always creates the directory, to resolve
56 the issue a new function had to be created. It's a simple copy and
57 was reduced for this case.
58
59 More info at:
60 https://github.com/scrapy/scrapy/pull/2005
61 """
62 ignore = IGNORE
63 names = os.listdir(src)
64 ignored_names = ignore(src, names)
65
66 if not os.path.exists(dst):
67 os.makedirs(dst)
68
69 for name in names:
70 if name in ignored_names:
71 continue
72
73 srcname = os.path.join(src, name)
74 dstname = os.path.join(dst, name)
75 if os.path.isdir(srcname):
76 self._copytree(srcname, dstname)
77 else:
78 copy2(srcname, dstname)
79 copystat(src, dst)
80
81 def run(self, args, opts):
82 if len(args) not in (1, 2):
83 raise UsageError()
84
85 project_name = args[0]
86 project_dir = args[0]
87
88 if len(args) == 2:
89 project_dir = args[1]
90
91 if exists(join(project_dir, 'scrapy.cfg')):
92 self.exitcode = 1
93 print('Error: scrapy.cfg already exists in %s' % abspath(project_dir))
94 return
95
96 if not self._is_valid_name(project_name):
97 self.exitcode = 1
98 return
99
100 self._copytree(self.templates_dir, abspath(project_dir))
101 move(join(project_dir, 'module'), join(project_dir, project_name))
102 for paths in TEMPLATES_TO_RENDER:
103 path = join(*paths)
104 tplfile = join(project_dir,
105 string.Template(path).substitute(project_name=project_name))
106 render_templatefile(tplfile, project_name=project_name,
107 ProjectName=string_camelcase(project_name))
108 print("New Scrapy project %r, using template directory %r, created in:" % \
109 (project_name, self.templates_dir))
110 print(" %s\n" % abspath(project_dir))
111 print("You can start your first spider with:")
112 print(" cd %s" % project_dir)
113 print(" scrapy genspider example example.com")
114
115 @property
116 def templates_dir(self):
117 _templates_base_dir = self.settings['TEMPLATES_DIR'] or \
118 join(scrapy.__path__[0], 'templates')
119 return join(_templates_base_dir, 'project')
120
121
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scrapy/commands/startproject.py b/scrapy/commands/startproject.py
--- a/scrapy/commands/startproject.py
+++ b/scrapy/commands/startproject.py
@@ -17,6 +17,7 @@
('${project_name}', 'settings.py.tmpl'),
('${project_name}', 'items.py.tmpl'),
('${project_name}', 'pipelines.py.tmpl'),
+ ('${project_name}', 'middlewares.py.tmpl'),
)
IGNORE = ignore_patterns('*.pyc', '.svn')
|
{"golden_diff": "diff --git a/scrapy/commands/startproject.py b/scrapy/commands/startproject.py\n--- a/scrapy/commands/startproject.py\n+++ b/scrapy/commands/startproject.py\n@@ -17,6 +17,7 @@\n ('${project_name}', 'settings.py.tmpl'),\n ('${project_name}', 'items.py.tmpl'),\n ('${project_name}', 'pipelines.py.tmpl'),\n+ ('${project_name}', 'middlewares.py.tmpl'),\n )\n \n IGNORE = ignore_patterns('*.pyc', '.svn')\n", "issue": "Add a sample middleware to startproject's template\nIt will be nice to have a middleware template inside the template project to serve as an example for people that want to use it.\n\n", "before_files": [{"content": "from __future__ import print_function\nimport re\nimport os\nimport string\nfrom importlib import import_module\nfrom os.path import join, exists, abspath\nfrom shutil import ignore_patterns, move, copy2, copystat\n\nimport scrapy\nfrom scrapy.commands import ScrapyCommand\nfrom scrapy.utils.template import render_templatefile, string_camelcase\nfrom scrapy.exceptions import UsageError\n\n\nTEMPLATES_TO_RENDER = (\n ('scrapy.cfg',),\n ('${project_name}', 'settings.py.tmpl'),\n ('${project_name}', 'items.py.tmpl'),\n ('${project_name}', 'pipelines.py.tmpl'),\n)\n\nIGNORE = ignore_patterns('*.pyc', '.svn')\n\n\nclass Command(ScrapyCommand):\n\n requires_project = False\n default_settings = {'LOG_ENABLED': False}\n\n def syntax(self):\n return \"<project_name> [project_dir]\"\n\n def short_desc(self):\n return \"Create new project\"\n\n def _is_valid_name(self, project_name):\n def _module_exists(module_name):\n try:\n import_module(module_name)\n return True\n except ImportError:\n return False\n\n if not re.search(r'^[_a-zA-Z]\\w*$', project_name):\n print('Error: Project names must begin with a letter and contain'\\\n ' only\\nletters, numbers and underscores')\n elif _module_exists(project_name):\n print('Error: Module %r already exists' % project_name)\n else:\n return True\n return False\n\n def _copytree(self, src, dst):\n \"\"\"\n Since the original function always creates the directory, to resolve\n the issue a new function had to be created. 
It's a simple copy and\n was reduced for this case.\n\n More info at:\n https://github.com/scrapy/scrapy/pull/2005\n \"\"\"\n ignore = IGNORE\n names = os.listdir(src)\n ignored_names = ignore(src, names)\n\n if not os.path.exists(dst):\n os.makedirs(dst)\n\n for name in names:\n if name in ignored_names:\n continue\n\n srcname = os.path.join(src, name)\n dstname = os.path.join(dst, name)\n if os.path.isdir(srcname):\n self._copytree(srcname, dstname)\n else:\n copy2(srcname, dstname)\n copystat(src, dst)\n\n def run(self, args, opts):\n if len(args) not in (1, 2):\n raise UsageError()\n\n project_name = args[0]\n project_dir = args[0]\n\n if len(args) == 2:\n project_dir = args[1]\n\n if exists(join(project_dir, 'scrapy.cfg')):\n self.exitcode = 1\n print('Error: scrapy.cfg already exists in %s' % abspath(project_dir))\n return\n\n if not self._is_valid_name(project_name):\n self.exitcode = 1\n return\n\n self._copytree(self.templates_dir, abspath(project_dir))\n move(join(project_dir, 'module'), join(project_dir, project_name))\n for paths in TEMPLATES_TO_RENDER:\n path = join(*paths)\n tplfile = join(project_dir,\n string.Template(path).substitute(project_name=project_name))\n render_templatefile(tplfile, project_name=project_name,\n ProjectName=string_camelcase(project_name))\n print(\"New Scrapy project %r, using template directory %r, created in:\" % \\\n (project_name, self.templates_dir))\n print(\" %s\\n\" % abspath(project_dir))\n print(\"You can start your first spider with:\")\n print(\" cd %s\" % project_dir)\n print(\" scrapy genspider example example.com\")\n\n @property\n def templates_dir(self):\n _templates_base_dir = self.settings['TEMPLATES_DIR'] or \\\n join(scrapy.__path__[0], 'templates')\n return join(_templates_base_dir, 'project')\n \n", "path": "scrapy/commands/startproject.py"}], "after_files": [{"content": "from __future__ import print_function\nimport re\nimport os\nimport string\nfrom importlib import import_module\nfrom os.path import join, exists, abspath\nfrom shutil import ignore_patterns, move, copy2, copystat\n\nimport scrapy\nfrom scrapy.commands import ScrapyCommand\nfrom scrapy.utils.template import render_templatefile, string_camelcase\nfrom scrapy.exceptions import UsageError\n\n\nTEMPLATES_TO_RENDER = (\n ('scrapy.cfg',),\n ('${project_name}', 'settings.py.tmpl'),\n ('${project_name}', 'items.py.tmpl'),\n ('${project_name}', 'pipelines.py.tmpl'),\n ('${project_name}', 'middlewares.py.tmpl'),\n)\n\nIGNORE = ignore_patterns('*.pyc', '.svn')\n\n\nclass Command(ScrapyCommand):\n\n requires_project = False\n default_settings = {'LOG_ENABLED': False}\n\n def syntax(self):\n return \"<project_name> [project_dir]\"\n\n def short_desc(self):\n return \"Create new project\"\n\n def _is_valid_name(self, project_name):\n def _module_exists(module_name):\n try:\n import_module(module_name)\n return True\n except ImportError:\n return False\n\n if not re.search(r'^[_a-zA-Z]\\w*$', project_name):\n print('Error: Project names must begin with a letter and contain'\\\n ' only\\nletters, numbers and underscores')\n elif _module_exists(project_name):\n print('Error: Module %r already exists' % project_name)\n else:\n return True\n return False\n\n def _copytree(self, src, dst):\n \"\"\"\n Since the original function always creates the directory, to resolve\n the issue a new function had to be created. 
It's a simple copy and\n was reduced for this case.\n\n More info at:\n https://github.com/scrapy/scrapy/pull/2005\n \"\"\"\n ignore = IGNORE\n names = os.listdir(src)\n ignored_names = ignore(src, names)\n\n if not os.path.exists(dst):\n os.makedirs(dst)\n\n for name in names:\n if name in ignored_names:\n continue\n\n srcname = os.path.join(src, name)\n dstname = os.path.join(dst, name)\n if os.path.isdir(srcname):\n self._copytree(srcname, dstname)\n else:\n copy2(srcname, dstname)\n copystat(src, dst)\n\n def run(self, args, opts):\n if len(args) not in (1, 2):\n raise UsageError()\n\n project_name = args[0]\n project_dir = args[0]\n\n if len(args) == 2:\n project_dir = args[1]\n\n if exists(join(project_dir, 'scrapy.cfg')):\n self.exitcode = 1\n print('Error: scrapy.cfg already exists in %s' % abspath(project_dir))\n return\n\n if not self._is_valid_name(project_name):\n self.exitcode = 1\n return\n\n self._copytree(self.templates_dir, abspath(project_dir))\n move(join(project_dir, 'module'), join(project_dir, project_name))\n for paths in TEMPLATES_TO_RENDER:\n path = join(*paths)\n tplfile = join(project_dir,\n string.Template(path).substitute(project_name=project_name))\n render_templatefile(tplfile, project_name=project_name,\n ProjectName=string_camelcase(project_name))\n print(\"New Scrapy project %r, using template directory %r, created in:\" % \\\n (project_name, self.templates_dir))\n print(\" %s\\n\" % abspath(project_dir))\n print(\"You can start your first spider with:\")\n print(\" cd %s\" % project_dir)\n print(\" scrapy genspider example example.com\")\n\n @property\n def templates_dir(self):\n _templates_base_dir = self.settings['TEMPLATES_DIR'] or \\\n join(scrapy.__path__[0], 'templates')\n return join(_templates_base_dir, 'project')\n \n", "path": "scrapy/commands/startproject.py"}]}
| 1,421 | 116 |
gh_patches_debug_40251
|
rasdani/github-patches
|
git_diff
|
pre-commit__pre-commit-685
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Windows: Node Support
This involves solving this ticket: https://github.com/ekalinin/nodeenv/issues/53
I've already started some work on this
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/languages/node.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import contextlib
4 import os
5 import sys
6
7 from pre_commit.envcontext import envcontext
8 from pre_commit.envcontext import Var
9 from pre_commit.languages import helpers
10 from pre_commit.util import clean_path_on_failure
11 from pre_commit.util import cmd_output
12 from pre_commit.xargs import xargs
13
14
15 ENVIRONMENT_DIR = 'node_env'
16 get_default_version = helpers.basic_get_default_version
17 healthy = helpers.basic_healthy
18
19
20 def get_env_patch(venv): # pragma: windows no cover
21 if sys.platform == 'cygwin': # pragma: no cover
22 _, win_venv, _ = cmd_output('cygpath', '-w', venv)
23 install_prefix = r'{}\bin'.format(win_venv.strip())
24 else:
25 install_prefix = venv
26 return (
27 ('NODE_VIRTUAL_ENV', venv),
28 ('NPM_CONFIG_PREFIX', install_prefix),
29 ('npm_config_prefix', install_prefix),
30 ('NODE_PATH', os.path.join(venv, 'lib', 'node_modules')),
31 ('PATH', (os.path.join(venv, 'bin'), os.pathsep, Var('PATH'))),
32 )
33
34
35 @contextlib.contextmanager
36 def in_env(prefix, language_version): # pragma: windows no cover
37 envdir = prefix.path(
38 helpers.environment_dir(ENVIRONMENT_DIR, language_version),
39 )
40 with envcontext(get_env_patch(envdir)):
41 yield
42
43
44 def install_environment(
45 prefix, version, additional_dependencies,
46 ): # pragma: windows no cover
47 additional_dependencies = tuple(additional_dependencies)
48 assert prefix.exists('package.json')
49 directory = helpers.environment_dir(ENVIRONMENT_DIR, version)
50
51 env_dir = prefix.path(directory)
52 with clean_path_on_failure(env_dir):
53 cmd = [sys.executable, '-m', 'nodeenv', '--prebuilt', env_dir]
54 if version != 'default':
55 cmd.extend(['-n', version])
56 cmd_output(*cmd)
57
58 with in_env(prefix, version):
59 helpers.run_setup_cmd(
60 prefix,
61 ('npm', 'install', '-g', '.') + additional_dependencies,
62 )
63
64
65 def run_hook(prefix, hook, file_args): # pragma: windows no cover
66 with in_env(prefix, hook['language_version']):
67 return xargs(helpers.to_cmd(hook), file_args)
68
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pre_commit/languages/node.py b/pre_commit/languages/node.py
--- a/pre_commit/languages/node.py
+++ b/pre_commit/languages/node.py
@@ -7,6 +7,7 @@
from pre_commit.envcontext import envcontext
from pre_commit.envcontext import Var
from pre_commit.languages import helpers
+from pre_commit.languages.python import bin_dir
from pre_commit.util import clean_path_on_failure
from pre_commit.util import cmd_output
from pre_commit.xargs import xargs
@@ -17,10 +18,17 @@
healthy = helpers.basic_healthy
-def get_env_patch(venv): # pragma: windows no cover
+def _envdir(prefix, version):
+ directory = helpers.environment_dir(ENVIRONMENT_DIR, version)
+ return prefix.path(directory)
+
+
+def get_env_patch(venv):
if sys.platform == 'cygwin': # pragma: no cover
_, win_venv, _ = cmd_output('cygpath', '-w', venv)
install_prefix = r'{}\bin'.format(win_venv.strip())
+ elif sys.platform == 'win32': # pragma: no cover
+ install_prefix = bin_dir(venv)
else:
install_prefix = venv
return (
@@ -28,29 +36,26 @@
('NPM_CONFIG_PREFIX', install_prefix),
('npm_config_prefix', install_prefix),
('NODE_PATH', os.path.join(venv, 'lib', 'node_modules')),
- ('PATH', (os.path.join(venv, 'bin'), os.pathsep, Var('PATH'))),
+ ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),
)
@contextlib.contextmanager
-def in_env(prefix, language_version): # pragma: windows no cover
- envdir = prefix.path(
- helpers.environment_dir(ENVIRONMENT_DIR, language_version),
- )
- with envcontext(get_env_patch(envdir)):
+def in_env(prefix, language_version):
+ with envcontext(get_env_patch(_envdir(prefix, language_version))):
yield
-def install_environment(
- prefix, version, additional_dependencies,
-): # pragma: windows no cover
+def install_environment(prefix, version, additional_dependencies):
additional_dependencies = tuple(additional_dependencies)
assert prefix.exists('package.json')
- directory = helpers.environment_dir(ENVIRONMENT_DIR, version)
+ envdir = _envdir(prefix, version)
- env_dir = prefix.path(directory)
- with clean_path_on_failure(env_dir):
- cmd = [sys.executable, '-m', 'nodeenv', '--prebuilt', env_dir]
+ # https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx?f=255&MSPPError=-2147217396#maxpath
+ if sys.platform == 'win32': # pragma: no cover
+ envdir = '\\\\?\\' + os.path.normpath(envdir)
+ with clean_path_on_failure(envdir):
+ cmd = [sys.executable, '-m', 'nodeenv', '--prebuilt', envdir]
if version != 'default':
cmd.extend(['-n', version])
cmd_output(*cmd)
@@ -62,6 +67,6 @@
)
-def run_hook(prefix, hook, file_args): # pragma: windows no cover
+def run_hook(prefix, hook, file_args):
with in_env(prefix, hook['language_version']):
return xargs(helpers.to_cmd(hook), file_args)
|
{"golden_diff": "diff --git a/pre_commit/languages/node.py b/pre_commit/languages/node.py\n--- a/pre_commit/languages/node.py\n+++ b/pre_commit/languages/node.py\n@@ -7,6 +7,7 @@\n from pre_commit.envcontext import envcontext\n from pre_commit.envcontext import Var\n from pre_commit.languages import helpers\n+from pre_commit.languages.python import bin_dir\n from pre_commit.util import clean_path_on_failure\n from pre_commit.util import cmd_output\n from pre_commit.xargs import xargs\n@@ -17,10 +18,17 @@\n healthy = helpers.basic_healthy\n \n \n-def get_env_patch(venv): # pragma: windows no cover\n+def _envdir(prefix, version):\n+ directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n+ return prefix.path(directory)\n+\n+\n+def get_env_patch(venv):\n if sys.platform == 'cygwin': # pragma: no cover\n _, win_venv, _ = cmd_output('cygpath', '-w', venv)\n install_prefix = r'{}\\bin'.format(win_venv.strip())\n+ elif sys.platform == 'win32': # pragma: no cover\n+ install_prefix = bin_dir(venv)\n else:\n install_prefix = venv\n return (\n@@ -28,29 +36,26 @@\n ('NPM_CONFIG_PREFIX', install_prefix),\n ('npm_config_prefix', install_prefix),\n ('NODE_PATH', os.path.join(venv, 'lib', 'node_modules')),\n- ('PATH', (os.path.join(venv, 'bin'), os.pathsep, Var('PATH'))),\n+ ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),\n )\n \n \n @contextlib.contextmanager\n-def in_env(prefix, language_version): # pragma: windows no cover\n- envdir = prefix.path(\n- helpers.environment_dir(ENVIRONMENT_DIR, language_version),\n- )\n- with envcontext(get_env_patch(envdir)):\n+def in_env(prefix, language_version):\n+ with envcontext(get_env_patch(_envdir(prefix, language_version))):\n yield\n \n \n-def install_environment(\n- prefix, version, additional_dependencies,\n-): # pragma: windows no cover\n+def install_environment(prefix, version, additional_dependencies):\n additional_dependencies = tuple(additional_dependencies)\n assert prefix.exists('package.json')\n- directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n+ envdir = _envdir(prefix, version)\n \n- env_dir = prefix.path(directory)\n- with clean_path_on_failure(env_dir):\n- cmd = [sys.executable, '-m', 'nodeenv', '--prebuilt', env_dir]\n+ # https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx?f=255&MSPPError=-2147217396#maxpath\n+ if sys.platform == 'win32': # pragma: no cover\n+ envdir = '\\\\\\\\?\\\\' + os.path.normpath(envdir)\n+ with clean_path_on_failure(envdir):\n+ cmd = [sys.executable, '-m', 'nodeenv', '--prebuilt', envdir]\n if version != 'default':\n cmd.extend(['-n', version])\n cmd_output(*cmd)\n@@ -62,6 +67,6 @@\n )\n \n \n-def run_hook(prefix, hook, file_args): # pragma: windows no cover\n+def run_hook(prefix, hook, file_args):\n with in_env(prefix, hook['language_version']):\n return xargs(helpers.to_cmd(hook), file_args)\n", "issue": "Windows: Node Support\nThis involves solving this ticket: https://github.com/ekalinin/nodeenv/issues/53\n\nI've already started some work on this\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport os\nimport sys\n\nfrom pre_commit.envcontext import envcontext\nfrom pre_commit.envcontext import Var\nfrom pre_commit.languages import helpers\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output\nfrom pre_commit.xargs import xargs\n\n\nENVIRONMENT_DIR = 'node_env'\nget_default_version = helpers.basic_get_default_version\nhealthy = helpers.basic_healthy\n\n\ndef get_env_patch(venv): 
# pragma: windows no cover\n if sys.platform == 'cygwin': # pragma: no cover\n _, win_venv, _ = cmd_output('cygpath', '-w', venv)\n install_prefix = r'{}\\bin'.format(win_venv.strip())\n else:\n install_prefix = venv\n return (\n ('NODE_VIRTUAL_ENV', venv),\n ('NPM_CONFIG_PREFIX', install_prefix),\n ('npm_config_prefix', install_prefix),\n ('NODE_PATH', os.path.join(venv, 'lib', 'node_modules')),\n ('PATH', (os.path.join(venv, 'bin'), os.pathsep, Var('PATH'))),\n )\n\n\[email protected]\ndef in_env(prefix, language_version): # pragma: windows no cover\n envdir = prefix.path(\n helpers.environment_dir(ENVIRONMENT_DIR, language_version),\n )\n with envcontext(get_env_patch(envdir)):\n yield\n\n\ndef install_environment(\n prefix, version, additional_dependencies,\n): # pragma: windows no cover\n additional_dependencies = tuple(additional_dependencies)\n assert prefix.exists('package.json')\n directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n\n env_dir = prefix.path(directory)\n with clean_path_on_failure(env_dir):\n cmd = [sys.executable, '-m', 'nodeenv', '--prebuilt', env_dir]\n if version != 'default':\n cmd.extend(['-n', version])\n cmd_output(*cmd)\n\n with in_env(prefix, version):\n helpers.run_setup_cmd(\n prefix,\n ('npm', 'install', '-g', '.') + additional_dependencies,\n )\n\n\ndef run_hook(prefix, hook, file_args): # pragma: windows no cover\n with in_env(prefix, hook['language_version']):\n return xargs(helpers.to_cmd(hook), file_args)\n", "path": "pre_commit/languages/node.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport os\nimport sys\n\nfrom pre_commit.envcontext import envcontext\nfrom pre_commit.envcontext import Var\nfrom pre_commit.languages import helpers\nfrom pre_commit.languages.python import bin_dir\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output\nfrom pre_commit.xargs import xargs\n\n\nENVIRONMENT_DIR = 'node_env'\nget_default_version = helpers.basic_get_default_version\nhealthy = helpers.basic_healthy\n\n\ndef _envdir(prefix, version):\n directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n return prefix.path(directory)\n\n\ndef get_env_patch(venv):\n if sys.platform == 'cygwin': # pragma: no cover\n _, win_venv, _ = cmd_output('cygpath', '-w', venv)\n install_prefix = r'{}\\bin'.format(win_venv.strip())\n elif sys.platform == 'win32': # pragma: no cover\n install_prefix = bin_dir(venv)\n else:\n install_prefix = venv\n return (\n ('NODE_VIRTUAL_ENV', venv),\n ('NPM_CONFIG_PREFIX', install_prefix),\n ('npm_config_prefix', install_prefix),\n ('NODE_PATH', os.path.join(venv, 'lib', 'node_modules')),\n ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),\n )\n\n\[email protected]\ndef in_env(prefix, language_version):\n with envcontext(get_env_patch(_envdir(prefix, language_version))):\n yield\n\n\ndef install_environment(prefix, version, additional_dependencies):\n additional_dependencies = tuple(additional_dependencies)\n assert prefix.exists('package.json')\n envdir = _envdir(prefix, version)\n\n # https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx?f=255&MSPPError=-2147217396#maxpath\n if sys.platform == 'win32': # pragma: no cover\n envdir = '\\\\\\\\?\\\\' + os.path.normpath(envdir)\n with clean_path_on_failure(envdir):\n cmd = [sys.executable, '-m', 'nodeenv', '--prebuilt', envdir]\n if version != 'default':\n cmd.extend(['-n', version])\n cmd_output(*cmd)\n\n with in_env(prefix, version):\n helpers.run_setup_cmd(\n 
prefix,\n ('npm', 'install', '-g', '.') + additional_dependencies,\n )\n\n\ndef run_hook(prefix, hook, file_args):\n with in_env(prefix, hook['language_version']):\n return xargs(helpers.to_cmd(hook), file_args)\n", "path": "pre_commit/languages/node.py"}]}
| 932 | 805 |
gh_patches_debug_18004
|
rasdani/github-patches
|
git_diff
|
modin-project__modin-1381
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Test with Ray 0.8.4 and update version
Ray 0.8.4 was released: https://github.com/ray-project/ray/tree/ray-0.8.4, we should test performance and update version.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup, find_packages
2 import versioneer
3 import os
4 from setuptools.dist import Distribution
5
6 try:
7 from wheel.bdist_wheel import bdist_wheel
8
9 HAS_WHEEL = True
10 except ImportError:
11 HAS_WHEEL = False
12
13 with open("README.md", "r") as fh:
14 long_description = fh.read()
15
16 if HAS_WHEEL:
17
18 class ModinWheel(bdist_wheel):
19 def finalize_options(self):
20 bdist_wheel.finalize_options(self)
21 self.root_is_pure = False
22
23 def get_tag(self):
24 _, _, plat = bdist_wheel.get_tag(self)
25 py = "py3"
26 abi = "none"
27 return py, abi, plat
28
29
30 class ModinDistribution(Distribution):
31 def __init__(self, *attrs):
32 Distribution.__init__(self, *attrs)
33 if HAS_WHEEL:
34 self.cmdclass["bdist_wheel"] = ModinWheel
35
36 def is_pure(self):
37 return False
38
39
40 dask_deps = ["dask>=2.1.0", "distributed>=2.3.2"]
41 ray_deps = ["ray==0.8.3", "pyarrow<0.17"]
42 if "SETUP_PLAT_NAME" in os.environ:
43 if "win" in os.environ["SETUP_PLAT_NAME"]:
44 all_deps = dask_deps
45 else:
46 all_deps = dask_deps + ray_deps
47 else:
48 all_deps = dask_deps if os.name == "nt" else dask_deps + ray_deps
49
50 setup(
51 name="modin",
52 version=versioneer.get_version(),
53 cmdclass=versioneer.get_cmdclass(),
54 distclass=ModinDistribution,
55 description="Modin: Make your pandas code run faster by changing one line of code.",
56 packages=find_packages(),
57 license="Apache 2",
58 url="https://github.com/modin-project/modin",
59 long_description=long_description,
60 long_description_content_type="text/markdown",
61 install_requires=["pandas==1.0.3", "packaging"],
62 extras_require={
63 # can be installed by pip install modin[dask]
64 "dask": dask_deps,
65 "ray": ray_deps,
66 "all": all_deps,
67 },
68 python_requires=">=3.5",
69 )
70
```
Path: `modin/__init__.py`
Content:
```
1 # Licensed to Modin Development Team under one or more contributor license agreements.
2 # See the NOTICE file distributed with this work for additional information regarding
3 # copyright ownership. The Modin Development Team licenses this file to you under the
4 # Apache License, Version 2.0 (the "License"); you may not use this file except in
5 # compliance with the License. You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software distributed under
10 # the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific language
12 # governing permissions and limitations under the License.
13
14 import os
15 import sys
16 import warnings
17 from packaging import version
18
19 from ._version import get_versions
20
21
22 def custom_formatwarning(msg, category, *args, **kwargs):
23 # ignore everything except the message
24 return "{}: {}\n".format(category.__name__, msg)
25
26
27 warnings.formatwarning = custom_formatwarning
28 # Filter numpy version warnings because they are not relevant
29 warnings.filterwarnings("ignore", message="numpy.dtype size changed")
30 warnings.filterwarnings("ignore", message="Large object of size")
31 warnings.filterwarnings(
32 "ignore",
33 message="The pandas.datetime class is deprecated and will be removed from pandas in a future version. "
34 "Import from datetime module instead.",
35 )
36
37
38 def get_execution_engine():
39 # In the future, when there are multiple engines and different ways of
40 # backing the DataFrame, there will have to be some changed logic here to
41 # decide these things. In the meantime, we will use the currently supported
42 # execution engine + backing (Pandas + Ray).
43 if "MODIN_ENGINE" in os.environ:
44 # .title allows variants like ray, RAY, Ray
45 return os.environ["MODIN_ENGINE"].title()
46 else:
47 if "MODIN_DEBUG" in os.environ:
48 return "Python"
49 else:
50 if sys.platform != "win32":
51 try:
52 import ray
53
54 except ImportError:
55 pass
56 else:
57 if version.parse(ray.__version__) != version.parse("0.8.3"):
58 raise ImportError(
59 "Please `pip install modin[ray]` to install compatible Ray version."
60 )
61 return "Ray"
62 try:
63 import dask
64 import distributed
65
66 except ImportError:
67 raise ImportError(
68 "Please `pip install {}modin[dask]` to install an engine".format(
69 "modin[ray]` or `" if sys.platform != "win32" else ""
70 )
71 )
72 else:
73 if version.parse(dask.__version__) < version.parse(
74 "2.1.0"
75 ) or version.parse(distributed.__version__) < version.parse("2.3.2"):
76 raise ImportError(
77 "Please `pip install modin[dask]` to install compatible Dask version."
78 )
79 return "Dask"
80
81
82 def get_partition_format():
83 # See note above about engine + backing.
84 return os.environ.get("MODIN_BACKEND", "Pandas").title()
85
86
87 __version__ = "0.6.3"
88 __execution_engine__ = get_execution_engine()
89 __partition_format__ = get_partition_format()
90
91 # We don't want these used outside of this file.
92 del get_execution_engine
93 del get_partition_format
94
95 __version__ = get_versions()["version"]
96 del get_versions
97
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/modin/__init__.py b/modin/__init__.py
--- a/modin/__init__.py
+++ b/modin/__init__.py
@@ -54,7 +54,7 @@
except ImportError:
pass
else:
- if version.parse(ray.__version__) != version.parse("0.8.3"):
+ if version.parse(ray.__version__) != version.parse("0.8.4"):
raise ImportError(
"Please `pip install modin[ray]` to install compatible Ray version."
)
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -38,7 +38,7 @@
dask_deps = ["dask>=2.1.0", "distributed>=2.3.2"]
-ray_deps = ["ray==0.8.3", "pyarrow<0.17"]
+ray_deps = ["ray==0.8.4", "pyarrow<0.17"]
if "SETUP_PLAT_NAME" in os.environ:
if "win" in os.environ["SETUP_PLAT_NAME"]:
all_deps = dask_deps
|
{"golden_diff": "diff --git a/modin/__init__.py b/modin/__init__.py\n--- a/modin/__init__.py\n+++ b/modin/__init__.py\n@@ -54,7 +54,7 @@\n except ImportError:\n pass\n else:\n- if version.parse(ray.__version__) != version.parse(\"0.8.3\"):\n+ if version.parse(ray.__version__) != version.parse(\"0.8.4\"):\n raise ImportError(\n \"Please `pip install modin[ray]` to install compatible Ray version.\"\n )\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -38,7 +38,7 @@\n \n \n dask_deps = [\"dask>=2.1.0\", \"distributed>=2.3.2\"]\n-ray_deps = [\"ray==0.8.3\", \"pyarrow<0.17\"]\n+ray_deps = [\"ray==0.8.4\", \"pyarrow<0.17\"]\n if \"SETUP_PLAT_NAME\" in os.environ:\n if \"win\" in os.environ[\"SETUP_PLAT_NAME\"]:\n all_deps = dask_deps\n", "issue": "Test with Ray 0.8.4 and update version\nRay 0.8.4 was released: https://github.com/ray-project/ray/tree/ray-0.8.4, we should test performance and update version.\n", "before_files": [{"content": "from setuptools import setup, find_packages\nimport versioneer\nimport os\nfrom setuptools.dist import Distribution\n\ntry:\n from wheel.bdist_wheel import bdist_wheel\n\n HAS_WHEEL = True\nexcept ImportError:\n HAS_WHEEL = False\n\nwith open(\"README.md\", \"r\") as fh:\n long_description = fh.read()\n\nif HAS_WHEEL:\n\n class ModinWheel(bdist_wheel):\n def finalize_options(self):\n bdist_wheel.finalize_options(self)\n self.root_is_pure = False\n\n def get_tag(self):\n _, _, plat = bdist_wheel.get_tag(self)\n py = \"py3\"\n abi = \"none\"\n return py, abi, plat\n\n\nclass ModinDistribution(Distribution):\n def __init__(self, *attrs):\n Distribution.__init__(self, *attrs)\n if HAS_WHEEL:\n self.cmdclass[\"bdist_wheel\"] = ModinWheel\n\n def is_pure(self):\n return False\n\n\ndask_deps = [\"dask>=2.1.0\", \"distributed>=2.3.2\"]\nray_deps = [\"ray==0.8.3\", \"pyarrow<0.17\"]\nif \"SETUP_PLAT_NAME\" in os.environ:\n if \"win\" in os.environ[\"SETUP_PLAT_NAME\"]:\n all_deps = dask_deps\n else:\n all_deps = dask_deps + ray_deps\nelse:\n all_deps = dask_deps if os.name == \"nt\" else dask_deps + ray_deps\n\nsetup(\n name=\"modin\",\n version=versioneer.get_version(),\n cmdclass=versioneer.get_cmdclass(),\n distclass=ModinDistribution,\n description=\"Modin: Make your pandas code run faster by changing one line of code.\",\n packages=find_packages(),\n license=\"Apache 2\",\n url=\"https://github.com/modin-project/modin\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n install_requires=[\"pandas==1.0.3\", \"packaging\"],\n extras_require={\n # can be installed by pip install modin[dask]\n \"dask\": dask_deps,\n \"ray\": ray_deps,\n \"all\": all_deps,\n },\n python_requires=\">=3.5\",\n)\n", "path": "setup.py"}, {"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. 
See the License for the specific language\n# governing permissions and limitations under the License.\n\nimport os\nimport sys\nimport warnings\nfrom packaging import version\n\nfrom ._version import get_versions\n\n\ndef custom_formatwarning(msg, category, *args, **kwargs):\n # ignore everything except the message\n return \"{}: {}\\n\".format(category.__name__, msg)\n\n\nwarnings.formatwarning = custom_formatwarning\n# Filter numpy version warnings because they are not relevant\nwarnings.filterwarnings(\"ignore\", message=\"numpy.dtype size changed\")\nwarnings.filterwarnings(\"ignore\", message=\"Large object of size\")\nwarnings.filterwarnings(\n \"ignore\",\n message=\"The pandas.datetime class is deprecated and will be removed from pandas in a future version. \"\n \"Import from datetime module instead.\",\n)\n\n\ndef get_execution_engine():\n # In the future, when there are multiple engines and different ways of\n # backing the DataFrame, there will have to be some changed logic here to\n # decide these things. In the meantime, we will use the currently supported\n # execution engine + backing (Pandas + Ray).\n if \"MODIN_ENGINE\" in os.environ:\n # .title allows variants like ray, RAY, Ray\n return os.environ[\"MODIN_ENGINE\"].title()\n else:\n if \"MODIN_DEBUG\" in os.environ:\n return \"Python\"\n else:\n if sys.platform != \"win32\":\n try:\n import ray\n\n except ImportError:\n pass\n else:\n if version.parse(ray.__version__) != version.parse(\"0.8.3\"):\n raise ImportError(\n \"Please `pip install modin[ray]` to install compatible Ray version.\"\n )\n return \"Ray\"\n try:\n import dask\n import distributed\n\n except ImportError:\n raise ImportError(\n \"Please `pip install {}modin[dask]` to install an engine\".format(\n \"modin[ray]` or `\" if sys.platform != \"win32\" else \"\"\n )\n )\n else:\n if version.parse(dask.__version__) < version.parse(\n \"2.1.0\"\n ) or version.parse(distributed.__version__) < version.parse(\"2.3.2\"):\n raise ImportError(\n \"Please `pip install modin[dask]` to install compatible Dask version.\"\n )\n return \"Dask\"\n\n\ndef get_partition_format():\n # See note above about engine + backing.\n return os.environ.get(\"MODIN_BACKEND\", \"Pandas\").title()\n\n\n__version__ = \"0.6.3\"\n__execution_engine__ = get_execution_engine()\n__partition_format__ = get_partition_format()\n\n# We don't want these used outside of this file.\ndel get_execution_engine\ndel get_partition_format\n\n__version__ = get_versions()[\"version\"]\ndel get_versions\n", "path": "modin/__init__.py"}], "after_files": [{"content": "from setuptools import setup, find_packages\nimport versioneer\nimport os\nfrom setuptools.dist import Distribution\n\ntry:\n from wheel.bdist_wheel import bdist_wheel\n\n HAS_WHEEL = True\nexcept ImportError:\n HAS_WHEEL = False\n\nwith open(\"README.md\", \"r\") as fh:\n long_description = fh.read()\n\nif HAS_WHEEL:\n\n class ModinWheel(bdist_wheel):\n def finalize_options(self):\n bdist_wheel.finalize_options(self)\n self.root_is_pure = False\n\n def get_tag(self):\n _, _, plat = bdist_wheel.get_tag(self)\n py = \"py3\"\n abi = \"none\"\n return py, abi, plat\n\n\nclass ModinDistribution(Distribution):\n def __init__(self, *attrs):\n Distribution.__init__(self, *attrs)\n if HAS_WHEEL:\n self.cmdclass[\"bdist_wheel\"] = ModinWheel\n\n def is_pure(self):\n return False\n\n\ndask_deps = [\"dask>=2.1.0\", \"distributed>=2.3.2\"]\nray_deps = [\"ray==0.8.4\", \"pyarrow<0.17\"]\nif \"SETUP_PLAT_NAME\" in os.environ:\n if \"win\" in 
os.environ[\"SETUP_PLAT_NAME\"]:\n all_deps = dask_deps\n else:\n all_deps = dask_deps + ray_deps\nelse:\n all_deps = dask_deps if os.name == \"nt\" else dask_deps + ray_deps\n\nsetup(\n name=\"modin\",\n version=versioneer.get_version(),\n cmdclass=versioneer.get_cmdclass(),\n distclass=ModinDistribution,\n description=\"Modin: Make your pandas code run faster by changing one line of code.\",\n packages=find_packages(),\n license=\"Apache 2\",\n url=\"https://github.com/modin-project/modin\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n install_requires=[\"pandas==1.0.3\", \"packaging\"],\n extras_require={\n # can be installed by pip install modin[dask]\n \"dask\": dask_deps,\n \"ray\": ray_deps,\n \"all\": all_deps,\n },\n python_requires=\">=3.5\",\n)\n", "path": "setup.py"}, {"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific language\n# governing permissions and limitations under the License.\n\nimport os\nimport sys\nimport warnings\nfrom packaging import version\n\nfrom ._version import get_versions\n\n\ndef custom_formatwarning(msg, category, *args, **kwargs):\n # ignore everything except the message\n return \"{}: {}\\n\".format(category.__name__, msg)\n\n\nwarnings.formatwarning = custom_formatwarning\n# Filter numpy version warnings because they are not relevant\nwarnings.filterwarnings(\"ignore\", message=\"numpy.dtype size changed\")\nwarnings.filterwarnings(\"ignore\", message=\"Large object of size\")\nwarnings.filterwarnings(\n \"ignore\",\n message=\"The pandas.datetime class is deprecated and will be removed from pandas in a future version. \"\n \"Import from datetime module instead.\",\n)\n\n\ndef get_execution_engine():\n # In the future, when there are multiple engines and different ways of\n # backing the DataFrame, there will have to be some changed logic here to\n # decide these things. 
In the meantime, we will use the currently supported\n # execution engine + backing (Pandas + Ray).\n if \"MODIN_ENGINE\" in os.environ:\n # .title allows variants like ray, RAY, Ray\n return os.environ[\"MODIN_ENGINE\"].title()\n else:\n if \"MODIN_DEBUG\" in os.environ:\n return \"Python\"\n else:\n if sys.platform != \"win32\":\n try:\n import ray\n\n except ImportError:\n pass\n else:\n if version.parse(ray.__version__) != version.parse(\"0.8.4\"):\n raise ImportError(\n \"Please `pip install modin[ray]` to install compatible Ray version.\"\n )\n return \"Ray\"\n try:\n import dask\n import distributed\n\n except ImportError:\n raise ImportError(\n \"Please `pip install {}modin[dask]` to install an engine\".format(\n \"modin[ray]` or `\" if sys.platform != \"win32\" else \"\"\n )\n )\n else:\n if version.parse(dask.__version__) < version.parse(\n \"2.1.0\"\n ) or version.parse(distributed.__version__) < version.parse(\"2.3.2\"):\n raise ImportError(\n \"Please `pip install modin[dask]` to install compatible Dask version.\"\n )\n return \"Dask\"\n\n\ndef get_partition_format():\n # See note above about engine + backing.\n return os.environ.get(\"MODIN_BACKEND\", \"Pandas\").title()\n\n\n__version__ = \"0.6.3\"\n__execution_engine__ = get_execution_engine()\n__partition_format__ = get_partition_format()\n\n# We don't want these used outside of this file.\ndel get_execution_engine\ndel get_partition_format\n\n__version__ = get_versions()[\"version\"]\ndel get_versions\n", "path": "modin/__init__.py"}]}
| 1,891 | 254 |
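The Modin record above turns on an exact-version gate for an optional dependency (Ray pinned to 0.8.4). A minimal sketch of that pattern in isolation follows; the helper name `require_exact` and its message are illustrative, not Modin's actual API:

```python
# Illustrative sketch of an exact-version gate for an optional dependency.
from packaging import version


def require_exact(installed: str, expected: str, hint: str) -> None:
    # Compare parsed versions rather than raw strings, then fail fast
    # with an actionable install hint for the caller.
    if version.parse(installed) != version.parse(expected):
        raise ImportError(hint)


# e.g. require_exact(ray.__version__, "0.8.4",
#                    "Please `pip install modin[ray]` to install a compatible Ray version.")
```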
gh_patches_debug_3750
|
rasdani/github-patches
|
git_diff
|
aws__aws-cli-565
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
documentation: Show reduced redundancy option in "aws s3 cp help"
When a user types `aws s3 cp help` it describes this option:
```
--storage-class The type of storage to use for the object. Defaults to
'STANDARD'
```
It would be super-helpful to list the string the user should specify here if they want reduced redundancy storage (i.e., `'REDUCED_REDUNDANCY'`).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `awscli/customizations/s3/description.py`
Content:
```
1 # Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"). You
4 # may not use this file except in compliance with the License. A copy of
5 # the License is located at
6 #
7 # http://aws.amazon.com/apache2.0/
8 #
9 # or in the "license" file accompanying this file. This file is
10 # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific
12 # language governing permissions and limitations under the License.
13
14
15 def add_command_descriptions(cmd_dict):
16 """
17 This function adds descritpions to the various commands along with
18 usage.
19 """
20 cmd_dict['cp']['description'] = "Copies a local file or S3 object to \
21 another location locally or in S3."
22 cmd_dict['cp']['usage'] = "<LocalPath> <S3Path> or <S3Path> <LocalPath> " \
23 "or <S3Path> <S3Path>"
24
25 cmd_dict['mv']['description'] = "Moves a local file or S3 object to " \
26 "another location locally or in S3."
27 cmd_dict['mv']['usage'] = "<LocalPath> <S3Path> or <S3Path> <LocalPath> " \
28 "or <S3Path> <S3Path>"
29
30 cmd_dict['rm']['description'] = "Deletes an S3 object."
31 cmd_dict['rm']['usage'] = "<S3Path>"
32
33 cmd_dict['sync']['description'] = "Syncs directories and S3 prefixes."
34 cmd_dict['sync']['usage'] = "<LocalPath> <S3Path> or <S3Path> " \
35 "<LocalPath> or <S3Path> <S3Path>"
36
37 cmd_dict['ls']['description'] = "List S3 objects and common prefixes " \
38 "under a prefix or all S3 buckets."
39 cmd_dict['ls']['usage'] = "<S3Path> or NONE"
40
41 cmd_dict['mb']['description'] = "Creates an S3 bucket."
42 cmd_dict['mb']['usage'] = "<S3Path>"
43
44 cmd_dict['rb']['description'] = "Deletes an S3 bucket."
45 cmd_dict['rb']['usage'] = "<S3Path>"
46
47
48 def add_param_descriptions(params_dict):
49 """
50 This function adds descriptions to the various parameters that can be
51 used in commands.
52 """
53 params_dict['dryrun']['documents'] = "Displays the operations that " \
54 "would be performed using the specified command without actually" \
55 "running them."
56
57 params_dict['quiet']['documents'] = "Does not display the operations " \
58 "performed from the specified command."
59
60 params_dict['recursive']['documents'] = "Command is performed on all" \
61 "files or objects under the specified directory or prefix."
62
63 params_dict['delete']['documents'] = "Files that exist in the " \
64 "destination but not in the source are deleted during sync."
65
66 params_dict['exclude']['documents'] = "Exclude all files or objects" \
67 " from the command that matches the specified pattern."
68
69 params_dict['include']['documents'] = "Don't exclude files or objects in " \
70 "the command that match the specified pattern"
71
72 params_dict['acl']['documents'] = "Sets the ACl for the object when the " \
73 "command is performed. Only accepts values of ``private``, \
74 ``public-read``, or ``public-read-write``."
75
76 params_dict['force']['documents'] = "Deletes all objects in the bucket " \
77 "including the bucket itself."
78
79 params_dict['no-guess-mime-type']['documents'] = (
80 "Do not try to guess the mime type for uploaded files. By default the "
81 "mime type of a file is guessed when it is uploaded.")
82
83 params_dict['content-type']['documents'] = (
84 "Specify an explicit content type for this operation. "
85 "This value overrides any guessed mime types.")
86
87 params_dict['cache-control']['documents'] = \
88 "Specifies caching behavior along the request/reply chain."
89
90 params_dict['content-disposition']['documents'] = \
91 "Specifies presentational information for the object."
92
93 params_dict['content-encoding']['documents'] = (
94 "Specifies what content encodings have been "
95 "applied to the object and thus what decoding mechanisms "
96 "must be applied to obtain the media-type referenced "
97 "by the Content-Type header field.")
98
99 params_dict['content-language']['documents'] = \
100 "The language the content is in."
101
102 params_dict['expires']['documents'] = \
103 "The date and time at which the object is no longer cacheable."
104
105 params_dict['sse']['documents'] = (
106 "Enable Server Side Encryption of the object in S3")
107
108 params_dict['storage-class']['documents'] = (
109 "The type of storage to use for the object. "
110 "Defaults to 'STANDARD'")
111
112 params_dict['website-redirect']['documents'] = (
113 "If the bucket is configured as a website, redirects requests "
114 "for this object to another object in the same bucket or to an "
115 "external URL. Amazon S3 stores the value of this header in the "
116 "object metadata.")
117
118 params_dict['grants']['documents'] = (
119 "Grant specific permissions to individual users or groups. "
120 "You can supply a list of grants of the form "
121 "``permission=grantee`` where permission is one of: "
122 "``read``, ``readacl``, ``writeacp``, ``full``")
123
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/awscli/customizations/s3/description.py b/awscli/customizations/s3/description.py
--- a/awscli/customizations/s3/description.py
+++ b/awscli/customizations/s3/description.py
@@ -107,6 +107,7 @@
params_dict['storage-class']['documents'] = (
"The type of storage to use for the object. "
+ "Valid choices are: STANDARD | REDUCED_REDUNDANCY. "
"Defaults to 'STANDARD'")
params_dict['website-redirect']['documents'] = (
|
{"golden_diff": "diff --git a/awscli/customizations/s3/description.py b/awscli/customizations/s3/description.py\n--- a/awscli/customizations/s3/description.py\n+++ b/awscli/customizations/s3/description.py\n@@ -107,6 +107,7 @@\n \n params_dict['storage-class']['documents'] = (\n \"The type of storage to use for the object. \"\n+ \"Valid choices are: STANDARD | REDUCED_REDUNDANCY. \"\n \"Defaults to 'STANDARD'\")\n \n params_dict['website-redirect']['documents'] = (\n", "issue": "documentation: Show reduced redundancy option in \"aws s3 cp help\"\nWhen a user types `aws s3 cp help` it describes this option:\n\n```\n--storage-class The type of storage to use for the object. Defaults to\n'STANDARD'\n```\n\nIt would be super-helpful to list the string the user should specify here if they want reduced redundancy storage (i.e., `'REDUCED_REDUNDANCY'`).\n\n", "before_files": [{"content": "# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\n\n\ndef add_command_descriptions(cmd_dict):\n \"\"\"\n This function adds descritpions to the various commands along with\n usage.\n \"\"\"\n cmd_dict['cp']['description'] = \"Copies a local file or S3 object to \\\n another location locally or in S3.\"\n cmd_dict['cp']['usage'] = \"<LocalPath> <S3Path> or <S3Path> <LocalPath> \" \\\n \"or <S3Path> <S3Path>\"\n\n cmd_dict['mv']['description'] = \"Moves a local file or S3 object to \" \\\n \"another location locally or in S3.\"\n cmd_dict['mv']['usage'] = \"<LocalPath> <S3Path> or <S3Path> <LocalPath> \" \\\n \"or <S3Path> <S3Path>\"\n\n cmd_dict['rm']['description'] = \"Deletes an S3 object.\"\n cmd_dict['rm']['usage'] = \"<S3Path>\"\n\n cmd_dict['sync']['description'] = \"Syncs directories and S3 prefixes.\"\n cmd_dict['sync']['usage'] = \"<LocalPath> <S3Path> or <S3Path> \" \\\n \"<LocalPath> or <S3Path> <S3Path>\"\n\n cmd_dict['ls']['description'] = \"List S3 objects and common prefixes \" \\\n \"under a prefix or all S3 buckets.\"\n cmd_dict['ls']['usage'] = \"<S3Path> or NONE\"\n\n cmd_dict['mb']['description'] = \"Creates an S3 bucket.\"\n cmd_dict['mb']['usage'] = \"<S3Path>\"\n\n cmd_dict['rb']['description'] = \"Deletes an S3 bucket.\"\n cmd_dict['rb']['usage'] = \"<S3Path>\"\n\n\ndef add_param_descriptions(params_dict):\n \"\"\"\n This function adds descriptions to the various parameters that can be\n used in commands.\n \"\"\"\n params_dict['dryrun']['documents'] = \"Displays the operations that \" \\\n \"would be performed using the specified command without actually\" \\\n \"running them.\"\n\n params_dict['quiet']['documents'] = \"Does not display the operations \" \\\n \"performed from the specified command.\"\n\n params_dict['recursive']['documents'] = \"Command is performed on all\" \\\n \"files or objects under the specified directory or prefix.\"\n\n params_dict['delete']['documents'] = \"Files that exist in the \" \\\n \"destination but not in the source are deleted during sync.\"\n\n params_dict['exclude']['documents'] = \"Exclude all files or objects\" \\\n \" from the 
command that matches the specified pattern.\"\n\n params_dict['include']['documents'] = \"Don't exclude files or objects in \" \\\n \"the command that match the specified pattern\"\n\n params_dict['acl']['documents'] = \"Sets the ACl for the object when the \" \\\n \"command is performed. Only accepts values of ``private``, \\\n ``public-read``, or ``public-read-write``.\"\n\n params_dict['force']['documents'] = \"Deletes all objects in the bucket \" \\\n \"including the bucket itself.\"\n\n params_dict['no-guess-mime-type']['documents'] = (\n \"Do not try to guess the mime type for uploaded files. By default the \"\n \"mime type of a file is guessed when it is uploaded.\")\n\n params_dict['content-type']['documents'] = (\n \"Specify an explicit content type for this operation. \"\n \"This value overrides any guessed mime types.\")\n\n params_dict['cache-control']['documents'] = \\\n \"Specifies caching behavior along the request/reply chain.\"\n\n params_dict['content-disposition']['documents'] = \\\n \"Specifies presentational information for the object.\"\n \n params_dict['content-encoding']['documents'] = (\n \"Specifies what content encodings have been \"\n \"applied to the object and thus what decoding mechanisms \"\n \"must be applied to obtain the media-type referenced \"\n \"by the Content-Type header field.\")\n \n params_dict['content-language']['documents'] = \\\n \"The language the content is in.\"\n\n params_dict['expires']['documents'] = \\\n \"The date and time at which the object is no longer cacheable.\"\n \n params_dict['sse']['documents'] = (\n \"Enable Server Side Encryption of the object in S3\")\n\n params_dict['storage-class']['documents'] = (\n \"The type of storage to use for the object. \"\n \"Defaults to 'STANDARD'\")\n\n params_dict['website-redirect']['documents'] = (\n \"If the bucket is configured as a website, redirects requests \"\n \"for this object to another object in the same bucket or to an \"\n \"external URL. Amazon S3 stores the value of this header in the \"\n \"object metadata.\")\n\n params_dict['grants']['documents'] = (\n \"Grant specific permissions to individual users or groups. \"\n \"You can supply a list of grants of the form \"\n \"``permission=grantee`` where permission is one of: \"\n \"``read``, ``readacl``, ``writeacp``, ``full``\")\n\n", "path": "awscli/customizations/s3/description.py"}], "after_files": [{"content": "# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. 
See the License for the specific\n# language governing permissions and limitations under the License.\n\n\ndef add_command_descriptions(cmd_dict):\n \"\"\"\n This function adds descritpions to the various commands along with\n usage.\n \"\"\"\n cmd_dict['cp']['description'] = \"Copies a local file or S3 object to \\\n another location locally or in S3.\"\n cmd_dict['cp']['usage'] = \"<LocalPath> <S3Path> or <S3Path> <LocalPath> \" \\\n \"or <S3Path> <S3Path>\"\n\n cmd_dict['mv']['description'] = \"Moves a local file or S3 object to \" \\\n \"another location locally or in S3.\"\n cmd_dict['mv']['usage'] = \"<LocalPath> <S3Path> or <S3Path> <LocalPath> \" \\\n \"or <S3Path> <S3Path>\"\n\n cmd_dict['rm']['description'] = \"Deletes an S3 object.\"\n cmd_dict['rm']['usage'] = \"<S3Path>\"\n\n cmd_dict['sync']['description'] = \"Syncs directories and S3 prefixes.\"\n cmd_dict['sync']['usage'] = \"<LocalPath> <S3Path> or <S3Path> \" \\\n \"<LocalPath> or <S3Path> <S3Path>\"\n\n cmd_dict['ls']['description'] = \"List S3 objects and common prefixes \" \\\n \"under a prefix or all S3 buckets.\"\n cmd_dict['ls']['usage'] = \"<S3Path> or NONE\"\n\n cmd_dict['mb']['description'] = \"Creates an S3 bucket.\"\n cmd_dict['mb']['usage'] = \"<S3Path>\"\n\n cmd_dict['rb']['description'] = \"Deletes an S3 bucket.\"\n cmd_dict['rb']['usage'] = \"<S3Path>\"\n\n\ndef add_param_descriptions(params_dict):\n \"\"\"\n This function adds descriptions to the various parameters that can be\n used in commands.\n \"\"\"\n params_dict['dryrun']['documents'] = \"Displays the operations that \" \\\n \"would be performed using the specified command without actually\" \\\n \"running them.\"\n\n params_dict['quiet']['documents'] = \"Does not display the operations \" \\\n \"performed from the specified command.\"\n\n params_dict['recursive']['documents'] = \"Command is performed on all\" \\\n \"files or objects under the specified directory or prefix.\"\n\n params_dict['delete']['documents'] = \"Files that exist in the \" \\\n \"destination but not in the source are deleted during sync.\"\n\n params_dict['exclude']['documents'] = \"Exclude all files or objects\" \\\n \" from the command that matches the specified pattern.\"\n\n params_dict['include']['documents'] = \"Don't exclude files or objects in \" \\\n \"the command that match the specified pattern\"\n\n params_dict['acl']['documents'] = \"Sets the ACl for the object when the \" \\\n \"command is performed. Only accepts values of ``private``, \\\n ``public-read``, or ``public-read-write``.\"\n\n params_dict['force']['documents'] = \"Deletes all objects in the bucket \" \\\n \"including the bucket itself.\"\n\n params_dict['no-guess-mime-type']['documents'] = (\n \"Do not try to guess the mime type for uploaded files. By default the \"\n \"mime type of a file is guessed when it is uploaded.\")\n\n params_dict['content-type']['documents'] = (\n \"Specify an explicit content type for this operation. 
\"\n \"This value overrides any guessed mime types.\")\n\n params_dict['cache-control']['documents'] = \\\n \"Specifies caching behavior along the request/reply chain.\"\n\n params_dict['content-disposition']['documents'] = \\\n \"Specifies presentational information for the object.\"\n \n params_dict['content-encoding']['documents'] = (\n \"Specifies what content encodings have been \"\n \"applied to the object and thus what decoding mechanisms \"\n \"must be applied to obtain the media-type referenced \"\n \"by the Content-Type header field.\")\n \n params_dict['content-language']['documents'] = \\\n \"The language the content is in.\"\n\n params_dict['expires']['documents'] = \\\n \"The date and time at which the object is no longer cacheable.\"\n \n params_dict['sse']['documents'] = (\n \"Enable Server Side Encryption of the object in S3\")\n\n params_dict['storage-class']['documents'] = (\n \"The type of storage to use for the object. \"\n \"Valid choices are: STANDARD | REDUCED_REDUNDANCY. \"\n \"Defaults to 'STANDARD'\")\n\n params_dict['website-redirect']['documents'] = (\n \"If the bucket is configured as a website, redirects requests \"\n \"for this object to another object in the same bucket or to an \"\n \"external URL. Amazon S3 stores the value of this header in the \"\n \"object metadata.\")\n\n params_dict['grants']['documents'] = (\n \"Grant specific permissions to individual users or groups. \"\n \"You can supply a list of grants of the form \"\n \"``permission=grantee`` where permission is one of: \"\n \"``read``, ``readacl``, ``writeacp``, ``full``\")\n\n", "path": "awscli/customizations/s3/description.py"}]}
| 1,855 | 126 |
gh_patches_debug_11395
|
rasdani/github-patches
|
git_diff
|
sopel-irc__sopel-958
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AttributeError: 'NoneType' object has no attribute 'strip' (file "/usr/local/lib/python2.7/dist-packages/sopel/modules/unicode_info.py", line 23, in codepoint)
[01:04am] <Ant> .u
01:04AM <Sopel> AttributeError: 'NoneType' object has no attribute 'strip' (file "/usr/local/lib/python2.7/dist-packages/sopel/modules/unicode_info.py", line 23, in codepoint)
01:04AM <Sopel> Ant: Sopel v. 6.1.1
This is in my Debian oldstable with Python v2.7.3. :(
AttributeError: 'NoneType' object has no attribute 'strip' (file "/usr/local/lib/python2.7/dist-packages/sopel/modules/unicode_info.py", line 23, in codepoint)
[01:04am] <Ant> .u
01:04AM <Sopel> AttributeError: 'NoneType' object has no attribute 'strip' (file "/usr/local/lib/python2.7/dist-packages/sopel/modules/unicode_info.py", line 23, in codepoint)
01:04AM <Sopel> Ant: Sopel v. 6.1.1
This is in my Debian oldstable with Python v2.7.3. :(
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sopel/modules/unicode_info.py`
Content:
```
1 # coding=utf-8
2 """Codepoints Module"""
3 # Copyright 2013, Elsie Powell, embolalia.com
4 # Copyright 2008, Sean B. Palmer, inamidst.com
5 # Licensed under the Eiffel Forum License 2.
6 from __future__ import unicode_literals, absolute_import, print_function, division
7 import unicodedata
8 import sys
9 from sopel.module import commands, example, NOLIMIT
10
11 if sys.version_info.major >= 3:
12 unichr = chr
13
14
15 @commands('u')
16 @example('.u ‽', 'U+203D INTERROBANG (‽)')
17 @example('.u 203D', 'U+203D INTERROBANG (‽)')
18 def codepoint(bot, trigger):
19 arg = trigger.group(2).strip()
20 if len(arg) == 0:
21 bot.reply('What code point do you want me to look up?')
22 return NOLIMIT
23 elif len(arg) > 1:
24 if arg.startswith('U+'):
25 arg = arg[2:]
26 try:
27 arg = unichr(int(arg, 16))
28 except:
29 bot.reply("That's not a valid code point.")
30 return NOLIMIT
31
32 # Get the hex value for the code point, and drop the 0x from the front
33 point = str(hex(ord(u'' + arg)))[2:]
34 # Make the hex 4 characters long with preceding 0s, and all upper case
35 point = point.rjust(4, str('0')).upper()
36 try:
37 name = unicodedata.name(arg)
38 except ValueError:
39 return 'U+%s (No name found)' % point
40
41 if not unicodedata.combining(arg):
42 template = 'U+%s %s (%s)'
43 else:
44 template = 'U+%s %s (\xe2\x97\x8c%s)'
45 bot.say(template % (point, name, arg))
46
47 if __name__ == "__main__":
48 from sopel.test_tools import run_example_tests
49 run_example_tests(__file__)
50
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sopel/modules/unicode_info.py b/sopel/modules/unicode_info.py
--- a/sopel/modules/unicode_info.py
+++ b/sopel/modules/unicode_info.py
@@ -16,11 +16,14 @@
@example('.u ‽', 'U+203D INTERROBANG (‽)')
@example('.u 203D', 'U+203D INTERROBANG (‽)')
def codepoint(bot, trigger):
- arg = trigger.group(2).strip()
- if len(arg) == 0:
+ arg = trigger.group(2)
+ if not arg:
bot.reply('What code point do you want me to look up?')
return NOLIMIT
- elif len(arg) > 1:
+ stripped = arg.strip()
+ if len(stripped) > 0:
+ arg = stripped
+ if len(arg) > 1:
if arg.startswith('U+'):
arg = arg[2:]
try:
|
{"golden_diff": "diff --git a/sopel/modules/unicode_info.py b/sopel/modules/unicode_info.py\n--- a/sopel/modules/unicode_info.py\n+++ b/sopel/modules/unicode_info.py\n@@ -16,11 +16,14 @@\n @example('.u \u203d', 'U+203D INTERROBANG (\u203d)')\n @example('.u 203D', 'U+203D INTERROBANG (\u203d)')\n def codepoint(bot, trigger):\n- arg = trigger.group(2).strip()\n- if len(arg) == 0:\n+ arg = trigger.group(2)\n+ if not arg:\n bot.reply('What code point do you want me to look up?')\n return NOLIMIT\n- elif len(arg) > 1:\n+ stripped = arg.strip()\n+ if len(stripped) > 0:\n+ arg = stripped\n+ if len(arg) > 1:\n if arg.startswith('U+'):\n arg = arg[2:]\n try:\n", "issue": "AttributeError: 'NoneType' object has no attribute 'strip' (file \"/usr/local/lib/python2.7/dist-packages/sopel/modules/unicode_info.py\", line 23, in codepoint)\n[01:04am] <Ant> .u\n01:04AM <Sopel> AttributeError: 'NoneType' object has no attribute 'strip' (file \"/usr/local/lib/python2.7/dist-packages/sopel/modules/unicode_info.py\", line 23, in codepoint)\n01:04AM <Sopel> Ant: Sopel v. 6.1.1\n\nThis is in my Debian oldstable with Python v2.7.3. :(\n\nAttributeError: 'NoneType' object has no attribute 'strip' (file \"/usr/local/lib/python2.7/dist-packages/sopel/modules/unicode_info.py\", line 23, in codepoint)\n[01:04am] <Ant> .u\n01:04AM <Sopel> AttributeError: 'NoneType' object has no attribute 'strip' (file \"/usr/local/lib/python2.7/dist-packages/sopel/modules/unicode_info.py\", line 23, in codepoint)\n01:04AM <Sopel> Ant: Sopel v. 6.1.1\n\nThis is in my Debian oldstable with Python v2.7.3. :(\n\n", "before_files": [{"content": "# coding=utf-8\n\"\"\"Codepoints Module\"\"\"\n# Copyright 2013, Elsie Powell, embolalia.com\n# Copyright 2008, Sean B. Palmer, inamidst.com\n# Licensed under the Eiffel Forum License 2.\nfrom __future__ import unicode_literals, absolute_import, print_function, division\nimport unicodedata\nimport sys\nfrom sopel.module import commands, example, NOLIMIT\n\nif sys.version_info.major >= 3:\n unichr = chr\n\n\n@commands('u')\n@example('.u \u203d', 'U+203D INTERROBANG (\u203d)')\n@example('.u 203D', 'U+203D INTERROBANG (\u203d)')\ndef codepoint(bot, trigger):\n arg = trigger.group(2).strip()\n if len(arg) == 0:\n bot.reply('What code point do you want me to look up?')\n return NOLIMIT\n elif len(arg) > 1:\n if arg.startswith('U+'):\n arg = arg[2:]\n try:\n arg = unichr(int(arg, 16))\n except:\n bot.reply(\"That's not a valid code point.\")\n return NOLIMIT\n\n # Get the hex value for the code point, and drop the 0x from the front\n point = str(hex(ord(u'' + arg)))[2:]\n # Make the hex 4 characters long with preceding 0s, and all upper case\n point = point.rjust(4, str('0')).upper()\n try:\n name = unicodedata.name(arg)\n except ValueError:\n return 'U+%s (No name found)' % point\n\n if not unicodedata.combining(arg):\n template = 'U+%s %s (%s)'\n else:\n template = 'U+%s %s (\\xe2\\x97\\x8c%s)'\n bot.say(template % (point, name, arg))\n\nif __name__ == \"__main__\":\n from sopel.test_tools import run_example_tests\n run_example_tests(__file__)\n", "path": "sopel/modules/unicode_info.py"}], "after_files": [{"content": "# coding=utf-8\n\"\"\"Codepoints Module\"\"\"\n# Copyright 2013, Elsie Powell, embolalia.com\n# Copyright 2008, Sean B. 
Palmer, inamidst.com\n# Licensed under the Eiffel Forum License 2.\nfrom __future__ import unicode_literals, absolute_import, print_function, division\nimport unicodedata\nimport sys\nfrom sopel.module import commands, example, NOLIMIT\n\nif sys.version_info.major >= 3:\n unichr = chr\n\n\n@commands('u')\n@example('.u \u203d', 'U+203D INTERROBANG (\u203d)')\n@example('.u 203D', 'U+203D INTERROBANG (\u203d)')\ndef codepoint(bot, trigger):\n arg = trigger.group(2)\n if not arg:\n bot.reply('What code point do you want me to look up?')\n return NOLIMIT\n stripped = arg.strip()\n if len(stripped) > 0:\n arg = stripped\n if len(arg) > 1:\n if arg.startswith('U+'):\n arg = arg[2:]\n try:\n arg = unichr(int(arg, 16))\n except:\n bot.reply(\"That's not a valid code point.\")\n return NOLIMIT\n\n # Get the hex value for the code point, and drop the 0x from the front\n point = str(hex(ord(u'' + arg)))[2:]\n # Make the hex 4 characters long with preceding 0s, and all upper case\n point = point.rjust(4, str('0')).upper()\n try:\n name = unicodedata.name(arg)\n except ValueError:\n return 'U+%s (No name found)' % point\n\n if not unicodedata.combining(arg):\n template = 'U+%s %s (%s)'\n else:\n template = 'U+%s %s (\\xe2\\x97\\x8c%s)'\n bot.say(template % (point, name, arg))\n\nif __name__ == \"__main__\":\n from sopel.test_tools import run_example_tests\n run_example_tests(__file__)\n", "path": "sopel/modules/unicode_info.py"}]}
| 1,124 | 231 |
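The Sopel fix above reduces to one guard: distinguish a missing argument (`trigger.group(2)` returning None) from a whitespace-only one before stripping. Below is a standalone sketch of that guard; `parse_codepoint_arg` is a made-up name used only for illustration:

```python
# Illustrative sketch of the None-safe argument guard from the fix above.
def parse_codepoint_arg(raw):
    # Sopel's trigger.group(2) is None when the command gets no argument.
    if not raw:
        return None  # caller replies asking which code point to look up
    stripped = raw.strip()
    # Keep the original text when it is pure whitespace: a space character
    # is itself a legitimate code point to look up.
    return stripped if stripped else raw
```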
gh_patches_debug_38450
|
rasdani/github-patches
|
git_diff
|
searx__searx-1452
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Findx is shutting down
https://privacore.github.io/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `searx/engines/findx.py`
Content:
```
1 """
2 FindX (General, Images, Videos)
3
4 @website https://www.findx.com
5 @provide-api no
6 @using-api no
7 @results HTML
8 @stable no
9 @parse url, title, content, embedded, img_src, thumbnail_src
10 """
11
12 from dateutil import parser
13 from json import loads
14 import re
15
16 from lxml import html
17
18 from searx import logger
19 from searx.engines.xpath import extract_text
20 from searx.engines.youtube_noapi import base_youtube_url, embedded_url
21 from searx.url_utils import urlencode
22
23
24 paging = True
25 results_xpath = '//script[@id="initial-state"]'
26 search_url = 'https://www.findx.com/{category}?{q}'
27 type_map = {
28 'none': 'web',
29 'general': 'web',
30 'images': 'images',
31 'videos': 'videos',
32 }
33
34
35 def request(query, params):
36 params['url'] = search_url.format(
37 category=type_map[params['category']],
38 q=urlencode({
39 'q': query,
40 'page': params['pageno']
41 })
42 )
43 return params
44
45
46 def response(resp):
47 dom = html.fromstring(resp.text)
48 results_raw_json = dom.xpath(results_xpath)
49 results_json = loads(extract_text(results_raw_json))
50
51 if len(results_json['web']['results']) > 0:
52 return _general_results(results_json['web']['results']['webSearch']['results'])
53
54 if len(results_json['images']['results']) > 0:
55 return _images_results(results_json['images']['results'])
56
57 if len(results_json['video']['results']) > 0:
58 return _videos_results(results_json['video']['results'])
59
60 return []
61
62
63 def _general_results(general_results):
64 results = []
65 for result in general_results:
66 results.append({
67 'url': result['url'],
68 'title': result['title'],
69 'content': result['sum'],
70 })
71 return results
72
73
74 def _images_results(image_results):
75 results = []
76 for result in image_results:
77 results.append({
78 'url': result['sourceURL'],
79 'title': result['title'],
80 'content': result['source'],
81 'thumbnail_src': _extract_url(result['assets']['thumb']['url']),
82 'img_src': _extract_url(result['assets']['file']['url']),
83 'template': 'images.html',
84 })
85 return results
86
87
88 def _videos_results(video_results):
89 results = []
90 for result in video_results:
91 if not result['kind'].startswith('youtube'):
92 logger.warn('Unknown video kind in findx: {}'.format(result['kind']))
93 continue
94
95 description = result['snippet']['description']
96 if len(description) > 300:
97 description = description[:300] + '...'
98
99 results.append({
100 'url': base_youtube_url + result['id'],
101 'title': result['snippet']['title'],
102 'content': description,
103 'thumbnail': _extract_url(result['snippet']['thumbnails']['default']['url']),
104 'publishedDate': parser.parse(result['snippet']['publishedAt']),
105 'embedded': embedded_url.format(videoid=result['id']),
106 'template': 'videos.html',
107 })
108 return results
109
110
111 def _extract_url(url):
112 matching = re.search('(/https?://[^)]+)', url)
113 if matching:
114 return matching.group(0)[1:]
115 return ''
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/searx/engines/findx.py b/searx/engines/findx.py
deleted file mode 100644
--- a/searx/engines/findx.py
+++ /dev/null
@@ -1,115 +0,0 @@
-"""
-FindX (General, Images, Videos)
-
-@website https://www.findx.com
-@provide-api no
-@using-api no
-@results HTML
-@stable no
-@parse url, title, content, embedded, img_src, thumbnail_src
-"""
-
-from dateutil import parser
-from json import loads
-import re
-
-from lxml import html
-
-from searx import logger
-from searx.engines.xpath import extract_text
-from searx.engines.youtube_noapi import base_youtube_url, embedded_url
-from searx.url_utils import urlencode
-
-
-paging = True
-results_xpath = '//script[@id="initial-state"]'
-search_url = 'https://www.findx.com/{category}?{q}'
-type_map = {
- 'none': 'web',
- 'general': 'web',
- 'images': 'images',
- 'videos': 'videos',
-}
-
-
-def request(query, params):
- params['url'] = search_url.format(
- category=type_map[params['category']],
- q=urlencode({
- 'q': query,
- 'page': params['pageno']
- })
- )
- return params
-
-
-def response(resp):
- dom = html.fromstring(resp.text)
- results_raw_json = dom.xpath(results_xpath)
- results_json = loads(extract_text(results_raw_json))
-
- if len(results_json['web']['results']) > 0:
- return _general_results(results_json['web']['results']['webSearch']['results'])
-
- if len(results_json['images']['results']) > 0:
- return _images_results(results_json['images']['results'])
-
- if len(results_json['video']['results']) > 0:
- return _videos_results(results_json['video']['results'])
-
- return []
-
-
-def _general_results(general_results):
- results = []
- for result in general_results:
- results.append({
- 'url': result['url'],
- 'title': result['title'],
- 'content': result['sum'],
- })
- return results
-
-
-def _images_results(image_results):
- results = []
- for result in image_results:
- results.append({
- 'url': result['sourceURL'],
- 'title': result['title'],
- 'content': result['source'],
- 'thumbnail_src': _extract_url(result['assets']['thumb']['url']),
- 'img_src': _extract_url(result['assets']['file']['url']),
- 'template': 'images.html',
- })
- return results
-
-
-def _videos_results(video_results):
- results = []
- for result in video_results:
- if not result['kind'].startswith('youtube'):
- logger.warn('Unknown video kind in findx: {}'.format(result['kind']))
- continue
-
- description = result['snippet']['description']
- if len(description) > 300:
- description = description[:300] + '...'
-
- results.append({
- 'url': base_youtube_url + result['id'],
- 'title': result['snippet']['title'],
- 'content': description,
- 'thumbnail': _extract_url(result['snippet']['thumbnails']['default']['url']),
- 'publishedDate': parser.parse(result['snippet']['publishedAt']),
- 'embedded': embedded_url.format(videoid=result['id']),
- 'template': 'videos.html',
- })
- return results
-
-
-def _extract_url(url):
- matching = re.search('(/https?://[^)]+)', url)
- if matching:
- return matching.group(0)[1:]
- return ''
|
{"golden_diff": "diff --git a/searx/engines/findx.py b/searx/engines/findx.py\ndeleted file mode 100644\n--- a/searx/engines/findx.py\n+++ /dev/null\n@@ -1,115 +0,0 @@\n-\"\"\"\n-FindX (General, Images, Videos)\n-\n-@website https://www.findx.com\n-@provide-api no\n-@using-api no\n-@results HTML\n-@stable no\n-@parse url, title, content, embedded, img_src, thumbnail_src\n-\"\"\"\n-\n-from dateutil import parser\n-from json import loads\n-import re\n-\n-from lxml import html\n-\n-from searx import logger\n-from searx.engines.xpath import extract_text\n-from searx.engines.youtube_noapi import base_youtube_url, embedded_url\n-from searx.url_utils import urlencode\n-\n-\n-paging = True\n-results_xpath = '//script[@id=\"initial-state\"]'\n-search_url = 'https://www.findx.com/{category}?{q}'\n-type_map = {\n- 'none': 'web',\n- 'general': 'web',\n- 'images': 'images',\n- 'videos': 'videos',\n-}\n-\n-\n-def request(query, params):\n- params['url'] = search_url.format(\n- category=type_map[params['category']],\n- q=urlencode({\n- 'q': query,\n- 'page': params['pageno']\n- })\n- )\n- return params\n-\n-\n-def response(resp):\n- dom = html.fromstring(resp.text)\n- results_raw_json = dom.xpath(results_xpath)\n- results_json = loads(extract_text(results_raw_json))\n-\n- if len(results_json['web']['results']) > 0:\n- return _general_results(results_json['web']['results']['webSearch']['results'])\n-\n- if len(results_json['images']['results']) > 0:\n- return _images_results(results_json['images']['results'])\n-\n- if len(results_json['video']['results']) > 0:\n- return _videos_results(results_json['video']['results'])\n-\n- return []\n-\n-\n-def _general_results(general_results):\n- results = []\n- for result in general_results:\n- results.append({\n- 'url': result['url'],\n- 'title': result['title'],\n- 'content': result['sum'],\n- })\n- return results\n-\n-\n-def _images_results(image_results):\n- results = []\n- for result in image_results:\n- results.append({\n- 'url': result['sourceURL'],\n- 'title': result['title'],\n- 'content': result['source'],\n- 'thumbnail_src': _extract_url(result['assets']['thumb']['url']),\n- 'img_src': _extract_url(result['assets']['file']['url']),\n- 'template': 'images.html',\n- })\n- return results\n-\n-\n-def _videos_results(video_results):\n- results = []\n- for result in video_results:\n- if not result['kind'].startswith('youtube'):\n- logger.warn('Unknown video kind in findx: {}'.format(result['kind']))\n- continue\n-\n- description = result['snippet']['description']\n- if len(description) > 300:\n- description = description[:300] + '...'\n-\n- results.append({\n- 'url': base_youtube_url + result['id'],\n- 'title': result['snippet']['title'],\n- 'content': description,\n- 'thumbnail': _extract_url(result['snippet']['thumbnails']['default']['url']),\n- 'publishedDate': parser.parse(result['snippet']['publishedAt']),\n- 'embedded': embedded_url.format(videoid=result['id']),\n- 'template': 'videos.html',\n- })\n- return results\n-\n-\n-def _extract_url(url):\n- matching = re.search('(/https?://[^)]+)', url)\n- if matching:\n- return matching.group(0)[1:]\n- return ''\n", "issue": "Findx is shutting down\nhttps://privacore.github.io/\n", "before_files": [{"content": "\"\"\"\nFindX (General, Images, Videos)\n\n@website https://www.findx.com\n@provide-api no\n@using-api no\n@results HTML\n@stable no\n@parse url, title, content, embedded, img_src, thumbnail_src\n\"\"\"\n\nfrom dateutil import parser\nfrom json import loads\nimport re\n\nfrom lxml import html\n\nfrom searx 
import logger\nfrom searx.engines.xpath import extract_text\nfrom searx.engines.youtube_noapi import base_youtube_url, embedded_url\nfrom searx.url_utils import urlencode\n\n\npaging = True\nresults_xpath = '//script[@id=\"initial-state\"]'\nsearch_url = 'https://www.findx.com/{category}?{q}'\ntype_map = {\n 'none': 'web',\n 'general': 'web',\n 'images': 'images',\n 'videos': 'videos',\n}\n\n\ndef request(query, params):\n params['url'] = search_url.format(\n category=type_map[params['category']],\n q=urlencode({\n 'q': query,\n 'page': params['pageno']\n })\n )\n return params\n\n\ndef response(resp):\n dom = html.fromstring(resp.text)\n results_raw_json = dom.xpath(results_xpath)\n results_json = loads(extract_text(results_raw_json))\n\n if len(results_json['web']['results']) > 0:\n return _general_results(results_json['web']['results']['webSearch']['results'])\n\n if len(results_json['images']['results']) > 0:\n return _images_results(results_json['images']['results'])\n\n if len(results_json['video']['results']) > 0:\n return _videos_results(results_json['video']['results'])\n\n return []\n\n\ndef _general_results(general_results):\n results = []\n for result in general_results:\n results.append({\n 'url': result['url'],\n 'title': result['title'],\n 'content': result['sum'],\n })\n return results\n\n\ndef _images_results(image_results):\n results = []\n for result in image_results:\n results.append({\n 'url': result['sourceURL'],\n 'title': result['title'],\n 'content': result['source'],\n 'thumbnail_src': _extract_url(result['assets']['thumb']['url']),\n 'img_src': _extract_url(result['assets']['file']['url']),\n 'template': 'images.html',\n })\n return results\n\n\ndef _videos_results(video_results):\n results = []\n for result in video_results:\n if not result['kind'].startswith('youtube'):\n logger.warn('Unknown video kind in findx: {}'.format(result['kind']))\n continue\n\n description = result['snippet']['description']\n if len(description) > 300:\n description = description[:300] + '...'\n\n results.append({\n 'url': base_youtube_url + result['id'],\n 'title': result['snippet']['title'],\n 'content': description,\n 'thumbnail': _extract_url(result['snippet']['thumbnails']['default']['url']),\n 'publishedDate': parser.parse(result['snippet']['publishedAt']),\n 'embedded': embedded_url.format(videoid=result['id']),\n 'template': 'videos.html',\n })\n return results\n\n\ndef _extract_url(url):\n matching = re.search('(/https?://[^)]+)', url)\n if matching:\n return matching.group(0)[1:]\n return ''\n", "path": "searx/engines/findx.py"}], "after_files": [{"content": null, "path": "searx/engines/findx.py"}]}
| 1,261 | 882 |
gh_patches_debug_16627
|
rasdani/github-patches
|
git_diff
|
kserve__kserve-3551
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support .json and .ubj model format for XGBoost server image
/kind feature
**Description**
In the XGBoost image, the only supported model format is .bst: https://github.com/kserve/kserve/blob/56b8fe0d189fc0d557e9a8af07eab0c12852d5fd/python/xgbserver/xgbserver/model.py#L28
This format has been deprecated for a while and is not backwards compatible between xgboost framework versions. The recommended model format is .json or .ubj: https://xgboost.readthedocs.io/en/stable/tutorials/saving_model.html
Users that want to use the recommended model format for XGBoost models, are currently not able to do so.
**Proposed solution**
Support the recommended file formats, while also keeping support for the old .bst format.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/xgbserver/xgbserver/model.py`
Content:
```
1 # Copyright 2021 The KServe Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15
16 import os
17 from typing import Dict, Union
18
19 import xgboost as xgb
20 from kserve.errors import InferenceError, ModelMissingError
21 from kserve.protocol.infer_type import InferRequest, InferResponse
22 from kserve.utils.utils import get_predict_input, get_predict_response
23 from xgboost import XGBModel
24
25 from kserve import Model
26 from kserve.storage import Storage
27
28 BOOSTER_FILE_EXTENSION = ".bst"
29
30
31 class XGBoostModel(Model):
32 def __init__(
33 self, name: str, model_dir: str, nthread: int, booster: XGBModel = None
34 ):
35 super().__init__(name)
36 self.name = name
37 self.model_dir = model_dir
38 self.nthread = nthread
39 if booster is not None:
40 self._booster = booster
41 self.ready = True
42
43 def load(self) -> bool:
44 model_path = Storage.download(self.model_dir)
45 model_files = []
46 for file in os.listdir(model_path):
47 file_path = os.path.join(model_path, file)
48 if os.path.isfile(file_path) and file.endswith(BOOSTER_FILE_EXTENSION):
49 model_files.append(file_path)
50 if len(model_files) == 0:
51 raise ModelMissingError(model_path)
52 elif len(model_files) > 1:
53 raise RuntimeError(
54 "More than one model file is detected, "
55 f"Only one is allowed within model_dir: {model_files}"
56 )
57
58 self._booster = xgb.Booster(
59 params={"nthread": self.nthread}, model_file=model_files[0]
60 )
61 self.ready = True
62 return self.ready
63
64 def predict(
65 self, payload: Union[Dict, InferRequest], headers: Dict[str, str] = None
66 ) -> Union[Dict, InferResponse]:
67 try:
68 # Use of list as input is deprecated see https://github.com/dmlc/xgboost/pull/3970
69 instances = get_predict_input(payload)
70 dmatrix = xgb.DMatrix(instances, nthread=self.nthread)
71 result = self._booster.predict(dmatrix)
72 return get_predict_response(payload, result, self.name)
73 except Exception as e:
74 raise InferenceError(str(e))
75
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/python/xgbserver/xgbserver/model.py b/python/xgbserver/xgbserver/model.py
--- a/python/xgbserver/xgbserver/model.py
+++ b/python/xgbserver/xgbserver/model.py
@@ -25,7 +25,7 @@
from kserve import Model
from kserve.storage import Storage
-BOOSTER_FILE_EXTENSION = ".bst"
+BOOSTER_FILE_EXTENSIONS = (".bst", ".json", ".ubj")
class XGBoostModel(Model):
@@ -45,7 +45,7 @@
model_files = []
for file in os.listdir(model_path):
file_path = os.path.join(model_path, file)
- if os.path.isfile(file_path) and file.endswith(BOOSTER_FILE_EXTENSION):
+ if os.path.isfile(file_path) and file.endswith(BOOSTER_FILE_EXTENSIONS):
model_files.append(file_path)
if len(model_files) == 0:
raise ModelMissingError(model_path)
|
{"golden_diff": "diff --git a/python/xgbserver/xgbserver/model.py b/python/xgbserver/xgbserver/model.py\n--- a/python/xgbserver/xgbserver/model.py\n+++ b/python/xgbserver/xgbserver/model.py\n@@ -25,7 +25,7 @@\n from kserve import Model\n from kserve.storage import Storage\n \n-BOOSTER_FILE_EXTENSION = \".bst\"\n+BOOSTER_FILE_EXTENSIONS = (\".bst\", \".json\", \".ubj\")\n \n \n class XGBoostModel(Model):\n@@ -45,7 +45,7 @@\n model_files = []\n for file in os.listdir(model_path):\n file_path = os.path.join(model_path, file)\n- if os.path.isfile(file_path) and file.endswith(BOOSTER_FILE_EXTENSION):\n+ if os.path.isfile(file_path) and file.endswith(BOOSTER_FILE_EXTENSIONS):\n model_files.append(file_path)\n if len(model_files) == 0:\n raise ModelMissingError(model_path)\n", "issue": "Support .json and .ubj model format for XGBoost server image\n/kind feature\r\n\r\n\r\n**Description**\r\nIn the XGBoost image, the only supported model format is .bst: https://github.com/kserve/kserve/blob/56b8fe0d189fc0d557e9a8af07eab0c12852d5fd/python/xgbserver/xgbserver/model.py#L28\r\n\r\nThis format has been deprecated for a while and is not backwards compatible between xgboost framework versions. The recommended model format is .json or .ubj: https://xgboost.readthedocs.io/en/stable/tutorials/saving_model.html\r\n\r\nUsers that want to use the recommended model format for XGBoost models, are currently not able to do so.\r\n\r\n\r\n**Proposed solution**\r\nSupport the recommended file formats, while also keeping support for the old .bst format. \r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright 2021 The KServe Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nimport os\nfrom typing import Dict, Union\n\nimport xgboost as xgb\nfrom kserve.errors import InferenceError, ModelMissingError\nfrom kserve.protocol.infer_type import InferRequest, InferResponse\nfrom kserve.utils.utils import get_predict_input, get_predict_response\nfrom xgboost import XGBModel\n\nfrom kserve import Model\nfrom kserve.storage import Storage\n\nBOOSTER_FILE_EXTENSION = \".bst\"\n\n\nclass XGBoostModel(Model):\n def __init__(\n self, name: str, model_dir: str, nthread: int, booster: XGBModel = None\n ):\n super().__init__(name)\n self.name = name\n self.model_dir = model_dir\n self.nthread = nthread\n if booster is not None:\n self._booster = booster\n self.ready = True\n\n def load(self) -> bool:\n model_path = Storage.download(self.model_dir)\n model_files = []\n for file in os.listdir(model_path):\n file_path = os.path.join(model_path, file)\n if os.path.isfile(file_path) and file.endswith(BOOSTER_FILE_EXTENSION):\n model_files.append(file_path)\n if len(model_files) == 0:\n raise ModelMissingError(model_path)\n elif len(model_files) > 1:\n raise RuntimeError(\n \"More than one model file is detected, \"\n f\"Only one is allowed within model_dir: {model_files}\"\n )\n\n self._booster = xgb.Booster(\n params={\"nthread\": self.nthread}, model_file=model_files[0]\n )\n self.ready = True\n return 
self.ready\n\n def predict(\n self, payload: Union[Dict, InferRequest], headers: Dict[str, str] = None\n ) -> Union[Dict, InferResponse]:\n try:\n # Use of list as input is deprecated see https://github.com/dmlc/xgboost/pull/3970\n instances = get_predict_input(payload)\n dmatrix = xgb.DMatrix(instances, nthread=self.nthread)\n result = self._booster.predict(dmatrix)\n return get_predict_response(payload, result, self.name)\n except Exception as e:\n raise InferenceError(str(e))\n", "path": "python/xgbserver/xgbserver/model.py"}], "after_files": [{"content": "# Copyright 2021 The KServe Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nimport os\nfrom typing import Dict, Union\n\nimport xgboost as xgb\nfrom kserve.errors import InferenceError, ModelMissingError\nfrom kserve.protocol.infer_type import InferRequest, InferResponse\nfrom kserve.utils.utils import get_predict_input, get_predict_response\nfrom xgboost import XGBModel\n\nfrom kserve import Model\nfrom kserve.storage import Storage\n\nBOOSTER_FILE_EXTENSIONS = (\".bst\", \".json\", \".ubj\")\n\n\nclass XGBoostModel(Model):\n def __init__(\n self, name: str, model_dir: str, nthread: int, booster: XGBModel = None\n ):\n super().__init__(name)\n self.name = name\n self.model_dir = model_dir\n self.nthread = nthread\n if booster is not None:\n self._booster = booster\n self.ready = True\n\n def load(self) -> bool:\n model_path = Storage.download(self.model_dir)\n model_files = []\n for file in os.listdir(model_path):\n file_path = os.path.join(model_path, file)\n if os.path.isfile(file_path) and file.endswith(BOOSTER_FILE_EXTENSIONS):\n model_files.append(file_path)\n if len(model_files) == 0:\n raise ModelMissingError(model_path)\n elif len(model_files) > 1:\n raise RuntimeError(\n \"More than one model file is detected, \"\n f\"Only one is allowed within model_dir: {model_files}\"\n )\n\n self._booster = xgb.Booster(\n params={\"nthread\": self.nthread}, model_file=model_files[0]\n )\n self.ready = True\n return self.ready\n\n def predict(\n self, payload: Union[Dict, InferRequest], headers: Dict[str, str] = None\n ) -> Union[Dict, InferResponse]:\n try:\n # Use of list as input is deprecated see https://github.com/dmlc/xgboost/pull/3970\n instances = get_predict_input(payload)\n dmatrix = xgb.DMatrix(instances, nthread=self.nthread)\n result = self._booster.predict(dmatrix)\n return get_predict_response(payload, result, self.name)\n except Exception as e:\n raise InferenceError(str(e))\n", "path": "python/xgbserver/xgbserver/model.py"}]}
| 1,222 | 206 |
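The KServe patch above widens a single `.bst` suffix check to a tuple of allowed extensions. Below is a self-contained sketch of that directory scan, relying only on the fact that `str.endswith` accepts a tuple of suffixes; no XGBoost loading is shown:

```python
# Illustrative sketch: collect model files matching any allowed extension.
import os

BOOSTER_FILE_EXTENSIONS = (".bst", ".json", ".ubj")


def find_model_files(model_dir: str) -> list[str]:
    # str.endswith accepts a tuple, so one call covers every allowed suffix.
    return [
        os.path.join(model_dir, name)
        for name in sorted(os.listdir(model_dir))
        if os.path.isfile(os.path.join(model_dir, name))
        and name.endswith(BOOSTER_FILE_EXTENSIONS)
    ]
```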
gh_patches_debug_47979
|
rasdani/github-patches
|
git_diff
|
TheAlgorithms__Python-10664
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve our test coverage
### Feature description
Many of our existing algorithm files have little to no unit testing. This is problematic because this can easily let bugs slip through. We want some assurance that the code we currently have is correct and functional. We welcome all contributors to open PRs to help us add tests to our codebase.
### How to find low-coverage files
Go to the Actions tab in this repository and find the most recent **build** workflow run. Open the logs under "Run Tests" and scroll down until you find the section on code coverage:
```
---------- coverage: platform linux, python 3.12.0-final-0 -----------
Name Stmts Miss Cover Missing
-----------------------------------------------------------------------------------------------------------
quantum/q_fourier_transform.py 30 30 0% 14-93
scripts/validate_solutions.py 54 54 0% 2-94
strings/min_cost_string_conversion.py 78 75 4% 20-57, 61-75, 79-129
...
```
The "Cover" column tells you what percentage of the lines in that file are covered by tests. We want to increase this percentage for existing files. Find a file with low coverage percentage that you wish to write tests for, add doctests for each function, and open a PR with your changes. You do not need to have a perfect coverage percentage, but all functions should have doctests.
Some files will naturally be hard to write tests for. For example, the file may be poorly written because they lack any functions. Other files might be how-tos, meaning they simply demonstrate how to use an existing library's functions rather than implementing the algorithm themselves. Ignore these kinds of files, as they will need to be rewritten eventually. Furthermore, ignore files in the `web_programming` and `project_euler` directories. Web programming files are inherently hard to test and Project Euler files have their own validation workflow, so don't worry about their test coverage.
_**When you open your PR, put "Contributes to #9943" in the PR description.**_ Do not use the word "fixes", "resolves", or "closes". This issue is an ongoing one, and your PR will not single-handedly resolve this issue.
### How to add doctests
A doctest is a unit test that is contained within the documentation comment (docstring) for a function. Here is an example of what doctests look like within a docstring:
```py
def add(a: int, b: int) -> int:
"""
Adds two non-negative numbers.
>>> add(1, 1)
2
>>> add(2, 5)
7
>>> add(1, 0)
1
>>> add(-1, -1)
Traceback (most recent last):
...
ValueError: Numbers must be non-negative
"""
```
For every function in the file you choose, you should write doctests like the ones shown above in its docstring. If a function doesn't have a docstring, add one. Your doctests should be comprehensive but not excessive: you should write just enough tests to cover all basic cases as well as all edge cases (e.g., negative numbers, empty lists, etc).
Do not simply run a function on some example inputs and put its output as the expected output for a doctest. This assumes that the function is implemented correctly when it might not be. Verify independently that your doctests and their expected outputs are correct. **Your PR will not be merged if it has failing tests.** If you happen to discover a bug while writing doctests, please fix it.
_**Please read our [contributing guidelines](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md) before you contribute.**_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `maths/power_using_recursion.py`
Content:
```
1 """
2 == Raise base to the power of exponent using recursion ==
3 Input -->
4 Enter the base: 3
5 Enter the exponent: 4
6 Output -->
7 3 to the power of 4 is 81
8 Input -->
9 Enter the base: 2
10 Enter the exponent: 0
11 Output -->
12 2 to the power of 0 is 1
13 """
14
15
16 def power(base: int, exponent: int) -> float:
17 """
18 >>> power(3, 4)
19 81
20 >>> power(2, 0)
21 1
22 >>> all(power(base, exponent) == pow(base, exponent)
23 ... for base in range(-10, 10) for exponent in range(10))
24 True
25 >>> power('a', 1)
26 'a'
27 >>> power('a', 2)
28 Traceback (most recent call last):
29 ...
30 TypeError: can't multiply sequence by non-int of type 'str'
31 >>> power('a', 'b')
32 Traceback (most recent call last):
33 ...
34 TypeError: unsupported operand type(s) for -: 'str' and 'int'
35 >>> power(2, -1)
36 Traceback (most recent call last):
37 ...
38 RecursionError: maximum recursion depth exceeded
39 """
40 return base * power(base, (exponent - 1)) if exponent else 1
41
42
43 if __name__ == "__main__":
44 from doctests import testmod
45
46 testmod()
47 print("Raise base to the power of exponent using recursion...")
48 base = int(input("Enter the base: ").strip())
49 exponent = int(input("Enter the exponent: ").strip())
50 result = power(base, abs(exponent))
51 if exponent < 0: # power() does not properly deal w/ negative exponents
52 result = 1 / result
53 print(f"{base} to the power of {exponent} is {result}")
54
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/maths/power_using_recursion.py b/maths/power_using_recursion.py
--- a/maths/power_using_recursion.py
+++ b/maths/power_using_recursion.py
@@ -15,6 +15,8 @@
def power(base: int, exponent: int) -> float:
"""
+ Calculate the power of a base raised to an exponent.
+
>>> power(3, 4)
81
>>> power(2, 0)
|
{"golden_diff": "diff --git a/maths/power_using_recursion.py b/maths/power_using_recursion.py\n--- a/maths/power_using_recursion.py\n+++ b/maths/power_using_recursion.py\n@@ -15,6 +15,8 @@\n \n def power(base: int, exponent: int) -> float:\n \"\"\"\n+ Calculate the power of a base raised to an exponent.\n+\n >>> power(3, 4)\n 81\n >>> power(2, 0)\n", "issue": "Improve our test coverage\n### Feature description\r\n\r\nMany of our existing algorithm files have little to no unit testing. This is problematic because this can easily let bugs slip through. We want some assurance that the code we currently have is correct and functional. We welcome all contributors to open PRs to help us add tests to our codebase.\r\n\r\n### How to find low-coverage files\r\n\r\nGo to the Actions tab in this repository and find the most recent **build** workflow run. Open the logs under \"Run Tests\" and scroll down until you find the section on code coverage:\r\n```\r\n---------- coverage: platform linux, python 3.12.0-final-0 -----------\r\nName Stmts Miss Cover Missing\r\n-----------------------------------------------------------------------------------------------------------\r\nquantum/q_fourier_transform.py 30 30 0% 14-93\r\nscripts/validate_solutions.py 54 54 0% 2-94\r\nstrings/min_cost_string_conversion.py 78 75 4% 20-57, 61-75, 79-129\r\n...\r\n```\r\nThe \"Cover\" column tells you what percentage of the lines in that file are covered by tests. We want to increase this percentage for existing files. Find a file with low coverage percentage that you wish to write tests for, add doctests for each function, and open a PR with your changes. You do not need to have a perfect coverage percentage, but all functions should have doctests.\r\n\r\nSome files will naturally be hard to write tests for. For example, the file may be poorly written because they lack any functions. Other files might be how-tos, meaning they simply demonstrate how to use an existing library's functions rather than implementing the algorithm themselves. Ignore these kinds of files, as they will need to be rewritten eventually. Furthermore, ignore files in the `web_programming` and `project_euler` directories. Web programming files are inherently hard to test and Project Euler files have their own validation workflow, so don't worry about their test coverage.\r\n\r\n_**When you open your PR, put \"Contributes to #9943\" in the PR description.**_ Do not use the word \"fixes\", \"resolves\", or \"closes\". This issue is an ongoing one, and your PR will not single-handedly resolve this issue.\r\n\r\n### How to add doctests\r\n\r\nA doctest is a unit test that is contained within the documentation comment (docstring) for a function. Here is an example of what doctests look like within a docstring:\r\n```py\r\ndef add(a: int, b: int) -> int:\r\n \"\"\"\r\n Adds two non-negative numbers.\r\n >>> add(1, 1)\r\n 2\r\n >>> add(2, 5)\r\n 7\r\n >>> add(1, 0)\r\n 1\r\n >>> add(-1, -1)\r\n Traceback (most recent last):\r\n ...\r\n ValueError: Numbers must be non-negative\r\n \"\"\"\r\n```\r\nFor every function in the file you choose, you should write doctests like the ones shown above in its docstring. If a function doesn't have a docstring, add one. Your doctests should be comprehensive but not excessive: you should write just enough tests to cover all basic cases as well as all edge cases (e.g., negative numbers, empty lists, etc).\r\n\r\nDo not simply run a function on some example inputs and put its output as the expected output for a doctest. 
This assumes that the function is implemented correctly when it might not be. Verify independently that your doctests and their expected outputs are correct. **Your PR will not be merged if it has failing tests.** If you happen to discover a bug while writing doctests, please fix it.\r\n\r\n_**Please read our [contributing guidelines](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md) before you contribute.**_\n", "before_files": [{"content": "\"\"\"\n== Raise base to the power of exponent using recursion ==\n Input -->\n Enter the base: 3\n Enter the exponent: 4\n Output -->\n 3 to the power of 4 is 81\n Input -->\n Enter the base: 2\n Enter the exponent: 0\n Output -->\n 2 to the power of 0 is 1\n\"\"\"\n\n\ndef power(base: int, exponent: int) -> float:\n \"\"\"\n >>> power(3, 4)\n 81\n >>> power(2, 0)\n 1\n >>> all(power(base, exponent) == pow(base, exponent)\n ... for base in range(-10, 10) for exponent in range(10))\n True\n >>> power('a', 1)\n 'a'\n >>> power('a', 2)\n Traceback (most recent call last):\n ...\n TypeError: can't multiply sequence by non-int of type 'str'\n >>> power('a', 'b')\n Traceback (most recent call last):\n ...\n TypeError: unsupported operand type(s) for -: 'str' and 'int'\n >>> power(2, -1)\n Traceback (most recent call last):\n ...\n RecursionError: maximum recursion depth exceeded\n \"\"\"\n return base * power(base, (exponent - 1)) if exponent else 1\n\n\nif __name__ == \"__main__\":\n from doctests import testmod\n\n testmod()\n print(\"Raise base to the power of exponent using recursion...\")\n base = int(input(\"Enter the base: \").strip())\n exponent = int(input(\"Enter the exponent: \").strip())\n result = power(base, abs(exponent))\n if exponent < 0: # power() does not properly deal w/ negative exponents\n result = 1 / result\n print(f\"{base} to the power of {exponent} is {result}\")\n", "path": "maths/power_using_recursion.py"}], "after_files": [{"content": "\"\"\"\n== Raise base to the power of exponent using recursion ==\n Input -->\n Enter the base: 3\n Enter the exponent: 4\n Output -->\n 3 to the power of 4 is 81\n Input -->\n Enter the base: 2\n Enter the exponent: 0\n Output -->\n 2 to the power of 0 is 1\n\"\"\"\n\n\ndef power(base: int, exponent: int) -> float:\n \"\"\"\n Calculate the power of a base raised to an exponent.\n\n >>> power(3, 4)\n 81\n >>> power(2, 0)\n 1\n >>> all(power(base, exponent) == pow(base, exponent)\n ... for base in range(-10, 10) for exponent in range(10))\n True\n >>> power('a', 1)\n 'a'\n >>> power('a', 2)\n Traceback (most recent call last):\n ...\n TypeError: can't multiply sequence by non-int of type 'str'\n >>> power('a', 'b')\n Traceback (most recent call last):\n ...\n TypeError: unsupported operand type(s) for -: 'str' and 'int'\n >>> power(2, -1)\n Traceback (most recent call last):\n ...\n RecursionError: maximum recursion depth exceeded\n \"\"\"\n return base * power(base, (exponent - 1)) if exponent else 1\n\n\nif __name__ == \"__main__\":\n from doctests import testmod\n\n testmod()\n print(\"Raise base to the power of exponent using recursion...\")\n base = int(input(\"Enter the base: \").strip())\n exponent = int(input(\"Enter the exponent: \").strip())\n result = power(base, abs(exponent))\n if exponent < 0: # power() does not properly deal w/ negative exponents\n result = 1 / result\n print(f\"{base} to the power of {exponent} is {result}\")\n", "path": "maths/power_using_recursion.py"}]}
| 1,627 | 105 |
gh_patches_debug_28309
|
rasdani/github-patches
|
git_diff
|
pre-commit__pre-commit-1142
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
include version information in error log
would be useful to include things like:
- pre-commit version
- sys.version
- sys.executable
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/error_handler.py`
Content:
```
1 from __future__ import absolute_import
2 from __future__ import print_function
3 from __future__ import unicode_literals
4
5 import contextlib
6 import os.path
7 import traceback
8
9 import six
10
11 from pre_commit import five
12 from pre_commit import output
13 from pre_commit.store import Store
14
15
16 class FatalError(RuntimeError):
17 pass
18
19
20 def _to_bytes(exc):
21 try:
22 return bytes(exc)
23 except Exception:
24 return six.text_type(exc).encode('UTF-8')
25
26
27 def _log_and_exit(msg, exc, formatted):
28 error_msg = b''.join((
29 five.to_bytes(msg), b': ',
30 five.to_bytes(type(exc).__name__), b': ',
31 _to_bytes(exc), b'\n',
32 ))
33 output.write(error_msg)
34 store = Store()
35 log_path = os.path.join(store.directory, 'pre-commit.log')
36 output.write_line('Check the log at {}'.format(log_path))
37 with open(log_path, 'wb') as log:
38 output.write(error_msg, stream=log)
39 output.write_line(formatted, stream=log)
40 raise SystemExit(1)
41
42
43 @contextlib.contextmanager
44 def error_handler():
45 try:
46 yield
47 except (Exception, KeyboardInterrupt) as e:
48 if isinstance(e, FatalError):
49 msg = 'An error has occurred'
50 elif isinstance(e, KeyboardInterrupt):
51 msg = 'Interrupted (^C)'
52 else:
53 msg = 'An unexpected error has occurred'
54 _log_and_exit(msg, e, traceback.format_exc())
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pre_commit/error_handler.py b/pre_commit/error_handler.py
--- a/pre_commit/error_handler.py
+++ b/pre_commit/error_handler.py
@@ -4,10 +4,12 @@
import contextlib
import os.path
+import sys
import traceback
import six
+import pre_commit.constants as C
from pre_commit import five
from pre_commit import output
from pre_commit.store import Store
@@ -34,9 +36,36 @@
store = Store()
log_path = os.path.join(store.directory, 'pre-commit.log')
output.write_line('Check the log at {}'.format(log_path))
+
with open(log_path, 'wb') as log:
+ output.write_line(
+ '### version information\n```', stream=log,
+ )
+ output.write_line(
+ 'pre-commit.version: {}'.format(C.VERSION), stream=log,
+ )
+ output.write_line(
+ 'sys.version:\n{}'.format(
+ '\n'.join(
+ [
+ ' {}'.format(line)
+ for line in sys.version.splitlines()
+ ],
+ ),
+ ),
+ stream=log,
+ )
+ output.write_line(
+ 'sys.executable: {}'.format(sys.executable), stream=log,
+ )
+ output.write_line('os.name: {}'.format(os.name), stream=log)
+ output.write_line(
+ 'sys.platform: {}\n```'.format(sys.platform), stream=log,
+ )
+ output.write_line('### error information\n```', stream=log)
output.write(error_msg, stream=log)
output.write_line(formatted, stream=log)
+ output.write('\n```\n', stream=log)
raise SystemExit(1)
|
{"golden_diff": "diff --git a/pre_commit/error_handler.py b/pre_commit/error_handler.py\n--- a/pre_commit/error_handler.py\n+++ b/pre_commit/error_handler.py\n@@ -4,10 +4,12 @@\n \n import contextlib\n import os.path\n+import sys\n import traceback\n \n import six\n \n+import pre_commit.constants as C\n from pre_commit import five\n from pre_commit import output\n from pre_commit.store import Store\n@@ -34,9 +36,36 @@\n store = Store()\n log_path = os.path.join(store.directory, 'pre-commit.log')\n output.write_line('Check the log at {}'.format(log_path))\n+\n with open(log_path, 'wb') as log:\n+ output.write_line(\n+ '### version information\\n```', stream=log,\n+ )\n+ output.write_line(\n+ 'pre-commit.version: {}'.format(C.VERSION), stream=log,\n+ )\n+ output.write_line(\n+ 'sys.version:\\n{}'.format(\n+ '\\n'.join(\n+ [\n+ ' {}'.format(line)\n+ for line in sys.version.splitlines()\n+ ],\n+ ),\n+ ),\n+ stream=log,\n+ )\n+ output.write_line(\n+ 'sys.executable: {}'.format(sys.executable), stream=log,\n+ )\n+ output.write_line('os.name: {}'.format(os.name), stream=log)\n+ output.write_line(\n+ 'sys.platform: {}\\n```'.format(sys.platform), stream=log,\n+ )\n+ output.write_line('### error information\\n```', stream=log)\n output.write(error_msg, stream=log)\n output.write_line(formatted, stream=log)\n+ output.write('\\n```\\n', stream=log)\n raise SystemExit(1)\n", "issue": "include version information in error log\nwould be useful to include things like:\r\n\r\n- pre-commit version\r\n- sys.version\r\n- sys.executable\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport contextlib\nimport os.path\nimport traceback\n\nimport six\n\nfrom pre_commit import five\nfrom pre_commit import output\nfrom pre_commit.store import Store\n\n\nclass FatalError(RuntimeError):\n pass\n\n\ndef _to_bytes(exc):\n try:\n return bytes(exc)\n except Exception:\n return six.text_type(exc).encode('UTF-8')\n\n\ndef _log_and_exit(msg, exc, formatted):\n error_msg = b''.join((\n five.to_bytes(msg), b': ',\n five.to_bytes(type(exc).__name__), b': ',\n _to_bytes(exc), b'\\n',\n ))\n output.write(error_msg)\n store = Store()\n log_path = os.path.join(store.directory, 'pre-commit.log')\n output.write_line('Check the log at {}'.format(log_path))\n with open(log_path, 'wb') as log:\n output.write(error_msg, stream=log)\n output.write_line(formatted, stream=log)\n raise SystemExit(1)\n\n\[email protected]\ndef error_handler():\n try:\n yield\n except (Exception, KeyboardInterrupt) as e:\n if isinstance(e, FatalError):\n msg = 'An error has occurred'\n elif isinstance(e, KeyboardInterrupt):\n msg = 'Interrupted (^C)'\n else:\n msg = 'An unexpected error has occurred'\n _log_and_exit(msg, e, traceback.format_exc())\n", "path": "pre_commit/error_handler.py"}], "after_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport contextlib\nimport os.path\nimport sys\nimport traceback\n\nimport six\n\nimport pre_commit.constants as C\nfrom pre_commit import five\nfrom pre_commit import output\nfrom pre_commit.store import Store\n\n\nclass FatalError(RuntimeError):\n pass\n\n\ndef _to_bytes(exc):\n try:\n return bytes(exc)\n except Exception:\n return six.text_type(exc).encode('UTF-8')\n\n\ndef _log_and_exit(msg, exc, formatted):\n error_msg = b''.join((\n five.to_bytes(msg), b': ',\n five.to_bytes(type(exc).__name__), b': ',\n 
_to_bytes(exc), b'\\n',\n ))\n output.write(error_msg)\n store = Store()\n log_path = os.path.join(store.directory, 'pre-commit.log')\n output.write_line('Check the log at {}'.format(log_path))\n\n with open(log_path, 'wb') as log:\n output.write_line(\n '### version information\\n```', stream=log,\n )\n output.write_line(\n 'pre-commit.version: {}'.format(C.VERSION), stream=log,\n )\n output.write_line(\n 'sys.version:\\n{}'.format(\n '\\n'.join(\n [\n ' {}'.format(line)\n for line in sys.version.splitlines()\n ],\n ),\n ),\n stream=log,\n )\n output.write_line(\n 'sys.executable: {}'.format(sys.executable), stream=log,\n )\n output.write_line('os.name: {}'.format(os.name), stream=log)\n output.write_line(\n 'sys.platform: {}\\n```'.format(sys.platform), stream=log,\n )\n output.write_line('### error information\\n```', stream=log)\n output.write(error_msg, stream=log)\n output.write_line(formatted, stream=log)\n output.write('\\n```\\n', stream=log)\n raise SystemExit(1)\n\n\[email protected]\ndef error_handler():\n try:\n yield\n except (Exception, KeyboardInterrupt) as e:\n if isinstance(e, FatalError):\n msg = 'An error has occurred'\n elif isinstance(e, KeyboardInterrupt):\n msg = 'Interrupted (^C)'\n else:\n msg = 'An unexpected error has occurred'\n _log_and_exit(msg, e, traceback.format_exc())\n", "path": "pre_commit/error_handler.py"}]}
| 712 | 378 |
gh_patches_debug_32024
|
rasdani/github-patches
|
git_diff
|
medtagger__MedTagger-391
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove error about not picking category properly
## Current Behavior
When user access labeling page without choosing the category via the category page he/she receives an error about not choosing the category properly. While this is necessary for preventing users accessing this page, it makes development more difficult. Every time when front-end loads, developer has to go back to category page.
## Expected Behavior
There shouldn't be an error about not picking category properly.
## Steps to Reproduce the Problem
1. Go to labeling page `/labeling` without going through category page.
## Additional comment (optional)
We should probably get category using `queryParams` like before and load current category on marker page.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `backend/medtagger/api/tasks/service_rest.py`
Content:
```
1 """Module responsible for definition of Tasks service available via HTTP REST API."""
2 from typing import Any
3
4 from flask import request
5 from flask_restplus import Resource
6
7 from medtagger.api import api
8 from medtagger.api.tasks import business, serializers
9 from medtagger.api.security import login_required, role_required
10 from medtagger.database.models import LabelTag
11
12 tasks_ns = api.namespace('tasks', 'Methods related with tasks')
13
14
15 @tasks_ns.route('')
16 class Tasks(Resource):
17 """Endpoint that manages tasks."""
18
19 @staticmethod
20 @login_required
21 @tasks_ns.marshal_with(serializers.out__task)
22 @tasks_ns.doc(security='token')
23 @tasks_ns.doc(description='Return all available tasks.')
24 @tasks_ns.doc(responses={200: 'Success'})
25 def get() -> Any:
26 """Return all available tasks."""
27 return business.get_tasks()
28
29 @staticmethod
30 @login_required
31 @role_required('admin')
32 @tasks_ns.expect(serializers.in__task)
33 @tasks_ns.marshal_with(serializers.out__task)
34 @tasks_ns.doc(security='token')
35 @tasks_ns.doc(description='Create new Task.')
36 @tasks_ns.doc(responses={201: 'Success'})
37 def post() -> Any:
38 """Create new Task."""
39 payload = request.json
40
41 key = payload['key']
42 name = payload['name']
43 image_path = payload['image_path']
44 datasets_keys = payload['datasets_keys']
45 tags = [LabelTag(tag['key'], tag['name'], tag['tools']) for tag in payload['tags']]
46
47 return business.create_task(key, name, image_path, datasets_keys, tags), 201
48
```
Path: `backend/medtagger/api/tasks/business.py`
Content:
```
1 """Module responsible for business logic in all Tasks endpoints."""
2 from typing import List
3
4 from medtagger.database.models import Task, LabelTag
5 from medtagger.repositories import (
6 tasks as TasksRepository,
7 )
8
9
10 def get_tasks() -> List[Task]:
11 """Fetch all tasks.
12
13 :return: list of tasks
14 """
15 return TasksRepository.get_all_tasks()
16
17
18 def create_task(key: str, name: str, image_path: str, datasets_keys: List[str], tags: List[LabelTag]) -> Task:
19 """Create new Task.
20
21 :param key: unique key representing Task
22 :param name: name which describes this Task
23 :param image_path: path to the image which is located on the frontend
24 :param datasets_keys: Keys of Datasets that Task takes Scans from
25 :param tags: Label Tags that will be created and assigned to Task
26 :return: Task object
27 """
28 return TasksRepository.add_task(key, name, image_path, datasets_keys, tags)
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/backend/medtagger/api/tasks/business.py b/backend/medtagger/api/tasks/business.py
--- a/backend/medtagger/api/tasks/business.py
+++ b/backend/medtagger/api/tasks/business.py
@@ -1,6 +1,9 @@
"""Module responsible for business logic in all Tasks endpoints."""
from typing import List
+from sqlalchemy.orm.exc import NoResultFound
+
+from medtagger.api.exceptions import NotFoundException
from medtagger.database.models import Task, LabelTag
from medtagger.repositories import (
tasks as TasksRepository,
@@ -15,6 +18,17 @@
return TasksRepository.get_all_tasks()
+def get_task_for_key(task_key: str) -> Task:
+ """Fetch Task for given key.
+
+ :return: Task
+ """
+ try:
+ return TasksRepository.get_task_by_key(task_key)
+ except NoResultFound:
+ raise NotFoundException('Did not found task for {} key!'.format(task_key))
+
+
def create_task(key: str, name: str, image_path: str, datasets_keys: List[str], tags: List[LabelTag]) -> Task:
"""Create new Task.
diff --git a/backend/medtagger/api/tasks/service_rest.py b/backend/medtagger/api/tasks/service_rest.py
--- a/backend/medtagger/api/tasks/service_rest.py
+++ b/backend/medtagger/api/tasks/service_rest.py
@@ -43,5 +43,19 @@
image_path = payload['image_path']
datasets_keys = payload['datasets_keys']
tags = [LabelTag(tag['key'], tag['name'], tag['tools']) for tag in payload['tags']]
-
return business.create_task(key, name, image_path, datasets_keys, tags), 201
+
+
+@tasks_ns.route('/<string:task_key>')
+class Task(Resource):
+ """Endpoint that manages single task."""
+
+ @staticmethod
+ @login_required
+ @tasks_ns.marshal_with(serializers.out__task)
+ @tasks_ns.doc(security='token')
+ @tasks_ns.doc(description='Get task for given key.')
+ @tasks_ns.doc(responses={200: 'Success', 404: 'Could not find task'})
+ def get(task_key: str) -> Any:
+ """Return task for given key."""
+ return business.get_task_for_key(task_key)
|
{"golden_diff": "diff --git a/backend/medtagger/api/tasks/business.py b/backend/medtagger/api/tasks/business.py\n--- a/backend/medtagger/api/tasks/business.py\n+++ b/backend/medtagger/api/tasks/business.py\n@@ -1,6 +1,9 @@\n \"\"\"Module responsible for business logic in all Tasks endpoints.\"\"\"\n from typing import List\n \n+from sqlalchemy.orm.exc import NoResultFound\n+\n+from medtagger.api.exceptions import NotFoundException\n from medtagger.database.models import Task, LabelTag\n from medtagger.repositories import (\n tasks as TasksRepository,\n@@ -15,6 +18,17 @@\n return TasksRepository.get_all_tasks()\n \n \n+def get_task_for_key(task_key: str) -> Task:\n+ \"\"\"Fetch Task for given key.\n+\n+ :return: Task\n+ \"\"\"\n+ try:\n+ return TasksRepository.get_task_by_key(task_key)\n+ except NoResultFound:\n+ raise NotFoundException('Did not found task for {} key!'.format(task_key))\n+\n+\n def create_task(key: str, name: str, image_path: str, datasets_keys: List[str], tags: List[LabelTag]) -> Task:\n \"\"\"Create new Task.\n \ndiff --git a/backend/medtagger/api/tasks/service_rest.py b/backend/medtagger/api/tasks/service_rest.py\n--- a/backend/medtagger/api/tasks/service_rest.py\n+++ b/backend/medtagger/api/tasks/service_rest.py\n@@ -43,5 +43,19 @@\n image_path = payload['image_path']\n datasets_keys = payload['datasets_keys']\n tags = [LabelTag(tag['key'], tag['name'], tag['tools']) for tag in payload['tags']]\n-\n return business.create_task(key, name, image_path, datasets_keys, tags), 201\n+\n+\n+@tasks_ns.route('/<string:task_key>')\n+class Task(Resource):\n+ \"\"\"Endpoint that manages single task.\"\"\"\n+\n+ @staticmethod\n+ @login_required\n+ @tasks_ns.marshal_with(serializers.out__task)\n+ @tasks_ns.doc(security='token')\n+ @tasks_ns.doc(description='Get task for given key.')\n+ @tasks_ns.doc(responses={200: 'Success', 404: 'Could not find task'})\n+ def get(task_key: str) -> Any:\n+ \"\"\"Return task for given key.\"\"\"\n+ return business.get_task_for_key(task_key)\n", "issue": "Remove error about not picking category properly\n## Current Behavior\r\n\r\nWhen user access labeling page without choosing the category via the category page he/she receives an error about not choosing the category properly. While this is necessary for preventing users accessing this page, it makes development more difficult. Every time when front-end loads, developer has to go back to category page.\r\n\r\n## Expected Behavior\r\n\r\nThere shouldn't be an error about not picking category properly. \r\n\r\n## Steps to Reproduce the Problem\r\n\r\n 1. 
Go to labeling page `/labeling` without going through category page.\r\n\r\n## Additional comment (optional)\r\n\r\nWe should probably get category using `queryParams` like before and load current category on marker page.\r\n\n", "before_files": [{"content": "\"\"\"Module responsible for definition of Tasks service available via HTTP REST API.\"\"\"\nfrom typing import Any\n\nfrom flask import request\nfrom flask_restplus import Resource\n\nfrom medtagger.api import api\nfrom medtagger.api.tasks import business, serializers\nfrom medtagger.api.security import login_required, role_required\nfrom medtagger.database.models import LabelTag\n\ntasks_ns = api.namespace('tasks', 'Methods related with tasks')\n\n\n@tasks_ns.route('')\nclass Tasks(Resource):\n \"\"\"Endpoint that manages tasks.\"\"\"\n\n @staticmethod\n @login_required\n @tasks_ns.marshal_with(serializers.out__task)\n @tasks_ns.doc(security='token')\n @tasks_ns.doc(description='Return all available tasks.')\n @tasks_ns.doc(responses={200: 'Success'})\n def get() -> Any:\n \"\"\"Return all available tasks.\"\"\"\n return business.get_tasks()\n\n @staticmethod\n @login_required\n @role_required('admin')\n @tasks_ns.expect(serializers.in__task)\n @tasks_ns.marshal_with(serializers.out__task)\n @tasks_ns.doc(security='token')\n @tasks_ns.doc(description='Create new Task.')\n @tasks_ns.doc(responses={201: 'Success'})\n def post() -> Any:\n \"\"\"Create new Task.\"\"\"\n payload = request.json\n\n key = payload['key']\n name = payload['name']\n image_path = payload['image_path']\n datasets_keys = payload['datasets_keys']\n tags = [LabelTag(tag['key'], tag['name'], tag['tools']) for tag in payload['tags']]\n\n return business.create_task(key, name, image_path, datasets_keys, tags), 201\n", "path": "backend/medtagger/api/tasks/service_rest.py"}, {"content": "\"\"\"Module responsible for business logic in all Tasks endpoints.\"\"\"\nfrom typing import List\n\nfrom medtagger.database.models import Task, LabelTag\nfrom medtagger.repositories import (\n tasks as TasksRepository,\n)\n\n\ndef get_tasks() -> List[Task]:\n \"\"\"Fetch all tasks.\n\n :return: list of tasks\n \"\"\"\n return TasksRepository.get_all_tasks()\n\n\ndef create_task(key: str, name: str, image_path: str, datasets_keys: List[str], tags: List[LabelTag]) -> Task:\n \"\"\"Create new Task.\n\n :param key: unique key representing Task\n :param name: name which describes this Task\n :param image_path: path to the image which is located on the frontend\n :param datasets_keys: Keys of Datasets that Task takes Scans from\n :param tags: Label Tags that will be created and assigned to Task\n :return: Task object\n \"\"\"\n return TasksRepository.add_task(key, name, image_path, datasets_keys, tags)\n", "path": "backend/medtagger/api/tasks/business.py"}], "after_files": [{"content": "\"\"\"Module responsible for definition of Tasks service available via HTTP REST API.\"\"\"\nfrom typing import Any\n\nfrom flask import request\nfrom flask_restplus import Resource\n\nfrom medtagger.api import api\nfrom medtagger.api.tasks import business, serializers\nfrom medtagger.api.security import login_required, role_required\nfrom medtagger.database.models import LabelTag\n\ntasks_ns = api.namespace('tasks', 'Methods related with tasks')\n\n\n@tasks_ns.route('')\nclass Tasks(Resource):\n \"\"\"Endpoint that manages tasks.\"\"\"\n\n @staticmethod\n @login_required\n @tasks_ns.marshal_with(serializers.out__task)\n @tasks_ns.doc(security='token')\n @tasks_ns.doc(description='Return all available 
tasks.')\n @tasks_ns.doc(responses={200: 'Success'})\n def get() -> Any:\n \"\"\"Return all available tasks.\"\"\"\n return business.get_tasks()\n\n @staticmethod\n @login_required\n @role_required('admin')\n @tasks_ns.expect(serializers.in__task)\n @tasks_ns.marshal_with(serializers.out__task)\n @tasks_ns.doc(security='token')\n @tasks_ns.doc(description='Create new Task.')\n @tasks_ns.doc(responses={201: 'Success'})\n def post() -> Any:\n \"\"\"Create new Task.\"\"\"\n payload = request.json\n\n key = payload['key']\n name = payload['name']\n image_path = payload['image_path']\n datasets_keys = payload['datasets_keys']\n tags = [LabelTag(tag['key'], tag['name'], tag['tools']) for tag in payload['tags']]\n return business.create_task(key, name, image_path, datasets_keys, tags), 201\n\n\n@tasks_ns.route('/<string:task_key>')\nclass Task(Resource):\n \"\"\"Endpoint that manages single task.\"\"\"\n\n @staticmethod\n @login_required\n @tasks_ns.marshal_with(serializers.out__task)\n @tasks_ns.doc(security='token')\n @tasks_ns.doc(description='Get task for given key.')\n @tasks_ns.doc(responses={200: 'Success', 404: 'Could not find task'})\n def get(task_key: str) -> Any:\n \"\"\"Return task for given key.\"\"\"\n return business.get_task_for_key(task_key)\n", "path": "backend/medtagger/api/tasks/service_rest.py"}, {"content": "\"\"\"Module responsible for business logic in all Tasks endpoints.\"\"\"\nfrom typing import List\n\nfrom sqlalchemy.orm.exc import NoResultFound\n\nfrom medtagger.api.exceptions import NotFoundException\nfrom medtagger.database.models import Task, LabelTag\nfrom medtagger.repositories import (\n tasks as TasksRepository,\n)\n\n\ndef get_tasks() -> List[Task]:\n \"\"\"Fetch all tasks.\n\n :return: list of tasks\n \"\"\"\n return TasksRepository.get_all_tasks()\n\n\ndef get_task_for_key(task_key: str) -> Task:\n \"\"\"Fetch Task for given key.\n\n :return: Task\n \"\"\"\n try:\n return TasksRepository.get_task_by_key(task_key)\n except NoResultFound:\n raise NotFoundException('Did not found task for {} key!'.format(task_key))\n\n\ndef create_task(key: str, name: str, image_path: str, datasets_keys: List[str], tags: List[LabelTag]) -> Task:\n \"\"\"Create new Task.\n\n :param key: unique key representing Task\n :param name: name which describes this Task\n :param image_path: path to the image which is located on the frontend\n :param datasets_keys: Keys of Datasets that Task takes Scans from\n :param tags: Label Tags that will be created and assigned to Task\n :return: Task object\n \"\"\"\n return TasksRepository.add_task(key, name, image_path, datasets_keys, tags)\n", "path": "backend/medtagger/api/tasks/business.py"}]}
| 1,140 | 526 |
gh_patches_debug_25052
|
rasdani/github-patches
|
git_diff
|
Pylons__pyramid-2759
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Drop Python 3.3 support
This is a placeholder for Pyramid 1.8 to drop Python 3.3 support.
Creating a new issue, splitting it off from https://github.com/Pylons/pyramid/issues/2368.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 ##############################################################################
2 #
3 # Copyright (c) 2008-2013 Agendaless Consulting and Contributors.
4 # All Rights Reserved.
5 #
6 # This software is subject to the provisions of the BSD-like license at
7 # http://www.repoze.org/LICENSE.txt. A copy of the license should accompany
8 # this distribution. THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL
9 # EXPRESS OR IMPLIED WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO,
10 # THE IMPLIED WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND
11 # FITNESS FOR A PARTICULAR PURPOSE
12 #
13 ##############################################################################
14
15 import os
16 import sys
17
18 from setuptools import setup, find_packages
19
20 py_version = sys.version_info[:2]
21 is_pypy = '__pypy__' in sys.builtin_module_names
22
23 PY3 = py_version[0] == 3
24
25 if PY3:
26 if py_version < (3, 3) and not is_pypy: # PyPy3 masquerades as Python 3.2...
27 raise RuntimeError('On Python 3, Pyramid requires Python 3.3 or better')
28 else:
29 if py_version < (2, 6):
30 raise RuntimeError('On Python 2, Pyramid requires Python 2.6 or better')
31
32 here = os.path.abspath(os.path.dirname(__file__))
33 try:
34 with open(os.path.join(here, 'README.rst')) as f:
35 README = f.read()
36 with open(os.path.join(here, 'CHANGES.txt')) as f:
37 CHANGES = f.read()
38 except IOError:
39 README = CHANGES = ''
40
41 install_requires = [
42 'setuptools',
43 'WebOb >= 1.3.1', # request.domain and CookieProfile
44 'repoze.lru >= 0.4', # py3 compat
45 'zope.interface >= 3.8.0', # has zope.interface.registry
46 'zope.deprecation >= 3.5.0', # py3 compat
47 'venusian >= 1.0a3', # ``ignore``
48 'translationstring >= 0.4', # py3 compat
49 'PasteDeploy >= 1.5.0', # py3 compat
50 ]
51
52 tests_require = [
53 'WebTest >= 1.3.1', # py3 compat
54 ]
55
56 if not PY3:
57 tests_require.append('zope.component>=3.11.0')
58
59 docs_extras = [
60 'Sphinx >= 1.3.5',
61 'docutils',
62 'repoze.sphinx.autointerface',
63 'pylons_sphinx_latesturl',
64 'pylons-sphinx-themes',
65 'sphinxcontrib-programoutput',
66 ]
67
68 testing_extras = tests_require + [
69 'nose',
70 'coverage',
71 'virtualenv', # for scaffolding tests
72 ]
73
74 setup(name='pyramid',
75 version='1.8.dev0',
76 description='The Pyramid Web Framework, a Pylons project',
77 long_description=README + '\n\n' + CHANGES,
78 classifiers=[
79 "Development Status :: 6 - Mature",
80 "Intended Audience :: Developers",
81 "Programming Language :: Python",
82 "Programming Language :: Python :: 2.7",
83 "Programming Language :: Python :: 3",
84 "Programming Language :: Python :: 3.3",
85 "Programming Language :: Python :: 3.4",
86 "Programming Language :: Python :: 3.5",
87 "Programming Language :: Python :: Implementation :: CPython",
88 "Programming Language :: Python :: Implementation :: PyPy",
89 "Framework :: Pyramid",
90 "Topic :: Internet :: WWW/HTTP",
91 "Topic :: Internet :: WWW/HTTP :: WSGI",
92 "License :: Repoze Public License",
93 ],
94 keywords='web wsgi pylons pyramid',
95 author="Chris McDonough, Agendaless Consulting",
96 author_email="[email protected]",
97 url="https://trypyramid.com",
98 license="BSD-derived (http://www.repoze.org/LICENSE.txt)",
99 packages=find_packages(),
100 include_package_data=True,
101 zip_safe=False,
102 install_requires=install_requires,
103 extras_require={
104 'testing': testing_extras,
105 'docs': docs_extras,
106 },
107 tests_require=tests_require,
108 test_suite="pyramid.tests",
109 entry_points="""\
110 [pyramid.scaffold]
111 starter=pyramid.scaffolds:StarterProjectTemplate
112 zodb=pyramid.scaffolds:ZODBProjectTemplate
113 alchemy=pyramid.scaffolds:AlchemyProjectTemplate
114 [pyramid.pshell_runner]
115 python=pyramid.scripts.pshell:python_shell_runner
116 [console_scripts]
117 pcreate = pyramid.scripts.pcreate:main
118 pserve = pyramid.scripts.pserve:main
119 pshell = pyramid.scripts.pshell:main
120 proutes = pyramid.scripts.proutes:main
121 pviews = pyramid.scripts.pviews:main
122 ptweens = pyramid.scripts.ptweens:main
123 prequest = pyramid.scripts.prequest:main
124 pdistreport = pyramid.scripts.pdistreport:main
125 [paste.server_runner]
126 wsgiref = pyramid.scripts.pserve:wsgiref_server_runner
127 cherrypy = pyramid.scripts.pserve:cherrypy_server_runner
128 """
129 )
130
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -18,16 +18,15 @@
from setuptools import setup, find_packages
py_version = sys.version_info[:2]
-is_pypy = '__pypy__' in sys.builtin_module_names
PY3 = py_version[0] == 3
if PY3:
- if py_version < (3, 3) and not is_pypy: # PyPy3 masquerades as Python 3.2...
- raise RuntimeError('On Python 3, Pyramid requires Python 3.3 or better')
+ if py_version < (3, 4):
+ raise RuntimeError('On Python 3, Pyramid requires Python 3.4 or better')
else:
- if py_version < (2, 6):
- raise RuntimeError('On Python 2, Pyramid requires Python 2.6 or better')
+ if py_version < (2, 7):
+ raise RuntimeError('On Python 2, Pyramid requires Python 2.7 or better')
here = os.path.abspath(os.path.dirname(__file__))
try:
@@ -81,7 +80,6 @@
"Programming Language :: Python",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3",
- "Programming Language :: Python :: 3.3",
"Programming Language :: Python :: 3.4",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: Implementation :: CPython",
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -18,16 +18,15 @@\n from setuptools import setup, find_packages\n \n py_version = sys.version_info[:2]\n-is_pypy = '__pypy__' in sys.builtin_module_names\n \n PY3 = py_version[0] == 3\n \n if PY3:\n- if py_version < (3, 3) and not is_pypy: # PyPy3 masquerades as Python 3.2...\n- raise RuntimeError('On Python 3, Pyramid requires Python 3.3 or better')\n+ if py_version < (3, 4):\n+ raise RuntimeError('On Python 3, Pyramid requires Python 3.4 or better')\n else:\n- if py_version < (2, 6):\n- raise RuntimeError('On Python 2, Pyramid requires Python 2.6 or better')\n+ if py_version < (2, 7):\n+ raise RuntimeError('On Python 2, Pyramid requires Python 2.7 or better')\n \n here = os.path.abspath(os.path.dirname(__file__))\n try:\n@@ -81,7 +80,6 @@\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n- \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n", "issue": "Drop Python 3.3 support\nThis is a placeholder for Pyramid 1.8 to drop Python 3.3 support.\n\nCreating a new issue, splitting it off from https://github.com/Pylons/pyramid/issues/2368.\n\n", "before_files": [{"content": "##############################################################################\n#\n# Copyright (c) 2008-2013 Agendaless Consulting and Contributors.\n# All Rights Reserved.\n#\n# This software is subject to the provisions of the BSD-like license at\n# http://www.repoze.org/LICENSE.txt. A copy of the license should accompany\n# this distribution. THIS SOFTWARE IS PROVIDED \"AS IS\" AND ANY AND ALL\n# EXPRESS OR IMPLIED WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO,\n# THE IMPLIED WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND\n# FITNESS FOR A PARTICULAR PURPOSE\n#\n##############################################################################\n\nimport os\nimport sys\n\nfrom setuptools import setup, find_packages\n\npy_version = sys.version_info[:2]\nis_pypy = '__pypy__' in sys.builtin_module_names\n\nPY3 = py_version[0] == 3\n\nif PY3:\n if py_version < (3, 3) and not is_pypy: # PyPy3 masquerades as Python 3.2...\n raise RuntimeError('On Python 3, Pyramid requires Python 3.3 or better')\nelse:\n if py_version < (2, 6):\n raise RuntimeError('On Python 2, Pyramid requires Python 2.6 or better')\n\nhere = os.path.abspath(os.path.dirname(__file__))\ntry:\n with open(os.path.join(here, 'README.rst')) as f:\n README = f.read()\n with open(os.path.join(here, 'CHANGES.txt')) as f:\n CHANGES = f.read()\nexcept IOError:\n README = CHANGES = ''\n\ninstall_requires = [\n 'setuptools',\n 'WebOb >= 1.3.1', # request.domain and CookieProfile\n 'repoze.lru >= 0.4', # py3 compat\n 'zope.interface >= 3.8.0', # has zope.interface.registry\n 'zope.deprecation >= 3.5.0', # py3 compat\n 'venusian >= 1.0a3', # ``ignore``\n 'translationstring >= 0.4', # py3 compat\n 'PasteDeploy >= 1.5.0', # py3 compat\n ]\n\ntests_require = [\n 'WebTest >= 1.3.1', # py3 compat\n ]\n\nif not PY3:\n tests_require.append('zope.component>=3.11.0')\n\ndocs_extras = [\n 'Sphinx >= 1.3.5',\n 'docutils',\n 'repoze.sphinx.autointerface',\n 'pylons_sphinx_latesturl',\n 'pylons-sphinx-themes',\n 'sphinxcontrib-programoutput',\n ]\n\ntesting_extras = tests_require + [\n 'nose',\n 'coverage',\n 'virtualenv', # for scaffolding tests\n 
]\n\nsetup(name='pyramid',\n version='1.8.dev0',\n description='The Pyramid Web Framework, a Pylons project',\n long_description=README + '\\n\\n' + CHANGES,\n classifiers=[\n \"Development Status :: 6 - Mature\",\n \"Intended Audience :: Developers\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Framework :: Pyramid\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI\",\n \"License :: Repoze Public License\",\n ],\n keywords='web wsgi pylons pyramid',\n author=\"Chris McDonough, Agendaless Consulting\",\n author_email=\"[email protected]\",\n url=\"https://trypyramid.com\",\n license=\"BSD-derived (http://www.repoze.org/LICENSE.txt)\",\n packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n install_requires=install_requires,\n extras_require={\n 'testing': testing_extras,\n 'docs': docs_extras,\n },\n tests_require=tests_require,\n test_suite=\"pyramid.tests\",\n entry_points=\"\"\"\\\n [pyramid.scaffold]\n starter=pyramid.scaffolds:StarterProjectTemplate\n zodb=pyramid.scaffolds:ZODBProjectTemplate\n alchemy=pyramid.scaffolds:AlchemyProjectTemplate\n [pyramid.pshell_runner]\n python=pyramid.scripts.pshell:python_shell_runner\n [console_scripts]\n pcreate = pyramid.scripts.pcreate:main\n pserve = pyramid.scripts.pserve:main\n pshell = pyramid.scripts.pshell:main\n proutes = pyramid.scripts.proutes:main\n pviews = pyramid.scripts.pviews:main\n ptweens = pyramid.scripts.ptweens:main\n prequest = pyramid.scripts.prequest:main\n pdistreport = pyramid.scripts.pdistreport:main\n [paste.server_runner]\n wsgiref = pyramid.scripts.pserve:wsgiref_server_runner\n cherrypy = pyramid.scripts.pserve:cherrypy_server_runner\n \"\"\"\n )\n", "path": "setup.py"}], "after_files": [{"content": "##############################################################################\n#\n# Copyright (c) 2008-2013 Agendaless Consulting and Contributors.\n# All Rights Reserved.\n#\n# This software is subject to the provisions of the BSD-like license at\n# http://www.repoze.org/LICENSE.txt. A copy of the license should accompany\n# this distribution. 
THIS SOFTWARE IS PROVIDED \"AS IS\" AND ANY AND ALL\n# EXPRESS OR IMPLIED WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO,\n# THE IMPLIED WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND\n# FITNESS FOR A PARTICULAR PURPOSE\n#\n##############################################################################\n\nimport os\nimport sys\n\nfrom setuptools import setup, find_packages\n\npy_version = sys.version_info[:2]\n\nPY3 = py_version[0] == 3\n\nif PY3:\n if py_version < (3, 4):\n raise RuntimeError('On Python 3, Pyramid requires Python 3.4 or better')\nelse:\n if py_version < (2, 7):\n raise RuntimeError('On Python 2, Pyramid requires Python 2.7 or better')\n\nhere = os.path.abspath(os.path.dirname(__file__))\ntry:\n with open(os.path.join(here, 'README.rst')) as f:\n README = f.read()\n with open(os.path.join(here, 'CHANGES.txt')) as f:\n CHANGES = f.read()\nexcept IOError:\n README = CHANGES = ''\n\ninstall_requires = [\n 'setuptools',\n 'WebOb >= 1.3.1', # request.domain and CookieProfile\n 'repoze.lru >= 0.4', # py3 compat\n 'zope.interface >= 3.8.0', # has zope.interface.registry\n 'zope.deprecation >= 3.5.0', # py3 compat\n 'venusian >= 1.0a3', # ``ignore``\n 'translationstring >= 0.4', # py3 compat\n 'PasteDeploy >= 1.5.0', # py3 compat\n ]\n\ntests_require = [\n 'WebTest >= 1.3.1', # py3 compat\n ]\n\nif not PY3:\n tests_require.append('zope.component>=3.11.0')\n\ndocs_extras = [\n 'Sphinx >= 1.3.5',\n 'docutils',\n 'repoze.sphinx.autointerface',\n 'pylons_sphinx_latesturl',\n 'pylons-sphinx-themes',\n 'sphinxcontrib-programoutput',\n ]\n\ntesting_extras = tests_require + [\n 'nose',\n 'coverage',\n 'virtualenv', # for scaffolding tests\n ]\n\nsetup(name='pyramid',\n version='1.8.dev0',\n description='The Pyramid Web Framework, a Pylons project',\n long_description=README + '\\n\\n' + CHANGES,\n classifiers=[\n \"Development Status :: 6 - Mature\",\n \"Intended Audience :: Developers\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Framework :: Pyramid\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI\",\n \"License :: Repoze Public License\",\n ],\n keywords='web wsgi pylons pyramid',\n author=\"Chris McDonough, Agendaless Consulting\",\n author_email=\"[email protected]\",\n url=\"https://trypyramid.com\",\n license=\"BSD-derived (http://www.repoze.org/LICENSE.txt)\",\n packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n install_requires=install_requires,\n extras_require={\n 'testing': testing_extras,\n 'docs': docs_extras,\n },\n tests_require=tests_require,\n test_suite=\"pyramid.tests\",\n entry_points=\"\"\"\\\n [pyramid.scaffold]\n starter=pyramid.scaffolds:StarterProjectTemplate\n zodb=pyramid.scaffolds:ZODBProjectTemplate\n alchemy=pyramid.scaffolds:AlchemyProjectTemplate\n [pyramid.pshell_runner]\n python=pyramid.scripts.pshell:python_shell_runner\n [console_scripts]\n pcreate = pyramid.scripts.pcreate:main\n pserve = pyramid.scripts.pserve:main\n pshell = pyramid.scripts.pshell:main\n proutes = pyramid.scripts.proutes:main\n pviews = pyramid.scripts.pviews:main\n ptweens = pyramid.scripts.ptweens:main\n prequest = pyramid.scripts.prequest:main\n pdistreport = pyramid.scripts.pdistreport:main\n 
[paste.server_runner]\n wsgiref = pyramid.scripts.pserve:wsgiref_server_runner\n cherrypy = pyramid.scripts.pserve:cherrypy_server_runner\n \"\"\"\n )\n", "path": "setup.py"}]}
| 1,747 | 337 |
gh_patches_debug_19166
|
rasdani/github-patches
|
git_diff
|
airctic__icevision-870
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add EfficientDet AdvProp-AA
## 🚀 Feature
Add EfficientDet AdvProp-AA pretrained backbones for D0-D5
See https://github.com/google/automl/blob/master/efficientdet/Det-AdvProp.md
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `icevision/models/ross/efficientdet/backbones.py`
Content:
```
1 __all__ = [
2 "tf_lite0",
3 "tf_lite1",
4 "tf_lite2",
5 "tf_lite3",
6 "tf_d0",
7 "tf_d1",
8 "tf_d2",
9 "tf_d3",
10 "tf_d4",
11 "tf_d5",
12 "tf_d6",
13 "tf_d7",
14 "tf_d7x",
15 "d0",
16 "d1",
17 "d2",
18 "d3",
19 "d4",
20 "d5",
21 "d6",
22 "d7",
23 "d7x",
24 ]
25
26 from icevision.models.ross.efficientdet.utils import *
27
28
29 tf_lite0 = EfficientDetBackboneConfig(model_name="tf_efficientdet_lite0")
30 tf_lite1 = EfficientDetBackboneConfig(model_name="tf_efficientdet_lite1")
31 tf_lite2 = EfficientDetBackboneConfig(model_name="tf_efficientdet_lite2")
32 tf_lite3 = EfficientDetBackboneConfig(model_name="tf_efficientdet_lite3")
33
34 tf_d0 = EfficientDetBackboneConfig(model_name="tf_efficientdet_d0")
35 tf_d1 = EfficientDetBackboneConfig(model_name="tf_efficientdet_d1")
36 tf_d2 = EfficientDetBackboneConfig(model_name="tf_efficientdet_d2")
37 tf_d3 = EfficientDetBackboneConfig(model_name="tf_efficientdet_d3")
38 tf_d4 = EfficientDetBackboneConfig(model_name="tf_efficientdet_d4")
39 tf_d5 = EfficientDetBackboneConfig(model_name="tf_efficientdet_d5")
40 tf_d6 = EfficientDetBackboneConfig(model_name="tf_efficientdet_d6")
41 tf_d7 = EfficientDetBackboneConfig(model_name="tf_efficientdet_d7")
42 tf_d7x = EfficientDetBackboneConfig(model_name="tf_efficientdet_d7x")
43
44 d0 = EfficientDetBackboneConfig(model_name="efficientdet_d0")
45 d1 = EfficientDetBackboneConfig(model_name="efficientdet_d1")
46 d2 = EfficientDetBackboneConfig(model_name="efficientdet_d2")
47 d3 = EfficientDetBackboneConfig(model_name="efficientdet_d3")
48 d4 = EfficientDetBackboneConfig(model_name="efficientdet_d4")
49 d5 = EfficientDetBackboneConfig(model_name="efficientdet_d5")
50 d6 = EfficientDetBackboneConfig(model_name="efficientdet_d6")
51 d7 = EfficientDetBackboneConfig(model_name="efficientdet_d7")
52 d7x = EfficientDetBackboneConfig(model_name="efficientdet_d7x")
53
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/icevision/models/ross/efficientdet/backbones.py b/icevision/models/ross/efficientdet/backbones.py
--- a/icevision/models/ross/efficientdet/backbones.py
+++ b/icevision/models/ross/efficientdet/backbones.py
@@ -21,6 +21,12 @@
"d6",
"d7",
"d7x",
+ "tf_d0_ap",
+ "tf_d1_ap",
+ "tf_d2_ap",
+ "tf_d3_ap",
+ "tf_d4_ap",
+ "tf_d5_ap",
]
from icevision.models.ross.efficientdet.utils import *
@@ -50,3 +56,10 @@
d6 = EfficientDetBackboneConfig(model_name="efficientdet_d6")
d7 = EfficientDetBackboneConfig(model_name="efficientdet_d7")
d7x = EfficientDetBackboneConfig(model_name="efficientdet_d7x")
+
+tf_d0_ap = EfficientDetBackboneConfig(model_name="tf_efficientdet_d0_ap")
+tf_d1_ap = EfficientDetBackboneConfig(model_name="tf_efficientdet_d1_ap")
+tf_d2_ap = EfficientDetBackboneConfig(model_name="tf_efficientdet_d2_ap")
+tf_d3_ap = EfficientDetBackboneConfig(model_name="tf_efficientdet_d3_ap")
+tf_d4_ap = EfficientDetBackboneConfig(model_name="tf_efficientdet_d4_ap")
+tf_d5_ap = EfficientDetBackboneConfig(model_name="tf_efficientdet_d5_ap")
|
{"golden_diff": "diff --git a/icevision/models/ross/efficientdet/backbones.py b/icevision/models/ross/efficientdet/backbones.py\n--- a/icevision/models/ross/efficientdet/backbones.py\n+++ b/icevision/models/ross/efficientdet/backbones.py\n@@ -21,6 +21,12 @@\n \"d6\",\n \"d7\",\n \"d7x\",\n+ \"tf_d0_ap\",\n+ \"tf_d1_ap\",\n+ \"tf_d2_ap\",\n+ \"tf_d3_ap\",\n+ \"tf_d4_ap\",\n+ \"tf_d5_ap\",\n ]\n \n from icevision.models.ross.efficientdet.utils import *\n@@ -50,3 +56,10 @@\n d6 = EfficientDetBackboneConfig(model_name=\"efficientdet_d6\")\n d7 = EfficientDetBackboneConfig(model_name=\"efficientdet_d7\")\n d7x = EfficientDetBackboneConfig(model_name=\"efficientdet_d7x\")\n+\n+tf_d0_ap = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d0_ap\")\n+tf_d1_ap = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d1_ap\")\n+tf_d2_ap = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d2_ap\")\n+tf_d3_ap = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d3_ap\")\n+tf_d4_ap = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d4_ap\")\n+tf_d5_ap = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d5_ap\")\n", "issue": "Add EfficientDet AdvProp-AA\n## \ud83d\ude80 Feature\r\nAdd EfficientDet AdvProp-AA pretrained backbones for D0-D5\r\n\r\nSee https://github.com/google/automl/blob/master/efficientdet/Det-AdvProp.md\n", "before_files": [{"content": "__all__ = [\n \"tf_lite0\",\n \"tf_lite1\",\n \"tf_lite2\",\n \"tf_lite3\",\n \"tf_d0\",\n \"tf_d1\",\n \"tf_d2\",\n \"tf_d3\",\n \"tf_d4\",\n \"tf_d5\",\n \"tf_d6\",\n \"tf_d7\",\n \"tf_d7x\",\n \"d0\",\n \"d1\",\n \"d2\",\n \"d3\",\n \"d4\",\n \"d5\",\n \"d6\",\n \"d7\",\n \"d7x\",\n]\n\nfrom icevision.models.ross.efficientdet.utils import *\n\n\ntf_lite0 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_lite0\")\ntf_lite1 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_lite1\")\ntf_lite2 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_lite2\")\ntf_lite3 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_lite3\")\n\ntf_d0 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d0\")\ntf_d1 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d1\")\ntf_d2 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d2\")\ntf_d3 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d3\")\ntf_d4 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d4\")\ntf_d5 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d5\")\ntf_d6 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d6\")\ntf_d7 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d7\")\ntf_d7x = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d7x\")\n\nd0 = EfficientDetBackboneConfig(model_name=\"efficientdet_d0\")\nd1 = EfficientDetBackboneConfig(model_name=\"efficientdet_d1\")\nd2 = EfficientDetBackboneConfig(model_name=\"efficientdet_d2\")\nd3 = EfficientDetBackboneConfig(model_name=\"efficientdet_d3\")\nd4 = EfficientDetBackboneConfig(model_name=\"efficientdet_d4\")\nd5 = EfficientDetBackboneConfig(model_name=\"efficientdet_d5\")\nd6 = EfficientDetBackboneConfig(model_name=\"efficientdet_d6\")\nd7 = EfficientDetBackboneConfig(model_name=\"efficientdet_d7\")\nd7x = EfficientDetBackboneConfig(model_name=\"efficientdet_d7x\")\n", "path": "icevision/models/ross/efficientdet/backbones.py"}], "after_files": [{"content": "__all__ = [\n \"tf_lite0\",\n \"tf_lite1\",\n \"tf_lite2\",\n \"tf_lite3\",\n \"tf_d0\",\n \"tf_d1\",\n \"tf_d2\",\n \"tf_d3\",\n \"tf_d4\",\n \"tf_d5\",\n \"tf_d6\",\n 
\"tf_d7\",\n \"tf_d7x\",\n \"d0\",\n \"d1\",\n \"d2\",\n \"d3\",\n \"d4\",\n \"d5\",\n \"d6\",\n \"d7\",\n \"d7x\",\n \"tf_d0_ap\",\n \"tf_d1_ap\",\n \"tf_d2_ap\",\n \"tf_d3_ap\",\n \"tf_d4_ap\",\n \"tf_d5_ap\",\n]\n\nfrom icevision.models.ross.efficientdet.utils import *\n\n\ntf_lite0 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_lite0\")\ntf_lite1 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_lite1\")\ntf_lite2 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_lite2\")\ntf_lite3 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_lite3\")\n\ntf_d0 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d0\")\ntf_d1 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d1\")\ntf_d2 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d2\")\ntf_d3 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d3\")\ntf_d4 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d4\")\ntf_d5 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d5\")\ntf_d6 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d6\")\ntf_d7 = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d7\")\ntf_d7x = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d7x\")\n\nd0 = EfficientDetBackboneConfig(model_name=\"efficientdet_d0\")\nd1 = EfficientDetBackboneConfig(model_name=\"efficientdet_d1\")\nd2 = EfficientDetBackboneConfig(model_name=\"efficientdet_d2\")\nd3 = EfficientDetBackboneConfig(model_name=\"efficientdet_d3\")\nd4 = EfficientDetBackboneConfig(model_name=\"efficientdet_d4\")\nd5 = EfficientDetBackboneConfig(model_name=\"efficientdet_d5\")\nd6 = EfficientDetBackboneConfig(model_name=\"efficientdet_d6\")\nd7 = EfficientDetBackboneConfig(model_name=\"efficientdet_d7\")\nd7x = EfficientDetBackboneConfig(model_name=\"efficientdet_d7x\")\n\ntf_d0_ap = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d0_ap\")\ntf_d1_ap = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d1_ap\")\ntf_d2_ap = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d2_ap\")\ntf_d3_ap = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d3_ap\")\ntf_d4_ap = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d4_ap\")\ntf_d5_ap = EfficientDetBackboneConfig(model_name=\"tf_efficientdet_d5_ap\")\n", "path": "icevision/models/ross/efficientdet/backbones.py"}]}
| 956 | 348 |
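Side note on the EfficientDet record above: the patch only registers extra `EfficientDetBackboneConfig` entries, so downstream usage stays the same. A minimal usage sketch follows; the `models.ross.efficientdet.model()` factory and the `pretrained=` call convention are assumptions about icevision's API, not something shown in the record itself.

```python
# Sketch only: picking one of the newly added AdvProp backbones (API names assumed).
from icevision import models

backbone_config = models.ross.efficientdet.backbones.tf_d0_ap
model = models.ross.efficientdet.model(
    backbone=backbone_config(pretrained=True),  # assumption: configs are callable with pretrained=
    num_classes=3,
    img_size=512,
)
```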
gh_patches_debug_20891
|
rasdani/github-patches
|
git_diff
|
zigpy__zha-device-handlers-392
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Device Support Request] Add support for updated Legrand Dimmer switch w/o neutral
**Is your feature request related to a problem? Please describe.**
I've updated the firmware of my Legrand Dimmer switch w/o neutral for which support was added in https://github.com/zigpy/zha-device-handlers/issues/299
Before OTA upgrade:
- app_version: 0
- hw_version: 1
- stack_version: 64
- sw_build_id: 01a (26)
- zcl_version: 2
- Firmware: 0x03401a00
After OTA upgrade (2020-06-08):
- app_version: 0
- hw_version: 6
- stack_version: 66
- sw_build_id: 02b (43)
- zcl_version: 2
- Firmware: 0x002b4203
And now it reports a new `GreenPowerProxy` endpoint with id 242:
```
{
"node_descriptor": "<NodeDescriptor byte1=17 byte2=64 mac_capability_flags=142 manufacturer_code=4129 maximum_buffer_size=89 maximum_incoming_transfer_size=63 server_mask=10752 maximum_outgoing_transfer_size=63 descriptor_capability_field=0>",
"endpoints": {
"1": {
"profile_id": 260,
"device_type": "0x0100",
"in_clusters": [
"0x0000",
"0x0003",
"0x0004",
"0x0005",
"0x0006",
"0x0008",
"0x000f",
"0xfc01"
],
"out_clusters": [
"0x0000",
"0x0019",
"0xfc01"
]
},
"242": {
"profile_id": 41440,
"device_type": "0x0061",
"in_clusters": [],
"out_clusters": [
"0x0021"
]
}
},
"manufacturer": " Legrand",
"model": " Dimmer switch w/o neutral",
"class": "zigpy.device.Device"
}
```
The issue is that this prevents the quirk from matching:
```
2020-06-17 06:45:05 DEBUG (MainThread) [zigpy.quirks.registry] Checking quirks for Legrand Dimmer switch w/o neutral (00:04:74:00:00:8b:0e:a2)
2020-06-17 06:45:05 DEBUG (MainThread) [zigpy.quirks.registry] Considering <class 'zhaquirks.legrand.dimmer.DimmerWithoutNeutral'>
2020-06-17 06:45:05 DEBUG (MainThread) [zigpy.quirks.registry] Fail because endpoint list mismatch: {1} {1, 242}
```
**Describe the solution you'd like**
Could the quirk be updated to also support new firmwares?
**Device signature - this can be acquired by removing the device from ZHA and pairing it again from the add devices screen. Be sure to add the entire content of the log panel after pairing the device to a code block below this line.**
TODO
**Additional context**

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `zhaquirks/legrand/dimmer.py`
Content:
```
1 """Device handler for Legrand Dimmer switch w/o neutral."""
2 from zigpy.profiles import zha
3 from zigpy.quirks import CustomCluster, CustomDevice
4 import zigpy.types as t
5 from zigpy.zcl.clusters.general import (
6 Basic,
7 BinaryInput,
8 Groups,
9 Identify,
10 LevelControl,
11 OnOff,
12 Ota,
13 Scenes,
14 )
15 from zigpy.zcl.clusters.manufacturer_specific import ManufacturerSpecificCluster
16
17 from . import LEGRAND
18 from ..const import (
19 DEVICE_TYPE,
20 ENDPOINTS,
21 INPUT_CLUSTERS,
22 MODELS_INFO,
23 OUTPUT_CLUSTERS,
24 PROFILE_ID,
25 )
26
27 MANUFACTURER_SPECIFIC_CLUSTER_ID = 0xFC01 # decimal = 64513
28
29
30 class LegrandCluster(CustomCluster, ManufacturerSpecificCluster):
31 """LegrandCluster."""
32
33 cluster_id = MANUFACTURER_SPECIFIC_CLUSTER_ID
34 name = "LegrandCluster"
35 ep_attribute = "legrand_cluster"
36 attributes = {
37 0x0000: ("dimmer", t.data16),
38 0x0001: ("led_dark", t.Bool),
39 0x0002: ("led_on", t.Bool),
40 }
41 server_commands = {}
42 client_commands = {}
43
44
45 class DimmerWithoutNeutral(CustomDevice):
46 """Dimmer switch w/o neutral."""
47
48 signature = {
49 # <SimpleDescriptor endpoint=1 profile=260 device_type=256
50 # device_version=1
51 # input_clusters=[0, 3, 4, 8, 6, 5, 15, 64513]
52 # output_clusters=[0, 64513, 25]>
53 MODELS_INFO: [(f" {LEGRAND}", " Dimmer switch w/o neutral")],
54 ENDPOINTS: {
55 1: {
56 PROFILE_ID: zha.PROFILE_ID,
57 DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT,
58 INPUT_CLUSTERS: [
59 Basic.cluster_id,
60 Identify.cluster_id,
61 Groups.cluster_id,
62 OnOff.cluster_id,
63 LevelControl.cluster_id,
64 Scenes.cluster_id,
65 BinaryInput.cluster_id,
66 MANUFACTURER_SPECIFIC_CLUSTER_ID,
67 ],
68 OUTPUT_CLUSTERS: [
69 Basic.cluster_id,
70 MANUFACTURER_SPECIFIC_CLUSTER_ID,
71 Ota.cluster_id,
72 ],
73 }
74 },
75 }
76
77 replacement = {
78 ENDPOINTS: {
79 1: {
80 PROFILE_ID: zha.PROFILE_ID,
81 DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT,
82 INPUT_CLUSTERS: [
83 Basic.cluster_id,
84 Identify.cluster_id,
85 Groups.cluster_id,
86 OnOff.cluster_id,
87 LevelControl.cluster_id,
88 Scenes.cluster_id,
89 BinaryInput.cluster_id,
90 LegrandCluster,
91 ],
92 OUTPUT_CLUSTERS: [Basic.cluster_id, LegrandCluster, Ota.cluster_id],
93 }
94 }
95 }
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/zhaquirks/legrand/dimmer.py b/zhaquirks/legrand/dimmer.py
--- a/zhaquirks/legrand/dimmer.py
+++ b/zhaquirks/legrand/dimmer.py
@@ -93,3 +93,42 @@
}
}
}
+
+
+class DimmerWithoutNeutral2(DimmerWithoutNeutral):
+ """Dimmer switch w/o neutral 2."""
+
+ signature = {
+ # <SimpleDescriptor endpoint=1 profile=260 device_type=256
+ # device_version=1
+ # input_clusters=[0, 3, 4, 8, 6, 5, 15, 64513]
+ # output_clusters=[0, 64513, 25]>
+ MODELS_INFO: [(f" {LEGRAND}", " Dimmer switch w/o neutral")],
+ ENDPOINTS: {
+ 1: {
+ PROFILE_ID: zha.PROFILE_ID,
+ DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT,
+ INPUT_CLUSTERS: [
+ Basic.cluster_id,
+ Identify.cluster_id,
+ Groups.cluster_id,
+ OnOff.cluster_id,
+ LevelControl.cluster_id,
+ Scenes.cluster_id,
+ BinaryInput.cluster_id,
+ MANUFACTURER_SPECIFIC_CLUSTER_ID,
+ ],
+ OUTPUT_CLUSTERS: [
+ Basic.cluster_id,
+ MANUFACTURER_SPECIFIC_CLUSTER_ID,
+ Ota.cluster_id,
+ ],
+ },
+ 242: {
+ PROFILE_ID: 41440,
+ DEVICE_TYPE: 0x0061,
+ INPUT_CLUSTERS: [],
+ OUTPUT_CLUSTERS: [0x0021],
+ },
+ },
+ }
|
{"golden_diff": "diff --git a/zhaquirks/legrand/dimmer.py b/zhaquirks/legrand/dimmer.py\n--- a/zhaquirks/legrand/dimmer.py\n+++ b/zhaquirks/legrand/dimmer.py\n@@ -93,3 +93,42 @@\n }\n }\n }\n+\n+\n+class DimmerWithoutNeutral2(DimmerWithoutNeutral):\n+ \"\"\"Dimmer switch w/o neutral 2.\"\"\"\n+\n+ signature = {\n+ # <SimpleDescriptor endpoint=1 profile=260 device_type=256\n+ # device_version=1\n+ # input_clusters=[0, 3, 4, 8, 6, 5, 15, 64513]\n+ # output_clusters=[0, 64513, 25]>\n+ MODELS_INFO: [(f\" {LEGRAND}\", \" Dimmer switch w/o neutral\")],\n+ ENDPOINTS: {\n+ 1: {\n+ PROFILE_ID: zha.PROFILE_ID,\n+ DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT,\n+ INPUT_CLUSTERS: [\n+ Basic.cluster_id,\n+ Identify.cluster_id,\n+ Groups.cluster_id,\n+ OnOff.cluster_id,\n+ LevelControl.cluster_id,\n+ Scenes.cluster_id,\n+ BinaryInput.cluster_id,\n+ MANUFACTURER_SPECIFIC_CLUSTER_ID,\n+ ],\n+ OUTPUT_CLUSTERS: [\n+ Basic.cluster_id,\n+ MANUFACTURER_SPECIFIC_CLUSTER_ID,\n+ Ota.cluster_id,\n+ ],\n+ },\n+ 242: {\n+ PROFILE_ID: 41440,\n+ DEVICE_TYPE: 0x0061,\n+ INPUT_CLUSTERS: [],\n+ OUTPUT_CLUSTERS: [0x0021],\n+ },\n+ },\n+ }\n", "issue": "[Device Support Request] Add support for updated Legrand Dimmer switch w/o neutral\n**Is your feature request related to a problem? Please describe.**\r\n\r\nI've updated the firmware of my Legrand Dimmer switch w/o neutral for which support was added in https://github.com/zigpy/zha-device-handlers/issues/299\r\n\r\nBefore OTA upgrade:\r\n- app_version: 0\r\n- hw_version: 1\r\n- stack_version: 64\r\n- sw_build_id: 01a (26)\r\n- zcl_version: 2\r\n- Firmware: 0x03401a00\r\n\r\nAfter OTA upgrade (2020-06-08):\r\n- app_version: 0\r\n- hw_version: 6\r\n- stack_version: 66\r\n- sw_build_id: 02b (43)\r\n- zcl_version: 2\r\n- Firmware: 0x002b4203\r\n\r\nAnd now it reports a new `GreenPowerProxy` endpoint with id 242:\r\n\r\n```\r\n{\r\n \"node_descriptor\": \"<NodeDescriptor byte1=17 byte2=64 mac_capability_flags=142 manufacturer_code=4129 maximum_buffer_size=89 maximum_incoming_transfer_size=63 server_mask=10752 maximum_outgoing_transfer_size=63 descriptor_capability_field=0>\",\r\n \"endpoints\": {\r\n \"1\": {\r\n \"profile_id\": 260,\r\n \"device_type\": \"0x0100\",\r\n \"in_clusters\": [\r\n \"0x0000\",\r\n \"0x0003\",\r\n \"0x0004\",\r\n \"0x0005\",\r\n \"0x0006\",\r\n \"0x0008\",\r\n \"0x000f\",\r\n \"0xfc01\"\r\n ],\r\n \"out_clusters\": [\r\n \"0x0000\",\r\n \"0x0019\",\r\n \"0xfc01\"\r\n ]\r\n },\r\n \"242\": {\r\n \"profile_id\": 41440,\r\n \"device_type\": \"0x0061\",\r\n \"in_clusters\": [],\r\n \"out_clusters\": [\r\n \"0x0021\"\r\n ]\r\n }\r\n },\r\n \"manufacturer\": \" Legrand\",\r\n \"model\": \" Dimmer switch w/o neutral\",\r\n \"class\": \"zigpy.device.Device\"\r\n}\r\n```\r\n\r\n\r\nThe issue is that prevents the quirk from matching:\r\n\r\n```\r\n2020-06-17 06:45:05 DEBUG (MainThread) [zigpy.quirks.registry] Checking quirks for Legrand Dimmer switch w/o neutral (00:04:74:00:00:8b:0e:a2)\r\n2020-06-17 06:45:05 DEBUG (MainThread) [zigpy.quirks.registry] Considering <class 'zhaquirks.legrand.dimmer.DimmerWithoutNeutral'>\r\n2020-06-17 06:45:05 DEBUG (MainThread) [zigpy.quirks.registry] Fail because endpoint list mismatch: {1} {1, 242}\r\n```\r\n\r\n**Describe the solution you'd like**\r\n\r\nCould the quirk be updated to also support new firmwares?\r\n\r\n**Device signature - this can be acquired by removing the device from ZHA and pairing it again from the add devices screen. 
Be sure to add the entire content of the log panel after pairing the device to a code block below this line.**\r\n\r\nTODO\r\n\r\n**Additional context**\r\n\r\n\n", "before_files": [{"content": "\"\"\"Device handler for Legrand Dimmer switch w/o neutral.\"\"\"\nfrom zigpy.profiles import zha\nfrom zigpy.quirks import CustomCluster, CustomDevice\nimport zigpy.types as t\nfrom zigpy.zcl.clusters.general import (\n Basic,\n BinaryInput,\n Groups,\n Identify,\n LevelControl,\n OnOff,\n Ota,\n Scenes,\n)\nfrom zigpy.zcl.clusters.manufacturer_specific import ManufacturerSpecificCluster\n\nfrom . import LEGRAND\nfrom ..const import (\n DEVICE_TYPE,\n ENDPOINTS,\n INPUT_CLUSTERS,\n MODELS_INFO,\n OUTPUT_CLUSTERS,\n PROFILE_ID,\n)\n\nMANUFACTURER_SPECIFIC_CLUSTER_ID = 0xFC01 # decimal = 64513\n\n\nclass LegrandCluster(CustomCluster, ManufacturerSpecificCluster):\n \"\"\"LegrandCluster.\"\"\"\n\n cluster_id = MANUFACTURER_SPECIFIC_CLUSTER_ID\n name = \"LegrandCluster\"\n ep_attribute = \"legrand_cluster\"\n attributes = {\n 0x0000: (\"dimmer\", t.data16),\n 0x0001: (\"led_dark\", t.Bool),\n 0x0002: (\"led_on\", t.Bool),\n }\n server_commands = {}\n client_commands = {}\n\n\nclass DimmerWithoutNeutral(CustomDevice):\n \"\"\"Dimmer switch w/o neutral.\"\"\"\n\n signature = {\n # <SimpleDescriptor endpoint=1 profile=260 device_type=256\n # device_version=1\n # input_clusters=[0, 3, 4, 8, 6, 5, 15, 64513]\n # output_clusters=[0, 64513, 25]>\n MODELS_INFO: [(f\" {LEGRAND}\", \" Dimmer switch w/o neutral\")],\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n OnOff.cluster_id,\n LevelControl.cluster_id,\n Scenes.cluster_id,\n BinaryInput.cluster_id,\n MANUFACTURER_SPECIFIC_CLUSTER_ID,\n ],\n OUTPUT_CLUSTERS: [\n Basic.cluster_id,\n MANUFACTURER_SPECIFIC_CLUSTER_ID,\n Ota.cluster_id,\n ],\n }\n },\n }\n\n replacement = {\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n OnOff.cluster_id,\n LevelControl.cluster_id,\n Scenes.cluster_id,\n BinaryInput.cluster_id,\n LegrandCluster,\n ],\n OUTPUT_CLUSTERS: [Basic.cluster_id, LegrandCluster, Ota.cluster_id],\n }\n }\n }\n", "path": "zhaquirks/legrand/dimmer.py"}], "after_files": [{"content": "\"\"\"Device handler for Legrand Dimmer switch w/o neutral.\"\"\"\nfrom zigpy.profiles import zha\nfrom zigpy.quirks import CustomCluster, CustomDevice\nimport zigpy.types as t\nfrom zigpy.zcl.clusters.general import (\n Basic,\n BinaryInput,\n Groups,\n Identify,\n LevelControl,\n OnOff,\n Ota,\n Scenes,\n)\nfrom zigpy.zcl.clusters.manufacturer_specific import ManufacturerSpecificCluster\n\nfrom . 
import LEGRAND\nfrom ..const import (\n DEVICE_TYPE,\n ENDPOINTS,\n INPUT_CLUSTERS,\n MODELS_INFO,\n OUTPUT_CLUSTERS,\n PROFILE_ID,\n)\n\nMANUFACTURER_SPECIFIC_CLUSTER_ID = 0xFC01 # decimal = 64513\n\n\nclass LegrandCluster(CustomCluster, ManufacturerSpecificCluster):\n \"\"\"LegrandCluster.\"\"\"\n\n cluster_id = MANUFACTURER_SPECIFIC_CLUSTER_ID\n name = \"LegrandCluster\"\n ep_attribute = \"legrand_cluster\"\n attributes = {\n 0x0000: (\"dimmer\", t.data16),\n 0x0001: (\"led_dark\", t.Bool),\n 0x0002: (\"led_on\", t.Bool),\n }\n server_commands = {}\n client_commands = {}\n\n\nclass DimmerWithoutNeutral(CustomDevice):\n \"\"\"Dimmer switch w/o neutral.\"\"\"\n\n signature = {\n # <SimpleDescriptor endpoint=1 profile=260 device_type=256\n # device_version=1\n # input_clusters=[0, 3, 4, 8, 6, 5, 15, 64513]\n # output_clusters=[0, 64513, 25]>\n MODELS_INFO: [(f\" {LEGRAND}\", \" Dimmer switch w/o neutral\")],\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n OnOff.cluster_id,\n LevelControl.cluster_id,\n Scenes.cluster_id,\n BinaryInput.cluster_id,\n MANUFACTURER_SPECIFIC_CLUSTER_ID,\n ],\n OUTPUT_CLUSTERS: [\n Basic.cluster_id,\n MANUFACTURER_SPECIFIC_CLUSTER_ID,\n Ota.cluster_id,\n ],\n }\n },\n }\n\n replacement = {\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n OnOff.cluster_id,\n LevelControl.cluster_id,\n Scenes.cluster_id,\n BinaryInput.cluster_id,\n LegrandCluster,\n ],\n OUTPUT_CLUSTERS: [Basic.cluster_id, LegrandCluster, Ota.cluster_id],\n }\n }\n }\n\n\nclass DimmerWithoutNeutral2(DimmerWithoutNeutral):\n \"\"\"Dimmer switch w/o neutral 2.\"\"\"\n\n signature = {\n # <SimpleDescriptor endpoint=1 profile=260 device_type=256\n # device_version=1\n # input_clusters=[0, 3, 4, 8, 6, 5, 15, 64513]\n # output_clusters=[0, 64513, 25]>\n MODELS_INFO: [(f\" {LEGRAND}\", \" Dimmer switch w/o neutral\")],\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.ON_OFF_LIGHT,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n OnOff.cluster_id,\n LevelControl.cluster_id,\n Scenes.cluster_id,\n BinaryInput.cluster_id,\n MANUFACTURER_SPECIFIC_CLUSTER_ID,\n ],\n OUTPUT_CLUSTERS: [\n Basic.cluster_id,\n MANUFACTURER_SPECIFIC_CLUSTER_ID,\n Ota.cluster_id,\n ],\n },\n 242: {\n PROFILE_ID: 41440,\n DEVICE_TYPE: 0x0061,\n INPUT_CLUSTERS: [],\n OUTPUT_CLUSTERS: [0x0021],\n },\n },\n }\n", "path": "zhaquirks/legrand/dimmer.py"}]}
| 1,953 | 415 |
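A brief aside on the Legrand fix above: the new endpoint-242 entry is written with raw numbers. Purely as an illustration (not the project's actual constants), the same values can be named so the Green Power intent stays visible:

```python
# Named versions of the magic numbers used for the extra endpoint in the patch above.
GREENPOWER_ENDPOINT_ID = 242
GREENPOWER_PROFILE_ID = 0xA1E0   # 41440, ZigBee Green Power profile
GREENPOWER_DEVICE_TYPE = 0x0061  # device type reported by the dimmer's endpoint 242
GREENPOWER_CLUSTER_ID = 0x0021   # GreenPowerProxy output cluster

greenpower_endpoint_signature = {
    "profile_id": GREENPOWER_PROFILE_ID,
    "device_type": GREENPOWER_DEVICE_TYPE,
    "input_clusters": [],
    "output_clusters": [GREENPOWER_CLUSTER_ID],
}
```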
gh_patches_debug_25654
|
rasdani/github-patches
|
git_diff
|
bokeh__bokeh-5348
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Deprecated example
https://github.com/bokeh/bokeh/blob/0.12.3/examples/embed/simple/simple.py
```
Because the ``resources`` argument is no longer needed, it is deprecated and no longer has any effect.
```
The link is also broken:
http://bokeh.pydata.org/en/latest/docs/user_guide/embedding.html#components
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/embed/simple/simple.py`
Content:
```
1 '''This example demonstrates embedding a standalone Bokeh document
2 into a simple Flask application, with a basic HTML web form.
3
4 To view the example, run:
5
6 python simple.py
7
8 in this directory, and navigate to:
9
10 http://localhost:5000
11
12 '''
13 from __future__ import print_function
14
15 import flask
16
17 from bokeh.embed import components
18 from bokeh.plotting import figure
19 from bokeh.resources import INLINE
20 from bokeh.util.string import encode_utf8
21
22 app = flask.Flask(__name__)
23
24 colors = {
25 'Black': '#000000',
26 'Red': '#FF0000',
27 'Green': '#00FF00',
28 'Blue': '#0000FF',
29 }
30
31 def getitem(obj, item, default):
32 if item not in obj:
33 return default
34 else:
35 return obj[item]
36
37 @app.route("/")
38 def polynomial():
39 """ Very simple embedding of a polynomial chart
40
41 """
42
43 # Grab the inputs arguments from the URL
44 # This is automated by the button
45 args = flask.request.args
46
47 # Get all the form arguments in the url with defaults
48 color = colors[getitem(args, 'color', 'Black')]
49 _from = int(getitem(args, '_from', 0))
50 to = int(getitem(args, 'to', 10))
51
52 # Create a polynomial line graph
53 x = list(range(_from, to + 1))
54 fig = figure(title="Polynomial")
55 fig.line(x, [i ** 2 for i in x], color=color, line_width=2)
56
57 # Configure resources to include BokehJS inline in the document.
58 # For more details see:
59 # http://bokeh.pydata.org/en/latest/docs/reference/resources_embedding.html#bokeh-embed
60 js_resources = INLINE.render_js()
61 css_resources = INLINE.render_css()
62
63 # For more details see:
64 # http://bokeh.pydata.org/en/latest/docs/user_guide/embedding.html#components
65 script, div = components(fig, INLINE)
66 html = flask.render_template(
67 'embed.html',
68 plot_script=script,
69 plot_div=div,
70 js_resources=js_resources,
71 css_resources=css_resources,
72 color=color,
73 _from=_from,
74 to=to
75 )
76 return encode_utf8(html)
77
78 if __name__ == "__main__":
79 print(__doc__)
80 app.run()
81
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/examples/embed/simple/simple.py b/examples/embed/simple/simple.py
--- a/examples/embed/simple/simple.py
+++ b/examples/embed/simple/simple.py
@@ -41,7 +41,6 @@
"""
# Grab the inputs arguments from the URL
- # This is automated by the button
args = flask.request.args
# Get all the form arguments in the url with defaults
@@ -49,20 +48,15 @@
_from = int(getitem(args, '_from', 0))
to = int(getitem(args, 'to', 10))
- # Create a polynomial line graph
+ # Create a polynomial line graph with those arguments
x = list(range(_from, to + 1))
fig = figure(title="Polynomial")
fig.line(x, [i ** 2 for i in x], color=color, line_width=2)
- # Configure resources to include BokehJS inline in the document.
- # For more details see:
- # http://bokeh.pydata.org/en/latest/docs/reference/resources_embedding.html#bokeh-embed
js_resources = INLINE.render_js()
css_resources = INLINE.render_css()
- # For more details see:
- # http://bokeh.pydata.org/en/latest/docs/user_guide/embedding.html#components
- script, div = components(fig, INLINE)
+ script, div = components(fig)
html = flask.render_template(
'embed.html',
plot_script=script,
|
{"golden_diff": "diff --git a/examples/embed/simple/simple.py b/examples/embed/simple/simple.py\n--- a/examples/embed/simple/simple.py\n+++ b/examples/embed/simple/simple.py\n@@ -41,7 +41,6 @@\n \"\"\"\n \n # Grab the inputs arguments from the URL\n- # This is automated by the button\n args = flask.request.args\n \n # Get all the form arguments in the url with defaults\n@@ -49,20 +48,15 @@\n _from = int(getitem(args, '_from', 0))\n to = int(getitem(args, 'to', 10))\n \n- # Create a polynomial line graph\n+ # Create a polynomial line graph with those arguments\n x = list(range(_from, to + 1))\n fig = figure(title=\"Polynomial\")\n fig.line(x, [i ** 2 for i in x], color=color, line_width=2)\n \n- # Configure resources to include BokehJS inline in the document.\n- # For more details see:\n- # http://bokeh.pydata.org/en/latest/docs/reference/resources_embedding.html#bokeh-embed\n js_resources = INLINE.render_js()\n css_resources = INLINE.render_css()\n \n- # For more details see:\n- # http://bokeh.pydata.org/en/latest/docs/user_guide/embedding.html#components\n- script, div = components(fig, INLINE)\n+ script, div = components(fig)\n html = flask.render_template(\n 'embed.html',\n plot_script=script,\n", "issue": "Depreciated example\nhttps://github.com/bokeh/bokeh/blob/0.12.3/examples/embed/simple/simple.py\n\n```\nBecause the ``resources`` argument is no longer needed, it is deprecated and no longer has any effect.\n```\n\nThe link is also broken:\nhttp://bokeh.pydata.org/en/latest/docs/user_guide/embedding.html#components\n\n", "before_files": [{"content": "'''This example demonstrates embedding a standalone Bokeh document\ninto a simple Flask application, with a basic HTML web form.\n\nTo view the example, run:\n\n python simple.py\n\nin this directory, and navigate to:\n\n http://localhost:5000\n\n'''\nfrom __future__ import print_function\n\nimport flask\n\nfrom bokeh.embed import components\nfrom bokeh.plotting import figure\nfrom bokeh.resources import INLINE\nfrom bokeh.util.string import encode_utf8\n\napp = flask.Flask(__name__)\n\ncolors = {\n 'Black': '#000000',\n 'Red': '#FF0000',\n 'Green': '#00FF00',\n 'Blue': '#0000FF',\n}\n\ndef getitem(obj, item, default):\n if item not in obj:\n return default\n else:\n return obj[item]\n\[email protected](\"/\")\ndef polynomial():\n \"\"\" Very simple embedding of a polynomial chart\n\n \"\"\"\n\n # Grab the inputs arguments from the URL\n # This is automated by the button\n args = flask.request.args\n\n # Get all the form arguments in the url with defaults\n color = colors[getitem(args, 'color', 'Black')]\n _from = int(getitem(args, '_from', 0))\n to = int(getitem(args, 'to', 10))\n\n # Create a polynomial line graph\n x = list(range(_from, to + 1))\n fig = figure(title=\"Polynomial\")\n fig.line(x, [i ** 2 for i in x], color=color, line_width=2)\n\n # Configure resources to include BokehJS inline in the document.\n # For more details see:\n # http://bokeh.pydata.org/en/latest/docs/reference/resources_embedding.html#bokeh-embed\n js_resources = INLINE.render_js()\n css_resources = INLINE.render_css()\n\n # For more details see:\n # http://bokeh.pydata.org/en/latest/docs/user_guide/embedding.html#components\n script, div = components(fig, INLINE)\n html = flask.render_template(\n 'embed.html',\n plot_script=script,\n plot_div=div,\n js_resources=js_resources,\n css_resources=css_resources,\n color=color,\n _from=_from,\n to=to\n )\n return encode_utf8(html)\n\nif __name__ == \"__main__\":\n print(__doc__)\n app.run()\n", "path": 
"examples/embed/simple/simple.py"}], "after_files": [{"content": "'''This example demonstrates embedding a standalone Bokeh document\ninto a simple Flask application, with a basic HTML web form.\n\nTo view the example, run:\n\n python simple.py\n\nin this directory, and navigate to:\n\n http://localhost:5000\n\n'''\nfrom __future__ import print_function\n\nimport flask\n\nfrom bokeh.embed import components\nfrom bokeh.plotting import figure\nfrom bokeh.resources import INLINE\nfrom bokeh.util.string import encode_utf8\n\napp = flask.Flask(__name__)\n\ncolors = {\n 'Black': '#000000',\n 'Red': '#FF0000',\n 'Green': '#00FF00',\n 'Blue': '#0000FF',\n}\n\ndef getitem(obj, item, default):\n if item not in obj:\n return default\n else:\n return obj[item]\n\[email protected](\"/\")\ndef polynomial():\n \"\"\" Very simple embedding of a polynomial chart\n\n \"\"\"\n\n # Grab the inputs arguments from the URL\n args = flask.request.args\n\n # Get all the form arguments in the url with defaults\n color = colors[getitem(args, 'color', 'Black')]\n _from = int(getitem(args, '_from', 0))\n to = int(getitem(args, 'to', 10))\n\n # Create a polynomial line graph with those arguments\n x = list(range(_from, to + 1))\n fig = figure(title=\"Polynomial\")\n fig.line(x, [i ** 2 for i in x], color=color, line_width=2)\n\n js_resources = INLINE.render_js()\n css_resources = INLINE.render_css()\n\n script, div = components(fig)\n html = flask.render_template(\n 'embed.html',\n plot_script=script,\n plot_div=div,\n js_resources=js_resources,\n css_resources=css_resources,\n color=color,\n _from=_from,\n to=to\n )\n return encode_utf8(html)\n\nif __name__ == \"__main__\":\n print(__doc__)\n app.run()\n", "path": "examples/embed/simple/simple.py"}]}
| 1,025 | 331 |
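For quick reference, the embedding pattern that the bokeh patch above settles on reduces to a few lines. This sketch only restates what the corrected example does, with the Flask and template plumbing left out:

```python
# Condensed embed flow from the patched example: components() takes only the figure.
from bokeh.embed import components
from bokeh.plotting import figure
from bokeh.resources import INLINE

fig = figure(title="Polynomial")
fig.line([0, 1, 2, 3], [0, 1, 4, 9], line_width=2)

script, div = components(fig)        # no deprecated resources argument
js_resources = INLINE.render_js()    # BokehJS assets are rendered separately for the page template
css_resources = INLINE.render_css()
```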
gh_patches_debug_3980
|
rasdani/github-patches
|
git_diff
|
data-for-change__anyway-291
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve cluster accuracy
Cluster aggregation of markers in `in_cluster` uses a box instead of a circle for the radius calculation, which I think may cause duplications and inaccuracy
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `static/pymapcluster.py`
Content:
```
1 ##
2 import globalmaptiles as globaltiles
3 from math import cos, sin, atan2, sqrt
4 import time
5 ##
6
7 def center_geolocation(geolocations):
8 """
9 Provide a relatively accurate center lat, lon returned as a list pair, given
10 a list of list pairs.
11 ex: in: geolocations = ((lat1,lon1), (lat2,lon2),)
12 out: (center_lat, center_lon)
13 """
14 x = 0
15 y = 0
16 z = 0
17
18 for lat, lon in geolocations:
19 lat = float(lat)
20 lon = float(lon)
21 x += cos(lat) * cos(lon)
22 y += cos(lat) * sin(lon)
23 z += sin(lat)
24
25 x = float(x / len(geolocations))
26 y = float(y / len(geolocations))
27 z = float(z / len(geolocations))
28
29 return (atan2(y, x), atan2(z, sqrt(x * x + y * y)))
30
31 def latlng_to_zoompixels(mercator, lat, lng, zoom):
32 mx, my = mercator.LatLonToMeters(lat, lng)
33 pix = mercator.MetersToPixels(mx, my, zoom)
34 return pix
35
36 def in_cluster(center, radius, point):
37 return (point[0] >= center[0] - radius) and (point[0] <= center[0] + radius) \
38 and (point[1] >= center[1] - radius) and (point[1] <= center[1] + radius)
39
40 def cluster_markers(mercator, latlngs, zoom, gridsize=50):
41 """
42 Args:
43 mercator: instance of GlobalMercator()
44 latlngs: list of (lat,lng) tuple
45 zoom: current zoom level
46 gridsize: cluster radius (in pixels in current zoom level)
47 Returns:
48 centers: list of indices in latlngs of points used as centers
49 clusters: list of same length as latlngs giving assigning each point to
50 a cluster
51 """
52 start_time = time.time()
53 centers = []
54 clusters = []
55 sizes = []
56 latlngs = map(lambda latlng: latlng.serialize(), latlngs)
57 for i, latlng in enumerate(latlngs):
58 lat = latlng['latitude']
59 lng = latlng['longitude']
60 point_pix = latlng_to_zoompixels(mercator, lat, lng, zoom)
61 assigned = False
62 for cidx, c in enumerate(centers):
63 center = latlngs[c]
64 center = latlng_to_zoompixels(mercator, center['latitude'], center['longitude'], zoom)
65 if in_cluster(center, gridsize, point_pix):
66 # Assign point to cluster
67 clusters.append(cidx)
68 sizes[cidx] += 1
69 assigned = True
70 break
71 if not assigned:
72 # Create new cluster for point
73 #TODO center_geolocation the center!
74 centers.append(i)
75 sizes.append(1)
76 clusters.append(len(centers) - 1)
77
78 print('time for cluster_markers: ' + str(time.time() - start_time))
79 return centers, clusters, sizes
80
81 def create_clusters_centers(markers, zoom, radius):
82 mercator = globaltiles.GlobalMercator()
83 centers, clusters, sizes = cluster_markers(mercator, markers, zoom, radius)
84 centers_markers = [markers[i] for i in centers]
85 return centers_markers, clusters, sizes
86
87 def get_cluster_json(clust_marker, clust_size):
88 return {
89 'longitude': clust_marker.longitude,
90 'latitude': clust_marker.latitude,
91 'size': clust_size
92 }
93
94 def get_cluster_size(index, clusters):
95 from collections import Counter
96 #TODO: don't call Counter for every cluster in the array
97 return Counter(clusters)[index]
98
99 def generate_clusters_json(markers, zoom, radius=50):
100 centers, clusters, sizes = create_clusters_centers(markers, zoom, radius)
101 json_clusts=[]
102
103 for i, point in enumerate(centers):
104 json_clusts.append(get_cluster_json(point, sizes[i]))
105
106 return {
107 'clusters': json_clusts
108 }
109
110 ##
111 if __name__ == '__main__':
112 ##
113 mercator = globaltiles.GlobalMercator()
114 latlngs = [(28.43, 8), (28.43, 8), (28.44, 8), (35, 8)]
115 centers, clusters = cluster_markers(mercator, latlngs, 21)
116 ##
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/static/pymapcluster.py b/static/pymapcluster.py
--- a/static/pymapcluster.py
+++ b/static/pymapcluster.py
@@ -34,8 +34,7 @@
return pix
def in_cluster(center, radius, point):
- return (point[0] >= center[0] - radius) and (point[0] <= center[0] + radius) \
- and (point[1] >= center[1] - radius) and (point[1] <= center[1] + radius)
+ return sqrt((point[0] - center[0])**2 + (point[1] - center[1])**2) <= radius
def cluster_markers(mercator, latlngs, zoom, gridsize=50):
"""
|
{"golden_diff": "diff --git a/static/pymapcluster.py b/static/pymapcluster.py\n--- a/static/pymapcluster.py\n+++ b/static/pymapcluster.py\n@@ -34,8 +34,7 @@\n return pix\n \n def in_cluster(center, radius, point):\n- return (point[0] >= center[0] - radius) and (point[0] <= center[0] + radius) \\\n- and (point[1] >= center[1] - radius) and (point[1] <= center[1] + radius)\n+ return sqrt((point[0] - center[0])**2 + (point[1] - center[1])**2) <= radius\n \n def cluster_markers(mercator, latlngs, zoom, gridsize=50):\n \"\"\"\n", "issue": "Improve cluster accuracy\nCluster aggregates markers in `in_cluster` is using box instead of a circle parameter calculation which I think may cause duplications and inaccuracy\n\n", "before_files": [{"content": "##\nimport globalmaptiles as globaltiles\nfrom math import cos, sin, atan2, sqrt\nimport time\n##\n \ndef center_geolocation(geolocations):\n \"\"\"\n Provide a relatively accurate center lat, lon returned as a list pair, given\n a list of list pairs.\n ex: in: geolocations = ((lat1,lon1), (lat2,lon2),)\n out: (center_lat, center_lon)\n \"\"\"\n x = 0\n y = 0\n z = 0\n \n for lat, lon in geolocations:\n lat = float(lat)\n lon = float(lon)\n x += cos(lat) * cos(lon)\n y += cos(lat) * sin(lon)\n z += sin(lat)\n \n x = float(x / len(geolocations))\n y = float(y / len(geolocations))\n z = float(z / len(geolocations))\n \n return (atan2(y, x), atan2(z, sqrt(x * x + y * y)))\n\ndef latlng_to_zoompixels(mercator, lat, lng, zoom):\n mx, my = mercator.LatLonToMeters(lat, lng)\n pix = mercator.MetersToPixels(mx, my, zoom)\n return pix\n\ndef in_cluster(center, radius, point):\n return (point[0] >= center[0] - radius) and (point[0] <= center[0] + radius) \\\n and (point[1] >= center[1] - radius) and (point[1] <= center[1] + radius)\n\ndef cluster_markers(mercator, latlngs, zoom, gridsize=50):\n \"\"\"\n Args:\n mercator: instance of GlobalMercator()\n latlngs: list of (lat,lng) tuple\n zoom: current zoom level\n gridsize: cluster radius (in pixels in current zoom level)\n Returns:\n centers: list of indices in latlngs of points used as centers\n clusters: list of same length as latlngs giving assigning each point to\n a cluster\n \"\"\"\n start_time = time.time()\n centers = []\n clusters = []\n sizes = []\n latlngs = map(lambda latlng: latlng.serialize(), latlngs)\n for i, latlng in enumerate(latlngs):\n lat = latlng['latitude']\n lng = latlng['longitude']\n point_pix = latlng_to_zoompixels(mercator, lat, lng, zoom)\n assigned = False\n for cidx, c in enumerate(centers):\n center = latlngs[c]\n center = latlng_to_zoompixels(mercator, center['latitude'], center['longitude'], zoom)\n if in_cluster(center, gridsize, point_pix):\n # Assign point to cluster\n clusters.append(cidx)\n sizes[cidx] += 1\n assigned = True\n break\n if not assigned:\n # Create new cluster for point\n #TODO center_geolocation the center!\n centers.append(i)\n sizes.append(1)\n clusters.append(len(centers) - 1)\n\n print('time for cluster_markers: ' + str(time.time() - start_time))\n return centers, clusters, sizes\n\ndef create_clusters_centers(markers, zoom, radius):\n mercator = globaltiles.GlobalMercator()\n centers, clusters, sizes = cluster_markers(mercator, markers, zoom, radius)\n centers_markers = [markers[i] for i in centers]\n return centers_markers, clusters, sizes\n\ndef get_cluster_json(clust_marker, clust_size):\n return {\n 'longitude': clust_marker.longitude,\n 'latitude': clust_marker.latitude,\n 'size': clust_size\n }\n\ndef get_cluster_size(index, clusters):\n from 
collections import Counter\n #TODO: don't call Counter for every cluster in the array\n return Counter(clusters)[index]\n\ndef generate_clusters_json(markers, zoom, radius=50):\n centers, clusters, sizes = create_clusters_centers(markers, zoom, radius)\n json_clusts=[]\n\n for i, point in enumerate(centers):\n json_clusts.append(get_cluster_json(point, sizes[i]))\n\n return {\n 'clusters': json_clusts\n }\n\n##\nif __name__ == '__main__':\n ##\n mercator = globaltiles.GlobalMercator()\n latlngs = [(28.43, 8), (28.43, 8), (28.44, 8), (35, 8)]\n centers, clusters = cluster_markers(mercator, latlngs, 21)\n ##", "path": "static/pymapcluster.py"}], "after_files": [{"content": "##\nimport globalmaptiles as globaltiles\nfrom math import cos, sin, atan2, sqrt\nimport time\n##\n \ndef center_geolocation(geolocations):\n \"\"\"\n Provide a relatively accurate center lat, lon returned as a list pair, given\n a list of list pairs.\n ex: in: geolocations = ((lat1,lon1), (lat2,lon2),)\n out: (center_lat, center_lon)\n \"\"\"\n x = 0\n y = 0\n z = 0\n \n for lat, lon in geolocations:\n lat = float(lat)\n lon = float(lon)\n x += cos(lat) * cos(lon)\n y += cos(lat) * sin(lon)\n z += sin(lat)\n \n x = float(x / len(geolocations))\n y = float(y / len(geolocations))\n z = float(z / len(geolocations))\n \n return (atan2(y, x), atan2(z, sqrt(x * x + y * y)))\n\ndef latlng_to_zoompixels(mercator, lat, lng, zoom):\n mx, my = mercator.LatLonToMeters(lat, lng)\n pix = mercator.MetersToPixels(mx, my, zoom)\n return pix\n\ndef in_cluster(center, radius, point):\n return sqrt((point[0] - center[0])**2 + (point[1] - center[1])**2) <= radius\n\ndef cluster_markers(mercator, latlngs, zoom, gridsize=50):\n \"\"\"\n Args:\n mercator: instance of GlobalMercator()\n latlngs: list of (lat,lng) tuple\n zoom: current zoom level\n gridsize: cluster radius (in pixels in current zoom level)\n Returns:\n centers: list of indices in latlngs of points used as centers\n clusters: list of same length as latlngs giving assigning each point to\n a cluster\n \"\"\"\n start_time = time.time()\n centers = []\n clusters = []\n sizes = []\n latlngs = map(lambda latlng: latlng.serialize(), latlngs)\n for i, latlng in enumerate(latlngs):\n lat = latlng['latitude']\n lng = latlng['longitude']\n point_pix = latlng_to_zoompixels(mercator, lat, lng, zoom)\n assigned = False\n for cidx, c in enumerate(centers):\n center = latlngs[c]\n center = latlng_to_zoompixels(mercator, center['latitude'], center['longitude'], zoom)\n if in_cluster(center, gridsize, point_pix):\n # Assign point to cluster\n clusters.append(cidx)\n sizes[cidx] += 1\n assigned = True\n break\n if not assigned:\n # Create new cluster for point\n #TODO center_geolocation the center!\n centers.append(i)\n sizes.append(1)\n clusters.append(len(centers) - 1)\n\n print('time for cluster_markers: ' + str(time.time() - start_time))\n return centers, clusters, sizes\n\ndef create_clusters_centers(markers, zoom, radius):\n mercator = globaltiles.GlobalMercator()\n centers, clusters, sizes = cluster_markers(mercator, markers, zoom, radius)\n centers_markers = [markers[i] for i in centers]\n return centers_markers, clusters, sizes\n\ndef get_cluster_json(clust_marker, clust_size):\n return {\n 'longitude': clust_marker.longitude,\n 'latitude': clust_marker.latitude,\n 'size': clust_size\n }\n\ndef get_cluster_size(index, clusters):\n from collections import Counter\n #TODO: don't call Counter for every cluster in the array\n return Counter(clusters)[index]\n\ndef 
generate_clusters_json(markers, zoom, radius=50):\n centers, clusters, sizes = create_clusters_centers(markers, zoom, radius)\n json_clusts=[]\n\n for i, point in enumerate(centers):\n json_clusts.append(get_cluster_json(point, sizes[i]))\n\n return {\n 'clusters': json_clusts\n }\n\n##\nif __name__ == '__main__':\n ##\n mercator = globaltiles.GlobalMercator()\n latlngs = [(28.43, 8), (28.43, 8), (28.44, 8), (35, 8)]\n centers, clusters = cluster_markers(mercator, latlngs, 21)\n ##", "path": "static/pymapcluster.py"}]}
| 1,541 | 174 |
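A tiny numeric check makes the point of the cluster-accuracy patch above concrete: a point near the corner of the old bounding box passes the axis-aligned test but lies outside the true radius.

```python
# Corner point vs. a 50-pixel radius: the old box test accepts it, the circle test rejects it.
from math import sqrt

center, radius = (0.0, 0.0), 50.0
point = (45.0, 45.0)

in_box = abs(point[0] - center[0]) <= radius and abs(point[1] - center[1]) <= radius
in_circle = sqrt((point[0] - center[0]) ** 2 + (point[1] - center[1]) ** 2) <= radius

print(in_box, in_circle)  # True False -> the box check over-fills clusters near the corners
```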
gh_patches_debug_6729
|
rasdani/github-patches
|
git_diff
|
OCHA-DAP__hdx-ckan-1829
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Organization view pages result in 500 error
Only on stag. I tested several different orgs.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckanext-hdx_search/ckanext/hdx_search/plugin.py`
Content:
```
1 import logging, re
2 import ckan.plugins as plugins
3 import ckan.plugins.toolkit as tk
4 import ckan.lib.plugins as lib_plugins
5
6 def convert_country(q):
7 for c in tk.get_action('group_list')({'user':'127.0.0.1'},{'all_fields': True}):
8 if re.findall(c['display_name'].lower(),q.lower()):
9 q += ' '+c['name']
10 return q
11
12 class HDXSearchPlugin(plugins.SingletonPlugin):
13 plugins.implements(plugins.IConfigurer, inherit=False)
14 plugins.implements(plugins.IRoutes, inherit=True)
15 plugins.implements(plugins.ITemplateHelpers, inherit=False)
16 plugins.implements(plugins.IPackageController, inherit=True)
17
18 def update_config(self, config):
19 tk.add_template_directory(config, 'templates')
20
21 def get_helpers(self):
22 return {}
23
24 def before_map(self, map):
25 map.connect('search', '/search',
26 controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')
27 map.connect('simple_search',
28 '/dataset', controller='ckanext.hdx_search.controllers.simple_search_controller:HDXSimpleSearchController', action='package_search')
29 return map
30
31 def after_map(self, map):
32 map.connect('search', '/search',
33 controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')
34 map.connect('simple_search',
35 '/dataset', controller='ckanext.hdx_search.controllers.simple_search_controller:HDXSimpleSearchController', action='package_search')
36 return map
37
38 def before_search(self, search_params):
39 search_params['q'] = convert_country(search_params['q'])
40 if 'facet.field' in search_params and 'vocab_Topics' not in search_params['facet.field']:
41 search_params['facet.field'].append('vocab_Topics')
42
43 # If indicator flag is set, search only that type
44 if 'ext_indicator' in search_params['extras']:
45 if int(search_params['extras']['ext_indicator']) == 1:
46 search_params['fq'] = search_params['fq'] + ' +extras_indicator:1'
47 elif int(search_params['extras']['ext_indicator']) == 0:
48 search_params['fq'] = search_params[
49 'fq'] + ' -extras_indicator:1'
50 return search_params
51
52 def after_search(self, search_results, search_params):
53 return search_results
54
55 def before_view(self, pkg_dict):
56 return pkg_dict
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ckanext-hdx_search/ckanext/hdx_search/plugin.py b/ckanext-hdx_search/ckanext/hdx_search/plugin.py
--- a/ckanext-hdx_search/ckanext/hdx_search/plugin.py
+++ b/ckanext-hdx_search/ckanext/hdx_search/plugin.py
@@ -36,7 +36,7 @@
return map
def before_search(self, search_params):
- search_params['q'] = convert_country(search_params['q'])
+ #search_params['q'] = convert_country(search_params['q'])
if 'facet.field' in search_params and 'vocab_Topics' not in search_params['facet.field']:
search_params['facet.field'].append('vocab_Topics')
|
{"golden_diff": "diff --git a/ckanext-hdx_search/ckanext/hdx_search/plugin.py b/ckanext-hdx_search/ckanext/hdx_search/plugin.py\n--- a/ckanext-hdx_search/ckanext/hdx_search/plugin.py\n+++ b/ckanext-hdx_search/ckanext/hdx_search/plugin.py\n@@ -36,7 +36,7 @@\n return map\n \n def before_search(self, search_params):\n- search_params['q'] = convert_country(search_params['q'])\n+ #search_params['q'] = convert_country(search_params['q'])\n if 'facet.field' in search_params and 'vocab_Topics' not in search_params['facet.field']:\n search_params['facet.field'].append('vocab_Topics')\n", "issue": "Organization view pages result in 500 error\nOnly on stag. I tested several different orgs. \n\n\n\n", "before_files": [{"content": "import logging, re\nimport ckan.plugins as plugins\nimport ckan.plugins.toolkit as tk\nimport ckan.lib.plugins as lib_plugins\n\ndef convert_country(q):\n for c in tk.get_action('group_list')({'user':'127.0.0.1'},{'all_fields': True}):\n if re.findall(c['display_name'].lower(),q.lower()):\n q += ' '+c['name']\n return q\n\nclass HDXSearchPlugin(plugins.SingletonPlugin):\n plugins.implements(plugins.IConfigurer, inherit=False)\n plugins.implements(plugins.IRoutes, inherit=True)\n plugins.implements(plugins.ITemplateHelpers, inherit=False)\n plugins.implements(plugins.IPackageController, inherit=True)\n\n def update_config(self, config):\n tk.add_template_directory(config, 'templates')\n\n def get_helpers(self):\n return {}\n\n def before_map(self, map):\n map.connect('search', '/search',\n controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')\n map.connect('simple_search',\n '/dataset', controller='ckanext.hdx_search.controllers.simple_search_controller:HDXSimpleSearchController', action='package_search')\n return map\n\n def after_map(self, map):\n map.connect('search', '/search',\n controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')\n map.connect('simple_search',\n '/dataset', controller='ckanext.hdx_search.controllers.simple_search_controller:HDXSimpleSearchController', action='package_search')\n return map\n\n def before_search(self, search_params):\n search_params['q'] = convert_country(search_params['q'])\n if 'facet.field' in search_params and 'vocab_Topics' not in search_params['facet.field']:\n search_params['facet.field'].append('vocab_Topics')\n\n # If indicator flag is set, search only that type\n if 'ext_indicator' in search_params['extras']:\n if int(search_params['extras']['ext_indicator']) == 1:\n search_params['fq'] = search_params['fq'] + ' +extras_indicator:1'\n elif int(search_params['extras']['ext_indicator']) == 0:\n search_params['fq'] = search_params[\n 'fq'] + ' -extras_indicator:1'\n return search_params\n\n def after_search(self, search_results, search_params):\n return search_results\n\n def before_view(self, pkg_dict):\n return pkg_dict\n", "path": "ckanext-hdx_search/ckanext/hdx_search/plugin.py"}], "after_files": [{"content": "import logging, re\nimport ckan.plugins as plugins\nimport ckan.plugins.toolkit as tk\nimport ckan.lib.plugins as lib_plugins\n\ndef convert_country(q):\n for c in tk.get_action('group_list')({'user':'127.0.0.1'},{'all_fields': True}):\n if re.findall(c['display_name'].lower(),q.lower()):\n q += ' '+c['name']\n return q\n\nclass HDXSearchPlugin(plugins.SingletonPlugin):\n plugins.implements(plugins.IConfigurer, inherit=False)\n plugins.implements(plugins.IRoutes, inherit=True)\n plugins.implements(plugins.ITemplateHelpers, 
inherit=False)\n plugins.implements(plugins.IPackageController, inherit=True)\n\n def update_config(self, config):\n tk.add_template_directory(config, 'templates')\n\n def get_helpers(self):\n return {}\n\n def before_map(self, map):\n map.connect('search', '/search',\n controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')\n map.connect('simple_search',\n '/dataset', controller='ckanext.hdx_search.controllers.simple_search_controller:HDXSimpleSearchController', action='package_search')\n return map\n\n def after_map(self, map):\n map.connect('search', '/search',\n controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')\n map.connect('simple_search',\n '/dataset', controller='ckanext.hdx_search.controllers.simple_search_controller:HDXSimpleSearchController', action='package_search')\n return map\n\n def before_search(self, search_params):\n #search_params['q'] = convert_country(search_params['q'])\n if 'facet.field' in search_params and 'vocab_Topics' not in search_params['facet.field']:\n search_params['facet.field'].append('vocab_Topics')\n\n # If indicator flag is set, search only that type\n if 'ext_indicator' in search_params['extras']:\n if int(search_params['extras']['ext_indicator']) == 1:\n search_params['fq'] = search_params['fq'] + ' +extras_indicator:1'\n elif int(search_params['extras']['ext_indicator']) == 0:\n search_params['fq'] = search_params[\n 'fq'] + ' -extras_indicator:1'\n return search_params\n\n def after_search(self, search_results, search_params):\n return search_results\n\n def before_view(self, pkg_dict):\n return pkg_dict\n", "path": "ckanext-hdx_search/ckanext/hdx_search/plugin.py"}]}
| 994 | 168 |
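One hedged observation on the hdx-ckan record above: the patch simply disables `convert_country`, which feeds group display names to `re.findall` as patterns. If the behaviour were kept instead of removed, escaping the name (it is data, not a regex) would be the obvious safer variant; this sketch shows that alternative only, it is not what the project chose.

```python
# Sketch of a safer convert_country(): display names are escaped before being used as patterns.
import re

def convert_country_safe(q, groups):
    for c in groups:
        if re.search(re.escape(c["display_name"].lower()), q.lower()):
            q += " " + c["name"]
    return q

groups = [{"display_name": "Kenya", "name": "ken"}]
print(convert_country_safe("Floods in Kenya 2014", groups))  # -> "Floods in Kenya 2014 ken"
```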
gh_patches_debug_12807
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-1086
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CKV_AWS_119 - DynamoDB table encryption
**Describe the bug**
In general, DynamoDB tables are encrypted by default and this can't be turned off; you can change it to use a KMS key of your choice. Therefore the check description is incorrect.
Further infos can be found in the API documentation https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_SSESpecification.html
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/aws/DynamoDBTablesEncrypted.py`
Content:
```
1 from checkov.common.models.enums import CheckCategories
2 from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
3
4
5 class DynamoDBTablesEncrypted(BaseResourceValueCheck):
6 def __init__(self):
7 name = "Ensure DynamoDB Tables are encrypted"
8 id = "CKV_AWS_119"
9 supported_resources = ['aws_dynamodb_table']
10 categories = [CheckCategories.NETWORKING]
11 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
12
13 def get_inspected_key(self):
14 return "server_side_encryption/[0]/enabled"
15
16
17 check = DynamoDBTablesEncrypted()
18
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/checkov/terraform/checks/resource/aws/DynamoDBTablesEncrypted.py b/checkov/terraform/checks/resource/aws/DynamoDBTablesEncrypted.py
--- a/checkov/terraform/checks/resource/aws/DynamoDBTablesEncrypted.py
+++ b/checkov/terraform/checks/resource/aws/DynamoDBTablesEncrypted.py
@@ -4,10 +4,10 @@
class DynamoDBTablesEncrypted(BaseResourceValueCheck):
def __init__(self):
- name = "Ensure DynamoDB Tables are encrypted"
+ name = "Ensure DynamoDB Tables are encrypted using KMS"
id = "CKV_AWS_119"
- supported_resources = ['aws_dynamodb_table']
- categories = [CheckCategories.NETWORKING]
+ supported_resources = ["aws_dynamodb_table"]
+ categories = [CheckCategories.ENCRYPTION]
super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
def get_inspected_key(self):
|
{"golden_diff": "diff --git a/checkov/terraform/checks/resource/aws/DynamoDBTablesEncrypted.py b/checkov/terraform/checks/resource/aws/DynamoDBTablesEncrypted.py\n--- a/checkov/terraform/checks/resource/aws/DynamoDBTablesEncrypted.py\n+++ b/checkov/terraform/checks/resource/aws/DynamoDBTablesEncrypted.py\n@@ -4,10 +4,10 @@\n \n class DynamoDBTablesEncrypted(BaseResourceValueCheck):\n def __init__(self):\n- name = \"Ensure DynamoDB Tables are encrypted\"\n+ name = \"Ensure DynamoDB Tables are encrypted using KMS\"\n id = \"CKV_AWS_119\"\n- supported_resources = ['aws_dynamodb_table']\n- categories = [CheckCategories.NETWORKING]\n+ supported_resources = [\"aws_dynamodb_table\"]\n+ categories = [CheckCategories.ENCRYPTION]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n \n def get_inspected_key(self):\n", "issue": "CKV_AWS_119 - DynamoDB table encryption\n**Describe the bug**\r\nIn general DynamoDB tables are encrypted by default and this can't be turned off, you can change it to use a KMS key of your choice. Therefore the check description is incorrect.\r\n\r\nFurther infos can be found in the API documentation https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_SSESpecification.html\r\n\r\n\n", "before_files": [{"content": "from checkov.common.models.enums import CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n\n\nclass DynamoDBTablesEncrypted(BaseResourceValueCheck):\n def __init__(self):\n name = \"Ensure DynamoDB Tables are encrypted\"\n id = \"CKV_AWS_119\"\n supported_resources = ['aws_dynamodb_table']\n categories = [CheckCategories.NETWORKING]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self):\n return \"server_side_encryption/[0]/enabled\"\n\n\ncheck = DynamoDBTablesEncrypted()\n", "path": "checkov/terraform/checks/resource/aws/DynamoDBTablesEncrypted.py"}], "after_files": [{"content": "from checkov.common.models.enums import CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n\n\nclass DynamoDBTablesEncrypted(BaseResourceValueCheck):\n def __init__(self):\n name = \"Ensure DynamoDB Tables are encrypted using KMS\"\n id = \"CKV_AWS_119\"\n supported_resources = [\"aws_dynamodb_table\"]\n categories = [CheckCategories.ENCRYPTION]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self):\n return \"server_side_encryption/[0]/enabled\"\n\n\ncheck = DynamoDBTablesEncrypted()\n", "path": "checkov/terraform/checks/resource/aws/DynamoDBTablesEncrypted.py"}]}
| 527 | 217 |
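To make the corrected check more concrete: the inspected key `server_side_encryption/[0]/enabled` (visible in the record's before/after files) is a path into the parsed resource configuration. Below is a minimal, plain-Python sketch of how such a path resolves against an `aws_dynamodb_table` block; the dictionary shape is an assumption that mirrors how HCL blocks are commonly flattened into single-element lists, not Checkov's actual parser output.

```python
# Illustrative only: walk the key path "server_side_encryption/[0]/enabled"
# through a parsed resource config (hypothetical shape, values wrapped in lists).
parsed_resource = {
    "name": ["example-table"],
    "hash_key": ["id"],
    "server_side_encryption": [
        {"enabled": [True], "kms_key_arn": ["arn:aws:kms:eu-west-1:111122223333:key/example"]}
    ],
}

def resolve(conf, path):
    value = conf
    for part in path.split("/"):
        if part.startswith("[") and part.endswith("]"):
            value = value[int(part[1:-1])]   # list index segment, e.g. "[0]"
        else:
            value = value[part]              # dict key segment
    return value

print(resolve(parsed_resource, "server_side_encryption/[0]/enabled"))  # [True] -> check passes
```

With `enabled` absent or false the check fails, which is why moving the category to `ENCRYPTION` better reflects what is actually being verified.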
gh_patches_debug_6434
|
rasdani/github-patches
|
git_diff
|
python-pillow__Pillow-6973
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot identify .fits file
### What did you do?
Tried using Pillow for opening/handling a .fits file for training a machine learning model. According to the documentation, opening/reading FITS files should be supported; or am I misunderstanding how a FITS file should be opened?
From Issue [4054](https://github.com/python-pillow/Pillow/issues/4054) / PR 6056:
> I've created PR https://github.com/python-pillow/Pillow/pull/6056 to resolve this. If that is merged, you should no longer have to worry about register_handler(), but can instead just Image.open("sample.fits").
### What did you expect to happen?
Not receiving a "cannot identify" error while using Image.open. Expected the function to work as with other supported file formats. The .fits files in question are not corrupted, and can be opened as normal with other software.
### What happened?
```python
from PIL import Image
with Image.open('example.fits') as im:
im.verify()
```
```
---------------------------------------------------------------------------
UnidentifiedImageError Traceback (most recent call last)
Cell In [38], line 2
1 from PIL import FitsImagePlugin, ImageFile
----> 2 with Image.open('example.fits') as im:
3 im.verify()
File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\PIL\Image.py:3186, in open(fp, mode, formats)
3184 for message in accept_warnings:
3185 warnings.warn(message)
-> 3186 raise UnidentifiedImageError(
3187 "cannot identify image file %r" % (filename if filename else fp)
3188 )
UnidentifiedImageError: cannot identify image file 'example.fits'
```
### What are your OS, Python and Pillow versions?
* OS: windows 10
* Python: 3.10
* Pillow: 9.3.0
<!--
Please include **code** that reproduces the issue and whenever possible, an **image** that demonstrates the issue. Please upload images to GitHub, not to third-party file hosting sites. If necessary, add the image to a zip or tar archive.
The best reproductions are self-contained scripts with minimal dependencies. If you are using a framework such as Plone, Django, or Buildout, try to replicate the issue just using Pillow.
-->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/PIL/FitsImagePlugin.py`
Content:
```
1 #
2 # The Python Imaging Library
3 # $Id$
4 #
5 # FITS file handling
6 #
7 # Copyright (c) 1998-2003 by Fredrik Lundh
8 #
9 # See the README file for information on usage and redistribution.
10 #
11
12 import math
13
14 from . import Image, ImageFile
15
16
17 def _accept(prefix):
18 return prefix[:6] == b"SIMPLE"
19
20
21 class FitsImageFile(ImageFile.ImageFile):
22 format = "FITS"
23 format_description = "FITS"
24
25 def _open(self):
26 headers = {}
27 while True:
28 header = self.fp.read(80)
29 if not header:
30 msg = "Truncated FITS file"
31 raise OSError(msg)
32 keyword = header[:8].strip()
33 if keyword == b"END":
34 break
35 value = header[8:].strip()
36 if value.startswith(b"="):
37 value = value[1:].strip()
38 if not headers and (not _accept(keyword) or value != b"T"):
39 msg = "Not a FITS file"
40 raise SyntaxError(msg)
41 headers[keyword] = value
42
43 naxis = int(headers[b"NAXIS"])
44 if naxis == 0:
45 msg = "No image data"
46 raise ValueError(msg)
47 elif naxis == 1:
48 self._size = 1, int(headers[b"NAXIS1"])
49 else:
50 self._size = int(headers[b"NAXIS1"]), int(headers[b"NAXIS2"])
51
52 number_of_bits = int(headers[b"BITPIX"])
53 if number_of_bits == 8:
54 self.mode = "L"
55 elif number_of_bits == 16:
56 self.mode = "I"
57 # rawmode = "I;16S"
58 elif number_of_bits == 32:
59 self.mode = "I"
60 elif number_of_bits in (-32, -64):
61 self.mode = "F"
62 # rawmode = "F" if number_of_bits == -32 else "F;64F"
63
64 offset = math.ceil(self.fp.tell() / 2880) * 2880
65 self.tile = [("raw", (0, 0) + self.size, offset, (self.mode, 0, -1))]
66
67
68 # --------------------------------------------------------------------
69 # Registry
70
71 Image.register_open(FitsImageFile.format, FitsImageFile, _accept)
72
73 Image.register_extensions(FitsImageFile.format, [".fit", ".fits"])
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/PIL/FitsImagePlugin.py b/src/PIL/FitsImagePlugin.py
--- a/src/PIL/FitsImagePlugin.py
+++ b/src/PIL/FitsImagePlugin.py
@@ -32,7 +32,7 @@
keyword = header[:8].strip()
if keyword == b"END":
break
- value = header[8:].strip()
+ value = header[8:].split(b"/")[0].strip()
if value.startswith(b"="):
value = value[1:].strip()
if not headers and (not _accept(keyword) or value != b"T"):
|
{"golden_diff": "diff --git a/src/PIL/FitsImagePlugin.py b/src/PIL/FitsImagePlugin.py\n--- a/src/PIL/FitsImagePlugin.py\n+++ b/src/PIL/FitsImagePlugin.py\n@@ -32,7 +32,7 @@\n keyword = header[:8].strip()\n if keyword == b\"END\":\n break\n- value = header[8:].strip()\n+ value = header[8:].split(b\"/\")[0].strip()\n if value.startswith(b\"=\"):\n value = value[1:].strip()\n if not headers and (not _accept(keyword) or value != b\"T\"):\n", "issue": "Cannot identify .fits file\n### What did you do?\r\nTried using pillow for opening/handling a .fits file for training a machine learning model. According to the documentation opening/reading fits files should be enabled? Or am I misunderstanding how a fits file should be opened? \r\n\r\n\r\nFrom Issue [4054](https://github.com/python-pillow/Pillow/issues/4054)/ PR 6056\r\n\r\n> I've created PR https://github.com/python-pillow/Pillow/pull/6056 to resolve this. If that is merged, you should no longer have to worry about register_handler(), but can instead just Image.open(\"sample.fits\").\r\n\r\n\r\n### What did you expect to happen?\r\nNot recieving a \"cannot identify error\" while using Image.open. Expected the function to work as with other supported file formats. The .fits files in question are not corrupted, and can be opened as normal with other software. \r\n\r\n### What happened?\r\n```python\r\nfrom PIL import Image\r\nwith Image.open('example.fits') as im:\r\n im.verify()\r\n```\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nUnidentifiedImageError Traceback (most recent call last)\r\nCell In [38], line 2\r\n 1 from PIL import FitsImagePlugin, ImageFile\r\n----> 2 with Image.open('example.fits') as im:\r\n 3 im.verify()\r\n\r\nFile ~\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python310\\site-packages\\PIL\\Image.py:3186, in open(fp, mode, formats)\r\n 3184 for message in accept_warnings:\r\n 3185 warnings.warn(message)\r\n-> 3186 raise UnidentifiedImageError(\r\n 3187 \"cannot identify image file %r\" % (filename if filename else fp)\r\n 3188 )\r\n\r\nUnidentifiedImageError: cannot identify image file 'example.fits'\r\n```\r\n### What are your OS, Python and Pillow versions?\r\n\r\n* OS: windows 10\r\n* Python: 3.10\r\n* Pillow: 9.3.0\r\n\r\n<!--\r\nPlease include **code** that reproduces the issue and whenever possible, an **image** that demonstrates the issue. Please upload images to GitHub, not to third-party file hosting sites. If necessary, add the image to a zip or tar archive.\r\n\r\nThe best reproductions are self-contained scripts with minimal dependencies. If you are using a framework such as Plone, Django, or Buildout, try to replicate the issue just using Pillow.\r\n-->\r\n\r\n\n", "before_files": [{"content": "#\n# The Python Imaging Library\n# $Id$\n#\n# FITS file handling\n#\n# Copyright (c) 1998-2003 by Fredrik Lundh\n#\n# See the README file for information on usage and redistribution.\n#\n\nimport math\n\nfrom . 
import Image, ImageFile\n\n\ndef _accept(prefix):\n return prefix[:6] == b\"SIMPLE\"\n\n\nclass FitsImageFile(ImageFile.ImageFile):\n format = \"FITS\"\n format_description = \"FITS\"\n\n def _open(self):\n headers = {}\n while True:\n header = self.fp.read(80)\n if not header:\n msg = \"Truncated FITS file\"\n raise OSError(msg)\n keyword = header[:8].strip()\n if keyword == b\"END\":\n break\n value = header[8:].strip()\n if value.startswith(b\"=\"):\n value = value[1:].strip()\n if not headers and (not _accept(keyword) or value != b\"T\"):\n msg = \"Not a FITS file\"\n raise SyntaxError(msg)\n headers[keyword] = value\n\n naxis = int(headers[b\"NAXIS\"])\n if naxis == 0:\n msg = \"No image data\"\n raise ValueError(msg)\n elif naxis == 1:\n self._size = 1, int(headers[b\"NAXIS1\"])\n else:\n self._size = int(headers[b\"NAXIS1\"]), int(headers[b\"NAXIS2\"])\n\n number_of_bits = int(headers[b\"BITPIX\"])\n if number_of_bits == 8:\n self.mode = \"L\"\n elif number_of_bits == 16:\n self.mode = \"I\"\n # rawmode = \"I;16S\"\n elif number_of_bits == 32:\n self.mode = \"I\"\n elif number_of_bits in (-32, -64):\n self.mode = \"F\"\n # rawmode = \"F\" if number_of_bits == -32 else \"F;64F\"\n\n offset = math.ceil(self.fp.tell() / 2880) * 2880\n self.tile = [(\"raw\", (0, 0) + self.size, offset, (self.mode, 0, -1))]\n\n\n# --------------------------------------------------------------------\n# Registry\n\nImage.register_open(FitsImageFile.format, FitsImageFile, _accept)\n\nImage.register_extensions(FitsImageFile.format, [\".fit\", \".fits\"])\n", "path": "src/PIL/FitsImagePlugin.py"}], "after_files": [{"content": "#\n# The Python Imaging Library\n# $Id$\n#\n# FITS file handling\n#\n# Copyright (c) 1998-2003 by Fredrik Lundh\n#\n# See the README file for information on usage and redistribution.\n#\n\nimport math\n\nfrom . import Image, ImageFile\n\n\ndef _accept(prefix):\n return prefix[:6] == b\"SIMPLE\"\n\n\nclass FitsImageFile(ImageFile.ImageFile):\n format = \"FITS\"\n format_description = \"FITS\"\n\n def _open(self):\n headers = {}\n while True:\n header = self.fp.read(80)\n if not header:\n msg = \"Truncated FITS file\"\n raise OSError(msg)\n keyword = header[:8].strip()\n if keyword == b\"END\":\n break\n value = header[8:].split(b\"/\")[0].strip()\n if value.startswith(b\"=\"):\n value = value[1:].strip()\n if not headers and (not _accept(keyword) or value != b\"T\"):\n msg = \"Not a FITS file\"\n raise SyntaxError(msg)\n headers[keyword] = value\n\n naxis = int(headers[b\"NAXIS\"])\n if naxis == 0:\n msg = \"No image data\"\n raise ValueError(msg)\n elif naxis == 1:\n self._size = 1, int(headers[b\"NAXIS1\"])\n else:\n self._size = int(headers[b\"NAXIS1\"]), int(headers[b\"NAXIS2\"])\n\n number_of_bits = int(headers[b\"BITPIX\"])\n if number_of_bits == 8:\n self.mode = \"L\"\n elif number_of_bits == 16:\n self.mode = \"I\"\n # rawmode = \"I;16S\"\n elif number_of_bits == 32:\n self.mode = \"I\"\n elif number_of_bits in (-32, -64):\n self.mode = \"F\"\n # rawmode = \"F\" if number_of_bits == -32 else \"F;64F\"\n\n offset = math.ceil(self.fp.tell() / 2880) * 2880\n self.tile = [(\"raw\", (0, 0) + self.size, offset, (self.mode, 0, -1))]\n\n\n# --------------------------------------------------------------------\n# Registry\n\nImage.register_open(FitsImageFile.format, FitsImageFile, _accept)\n\nImage.register_extensions(FitsImageFile.format, [\".fit\", \".fits\"])\n", "path": "src/PIL/FitsImagePlugin.py"}]}
| 1,501 | 136 |
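The single changed line in the patch above (`split(b"/")[0]`) matters because a FITS header card may carry an inline comment after a `/`. With the old code the comment stayed attached to the value, so the first card of a real file no longer compared equal to `b"T"`, `_open` raised "Not a FITS file", and `Image.open` surfaced that as `UnidentifiedImageError`. Below is a self-contained sketch of the before/after parsing; the sample card text is invented but follows the 80-byte card layout.

```python
# One FITS header card: 8-byte keyword, "= ", value, optional "/ comment", padded to 80 bytes.
card = b"SIMPLE  =                    T / file does conform to FITS standard".ljust(80)

keyword = card[:8].strip()                    # b"SIMPLE"
old_value = card[8:].strip()                  # keeps the inline comment
new_value = card[8:].split(b"/")[0].strip()   # drops everything after "/"

def normalize(value):
    # mirrors the plugin: strip a leading "=" then surrounding whitespace
    return value[1:].strip() if value.startswith(b"=") else value

print(normalize(old_value))  # b'T / file does conform to FITS standard' -> rejected
print(normalize(new_value))  # b'T'                                      -> accepted
```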
gh_patches_debug_15674
|
rasdani/github-patches
|
git_diff
|
mesonbuild__meson-10230
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unhandled python exception with 0.62 on windows (0.61 ok)
**Describe the bug**
When running meson 0.62 on win32 and a project using `dependency()` (e.g. glib):
Unhandled python exception
ModuleNotFoundError: No module named 'mesonbuild.dependencies.data'
```
Traceback (most recent call last):
File "mesonbuild\mesonmain.py", line 151, in run
File "mesonbuild\msetup.py", line 301, in run
File "mesonbuild\msetup.py", line 185, in generate
File "mesonbuild\msetup.py", line 229, in _generate
File "mesonbuild\interpreter\interpreter.py", line 2698, in run
File "mesonbuild\interpreterbase\interpreterbase.py", line 149, in run
File "mesonbuild\interpreterbase\interpreterbase.py", line 174, in evaluate_codeblock
File "mesonbuild\interpreterbase\interpreterbase.py", line 167, in evaluate_codeblock
File "mesonbuild\interpreterbase\interpreterbase.py", line 182, in evaluate_statement
File "mesonbuild\interpreterbase\interpreterbase.py", line 567, in assignment
File "mesonbuild\interpreterbase\interpreterbase.py", line 180, in evaluate_statement
File "mesonbuild\interpreterbase\interpreterbase.py", line 455, in function_call
File "mesonbuild\interpreterbase\decorators.py", line 768, in wrapped
File "mesonbuild\interpreterbase\decorators.py", line 768, in wrapped
File "mesonbuild\interpreterbase\decorators.py", line 768, in wrapped
[Previous line repeated 5 more times]
File "mesonbuild\interpreterbase\decorators.py", line 109, in wrapped
File "mesonbuild\interpreterbase\decorators.py", line 127, in wrapped
File "mesonbuild\interpreterbase\decorators.py", line 277, in wrapper
File "mesonbuild\interpreter\interpreter.py", line 1620, in func_dependency
File "mesonbuild\interpreter\dependencyfallbacks.py", line 352, in lookup
File "mesonbuild\interpreter\dependencyfallbacks.py", line 93, in _do_dependency
File "mesonbuild\dependencies\detect.py", line 112, in find_external_dependency
File "mesonbuild\dependencies\cmake.py", line 135, in __init__
File "mesonbuild\dependencies\cmake.py", line 183, in _get_cmake_info
File "mesonbuild\dependencies\cmake.py", line 614, in _call_cmake
File "mesonbuild\dependencies\cmake.py", line 585, in _setup_cmake_dir
File "importlib\resources.py", line 103, in read_text
File "importlib\resources.py", line 82, in open_text
File "importlib\resources.py", line 43, in open_binary
File "importlib\_common.py", line 66, in get_package
File "importlib\_common.py", line 57, in resolve
File "importlib\__init__.py", line 126, in import_module
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1004, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'mesonbuild.dependencies.data'
```
**To Reproduce**
project('foo')
pcre = dependency('libpcre')
**system parameters**
meson 0.62 (MSI) on windev VM (https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/)
works as expected on 0.61
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `packaging/hook-mesonbuild.py`
Content:
```
1 #!hint/python3
2
3 """
4 PyInstaller hook to make mesonbuild include everything it needs to.
5 """
6
7 import os
8 from glob import glob
9
10 hiddenimports = []
11
12 def get_all_modules_from_dir(dirname):
13 '''
14 Get all modules required for Meson itself from directories.
15 '''
16 modname = os.path.basename(dirname)
17 modules = [os.path.splitext(os.path.split(x)[1])[0] for x in glob(os.path.join(dirname, '*'))]
18 modules = ['mesonbuild.' + modname + '.' + x for x in modules if not x.startswith('_')]
19 return modules
20
21 hiddenimports += get_all_modules_from_dir('mesonbuild/modules')
22 hiddenimports += get_all_modules_from_dir('mesonbuild/scripts')
23
24 # Python packagers want to be minimal and only copy the things
25 # that they can see being used. They are blind to many things.
26 hiddenimports += [
27 # we run distutils as a subprocess via INTROSPECT_COMMAND.
28 'distutils.archive_util',
29 'distutils.cmd',
30 'distutils.config',
31 'distutils.core',
32 'distutils.debug',
33 'distutils.dep_util',
34 'distutils.dir_util',
35 'distutils.dist',
36 'distutils.errors',
37 'distutils.extension',
38 'distutils.fancy_getopt',
39 'distutils.file_util',
40 'distutils.spawn',
41 'distutils.util',
42 'distutils.version',
43 'distutils.command.build_ext',
44 'distutils.command.build',
45 'distutils.command.install',
46
47 # needed for gtk's find_program() scripts
48 'filecmp',
49 ]
50
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/packaging/hook-mesonbuild.py b/packaging/hook-mesonbuild.py
--- a/packaging/hook-mesonbuild.py
+++ b/packaging/hook-mesonbuild.py
@@ -7,6 +7,9 @@
import os
from glob import glob
+from PyInstaller.utils.hooks import collect_data_files
+
+datas = []
hiddenimports = []
def get_all_modules_from_dir(dirname):
@@ -18,6 +21,10 @@
modules = ['mesonbuild.' + modname + '.' + x for x in modules if not x.startswith('_')]
return modules
+datas += collect_data_files('mesonbuild.scripts')
+datas += collect_data_files('mesonbuild.cmake.data')
+datas += collect_data_files('mesonbuild.dependencies.data')
+
hiddenimports += get_all_modules_from_dir('mesonbuild/modules')
hiddenimports += get_all_modules_from_dir('mesonbuild/scripts')
|
{"golden_diff": "diff --git a/packaging/hook-mesonbuild.py b/packaging/hook-mesonbuild.py\n--- a/packaging/hook-mesonbuild.py\n+++ b/packaging/hook-mesonbuild.py\n@@ -7,6 +7,9 @@\n import os\n from glob import glob\n \n+from PyInstaller.utils.hooks import collect_data_files\n+\n+datas = []\n hiddenimports = []\n \n def get_all_modules_from_dir(dirname):\n@@ -18,6 +21,10 @@\n modules = ['mesonbuild.' + modname + '.' + x for x in modules if not x.startswith('_')]\n return modules\n \n+datas += collect_data_files('mesonbuild.scripts')\n+datas += collect_data_files('mesonbuild.cmake.data')\n+datas += collect_data_files('mesonbuild.dependencies.data')\n+\n hiddenimports += get_all_modules_from_dir('mesonbuild/modules')\n hiddenimports += get_all_modules_from_dir('mesonbuild/scripts')\n", "issue": "Unhandled python exception with 0.62 on windows (0.61 ok)\n**Describe the bug**\r\nWhen running meson 0.62 on win32 and a project using `dependency()` (ex glib):\r\n\r\nUnhandled python exception\r\nModuleNotFoundError: No module named 'mesonbuild.dependencies.data'\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"mesonbuild\\mesonmain.py\", line 151, in run\r\n File \"mesonbuild\\msetup.py\", line 301, in run\r\n File \"mesonbuild\\msetup.py\", line 185, in generate\r\n File \"mesonbuild\\msetup.py\", line 229, in _generate\r\n File \"mesonbuild\\interpreter\\interpreter.py\", line 2698, in run\r\n File \"mesonbuild\\interpreterbase\\interpreterbase.py\", line 149, in run\r\n File \"mesonbuild\\interpreterbase\\interpreterbase.py\", line 174, in evaluate_codeblock\r\n File \"mesonbuild\\interpreterbase\\interpreterbase.py\", line 167, in evaluate_codeblock\r\n File \"mesonbuild\\interpreterbase\\interpreterbase.py\", line 182, in evaluate_statement\r\n File \"mesonbuild\\interpreterbase\\interpreterbase.py\", line 567, in assignment\r\n File \"mesonbuild\\interpreterbase\\interpreterbase.py\", line 180, in evaluate_statement\r\n File \"mesonbuild\\interpreterbase\\interpreterbase.py\", line 455, in function_call\r\n File \"mesonbuild\\interpreterbase\\decorators.py\", line 768, in wrapped\r\n File \"mesonbuild\\interpreterbase\\decorators.py\", line 768, in wrapped\r\n File \"mesonbuild\\interpreterbase\\decorators.py\", line 768, in wrapped\r\n [Previous line repeated 5 more times]\r\n File \"mesonbuild\\interpreterbase\\decorators.py\", line 109, in wrapped\r\n File \"mesonbuild\\interpreterbase\\decorators.py\", line 127, in wrapped\r\n File \"mesonbuild\\interpreterbase\\decorators.py\", line 277, in wrapper\r\n File \"mesonbuild\\interpreter\\interpreter.py\", line 1620, in func_dependency\r\n File \"mesonbuild\\interpreter\\dependencyfallbacks.py\", line 352, in lookup\r\n File \"mesonbuild\\interpreter\\dependencyfallbacks.py\", line 93, in _do_dependency\r\n File \"mesonbuild\\dependencies\\detect.py\", line 112, in find_external_dependency\r\n File \"mesonbuild\\dependencies\\cmake.py\", line 135, in __init__\r\n File \"mesonbuild\\dependencies\\cmake.py\", line 183, in _get_cmake_info\r\n File \"mesonbuild\\dependencies\\cmake.py\", line 614, in _call_cmake\r\n File \"mesonbuild\\dependencies\\cmake.py\", line 585, in _setup_cmake_dir\r\n File \"importlib\\resources.py\", line 103, in read_text\r\n File \"importlib\\resources.py\", line 82, in open_text\r\n File \"importlib\\resources.py\", line 43, in open_binary\r\n File \"importlib\\_common.py\", line 66, in get_package\r\n File \"importlib\\_common.py\", line 57, in resolve\r\n File \"importlib\\__init__.py\", line 126, in 
import_module\r\n File \"<frozen importlib._bootstrap>\", line 1050, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 1027, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 1004, in _find_and_load_unlocked\r\nModuleNotFoundError: No module named 'mesonbuild.dependencies.data'\r\n```\r\n\r\n**To Reproduce**\r\nproject('foo')\r\npcre = dependency('libpcre')\r\n\r\n**system parameters**\r\nmeson 0.62 (MSI) on windev VM (https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/)\r\nworks as expected on 0.61\n", "before_files": [{"content": "#!hint/python3\n\n\"\"\"\nPyInstaller hook to make mesonbuild include everything it needs to.\n\"\"\"\n\nimport os\nfrom glob import glob\n\nhiddenimports = []\n\ndef get_all_modules_from_dir(dirname):\n '''\n Get all modules required for Meson itself from directories.\n '''\n modname = os.path.basename(dirname)\n modules = [os.path.splitext(os.path.split(x)[1])[0] for x in glob(os.path.join(dirname, '*'))]\n modules = ['mesonbuild.' + modname + '.' + x for x in modules if not x.startswith('_')]\n return modules\n\nhiddenimports += get_all_modules_from_dir('mesonbuild/modules')\nhiddenimports += get_all_modules_from_dir('mesonbuild/scripts')\n\n# Python packagers want to be minimal and only copy the things\n# that they can see being used. They are blind to many things.\nhiddenimports += [\n # we run distutils as a subprocess via INTROSPECT_COMMAND.\n 'distutils.archive_util',\n 'distutils.cmd',\n 'distutils.config',\n 'distutils.core',\n 'distutils.debug',\n 'distutils.dep_util',\n 'distutils.dir_util',\n 'distutils.dist',\n 'distutils.errors',\n 'distutils.extension',\n 'distutils.fancy_getopt',\n 'distutils.file_util',\n 'distutils.spawn',\n 'distutils.util',\n 'distutils.version',\n 'distutils.command.build_ext',\n 'distutils.command.build',\n 'distutils.command.install',\n\n # needed for gtk's find_program() scripts\n 'filecmp',\n]\n", "path": "packaging/hook-mesonbuild.py"}], "after_files": [{"content": "#!hint/python3\n\n\"\"\"\nPyInstaller hook to make mesonbuild include everything it needs to.\n\"\"\"\n\nimport os\nfrom glob import glob\n\nfrom PyInstaller.utils.hooks import collect_data_files\n\ndatas = []\nhiddenimports = []\n\ndef get_all_modules_from_dir(dirname):\n '''\n Get all modules required for Meson itself from directories.\n '''\n modname = os.path.basename(dirname)\n modules = [os.path.splitext(os.path.split(x)[1])[0] for x in glob(os.path.join(dirname, '*'))]\n modules = ['mesonbuild.' + modname + '.' + x for x in modules if not x.startswith('_')]\n return modules\n\ndatas += collect_data_files('mesonbuild.scripts')\ndatas += collect_data_files('mesonbuild.cmake.data')\ndatas += collect_data_files('mesonbuild.dependencies.data')\n\nhiddenimports += get_all_modules_from_dir('mesonbuild/modules')\nhiddenimports += get_all_modules_from_dir('mesonbuild/scripts')\n\n# Python packagers want to be minimal and only copy the things\n# that they can see being used. 
They are blind to many things.\nhiddenimports += [\n # we run distutils as a subprocess via INTROSPECT_COMMAND.\n 'distutils.archive_util',\n 'distutils.cmd',\n 'distutils.config',\n 'distutils.core',\n 'distutils.debug',\n 'distutils.dep_util',\n 'distutils.dir_util',\n 'distutils.dist',\n 'distutils.errors',\n 'distutils.extension',\n 'distutils.fancy_getopt',\n 'distutils.file_util',\n 'distutils.spawn',\n 'distutils.util',\n 'distutils.version',\n 'distutils.command.build_ext',\n 'distutils.command.build',\n 'distutils.command.install',\n\n # needed for gtk's find_program() scripts\n 'filecmp',\n]\n", "path": "packaging/hook-mesonbuild.py"}]}
| 1,637 | 207 |
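The traceback in this record fails inside `importlib.resources.read_text`, meaning Meson is loading files shipped as package data rather than Python modules. Extending `hiddenimports` alone therefore cannot fix it: the frozen bundle also needs the data files themselves, which is what the added `collect_data_files` calls provide. The sketch below shows the PyInstaller side, assuming PyInstaller and an installed Meson are importable; the run-time consumption is shown only in a comment and the file name there is illustrative.

```python
# Sketch: gather package data the same way the patched hook does.
from PyInstaller.utils.hooks import collect_data_files

datas = []
datas += collect_data_files("mesonbuild.scripts")
datas += collect_data_files("mesonbuild.cmake.data")
datas += collect_data_files("mesonbuild.dependencies.data")

for source_path, bundle_dest in datas:
    print(source_path, "->", bundle_dest)

# At run time Meson then reads one of those files roughly like this
# (file name illustrative):
#   from importlib import resources
#   text = resources.read_text("mesonbuild.dependencies.data", "CMakeLists.txt")
```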
gh_patches_debug_19687
|
rasdani/github-patches
|
git_diff
|
facebookresearch__ParlAI-1625
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
No module named 'parlai_internal'
https://parl.ai/projects/wizard_of_wikipedia/
When running ```python projects/wizard_of_wikipedia/scripts/eval_retrieval_model.py``` I get the following error:
```
Traceback (most recent call last):
File "projects/wizard_of_wikipedia/scripts/eval_retrieval_model.py", line 48, in <module>
eval_model(parser)
File "/home/ml/jwang301/Development/ParlAI/parlai/scripts/eval_model.py", line 68, in eval_model
agent = create_agent(opt, requireModelExists=True)
File "/home/ml/jwang301/Development/ParlAI/parlai/core/agents.py", line 554, in create_agent
model = load_agent_module(opt)
File "/home/ml/jwang301/Development/ParlAI/parlai/core/agents.py", line 407, in load_agent_module
model_class = get_agent_module(new_opt['model'])
File "/home/ml/jwang301/Development/ParlAI/parlai/core/agents.py", line 516, in get_agent_module
my_module = importlib.import_module(module_name)
File "/home/ml/jwang301/anaconda2/envs/ParlAI/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 941, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 941, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 941, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'parlai_internal'
```
I'm assuming this is accidental since the wiki is public.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `projects/wizard_of_wikipedia/scripts/eval_retrieval_model.py`
Content:
```
1 #!/usr/bin/env python3
2
3 # Copyright (c) Facebook, Inc. and its affiliates.
4 # This source code is licensed under the MIT license found in the
5 # LICENSE file in the root directory of this source tree.
6 from parlai.core.params import ParlaiParser
7 from parlai.scripts.eval_model import eval_model
8 from parlai.zoo.wizard_of_wikipedia\
9 .full_dialogue_retrieval_model import download
10 from projects.wizard_of_wikipedia.wizard_transformer_ranker\
11 .wizard_transformer_ranker import WizardTransformerRankerAgent
12
13 """Evaluate pre-trained retrieval model on the full Wizard Dialogue task.
14
15 NOTE: Metrics here differ slightly to those reported in the paper as a result
16 of code changes.
17
18 Results on seen test set:
19 Hits@1/100: 86.7
20
21 Results on unseen test set (run with flag
22 `-t wizard_of_wikipedia:WizardDialogKnowledge:topic_split`):
23 Hits@1/100: 68.96
24 """
25
26 if __name__ == '__main__':
27 parser = ParlaiParser(add_model_args=True)
28 parser.add_argument('-n', '--num-examples', default=100000000)
29 parser.add_argument('-d', '--display-examples', type='bool', default=False)
30 parser.add_argument('-ltim', '--log-every-n-secs', type=float, default=2)
31 WizardTransformerRankerAgent.add_cmdline_args(parser)
32 parser.set_defaults(
33 task='wizard_of_wikipedia',
34 model='projects:wizard_of_wikipedia:wizard_transformer_ranker',
35 model_file='models:wizard_of_wikipedia/full_dialogue_retrieval_model/model',
36 datatype='test',
37 n_heads=6,
38 ffn_size=1200,
39 embeddings_scale=False,
40 delimiter=' __SOC__ ',
41 n_positions=1000,
42 legacy=True
43 )
44
45 opt = parser.parse_args()
46 download(opt['datapath']) # download pretrained retrieval model
47
48 eval_model(parser)
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/projects/wizard_of_wikipedia/scripts/eval_retrieval_model.py b/projects/wizard_of_wikipedia/scripts/eval_retrieval_model.py
--- a/projects/wizard_of_wikipedia/scripts/eval_retrieval_model.py
+++ b/projects/wizard_of_wikipedia/scripts/eval_retrieval_model.py
@@ -29,7 +29,7 @@
parser.add_argument('-d', '--display-examples', type='bool', default=False)
parser.add_argument('-ltim', '--log-every-n-secs', type=float, default=2)
WizardTransformerRankerAgent.add_cmdline_args(parser)
- parser.set_defaults(
+ parser.set_params(
task='wizard_of_wikipedia',
model='projects:wizard_of_wikipedia:wizard_transformer_ranker',
model_file='models:wizard_of_wikipedia/full_dialogue_retrieval_model/model',
@@ -45,4 +45,4 @@
opt = parser.parse_args()
download(opt['datapath']) # download pretrained retrieval model
- eval_model(parser)
+ eval_model(opt)
|
{"golden_diff": "diff --git a/projects/wizard_of_wikipedia/scripts/eval_retrieval_model.py b/projects/wizard_of_wikipedia/scripts/eval_retrieval_model.py\n--- a/projects/wizard_of_wikipedia/scripts/eval_retrieval_model.py\n+++ b/projects/wizard_of_wikipedia/scripts/eval_retrieval_model.py\n@@ -29,7 +29,7 @@\n parser.add_argument('-d', '--display-examples', type='bool', default=False)\n parser.add_argument('-ltim', '--log-every-n-secs', type=float, default=2)\n WizardTransformerRankerAgent.add_cmdline_args(parser)\n- parser.set_defaults(\n+ parser.set_params(\n task='wizard_of_wikipedia',\n model='projects:wizard_of_wikipedia:wizard_transformer_ranker',\n model_file='models:wizard_of_wikipedia/full_dialogue_retrieval_model/model',\n@@ -45,4 +45,4 @@\n opt = parser.parse_args()\n download(opt['datapath']) # download pretrained retrieval model\n \n- eval_model(parser)\n+ eval_model(opt)\n", "issue": "No module named 'parlai_internal'\nhttps://parl.ai/projects/wizard_of_wikipedia/\r\n\r\nWhen running ```python projects/wizard_of_wikipedia/scripts/eval_retrieval_model.py``` I get the following error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"projects/wizard_of_wikipedia/scripts/eval_retrieval_model.py\", line 48, in <module>\r\n eval_model(parser)\r\n File \"/home/ml/jwang301/Development/ParlAI/parlai/scripts/eval_model.py\", line 68, in eval_model\r\n agent = create_agent(opt, requireModelExists=True)\r\n File \"/home/ml/jwang301/Development/ParlAI/parlai/core/agents.py\", line 554, in create_agent\r\n model = load_agent_module(opt)\r\n File \"/home/ml/jwang301/Development/ParlAI/parlai/core/agents.py\", line 407, in load_agent_module\r\n model_class = get_agent_module(new_opt['model'])\r\n File \"/home/ml/jwang301/Development/ParlAI/parlai/core/agents.py\", line 516, in get_agent_module\r\n my_module = importlib.import_module(module_name)\r\n File \"/home/ml/jwang301/anaconda2/envs/ParlAI/lib/python3.6/importlib/__init__.py\", line 126, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 994, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 971, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 941, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"<frozen importlib._bootstrap>\", line 994, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 971, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 941, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"<frozen importlib._bootstrap>\", line 994, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 971, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 941, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"<frozen importlib._bootstrap>\", line 994, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 971, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 953, in _find_and_load_unlocked\r\nModuleNotFoundError: No module named 'parlai_internal'\r\n```\r\n\r\nI'm assuming this is accidental since the wiki is public. \n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. 
and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\nfrom parlai.core.params import ParlaiParser\nfrom parlai.scripts.eval_model import eval_model\nfrom parlai.zoo.wizard_of_wikipedia\\\n .full_dialogue_retrieval_model import download\nfrom projects.wizard_of_wikipedia.wizard_transformer_ranker\\\n .wizard_transformer_ranker import WizardTransformerRankerAgent\n\n\"\"\"Evaluate pre-trained retrieval model on the full Wizard Dialogue task.\n\nNOTE: Metrics here differ slightly to those reported in the paper as a result\nof code changes.\n\nResults on seen test set:\nHits@1/100: 86.7\n\nResults on unseen test set (run with flag\n`-t wizard_of_wikipedia:WizardDialogKnowledge:topic_split`):\nHits@1/100: 68.96\n\"\"\"\n\nif __name__ == '__main__':\n parser = ParlaiParser(add_model_args=True)\n parser.add_argument('-n', '--num-examples', default=100000000)\n parser.add_argument('-d', '--display-examples', type='bool', default=False)\n parser.add_argument('-ltim', '--log-every-n-secs', type=float, default=2)\n WizardTransformerRankerAgent.add_cmdline_args(parser)\n parser.set_defaults(\n task='wizard_of_wikipedia',\n model='projects:wizard_of_wikipedia:wizard_transformer_ranker',\n model_file='models:wizard_of_wikipedia/full_dialogue_retrieval_model/model',\n datatype='test',\n n_heads=6,\n ffn_size=1200,\n embeddings_scale=False,\n delimiter=' __SOC__ ',\n n_positions=1000,\n legacy=True\n )\n\n opt = parser.parse_args()\n download(opt['datapath']) # download pretrained retrieval model\n\n eval_model(parser)\n", "path": "projects/wizard_of_wikipedia/scripts/eval_retrieval_model.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\nfrom parlai.core.params import ParlaiParser\nfrom parlai.scripts.eval_model import eval_model\nfrom parlai.zoo.wizard_of_wikipedia\\\n .full_dialogue_retrieval_model import download\nfrom projects.wizard_of_wikipedia.wizard_transformer_ranker\\\n .wizard_transformer_ranker import WizardTransformerRankerAgent\n\n\"\"\"Evaluate pre-trained retrieval model on the full Wizard Dialogue task.\n\nNOTE: Metrics here differ slightly to those reported in the paper as a result\nof code changes.\n\nResults on seen test set:\nHits@1/100: 86.7\n\nResults on unseen test set (run with flag\n`-t wizard_of_wikipedia:WizardDialogKnowledge:topic_split`):\nHits@1/100: 68.96\n\"\"\"\n\nif __name__ == '__main__':\n parser = ParlaiParser(add_model_args=True)\n parser.add_argument('-n', '--num-examples', default=100000000)\n parser.add_argument('-d', '--display-examples', type='bool', default=False)\n parser.add_argument('-ltim', '--log-every-n-secs', type=float, default=2)\n WizardTransformerRankerAgent.add_cmdline_args(parser)\n parser.set_params(\n task='wizard_of_wikipedia',\n model='projects:wizard_of_wikipedia:wizard_transformer_ranker',\n model_file='models:wizard_of_wikipedia/full_dialogue_retrieval_model/model',\n datatype='test',\n n_heads=6,\n ffn_size=1200,\n embeddings_scale=False,\n delimiter=' __SOC__ ',\n n_positions=1000,\n legacy=True\n )\n\n opt = parser.parse_args()\n download(opt['datapath']) # download pretrained retrieval model\n\n eval_model(opt)\n", "path": "projects/wizard_of_wikipedia/scripts/eval_retrieval_model.py"}]}
| 1,490 | 234 |
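The fix above makes two behavioural changes: it calls `set_params` instead of `set_defaults` (judging from the patch, this is what lets these values win over the options saved inside the downloaded model file, whose stored `model` points at `parlai_internal`), and it passes the parsed `opt` dict rather than the parser into `eval_model`. Trimmed from the patched script, the corrected call pattern is:

```python
from parlai.core.params import ParlaiParser
from parlai.scripts.eval_model import eval_model
from parlai.zoo.wizard_of_wikipedia.full_dialogue_retrieval_model import download
from projects.wizard_of_wikipedia.wizard_transformer_ranker.wizard_transformer_ranker import (
    WizardTransformerRankerAgent,
)

parser = ParlaiParser(add_model_args=True)
WizardTransformerRankerAgent.add_cmdline_args(parser)
parser.set_params(  # not set_defaults
    task='wizard_of_wikipedia',
    model='projects:wizard_of_wikipedia:wizard_transformer_ranker',
    model_file='models:wizard_of_wikipedia/full_dialogue_retrieval_model/model',
)

opt = parser.parse_args()
download(opt['datapath'])  # fetch the pretrained retrieval model first
eval_model(opt)            # pass the parsed options, not the parser
```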
gh_patches_debug_34331
|
rasdani/github-patches
|
git_diff
|
scoutapp__scout_apm_python-494
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Require Celery app reference to read configuration
We had a customer whose Celery tasks weren't reporting whilst their Django views were. It turns out they had configured Scout in the Django settings file, which isn't applied when Celery runs. This is because Celery doesn't run "under" Django through `manage.py`, but separately through `celery worker`.
The Django pattern is to use [Celery's `app.config_from_object`](https://docs.celeryproject.org/en/latest/reference/celery.html#celery.Celery.config_from_object) to read the Django settings. If we then read the Scout settings out of there as well, we would again allow shared configuration between the two.
This would need changing the Celery install process to take an `app` argument:
```python
app = celery.Celery(..)
...
scout_apm.celery.install(app)
```
We should work without this for backwards-compatibility reasons, but throw a warning when it's not passed, as I predict this issue will appear repeatedly if we don't encourage users this way.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/scout_apm/celery.py`
Content:
```
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import datetime as dt
5
6 from celery.signals import before_task_publish, task_postrun, task_prerun
7
8 import scout_apm.core
9 from scout_apm.compat import datetime_to_timestamp
10 from scout_apm.core.tracked_request import TrackedRequest
11
12
13 def before_publish_callback(headers=None, properties=None, **kwargs):
14 if "scout_task_start" not in headers:
15 headers["scout_task_start"] = datetime_to_timestamp(dt.datetime.utcnow())
16
17
18 def prerun_callback(task=None, **kwargs):
19 tracked_request = TrackedRequest.instance()
20 tracked_request.is_real_request = True
21
22 start = getattr(task.request, "scout_task_start", None)
23 if start is not None:
24 now = datetime_to_timestamp(dt.datetime.utcnow())
25 try:
26 queue_time = now - start
27 except TypeError:
28 pass
29 else:
30 tracked_request.tag("queue_time", queue_time)
31
32 task_id = getattr(task.request, "id", None)
33 if task_id:
34 tracked_request.tag("task_id", task_id)
35 parent_task_id = getattr(task.request, "parent_id", None)
36 if parent_task_id:
37 tracked_request.tag("parent_task_id", parent_task_id)
38
39 delivery_info = task.request.delivery_info
40 tracked_request.tag("is_eager", delivery_info.get("is_eager", False))
41 tracked_request.tag("exchange", delivery_info.get("exchange", "unknown"))
42 tracked_request.tag("routing_key", delivery_info.get("routing_key", "unknown"))
43 tracked_request.tag("queue", delivery_info.get("queue", "unknown"))
44
45 tracked_request.start_span(operation=("Job/" + task.name))
46
47
48 def postrun_callback(task=None, **kwargs):
49 tracked_request = TrackedRequest.instance()
50 tracked_request.stop_span()
51
52
53 def install():
54 installed = scout_apm.core.install()
55 if not installed:
56 return
57
58 before_task_publish.connect(before_publish_callback)
59 task_prerun.connect(prerun_callback)
60 task_postrun.connect(postrun_callback)
61
62
63 def uninstall():
64 before_task_publish.disconnect(before_publish_callback)
65 task_prerun.disconnect(prerun_callback)
66 task_postrun.disconnect(postrun_callback)
67
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/scout_apm/celery.py b/src/scout_apm/celery.py
--- a/src/scout_apm/celery.py
+++ b/src/scout_apm/celery.py
@@ -7,15 +7,16 @@
import scout_apm.core
from scout_apm.compat import datetime_to_timestamp
+from scout_apm.core.config import scout_config
from scout_apm.core.tracked_request import TrackedRequest
-def before_publish_callback(headers=None, properties=None, **kwargs):
+def before_task_publish_callback(headers=None, properties=None, **kwargs):
if "scout_task_start" not in headers:
headers["scout_task_start"] = datetime_to_timestamp(dt.datetime.utcnow())
-def prerun_callback(task=None, **kwargs):
+def task_prerun_callback(task=None, **kwargs):
tracked_request = TrackedRequest.instance()
tracked_request.is_real_request = True
@@ -45,22 +46,39 @@
tracked_request.start_span(operation=("Job/" + task.name))
-def postrun_callback(task=None, **kwargs):
+def task_postrun_callback(task=None, **kwargs):
tracked_request = TrackedRequest.instance()
tracked_request.stop_span()
-def install():
+def install(app=None):
+ if app is not None:
+ copy_configuration(app)
+
installed = scout_apm.core.install()
if not installed:
return
- before_task_publish.connect(before_publish_callback)
- task_prerun.connect(prerun_callback)
- task_postrun.connect(postrun_callback)
+ before_task_publish.connect(before_task_publish_callback)
+ task_prerun.connect(task_prerun_callback)
+ task_postrun.connect(task_postrun_callback)
+
+
+def copy_configuration(app):
+ prefix = "scout_"
+ prefix_len = len(prefix)
+
+ to_set = {}
+ for key, value in app.conf.items():
+ key_lower = key.lower()
+ if key_lower.startswith(prefix) and len(key_lower) > prefix_len:
+ scout_key = key_lower[prefix_len:]
+ to_set[scout_key] = value
+
+ scout_config.set(**to_set)
def uninstall():
- before_task_publish.disconnect(before_publish_callback)
- task_prerun.disconnect(prerun_callback)
- task_postrun.disconnect(postrun_callback)
+ before_task_publish.disconnect(before_task_publish_callback)
+ task_prerun.disconnect(task_prerun_callback)
+ task_postrun.disconnect(task_postrun_callback)
|
{"golden_diff": "diff --git a/src/scout_apm/celery.py b/src/scout_apm/celery.py\n--- a/src/scout_apm/celery.py\n+++ b/src/scout_apm/celery.py\n@@ -7,15 +7,16 @@\n \n import scout_apm.core\n from scout_apm.compat import datetime_to_timestamp\n+from scout_apm.core.config import scout_config\n from scout_apm.core.tracked_request import TrackedRequest\n \n \n-def before_publish_callback(headers=None, properties=None, **kwargs):\n+def before_task_publish_callback(headers=None, properties=None, **kwargs):\n if \"scout_task_start\" not in headers:\n headers[\"scout_task_start\"] = datetime_to_timestamp(dt.datetime.utcnow())\n \n \n-def prerun_callback(task=None, **kwargs):\n+def task_prerun_callback(task=None, **kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.is_real_request = True\n \n@@ -45,22 +46,39 @@\n tracked_request.start_span(operation=(\"Job/\" + task.name))\n \n \n-def postrun_callback(task=None, **kwargs):\n+def task_postrun_callback(task=None, **kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.stop_span()\n \n \n-def install():\n+def install(app=None):\n+ if app is not None:\n+ copy_configuration(app)\n+\n installed = scout_apm.core.install()\n if not installed:\n return\n \n- before_task_publish.connect(before_publish_callback)\n- task_prerun.connect(prerun_callback)\n- task_postrun.connect(postrun_callback)\n+ before_task_publish.connect(before_task_publish_callback)\n+ task_prerun.connect(task_prerun_callback)\n+ task_postrun.connect(task_postrun_callback)\n+\n+\n+def copy_configuration(app):\n+ prefix = \"scout_\"\n+ prefix_len = len(prefix)\n+\n+ to_set = {}\n+ for key, value in app.conf.items():\n+ key_lower = key.lower()\n+ if key_lower.startswith(prefix) and len(key_lower) > prefix_len:\n+ scout_key = key_lower[prefix_len:]\n+ to_set[scout_key] = value\n+\n+ scout_config.set(**to_set)\n \n \n def uninstall():\n- before_task_publish.disconnect(before_publish_callback)\n- task_prerun.disconnect(prerun_callback)\n- task_postrun.disconnect(postrun_callback)\n+ before_task_publish.disconnect(before_task_publish_callback)\n+ task_prerun.disconnect(task_prerun_callback)\n+ task_postrun.disconnect(task_postrun_callback)\n", "issue": "Require Celery app reference to read configuration\nWe had a customer whose Celery tasks weren't reporting whilst their Django views were. It turns out they had configured in the Django settings file, which isn't applied when Celery runs. This is because it doesn't run \"under\" Django through `manage.py`, but separately through `celery worker`.\r\n\r\nThe django pattern is to use [Celery's `app.config_from_object`](https://docs.celeryproject.org/en/latest/reference/celery.html#celery.Celery.config_from_object) to read the Django settings. 
If we then read out of there for the scout settings, we would again allow shared configuration between the two.\r\n\r\nThis would need changing the Celery install process to take an `app` argument:\r\n\r\n```python\r\napp = celery.Celery(..)\r\n...\r\nscout_apm.celery.install(app)\r\n```\r\n\r\nWe should work without this for backwards compatibility reasons, but throw a warninng when it's not passed as I predict this issue will appear repeatedly if we don't encourage users this way.\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport datetime as dt\n\nfrom celery.signals import before_task_publish, task_postrun, task_prerun\n\nimport scout_apm.core\nfrom scout_apm.compat import datetime_to_timestamp\nfrom scout_apm.core.tracked_request import TrackedRequest\n\n\ndef before_publish_callback(headers=None, properties=None, **kwargs):\n if \"scout_task_start\" not in headers:\n headers[\"scout_task_start\"] = datetime_to_timestamp(dt.datetime.utcnow())\n\n\ndef prerun_callback(task=None, **kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.is_real_request = True\n\n start = getattr(task.request, \"scout_task_start\", None)\n if start is not None:\n now = datetime_to_timestamp(dt.datetime.utcnow())\n try:\n queue_time = now - start\n except TypeError:\n pass\n else:\n tracked_request.tag(\"queue_time\", queue_time)\n\n task_id = getattr(task.request, \"id\", None)\n if task_id:\n tracked_request.tag(\"task_id\", task_id)\n parent_task_id = getattr(task.request, \"parent_id\", None)\n if parent_task_id:\n tracked_request.tag(\"parent_task_id\", parent_task_id)\n\n delivery_info = task.request.delivery_info\n tracked_request.tag(\"is_eager\", delivery_info.get(\"is_eager\", False))\n tracked_request.tag(\"exchange\", delivery_info.get(\"exchange\", \"unknown\"))\n tracked_request.tag(\"routing_key\", delivery_info.get(\"routing_key\", \"unknown\"))\n tracked_request.tag(\"queue\", delivery_info.get(\"queue\", \"unknown\"))\n\n tracked_request.start_span(operation=(\"Job/\" + task.name))\n\n\ndef postrun_callback(task=None, **kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.stop_span()\n\n\ndef install():\n installed = scout_apm.core.install()\n if not installed:\n return\n\n before_task_publish.connect(before_publish_callback)\n task_prerun.connect(prerun_callback)\n task_postrun.connect(postrun_callback)\n\n\ndef uninstall():\n before_task_publish.disconnect(before_publish_callback)\n task_prerun.disconnect(prerun_callback)\n task_postrun.disconnect(postrun_callback)\n", "path": "src/scout_apm/celery.py"}], "after_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport datetime as dt\n\nfrom celery.signals import before_task_publish, task_postrun, task_prerun\n\nimport scout_apm.core\nfrom scout_apm.compat import datetime_to_timestamp\nfrom scout_apm.core.config import scout_config\nfrom scout_apm.core.tracked_request import TrackedRequest\n\n\ndef before_task_publish_callback(headers=None, properties=None, **kwargs):\n if \"scout_task_start\" not in headers:\n headers[\"scout_task_start\"] = datetime_to_timestamp(dt.datetime.utcnow())\n\n\ndef task_prerun_callback(task=None, **kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.is_real_request = True\n\n start = getattr(task.request, \"scout_task_start\", None)\n if start is not None:\n now = 
datetime_to_timestamp(dt.datetime.utcnow())\n try:\n queue_time = now - start\n except TypeError:\n pass\n else:\n tracked_request.tag(\"queue_time\", queue_time)\n\n task_id = getattr(task.request, \"id\", None)\n if task_id:\n tracked_request.tag(\"task_id\", task_id)\n parent_task_id = getattr(task.request, \"parent_id\", None)\n if parent_task_id:\n tracked_request.tag(\"parent_task_id\", parent_task_id)\n\n delivery_info = task.request.delivery_info\n tracked_request.tag(\"is_eager\", delivery_info.get(\"is_eager\", False))\n tracked_request.tag(\"exchange\", delivery_info.get(\"exchange\", \"unknown\"))\n tracked_request.tag(\"routing_key\", delivery_info.get(\"routing_key\", \"unknown\"))\n tracked_request.tag(\"queue\", delivery_info.get(\"queue\", \"unknown\"))\n\n tracked_request.start_span(operation=(\"Job/\" + task.name))\n\n\ndef task_postrun_callback(task=None, **kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.stop_span()\n\n\ndef install(app=None):\n if app is not None:\n copy_configuration(app)\n\n installed = scout_apm.core.install()\n if not installed:\n return\n\n before_task_publish.connect(before_task_publish_callback)\n task_prerun.connect(task_prerun_callback)\n task_postrun.connect(task_postrun_callback)\n\n\ndef copy_configuration(app):\n prefix = \"scout_\"\n prefix_len = len(prefix)\n\n to_set = {}\n for key, value in app.conf.items():\n key_lower = key.lower()\n if key_lower.startswith(prefix) and len(key_lower) > prefix_len:\n scout_key = key_lower[prefix_len:]\n to_set[scout_key] = value\n\n scout_config.set(**to_set)\n\n\ndef uninstall():\n before_task_publish.disconnect(before_task_publish_callback)\n task_prerun.disconnect(task_prerun_callback)\n task_postrun.disconnect(task_postrun_callback)\n", "path": "src/scout_apm/celery.py"}]}
| 1,098 | 556 |
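Because the patched `install(app)` copies every `scout_`-prefixed key from `app.conf` into Scout's configuration with the prefix stripped, a Celery-only service can now be configured without touching Django settings. A usage sketch follows; the broker URL is a placeholder and the specific option names (`monitor`, `name`, `key`) are assumptions about Scout's configuration keys rather than something stated in this record.

```python
import celery
import scout_apm.celery

app = celery.Celery("tasks", broker="redis://localhost:6379/0")

# Each scout_* key is copied by copy_configuration() with the prefix stripped,
# e.g. scout_name -> name (option names assumed for illustration).
app.conf.update(
    scout_monitor=True,
    scout_name="My Celery Workers",
    scout_key="YOUR-AGENT-KEY",
)

scout_apm.celery.install(app)  # passing the app lets Scout read app.conf

@app.task
def add(x, y):
    return x + y
```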
gh_patches_debug_271
|
rasdani/github-patches
|
git_diff
|
codespell-project__codespell-3218
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Codespell doesn't handle KeyboardInterrupt exception
This should be caught and the program should stop gracefully, but instead the default stack trace is shown:
```
^CTraceback (most recent call last):
File "/home/kuba/.local/bin/codespell", line 8, in <module>
sys.exit(_script_main())
^^^^^^^^^^^^^^
File "/home/kuba/.local/lib/python3.12/site-packages/codespell_lib/_codespell.py", line 1017, in _script_main
return main(*sys.argv[1:])
^^^^^^^^^^^^^^^^^^^
File "/home/kuba/.local/lib/python3.12/site-packages/codespell_lib/_codespell.py", line 1185, in main
bad_count += parse_file(
^^^^^^^^^^^
File "/home/kuba/.local/lib/python3.12/site-packages/codespell_lib/_codespell.py", line 903, in parse_file
check_matches = extract_words_iter(line, word_regex, ignore_word_regex)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/kuba/.local/lib/python3.12/site-packages/codespell_lib/_codespell.py", line 793, in extract_words_iter
return list(word_regex.finditer(_ignore_word_sub(text, ignore_word_regex)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
KeyboardInterrupt
```
There is no need to show `KeyboardInterrupt` exception stack trace.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `codespell_lib/__main__.py`
Content:
```
1 import sys
2
3 from ._codespell import _script_main
4
5 if __name__ == "__main__":
6 sys.exit(_script_main())
7
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/codespell_lib/__main__.py b/codespell_lib/__main__.py
--- a/codespell_lib/__main__.py
+++ b/codespell_lib/__main__.py
@@ -3,4 +3,7 @@
from ._codespell import _script_main
if __name__ == "__main__":
- sys.exit(_script_main())
+ try:
+ sys.exit(_script_main())
+ except KeyboardInterrupt:
+ pass
|
{"golden_diff": "diff --git a/codespell_lib/__main__.py b/codespell_lib/__main__.py\n--- a/codespell_lib/__main__.py\n+++ b/codespell_lib/__main__.py\n@@ -3,4 +3,7 @@\n from ._codespell import _script_main\n \n if __name__ == \"__main__\":\n- sys.exit(_script_main())\n+ try:\n+ sys.exit(_script_main())\n+ except KeyboardInterrupt:\n+ pass\n", "issue": "Codespell don't handle KeyboardInterrupt exception\nThis should be catched and the program should stop gracefully but instead show default stack trace:\r\n\r\n```\r\n^CTraceback (most recent call last):\r\n File \"/home/kuba/.local/bin/codespell\", line 8, in <module>\r\n sys.exit(_script_main())\r\n ^^^^^^^^^^^^^^\r\n File \"/home/kuba/.local/lib/python3.12/site-packages/codespell_lib/_codespell.py\", line 1017, in _script_main\r\n return main(*sys.argv[1:])\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/home/kuba/.local/lib/python3.12/site-packages/codespell_lib/_codespell.py\", line 1185, in main\r\n bad_count += parse_file(\r\n ^^^^^^^^^^^\r\n File \"/home/kuba/.local/lib/python3.12/site-packages/codespell_lib/_codespell.py\", line 903, in parse_file\r\n check_matches = extract_words_iter(line, word_regex, ignore_word_regex)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/kuba/.local/lib/python3.12/site-packages/codespell_lib/_codespell.py\", line 793, in extract_words_iter\r\n return list(word_regex.finditer(_ignore_word_sub(text, ignore_word_regex)))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nKeyboardInterrupt\r\n```\r\n\r\nThere is no need to show `KeyboardInterrupt` exception stack trace.\n", "before_files": [{"content": "import sys\n\nfrom ._codespell import _script_main\n\nif __name__ == \"__main__\":\n sys.exit(_script_main())\n", "path": "codespell_lib/__main__.py"}], "after_files": [{"content": "import sys\n\nfrom ._codespell import _script_main\n\nif __name__ == \"__main__\":\n try:\n sys.exit(_script_main())\n except KeyboardInterrupt:\n pass\n", "path": "codespell_lib/__main__.py"}]}
| 634 | 100 |
gh_patches_debug_9054
|
rasdani/github-patches
|
git_diff
|
python__peps-632
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pep2rss disregards PEPs written in reStructuredText format
This can be seen at https://www.python.org/dev/peps/peps.rss/ where the last (most recent) RSS entry is the last PEP written in plaintext.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pep2rss.py`
Content:
```
1 #!/usr/bin/env python
2
3 # usage: pep-hook.py $REPOS $REV
4 # (standard post-commit args)
5
6 import os, glob, time, datetime, stat, re, sys
7 import codecs
8 import PyRSS2Gen as rssgen
9
10 RSS_PATH = os.path.join(sys.argv[1], 'peps.rss')
11
12 def firstline_startingwith(full_path, text):
13 for line in codecs.open(full_path, encoding="utf-8"):
14 if line.startswith(text):
15 return line[len(text):].strip()
16 return None
17
18 # get list of peps with creation time (from "Created:" string in pep .txt)
19 peps = glob.glob('pep-*.txt')
20 def pep_creation_dt(full_path):
21 created_str = firstline_startingwith(full_path, 'Created:')
22 # bleh, I was hoping to avoid re but some PEPs editorialize
23 # on the Created line
24 m = re.search(r'''(\d+-\w+-\d{4})''', created_str)
25 if not m:
26 # some older ones have an empty line, that's okay, if it's old
27 # we ipso facto don't care about it.
28 # "return None" would make the most sense but datetime objects
29 # refuse to compare with that. :-|
30 return datetime.datetime(*time.localtime(0)[:6])
31 created_str = m.group(1)
32 try:
33 t = time.strptime(created_str, '%d-%b-%Y')
34 except ValueError:
35 t = time.strptime(created_str, '%d-%B-%Y')
36 return datetime.datetime(*t[:6])
37 peps_with_dt = [(pep_creation_dt(full_path), full_path) for full_path in peps]
38 # sort peps by date, newest first
39 peps_with_dt.sort(reverse=True)
40
41 # generate rss items for 10 most recent peps
42 items = []
43 for dt, full_path in peps_with_dt[:10]:
44 try:
45 n = int(full_path.split('-')[-1].split('.')[0])
46 except ValueError:
47 pass
48 title = firstline_startingwith(full_path, 'Title:')
49 author = firstline_startingwith(full_path, 'Author:')
50 url = 'http://www.python.org/dev/peps/pep-%0.4d' % n
51 item = rssgen.RSSItem(
52 title = 'PEP %d: %s' % (n, title),
53 link = url,
54 description = 'Author: %s' % author,
55 guid = rssgen.Guid(url),
56 pubDate = dt)
57 items.append(item)
58
59 # the rss envelope
60 desc = """
61 Newest Python Enhancement Proposals (PEPs) - Information on new
62 language features, and some meta-information like release
63 procedure and schedules
64 """.strip()
65 rss = rssgen.RSS2(
66 title = 'Newest Python PEPs',
67 link = 'http://www.python.org/dev/peps',
68 description = desc,
69 lastBuildDate = datetime.datetime.now(),
70 items = items)
71
72 with open(RSS_PATH, 'w') as fp:
73 fp.write(rss.to_xml())
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pep2rss.py b/pep2rss.py
--- a/pep2rss.py
+++ b/pep2rss.py
@@ -15,8 +15,10 @@
return line[len(text):].strip()
return None
-# get list of peps with creation time (from "Created:" string in pep .txt)
+# get list of peps with creation time
+# (from "Created:" string in pep .rst or .txt)
peps = glob.glob('pep-*.txt')
+peps.extend(glob.glob('pep-*.rst'))
def pep_creation_dt(full_path):
created_str = firstline_startingwith(full_path, 'Created:')
# bleh, I was hoping to avoid re but some PEPs editorialize
|
{"golden_diff": "diff --git a/pep2rss.py b/pep2rss.py\n--- a/pep2rss.py\n+++ b/pep2rss.py\n@@ -15,8 +15,10 @@\n return line[len(text):].strip()\n return None\n \n-# get list of peps with creation time (from \"Created:\" string in pep .txt)\n+# get list of peps with creation time\n+# (from \"Created:\" string in pep .rst or .txt)\n peps = glob.glob('pep-*.txt')\n+peps.extend(glob.glob('pep-*.rst'))\n def pep_creation_dt(full_path):\n created_str = firstline_startingwith(full_path, 'Created:')\n # bleh, I was hoping to avoid re but some PEPs editorialize\n", "issue": "pep2rss disregards PEPs written in reStructuredText format\nThis can be seen at https://www.python.org/dev/peps/peps.rss/ where the last (most recent) RSS entry is the last PEP written in plaintext.\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# usage: pep-hook.py $REPOS $REV\n# (standard post-commit args)\n\nimport os, glob, time, datetime, stat, re, sys\nimport codecs\nimport PyRSS2Gen as rssgen\n\nRSS_PATH = os.path.join(sys.argv[1], 'peps.rss')\n\ndef firstline_startingwith(full_path, text):\n for line in codecs.open(full_path, encoding=\"utf-8\"):\n if line.startswith(text):\n return line[len(text):].strip()\n return None\n\n# get list of peps with creation time (from \"Created:\" string in pep .txt)\npeps = glob.glob('pep-*.txt')\ndef pep_creation_dt(full_path):\n created_str = firstline_startingwith(full_path, 'Created:')\n # bleh, I was hoping to avoid re but some PEPs editorialize\n # on the Created line\n m = re.search(r'''(\\d+-\\w+-\\d{4})''', created_str)\n if not m:\n # some older ones have an empty line, that's okay, if it's old\n # we ipso facto don't care about it.\n # \"return None\" would make the most sense but datetime objects\n # refuse to compare with that. 
:-|\n return datetime.datetime(*time.localtime(0)[:6])\n created_str = m.group(1)\n try:\n t = time.strptime(created_str, '%d-%b-%Y')\n except ValueError:\n t = time.strptime(created_str, '%d-%B-%Y')\n return datetime.datetime(*t[:6])\npeps_with_dt = [(pep_creation_dt(full_path), full_path) for full_path in peps]\n# sort peps by date, newest first\npeps_with_dt.sort(reverse=True)\n\n# generate rss items for 10 most recent peps\nitems = []\nfor dt, full_path in peps_with_dt[:10]:\n try:\n n = int(full_path.split('-')[-1].split('.')[0])\n except ValueError:\n pass\n title = firstline_startingwith(full_path, 'Title:')\n author = firstline_startingwith(full_path, 'Author:')\n url = 'http://www.python.org/dev/peps/pep-%0.4d' % n\n item = rssgen.RSSItem(\n title = 'PEP %d: %s' % (n, title),\n link = url,\n description = 'Author: %s' % author,\n guid = rssgen.Guid(url),\n pubDate = dt)\n items.append(item)\n\n# the rss envelope\ndesc = \"\"\"\nNewest Python Enhancement Proposals (PEPs) - Information on new\nlanguage features, and some meta-information like release\nprocedure and schedules\n\"\"\".strip()\nrss = rssgen.RSS2(\n title = 'Newest Python PEPs',\n link = 'http://www.python.org/dev/peps',\n description = desc,\n lastBuildDate = datetime.datetime.now(),\n items = items)\n\nwith open(RSS_PATH, 'w') as fp:\n fp.write(rss.to_xml())\n", "path": "pep2rss.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n# usage: pep-hook.py $REPOS $REV\n# (standard post-commit args)\n\nimport os, glob, time, datetime, stat, re, sys\nimport codecs\nimport PyRSS2Gen as rssgen\n\nRSS_PATH = os.path.join(sys.argv[1], 'peps.rss')\n\ndef firstline_startingwith(full_path, text):\n for line in codecs.open(full_path, encoding=\"utf-8\"):\n if line.startswith(text):\n return line[len(text):].strip()\n return None\n\n# get list of peps with creation time\n# (from \"Created:\" string in pep .rst or .txt)\npeps = glob.glob('pep-*.txt')\npeps.extend(glob.glob('pep-*.rst'))\ndef pep_creation_dt(full_path):\n created_str = firstline_startingwith(full_path, 'Created:')\n # bleh, I was hoping to avoid re but some PEPs editorialize\n # on the Created line\n m = re.search(r'''(\\d+-\\w+-\\d{4})''', created_str)\n if not m:\n # some older ones have an empty line, that's okay, if it's old\n # we ipso facto don't care about it.\n # \"return None\" would make the most sense but datetime objects\n # refuse to compare with that. 
:-|\n return datetime.datetime(*time.localtime(0)[:6])\n created_str = m.group(1)\n try:\n t = time.strptime(created_str, '%d-%b-%Y')\n except ValueError:\n t = time.strptime(created_str, '%d-%B-%Y')\n return datetime.datetime(*t[:6])\npeps_with_dt = [(pep_creation_dt(full_path), full_path) for full_path in peps]\n# sort peps by date, newest first\npeps_with_dt.sort(reverse=True)\n\n# generate rss items for 10 most recent peps\nitems = []\nfor dt, full_path in peps_with_dt[:10]:\n try:\n n = int(full_path.split('-')[-1].split('.')[0])\n except ValueError:\n pass\n title = firstline_startingwith(full_path, 'Title:')\n author = firstline_startingwith(full_path, 'Author:')\n url = 'http://www.python.org/dev/peps/pep-%0.4d' % n\n item = rssgen.RSSItem(\n title = 'PEP %d: %s' % (n, title),\n link = url,\n description = 'Author: %s' % author,\n guid = rssgen.Guid(url),\n pubDate = dt)\n items.append(item)\n\n# the rss envelope\ndesc = \"\"\"\nNewest Python Enhancement Proposals (PEPs) - Information on new\nlanguage features, and some meta-information like release\nprocedure and schedules\n\"\"\".strip()\nrss = rssgen.RSS2(\n title = 'Newest Python PEPs',\n link = 'http://www.python.org/dev/peps',\n description = desc,\n lastBuildDate = datetime.datetime.now(),\n items = items)\n\nwith open(RSS_PATH, 'w') as fp:\n fp.write(rss.to_xml())\n", "path": "pep2rss.py"}]}
| 1,130 | 175 |
gh_patches_debug_14253
|
rasdani/github-patches
|
git_diff
|
oppia__oppia-7996
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Exploration Cards Show "Invalid date" as date
**Describe the bug**
In the library, exploration cards have `Invalid date` in the lower right-hand corner.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://oppiatestserver.appspot.com/library
**Observed behavior**
The exploration cards show `Invalid date`
**Expected behavior**
The cards should show the creation date.
**Screenshots**

**Desktop (please complete the following information; delete this section if the issue does not arise on desktop):**
- OS: macOS
- Browser: Firefox
- Version: 2.8.7
Publish change button has overflowing text
**Describe the bug**
Publish change text while publishing a collection moves out of the button box.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a collection and check the publish button. The text moves out of the button box.
**Screenshots**
<img width="1440" alt="Screenshot 2019-11-14 at 12 35 14 AM" src="https://user-images.githubusercontent.com/15226041/68795290-a9a08b80-0676-11ea-8b46-57b6b68c3077.png">
**Desktop (please complete the following information; delete this section if the issue does not arise on desktop):**
- OS: Mac
- Browser: Chrome
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/typescript_checks.py`
Content:
```
1 # Copyright 2019 The Oppia Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS-IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """File for compiling and checking typescript."""
16 from __future__ import absolute_import # pylint: disable=import-only-modules
17 from __future__ import unicode_literals # pylint: disable=import-only-modules
18
19 import json
20 import os
21 import shutil
22 import subprocess
23 import sys
24
25 import python_utils
26
27 COMPILED_JS_DIR = os.path.join('local_compiled_js_for_test', '')
28 TSCONFIG_FILEPATH = 'tsconfig-for-compile-check.json'
29
30
31 def validate_compiled_js_dir():
32 """Validates that compiled js dir matches out dir in tsconfig."""
33 with python_utils.open_file(TSCONFIG_FILEPATH, 'r') as f:
34 config_data = json.load(f)
35 out_dir = os.path.join(config_data['compilerOptions']['outDir'], '')
36 if out_dir != COMPILED_JS_DIR:
37 raise Exception(
38 'COMPILED_JS_DIR: %s does not match the output directory '
39 'in %s: %s' % (COMPILED_JS_DIR, TSCONFIG_FILEPATH, out_dir))
40
41
42 def compile_and_check_typescript():
43 """Compiles typescript files and checks the compilation errors."""
44 node_path = os.path.join(os.pardir, 'oppia_tools/node-10.15.3')
45 os.environ['PATH'] = '%s/bin:' % node_path + os.environ['PATH']
46
47 validate_compiled_js_dir()
48
49 if os.path.exists(COMPILED_JS_DIR):
50 shutil.rmtree(COMPILED_JS_DIR)
51
52 python_utils.PRINT('Compiling and testing typescript...')
53 cmd = [
54 './node_modules/typescript/bin/tsc', '--project',
55 TSCONFIG_FILEPATH]
56 process = subprocess.Popen(cmd, stdout=subprocess.PIPE)
57 if os.path.exists(COMPILED_JS_DIR):
58 shutil.rmtree(COMPILED_JS_DIR)
59 error_messages = []
60 for line in iter(process.stdout.readline, ''):
61 error_messages.append(line)
62 if error_messages:
63 python_utils.PRINT('Errors found during compilation\n')
64 for message in error_messages:
65 python_utils.PRINT(message)
66 sys.exit(1)
67 else:
68 python_utils.PRINT('Compilation successful!')
69
70
71 # The 'no coverage' pragma is used as this line is un-testable. This is because
72 # it will only be called when typescript_checks.py is used as a script.
73 if __name__ == '__main__': # pragma: no cover
74 compile_and_check_typescript()
75
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scripts/typescript_checks.py b/scripts/typescript_checks.py
--- a/scripts/typescript_checks.py
+++ b/scripts/typescript_checks.py
@@ -54,11 +54,11 @@
'./node_modules/typescript/bin/tsc', '--project',
TSCONFIG_FILEPATH]
process = subprocess.Popen(cmd, stdout=subprocess.PIPE)
- if os.path.exists(COMPILED_JS_DIR):
- shutil.rmtree(COMPILED_JS_DIR)
error_messages = []
for line in iter(process.stdout.readline, ''):
error_messages.append(line)
+ if os.path.exists(COMPILED_JS_DIR):
+ shutil.rmtree(COMPILED_JS_DIR)
if error_messages:
python_utils.PRINT('Errors found during compilation\n')
for message in error_messages:
|
{"golden_diff": "diff --git a/scripts/typescript_checks.py b/scripts/typescript_checks.py\n--- a/scripts/typescript_checks.py\n+++ b/scripts/typescript_checks.py\n@@ -54,11 +54,11 @@\n './node_modules/typescript/bin/tsc', '--project',\n TSCONFIG_FILEPATH]\n process = subprocess.Popen(cmd, stdout=subprocess.PIPE)\n- if os.path.exists(COMPILED_JS_DIR):\n- shutil.rmtree(COMPILED_JS_DIR)\n error_messages = []\n for line in iter(process.stdout.readline, ''):\n error_messages.append(line)\n+ if os.path.exists(COMPILED_JS_DIR):\n+ shutil.rmtree(COMPILED_JS_DIR)\n if error_messages:\n python_utils.PRINT('Errors found during compilation\\n')\n for message in error_messages:\n", "issue": "Exploration Cards Show \"Invalid date\" as date\n**Describe the bug**\r\nIn the library, exploration cards have `Invalid date` in the lower right-hand corner.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n 1. Go to https://oppiatestserver.appspot.com/library\r\n\r\n**Observed behavior**\r\nThe exploration cards show `Invalid date`\r\n\r\n**Expected behavior**\r\nThe cards should show the creation date.\r\n\r\n**Screenshots**\r\n\r\n\r\n\r\n**Desktop (please complete the following information; delete this section if the issue does not arise on desktop):**\r\n - OS: macOS\r\n - Browser: Firefox\r\n - Version: 2.8.7\nPublish change button has overflowing text\n**Describe the bug**\r\nPublish change text while publishing a collection moves out of the button box.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n 1. Create a collection and check the publish button. The text moves out of the button box.\r\n\r\n**Screenshots**\r\n<img width=\"1440\" alt=\"Screenshot 2019-11-14 at 12 35 14 AM\" src=\"https://user-images.githubusercontent.com/15226041/68795290-a9a08b80-0676-11ea-8b46-57b6b68c3077.png\">\r\n\r\n\r\n**Desktop (please complete the following information; delete this section if the issue does not arise on desktop):**\r\n - OS: Mac\r\n - Browser: Chrome\r\n\r\n\n", "before_files": [{"content": "# Copyright 2019 The Oppia Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"File for compiling and checking typescript.\"\"\"\nfrom __future__ import absolute_import # pylint: disable=import-only-modules\nfrom __future__ import unicode_literals # pylint: disable=import-only-modules\n\nimport json\nimport os\nimport shutil\nimport subprocess\nimport sys\n\nimport python_utils\n\nCOMPILED_JS_DIR = os.path.join('local_compiled_js_for_test', '')\nTSCONFIG_FILEPATH = 'tsconfig-for-compile-check.json'\n\n\ndef validate_compiled_js_dir():\n \"\"\"Validates that compiled js dir matches out dir in tsconfig.\"\"\"\n with python_utils.open_file(TSCONFIG_FILEPATH, 'r') as f:\n config_data = json.load(f)\n out_dir = os.path.join(config_data['compilerOptions']['outDir'], '')\n if out_dir != COMPILED_JS_DIR:\n raise Exception(\n 'COMPILED_JS_DIR: %s does not match the output directory '\n 'in %s: %s' % (COMPILED_JS_DIR, TSCONFIG_FILEPATH, out_dir))\n\n\ndef compile_and_check_typescript():\n \"\"\"Compiles typescript files and checks the compilation errors.\"\"\"\n node_path = os.path.join(os.pardir, 'oppia_tools/node-10.15.3')\n os.environ['PATH'] = '%s/bin:' % node_path + os.environ['PATH']\n\n validate_compiled_js_dir()\n\n if os.path.exists(COMPILED_JS_DIR):\n shutil.rmtree(COMPILED_JS_DIR)\n\n python_utils.PRINT('Compiling and testing typescript...')\n cmd = [\n './node_modules/typescript/bin/tsc', '--project',\n TSCONFIG_FILEPATH]\n process = subprocess.Popen(cmd, stdout=subprocess.PIPE)\n if os.path.exists(COMPILED_JS_DIR):\n shutil.rmtree(COMPILED_JS_DIR)\n error_messages = []\n for line in iter(process.stdout.readline, ''):\n error_messages.append(line)\n if error_messages:\n python_utils.PRINT('Errors found during compilation\\n')\n for message in error_messages:\n python_utils.PRINT(message)\n sys.exit(1)\n else:\n python_utils.PRINT('Compilation successful!')\n\n\n# The 'no coverage' pragma is used as this line is un-testable. This is because\n# it will only be called when typescript_checks.py is used as a script.\nif __name__ == '__main__': # pragma: no cover\n compile_and_check_typescript()\n", "path": "scripts/typescript_checks.py"}], "after_files": [{"content": "# Copyright 2019 The Oppia Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"File for compiling and checking typescript.\"\"\"\nfrom __future__ import absolute_import # pylint: disable=import-only-modules\nfrom __future__ import unicode_literals # pylint: disable=import-only-modules\n\nimport json\nimport os\nimport shutil\nimport subprocess\nimport sys\n\nimport python_utils\n\nCOMPILED_JS_DIR = os.path.join('local_compiled_js_for_test', '')\nTSCONFIG_FILEPATH = 'tsconfig-for-compile-check.json'\n\n\ndef validate_compiled_js_dir():\n \"\"\"Validates that compiled js dir matches out dir in tsconfig.\"\"\"\n with python_utils.open_file(TSCONFIG_FILEPATH, 'r') as f:\n config_data = json.load(f)\n out_dir = os.path.join(config_data['compilerOptions']['outDir'], '')\n if out_dir != COMPILED_JS_DIR:\n raise Exception(\n 'COMPILED_JS_DIR: %s does not match the output directory '\n 'in %s: %s' % (COMPILED_JS_DIR, TSCONFIG_FILEPATH, out_dir))\n\n\ndef compile_and_check_typescript():\n \"\"\"Compiles typescript files and checks the compilation errors.\"\"\"\n node_path = os.path.join(os.pardir, 'oppia_tools/node-10.15.3')\n os.environ['PATH'] = '%s/bin:' % node_path + os.environ['PATH']\n\n validate_compiled_js_dir()\n\n if os.path.exists(COMPILED_JS_DIR):\n shutil.rmtree(COMPILED_JS_DIR)\n\n python_utils.PRINT('Compiling and testing typescript...')\n cmd = [\n './node_modules/typescript/bin/tsc', '--project',\n TSCONFIG_FILEPATH]\n process = subprocess.Popen(cmd, stdout=subprocess.PIPE)\n error_messages = []\n for line in iter(process.stdout.readline, ''):\n error_messages.append(line)\n if os.path.exists(COMPILED_JS_DIR):\n shutil.rmtree(COMPILED_JS_DIR)\n if error_messages:\n python_utils.PRINT('Errors found during compilation\\n')\n for message in error_messages:\n python_utils.PRINT(message)\n sys.exit(1)\n else:\n python_utils.PRINT('Compilation successful!')\n\n\n# The 'no coverage' pragma is used as this line is un-testable. This is because\n# it will only be called when typescript_checks.py is used as a script.\nif __name__ == '__main__': # pragma: no cover\n compile_and_check_typescript()\n", "path": "scripts/typescript_checks.py"}]}
| 1,475 | 168 |
gh_patches_debug_33380
|
rasdani/github-patches
|
git_diff
|
apache__airflow-26343
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
API Endpoints - /xcomEntries/{xcom_key} cannot deserialize customized xcom backend
### Description
We use S3 as our xcom backend database and write serialize/deserialize method for xcoms.
However, when we want to access xcom through REST API, it returns the s3 file url instead of the deserialized value. Could you please add the feature to support customized xcom backend for REST API access?
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `airflow/api_connexion/endpoints/xcom_endpoint.py`
Content:
```
1 # Licensed to the Apache Software Foundation (ASF) under one
2 # or more contributor license agreements. See the NOTICE file
3 # distributed with this work for additional information
4 # regarding copyright ownership. The ASF licenses this file
5 # to you under the Apache License, Version 2.0 (the
6 # "License"); you may not use this file except in compliance
7 # with the License. You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing,
12 # software distributed under the License is distributed on an
13 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
14 # KIND, either express or implied. See the License for the
15 # specific language governing permissions and limitations
16 # under the License.
17 from typing import Optional
18
19 from flask import g
20 from sqlalchemy import and_
21 from sqlalchemy.orm import Session
22
23 from airflow.api_connexion import security
24 from airflow.api_connexion.exceptions import NotFound
25 from airflow.api_connexion.parameters import check_limit, format_parameters
26 from airflow.api_connexion.schemas.xcom_schema import XComCollection, xcom_collection_schema, xcom_schema
27 from airflow.api_connexion.types import APIResponse
28 from airflow.models import DagRun as DR, XCom
29 from airflow.security import permissions
30 from airflow.utils.airflow_flask_app import get_airflow_app
31 from airflow.utils.session import NEW_SESSION, provide_session
32
33
34 @security.requires_access(
35 [
36 (permissions.ACTION_CAN_READ, permissions.RESOURCE_DAG),
37 (permissions.ACTION_CAN_READ, permissions.RESOURCE_DAG_RUN),
38 (permissions.ACTION_CAN_READ, permissions.RESOURCE_TASK_INSTANCE),
39 (permissions.ACTION_CAN_READ, permissions.RESOURCE_XCOM),
40 ],
41 )
42 @format_parameters({"limit": check_limit})
43 @provide_session
44 def get_xcom_entries(
45 *,
46 dag_id: str,
47 dag_run_id: str,
48 task_id: str,
49 limit: Optional[int],
50 offset: Optional[int] = None,
51 session: Session = NEW_SESSION,
52 ) -> APIResponse:
53 """Get all XCom values"""
54 query = session.query(XCom)
55 if dag_id == '~':
56 appbuilder = get_airflow_app().appbuilder
57 readable_dag_ids = appbuilder.sm.get_readable_dag_ids(g.user)
58 query = query.filter(XCom.dag_id.in_(readable_dag_ids))
59 query = query.join(DR, and_(XCom.dag_id == DR.dag_id, XCom.run_id == DR.run_id))
60 else:
61 query = query.filter(XCom.dag_id == dag_id)
62 query = query.join(DR, and_(XCom.dag_id == DR.dag_id, XCom.run_id == DR.run_id))
63
64 if task_id != '~':
65 query = query.filter(XCom.task_id == task_id)
66 if dag_run_id != '~':
67 query = query.filter(DR.run_id == dag_run_id)
68 query = query.order_by(DR.execution_date, XCom.task_id, XCom.dag_id, XCom.key)
69 total_entries = query.count()
70 query = query.offset(offset).limit(limit)
71 return xcom_collection_schema.dump(XComCollection(xcom_entries=query.all(), total_entries=total_entries))
72
73
74 @security.requires_access(
75 [
76 (permissions.ACTION_CAN_READ, permissions.RESOURCE_DAG),
77 (permissions.ACTION_CAN_READ, permissions.RESOURCE_DAG_RUN),
78 (permissions.ACTION_CAN_READ, permissions.RESOURCE_TASK_INSTANCE),
79 (permissions.ACTION_CAN_READ, permissions.RESOURCE_XCOM),
80 ],
81 )
82 @provide_session
83 def get_xcom_entry(
84 *,
85 dag_id: str,
86 task_id: str,
87 dag_run_id: str,
88 xcom_key: str,
89 session: Session = NEW_SESSION,
90 ) -> APIResponse:
91 """Get an XCom entry"""
92 query = session.query(XCom).filter(XCom.dag_id == dag_id, XCom.task_id == task_id, XCom.key == xcom_key)
93 query = query.join(DR, and_(XCom.dag_id == DR.dag_id, XCom.run_id == DR.run_id))
94 query = query.filter(DR.run_id == dag_run_id)
95
96 query_object = query.one_or_none()
97 if not query_object:
98 raise NotFound("XCom entry not found")
99 return xcom_schema.dump(query_object)
100
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/airflow/api_connexion/endpoints/xcom_endpoint.py b/airflow/api_connexion/endpoints/xcom_endpoint.py
--- a/airflow/api_connexion/endpoints/xcom_endpoint.py
+++ b/airflow/api_connexion/endpoints/xcom_endpoint.py
@@ -14,6 +14,7 @@
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
+import copy
from typing import Optional
from flask import g
@@ -68,7 +69,7 @@
query = query.order_by(DR.execution_date, XCom.task_id, XCom.dag_id, XCom.key)
total_entries = query.count()
query = query.offset(offset).limit(limit)
- return xcom_collection_schema.dump(XComCollection(xcom_entries=query.all(), total_entries=total_entries))
+ return xcom_collection_schema.dump(XComCollection(xcom_entries=query, total_entries=total_entries))
@security.requires_access(
@@ -86,14 +87,28 @@
task_id: str,
dag_run_id: str,
xcom_key: str,
+ deserialize: bool = False,
session: Session = NEW_SESSION,
) -> APIResponse:
"""Get an XCom entry"""
- query = session.query(XCom).filter(XCom.dag_id == dag_id, XCom.task_id == task_id, XCom.key == xcom_key)
+ if deserialize:
+ query = session.query(XCom, XCom.value)
+ else:
+ query = session.query(XCom)
+
+ query = query.filter(XCom.dag_id == dag_id, XCom.task_id == task_id, XCom.key == xcom_key)
query = query.join(DR, and_(XCom.dag_id == DR.dag_id, XCom.run_id == DR.run_id))
query = query.filter(DR.run_id == dag_run_id)
- query_object = query.one_or_none()
- if not query_object:
+ item = query.one_or_none()
+ if item is None:
raise NotFound("XCom entry not found")
- return xcom_schema.dump(query_object)
+
+ if deserialize:
+ xcom, value = item
+ stub = copy.copy(xcom)
+ stub.value = value
+ stub.value = XCom.deserialize_value(stub)
+ item = stub
+
+ return xcom_schema.dump(item)
|
{"golden_diff": "diff --git a/airflow/api_connexion/endpoints/xcom_endpoint.py b/airflow/api_connexion/endpoints/xcom_endpoint.py\n--- a/airflow/api_connexion/endpoints/xcom_endpoint.py\n+++ b/airflow/api_connexion/endpoints/xcom_endpoint.py\n@@ -14,6 +14,7 @@\n # KIND, either express or implied. See the License for the\n # specific language governing permissions and limitations\n # under the License.\n+import copy\n from typing import Optional\n \n from flask import g\n@@ -68,7 +69,7 @@\n query = query.order_by(DR.execution_date, XCom.task_id, XCom.dag_id, XCom.key)\n total_entries = query.count()\n query = query.offset(offset).limit(limit)\n- return xcom_collection_schema.dump(XComCollection(xcom_entries=query.all(), total_entries=total_entries))\n+ return xcom_collection_schema.dump(XComCollection(xcom_entries=query, total_entries=total_entries))\n \n \n @security.requires_access(\n@@ -86,14 +87,28 @@\n task_id: str,\n dag_run_id: str,\n xcom_key: str,\n+ deserialize: bool = False,\n session: Session = NEW_SESSION,\n ) -> APIResponse:\n \"\"\"Get an XCom entry\"\"\"\n- query = session.query(XCom).filter(XCom.dag_id == dag_id, XCom.task_id == task_id, XCom.key == xcom_key)\n+ if deserialize:\n+ query = session.query(XCom, XCom.value)\n+ else:\n+ query = session.query(XCom)\n+\n+ query = query.filter(XCom.dag_id == dag_id, XCom.task_id == task_id, XCom.key == xcom_key)\n query = query.join(DR, and_(XCom.dag_id == DR.dag_id, XCom.run_id == DR.run_id))\n query = query.filter(DR.run_id == dag_run_id)\n \n- query_object = query.one_or_none()\n- if not query_object:\n+ item = query.one_or_none()\n+ if item is None:\n raise NotFound(\"XCom entry not found\")\n- return xcom_schema.dump(query_object)\n+\n+ if deserialize:\n+ xcom, value = item\n+ stub = copy.copy(xcom)\n+ stub.value = value\n+ stub.value = XCom.deserialize_value(stub)\n+ item = stub\n+\n+ return xcom_schema.dump(item)\n", "issue": "API Endpoints - /xcomEntries/{xcom_key} cannot deserialize customized xcom backend\n### Description\n\nWe use S3 as our xcom backend database and write serialize/deserialize method for xcoms.\r\nHowever, when we want to access xcom through REST API, it returns the s3 file url instead of the deserialized value. Could you please add the feature to support customized xcom backend for REST API access?\n\n### Use case/motivation\n\n_No response_\n\n### Related issues\n\n_No response_\n\n### Are you willing to submit a PR?\n\n- [ ] Yes I am willing to submit a PR!\n\n### Code of Conduct\n\n- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)\n\n", "before_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. 
See the License for the\n# specific language governing permissions and limitations\n# under the License.\nfrom typing import Optional\n\nfrom flask import g\nfrom sqlalchemy import and_\nfrom sqlalchemy.orm import Session\n\nfrom airflow.api_connexion import security\nfrom airflow.api_connexion.exceptions import NotFound\nfrom airflow.api_connexion.parameters import check_limit, format_parameters\nfrom airflow.api_connexion.schemas.xcom_schema import XComCollection, xcom_collection_schema, xcom_schema\nfrom airflow.api_connexion.types import APIResponse\nfrom airflow.models import DagRun as DR, XCom\nfrom airflow.security import permissions\nfrom airflow.utils.airflow_flask_app import get_airflow_app\nfrom airflow.utils.session import NEW_SESSION, provide_session\n\n\[email protected]_access(\n [\n (permissions.ACTION_CAN_READ, permissions.RESOURCE_DAG),\n (permissions.ACTION_CAN_READ, permissions.RESOURCE_DAG_RUN),\n (permissions.ACTION_CAN_READ, permissions.RESOURCE_TASK_INSTANCE),\n (permissions.ACTION_CAN_READ, permissions.RESOURCE_XCOM),\n ],\n)\n@format_parameters({\"limit\": check_limit})\n@provide_session\ndef get_xcom_entries(\n *,\n dag_id: str,\n dag_run_id: str,\n task_id: str,\n limit: Optional[int],\n offset: Optional[int] = None,\n session: Session = NEW_SESSION,\n) -> APIResponse:\n \"\"\"Get all XCom values\"\"\"\n query = session.query(XCom)\n if dag_id == '~':\n appbuilder = get_airflow_app().appbuilder\n readable_dag_ids = appbuilder.sm.get_readable_dag_ids(g.user)\n query = query.filter(XCom.dag_id.in_(readable_dag_ids))\n query = query.join(DR, and_(XCom.dag_id == DR.dag_id, XCom.run_id == DR.run_id))\n else:\n query = query.filter(XCom.dag_id == dag_id)\n query = query.join(DR, and_(XCom.dag_id == DR.dag_id, XCom.run_id == DR.run_id))\n\n if task_id != '~':\n query = query.filter(XCom.task_id == task_id)\n if dag_run_id != '~':\n query = query.filter(DR.run_id == dag_run_id)\n query = query.order_by(DR.execution_date, XCom.task_id, XCom.dag_id, XCom.key)\n total_entries = query.count()\n query = query.offset(offset).limit(limit)\n return xcom_collection_schema.dump(XComCollection(xcom_entries=query.all(), total_entries=total_entries))\n\n\[email protected]_access(\n [\n (permissions.ACTION_CAN_READ, permissions.RESOURCE_DAG),\n (permissions.ACTION_CAN_READ, permissions.RESOURCE_DAG_RUN),\n (permissions.ACTION_CAN_READ, permissions.RESOURCE_TASK_INSTANCE),\n (permissions.ACTION_CAN_READ, permissions.RESOURCE_XCOM),\n ],\n)\n@provide_session\ndef get_xcom_entry(\n *,\n dag_id: str,\n task_id: str,\n dag_run_id: str,\n xcom_key: str,\n session: Session = NEW_SESSION,\n) -> APIResponse:\n \"\"\"Get an XCom entry\"\"\"\n query = session.query(XCom).filter(XCom.dag_id == dag_id, XCom.task_id == task_id, XCom.key == xcom_key)\n query = query.join(DR, and_(XCom.dag_id == DR.dag_id, XCom.run_id == DR.run_id))\n query = query.filter(DR.run_id == dag_run_id)\n\n query_object = query.one_or_none()\n if not query_object:\n raise NotFound(\"XCom entry not found\")\n return xcom_schema.dump(query_object)\n", "path": "airflow/api_connexion/endpoints/xcom_endpoint.py"}], "after_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. 
You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\nimport copy\nfrom typing import Optional\n\nfrom flask import g\nfrom sqlalchemy import and_\nfrom sqlalchemy.orm import Session\n\nfrom airflow.api_connexion import security\nfrom airflow.api_connexion.exceptions import NotFound\nfrom airflow.api_connexion.parameters import check_limit, format_parameters\nfrom airflow.api_connexion.schemas.xcom_schema import XComCollection, xcom_collection_schema, xcom_schema\nfrom airflow.api_connexion.types import APIResponse\nfrom airflow.models import DagRun as DR, XCom\nfrom airflow.security import permissions\nfrom airflow.utils.airflow_flask_app import get_airflow_app\nfrom airflow.utils.session import NEW_SESSION, provide_session\n\n\[email protected]_access(\n [\n (permissions.ACTION_CAN_READ, permissions.RESOURCE_DAG),\n (permissions.ACTION_CAN_READ, permissions.RESOURCE_DAG_RUN),\n (permissions.ACTION_CAN_READ, permissions.RESOURCE_TASK_INSTANCE),\n (permissions.ACTION_CAN_READ, permissions.RESOURCE_XCOM),\n ],\n)\n@format_parameters({\"limit\": check_limit})\n@provide_session\ndef get_xcom_entries(\n *,\n dag_id: str,\n dag_run_id: str,\n task_id: str,\n limit: Optional[int],\n offset: Optional[int] = None,\n session: Session = NEW_SESSION,\n) -> APIResponse:\n \"\"\"Get all XCom values\"\"\"\n query = session.query(XCom)\n if dag_id == '~':\n appbuilder = get_airflow_app().appbuilder\n readable_dag_ids = appbuilder.sm.get_readable_dag_ids(g.user)\n query = query.filter(XCom.dag_id.in_(readable_dag_ids))\n query = query.join(DR, and_(XCom.dag_id == DR.dag_id, XCom.run_id == DR.run_id))\n else:\n query = query.filter(XCom.dag_id == dag_id)\n query = query.join(DR, and_(XCom.dag_id == DR.dag_id, XCom.run_id == DR.run_id))\n\n if task_id != '~':\n query = query.filter(XCom.task_id == task_id)\n if dag_run_id != '~':\n query = query.filter(DR.run_id == dag_run_id)\n query = query.order_by(DR.execution_date, XCom.task_id, XCom.dag_id, XCom.key)\n total_entries = query.count()\n query = query.offset(offset).limit(limit)\n return xcom_collection_schema.dump(XComCollection(xcom_entries=query, total_entries=total_entries))\n\n\[email protected]_access(\n [\n (permissions.ACTION_CAN_READ, permissions.RESOURCE_DAG),\n (permissions.ACTION_CAN_READ, permissions.RESOURCE_DAG_RUN),\n (permissions.ACTION_CAN_READ, permissions.RESOURCE_TASK_INSTANCE),\n (permissions.ACTION_CAN_READ, permissions.RESOURCE_XCOM),\n ],\n)\n@provide_session\ndef get_xcom_entry(\n *,\n dag_id: str,\n task_id: str,\n dag_run_id: str,\n xcom_key: str,\n deserialize: bool = False,\n session: Session = NEW_SESSION,\n) -> APIResponse:\n \"\"\"Get an XCom entry\"\"\"\n if deserialize:\n query = session.query(XCom, XCom.value)\n else:\n query = session.query(XCom)\n\n query = query.filter(XCom.dag_id == dag_id, XCom.task_id == task_id, XCom.key == xcom_key)\n query = query.join(DR, and_(XCom.dag_id == DR.dag_id, XCom.run_id == DR.run_id))\n query = query.filter(DR.run_id == dag_run_id)\n\n item = query.one_or_none()\n if item is None:\n raise NotFound(\"XCom entry not found\")\n\n if deserialize:\n xcom, value = item\n stub = copy.copy(xcom)\n stub.value = value\n stub.value = 
XCom.deserialize_value(stub)\n item = stub\n\n return xcom_schema.dump(item)\n", "path": "airflow/api_connexion/endpoints/xcom_endpoint.py"}]}
| 1,551 | 538 |
gh_patches_debug_23835
|
rasdani/github-patches
|
git_diff
|
ResonantGeoData__ResonantGeoData-70
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add an endpoint to get status of workers
It would be useful to know if we have any workers associated with the system, and, if so, if they are busy.
Specifically, this could probably be something like is done in girder_worker (see https://github.com/girder/girder_worker/blob/master/girder_worker/girder_plugin/api/worker.py#L40-L55). For this purpose, the celery app can be reached via `from rgd import celery_app`.
Ideally, this let's us determine the following conditions:
- The broker is unavailable
- There are no workers
- The number of idle workers
- The number of busy workers (and, ideally, what they are busy doing)
In the future, we may have multiple worker pools (for instance, for GPU and non-GPU tasks), so this will probably change exactly what gets reported in the future.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup
2
3 setup(
4 name='resonantgeodata',
5 version='0.1',
6 python_requires='>=3.8.0',
7 install_requires=[
8 'boto3',
9 'celery!=4.4.4',
10 'django',
11 'django-admin-display',
12 'django-allauth',
13 'django-cleanup',
14 'django-configurations[database]',
15 'django-cors-headers',
16 'django-crispy-forms',
17 'django-extensions',
18 'django-storages',
19 'djangorestframework',
20 'docker',
21 'drf-yasg',
22 'gputil',
23 'psycopg2',
24 'python-magic',
25 'rules',
26 'uritemplate',
27 'whitenoise[brotli]',
28 # Production-only
29 'django-storages',
30 'gunicorn',
31 # Development-only
32 'django-debug-toolbar',
33 'django-minio-storage',
34 ],
35 )
36
```
Path: `core/urls.py`
Content:
```
1 from django.urls import path
2
3 from . import views
4
5 urlpatterns = [
6 path('', views.index, name='index'),
7 path('algorithms/', views.algorithms, name='algorithms'),
8 path(
9 'algorithms/<str:creator>/<int:pk>/',
10 views.AlgorithmDetailView.as_view(),
11 name='algorithm-detail',
12 ),
13 path(
14 'algorithms/<str:creator>/<int:pk>/delete/',
15 views.AlgorithmDeleteView.as_view(),
16 name='delete-algorithm',
17 ),
18 path('algorithms/new/', views.AlgorithmCreateView.as_view(), name='new-algorithm'),
19 path('jobs/', views.jobs, name='jobs'),
20 path('jobs/new/', views.JobCreateView.as_view(), name='new-job'),
21 path('jobs/<str:creator>/<int:pk>/', views.JobDetailView.as_view(), name='job-detail'),
22 path('tasks/', views.tasks, name='tasks'),
23 path('task/<int:pk>-<str:name>/', views.TaskDetailView.as_view(), name='task-detail'),
24 path('api/download/<model>/<int:id>/<field>', views.download_file, name='download-file'),
25 ]
26
27 handler500 = views.handler500
28
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/core/urls.py b/core/urls.py
--- a/core/urls.py
+++ b/core/urls.py
@@ -1,7 +1,11 @@
+from django.contrib import admin
from django.urls import path
+from djproxy.urls import generate_routes
from . import views
+
+admin.site.index_template = 'admin/add_flower.html'
urlpatterns = [
path('', views.index, name='index'),
path('algorithms/', views.algorithms, name='algorithms'),
@@ -22,6 +26,6 @@
path('tasks/', views.tasks, name='tasks'),
path('task/<int:pk>-<str:name>/', views.TaskDetailView.as_view(), name='task-detail'),
path('api/download/<model>/<int:id>/<field>', views.download_file, name='download-file'),
-]
+] + generate_routes({'flower-proxy': {'base_url': 'http://flower:5555/', 'prefix': '/flower/'}})
handler500 = views.handler500
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -17,6 +17,7 @@
'django-extensions',
'django-storages',
'djangorestframework',
+ 'djproxy',
'docker',
'drf-yasg',
'gputil',
|
{"golden_diff": "diff --git a/core/urls.py b/core/urls.py\n--- a/core/urls.py\n+++ b/core/urls.py\n@@ -1,7 +1,11 @@\n+from django.contrib import admin\n from django.urls import path\n+from djproxy.urls import generate_routes\n \n from . import views\n \n+\n+admin.site.index_template = 'admin/add_flower.html'\n urlpatterns = [\n path('', views.index, name='index'),\n path('algorithms/', views.algorithms, name='algorithms'),\n@@ -22,6 +26,6 @@\n path('tasks/', views.tasks, name='tasks'),\n path('task/<int:pk>-<str:name>/', views.TaskDetailView.as_view(), name='task-detail'),\n path('api/download/<model>/<int:id>/<field>', views.download_file, name='download-file'),\n-]\n+] + generate_routes({'flower-proxy': {'base_url': 'http://flower:5555/', 'prefix': '/flower/'}})\n \n handler500 = views.handler500\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -17,6 +17,7 @@\n 'django-extensions',\n 'django-storages',\n 'djangorestframework',\n+ 'djproxy',\n 'docker',\n 'drf-yasg',\n 'gputil',\n", "issue": "Add an endpoint to get status of workers\nIt would be useful to know if we have any workers associated with the system, and, if so, if they are busy.\r\n\r\nSpecifically, this could probably be something like is done in girder_worker (see https://github.com/girder/girder_worker/blob/master/girder_worker/girder_plugin/api/worker.py#L40-L55). For this purpose, the celery app can be reached via `from rgd import celery_app`.\r\n\r\nIdeally, this let's us determine the following conditions:\r\n- The broker is unavailable \r\n- There are no workers\r\n- The number of idle workers\r\n- The number of busy workers (and, ideally, what they are busy doing)\r\n\r\nIn the future, we may have multiple worker pools (for instance, for GPU and non-GPU tasks), so this will probably change exactly what gets reported in the future.\n", "before_files": [{"content": "from setuptools import setup\n\nsetup(\n name='resonantgeodata',\n version='0.1',\n python_requires='>=3.8.0',\n install_requires=[\n 'boto3',\n 'celery!=4.4.4',\n 'django',\n 'django-admin-display',\n 'django-allauth',\n 'django-cleanup',\n 'django-configurations[database]',\n 'django-cors-headers',\n 'django-crispy-forms',\n 'django-extensions',\n 'django-storages',\n 'djangorestframework',\n 'docker',\n 'drf-yasg',\n 'gputil',\n 'psycopg2',\n 'python-magic',\n 'rules',\n 'uritemplate',\n 'whitenoise[brotli]',\n # Production-only\n 'django-storages',\n 'gunicorn',\n # Development-only\n 'django-debug-toolbar',\n 'django-minio-storage',\n ],\n)\n", "path": "setup.py"}, {"content": "from django.urls import path\n\nfrom . 
import views\n\nurlpatterns = [\n path('', views.index, name='index'),\n path('algorithms/', views.algorithms, name='algorithms'),\n path(\n 'algorithms/<str:creator>/<int:pk>/',\n views.AlgorithmDetailView.as_view(),\n name='algorithm-detail',\n ),\n path(\n 'algorithms/<str:creator>/<int:pk>/delete/',\n views.AlgorithmDeleteView.as_view(),\n name='delete-algorithm',\n ),\n path('algorithms/new/', views.AlgorithmCreateView.as_view(), name='new-algorithm'),\n path('jobs/', views.jobs, name='jobs'),\n path('jobs/new/', views.JobCreateView.as_view(), name='new-job'),\n path('jobs/<str:creator>/<int:pk>/', views.JobDetailView.as_view(), name='job-detail'),\n path('tasks/', views.tasks, name='tasks'),\n path('task/<int:pk>-<str:name>/', views.TaskDetailView.as_view(), name='task-detail'),\n path('api/download/<model>/<int:id>/<field>', views.download_file, name='download-file'),\n]\n\nhandler500 = views.handler500\n", "path": "core/urls.py"}], "after_files": [{"content": "from setuptools import setup\n\nsetup(\n name='resonantgeodata',\n version='0.1',\n python_requires='>=3.8.0',\n install_requires=[\n 'boto3',\n 'celery!=4.4.4',\n 'django',\n 'django-admin-display',\n 'django-allauth',\n 'django-cleanup',\n 'django-configurations[database]',\n 'django-cors-headers',\n 'django-crispy-forms',\n 'django-extensions',\n 'django-storages',\n 'djangorestframework',\n 'djproxy',\n 'docker',\n 'drf-yasg',\n 'gputil',\n 'psycopg2',\n 'python-magic',\n 'rules',\n 'uritemplate',\n 'whitenoise[brotli]',\n # Production-only\n 'django-storages',\n 'gunicorn',\n # Development-only\n 'django-debug-toolbar',\n 'django-minio-storage',\n ],\n)\n", "path": "setup.py"}, {"content": "from django.contrib import admin\nfrom django.urls import path\nfrom djproxy.urls import generate_routes\n\nfrom . import views\n\n\nadmin.site.index_template = 'admin/add_flower.html'\nurlpatterns = [\n path('', views.index, name='index'),\n path('algorithms/', views.algorithms, name='algorithms'),\n path(\n 'algorithms/<str:creator>/<int:pk>/',\n views.AlgorithmDetailView.as_view(),\n name='algorithm-detail',\n ),\n path(\n 'algorithms/<str:creator>/<int:pk>/delete/',\n views.AlgorithmDeleteView.as_view(),\n name='delete-algorithm',\n ),\n path('algorithms/new/', views.AlgorithmCreateView.as_view(), name='new-algorithm'),\n path('jobs/', views.jobs, name='jobs'),\n path('jobs/new/', views.JobCreateView.as_view(), name='new-job'),\n path('jobs/<str:creator>/<int:pk>/', views.JobDetailView.as_view(), name='job-detail'),\n path('tasks/', views.tasks, name='tasks'),\n path('task/<int:pk>-<str:name>/', views.TaskDetailView.as_view(), name='task-detail'),\n path('api/download/<model>/<int:id>/<field>', views.download_file, name='download-file'),\n] + generate_routes({'flower-proxy': {'base_url': 'http://flower:5555/', 'prefix': '/flower/'}})\n\nhandler500 = views.handler500\n", "path": "core/urls.py"}]}
| 1,033 | 296 |
gh_patches_debug_66361
|
rasdani/github-patches
|
git_diff
|
opsdroid__opsdroid-737
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot disconnect from SQLite
<!-- Before you post an issue or if you are unsure about something join our gitter channel https://gitter.im/opsdroid/ and ask away! We are more than happy to help you. -->
# Description
SQLite database connector can’t disconnect because of wrong method signature.
## Steps to Reproduce
Enable the SQLite database module, then try to shut down the bot.
## Expected Functionality
The bot should shut down.
## Experienced Functionality
This error message on the console, and the bot remains running (but with the connectors already disconnected).
```
ERROR opsdroid.core: {'message': 'Task exception was never retrieved', 'exception': TypeError('disconnect() takes 1 positional argument but 2 were given',), 'future': <Task finished coro=<OpsDroid.handle_signal() done, defined at /home/polesz/.local/lib/python3.6/site-packages/opsdroid/core.py:121> exception=TypeError('disconnect() takes 1 positional argument but 2 were given',)>}
```
## Versions
- **Opsdroid version:** 0.13.0
- **Python version:** 3.6.6 (bundled with Fedora 28)
- **OS/Docker version:** Fedora 28, no Docker involved
## Additional information
It seems the method signature of `Database.disconnect()` is wrong (should be `async def disconnect(self, opsdroid)`) or the caller (`OpsDroid.unload()`) should not pass the `opsdroid` instance to `database.disconnect()` (personally i’d vote for the former).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opsdroid/database/__init__.py`
Content:
```
1 """A base class for databases to inherit from."""
2
3
4 class Database():
5 """A base database.
6
7 Database classes are used to persist key/value pairs in a database.
8
9 """
10
11 def __init__(self, config):
12 """Create the database.
13
14 Set some basic properties from the database config such as the name
15 of this database. It could also be a good place to setup properties
16 to hold things like the database connection object and the database
17 name.
18
19 Args:
20 config (dict): The config for this database specified in the
21 `configuration.yaml` file.
22
23 """
24 self.name = ""
25 self.config = config
26 self.client = None
27 self.database = None
28
29 async def connect(self, opsdroid):
30 """Connect to database service and store the connection object.
31
32 This method should connect to the given database using a native
33 python library for that database. The library will most likely involve
34 a connection object which will be used by the put and get methods.
35 This object should be stored in self.
36
37 Args:
38 opsdroid (OpsDroid): An instance of the opsdroid core.
39
40 """
41 raise NotImplementedError
42
43 async def disconnect(self):
44 """Disconnect from the database.
45
46 This method should disconnect from the given database using a native
47 python library for that database.
48
49 """
50 pass
51
52 async def put(self, key, data):
53 """Store the data object in a database against the key.
54
55 The data object will need to be serialised in a sensible way which
56 suits the database being used and allows for reconstruction of the
57 object.
58
59 Args:
60 key (string): The key to store the data object under.
61 data (object): The data object to store.
62
63 Returns:
64 bool: True for data successfully stored, False otherwise.
65
66 """
67 raise NotImplementedError
68
69 async def get(self, key):
70 """Return a data object for a given key.
71
72 Args:
73 key (string): The key to lookup in the database.
74
75 Returns:
76 object or None: The data object stored for that key, or None if no
77 object found for that key.
78
79 """
80 raise NotImplementedError
81
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/opsdroid/database/__init__.py b/opsdroid/database/__init__.py
--- a/opsdroid/database/__init__.py
+++ b/opsdroid/database/__init__.py
@@ -40,7 +40,7 @@
"""
raise NotImplementedError
- async def disconnect(self):
+ async def disconnect(self, opsdroid):
"""Disconnect from the database.
This method should disconnect from the given database using a native
|
{"golden_diff": "diff --git a/opsdroid/database/__init__.py b/opsdroid/database/__init__.py\n--- a/opsdroid/database/__init__.py\n+++ b/opsdroid/database/__init__.py\n@@ -40,7 +40,7 @@\n \"\"\"\n raise NotImplementedError\n \n- async def disconnect(self):\n+ async def disconnect(self, opsdroid):\n \"\"\"Disconnect from the database.\n \n This method should disconnect from the given database using a native\n", "issue": "Cannot disconnect from SQLite\n<!-- Before you post an issue or if you are unsure about something join our gitter channel https://gitter.im/opsdroid/ and ask away! We are more than happy to help you. -->\r\n# Description\r\nSQLite database connector can\u2019t disconnect because of wrong method signature.\r\n\r\n## Steps to Reproduce\r\nEnable the SQLite database module, then try to shut down the bot.\r\n\r\n\r\n## Expected Functionality\r\nThe bot should shut down.\r\n\r\n## Experienced Functionality\r\nThis error message on the console, and the bot remains running (but with the connectors already disconnected).\r\n\r\n```\r\nERROR opsdroid.core: {'message': 'Task exception was never retrieved', 'exception': TypeError('disconnect() takes 1 positional argument but 2 were given',), 'future': <Task finished coro=<OpsDroid.handle_signal() done, defined at /home/polesz/.local/lib/python3.6/site-packages/opsdroid/core.py:121> exception=TypeError('disconnect() takes 1 positional argument but 2 were given',)>}\r\n```\r\n\r\n## Versions\r\n- **Opsdroid version:** 0.13.0\r\n- **Python version:** 3.6.6 (bundled with Fedora 28)\r\n- **OS/Docker version:** Fedora 28, no Docker involved\r\n\r\n## Additional information\r\nIt seems the method signature of `Database.disconnect()` is wrong (should be `async def disconnect(self, opsdroid)`) or the caller (`OpsDroid.unload()`) should not pass the `opsdroid` instance to `database.disconnect()` (personally i\u2019d vote for the former).\n", "before_files": [{"content": "\"\"\"A base class for databases to inherit from.\"\"\"\n\n\nclass Database():\n \"\"\"A base database.\n\n Database classes are used to persist key/value pairs in a database.\n\n \"\"\"\n\n def __init__(self, config):\n \"\"\"Create the database.\n\n Set some basic properties from the database config such as the name\n of this database. It could also be a good place to setup properties\n to hold things like the database connection object and the database\n name.\n\n Args:\n config (dict): The config for this database specified in the\n `configuration.yaml` file.\n\n \"\"\"\n self.name = \"\"\n self.config = config\n self.client = None\n self.database = None\n\n async def connect(self, opsdroid):\n \"\"\"Connect to database service and store the connection object.\n\n This method should connect to the given database using a native\n python library for that database. 
The library will most likely involve\n a connection object which will be used by the put and get methods.\n This object should be stored in self.\n\n Args:\n opsdroid (OpsDroid): An instance of the opsdroid core.\n\n \"\"\"\n raise NotImplementedError\n\n async def disconnect(self):\n \"\"\"Disconnect from the database.\n\n This method should disconnect from the given database using a native\n python library for that database.\n\n \"\"\"\n pass\n\n async def put(self, key, data):\n \"\"\"Store the data object in a database against the key.\n\n The data object will need to be serialised in a sensible way which\n suits the database being used and allows for reconstruction of the\n object.\n\n Args:\n key (string): The key to store the data object under.\n data (object): The data object to store.\n\n Returns:\n bool: True for data successfully stored, False otherwise.\n\n \"\"\"\n raise NotImplementedError\n\n async def get(self, key):\n \"\"\"Return a data object for a given key.\n\n Args:\n key (string): The key to lookup in the database.\n\n Returns:\n object or None: The data object stored for that key, or None if no\n object found for that key.\n\n \"\"\"\n raise NotImplementedError\n", "path": "opsdroid/database/__init__.py"}], "after_files": [{"content": "\"\"\"A base class for databases to inherit from.\"\"\"\n\n\nclass Database():\n \"\"\"A base database.\n\n Database classes are used to persist key/value pairs in a database.\n\n \"\"\"\n\n def __init__(self, config):\n \"\"\"Create the database.\n\n Set some basic properties from the database config such as the name\n of this database. It could also be a good place to setup properties\n to hold things like the database connection object and the database\n name.\n\n Args:\n config (dict): The config for this database specified in the\n `configuration.yaml` file.\n\n \"\"\"\n self.name = \"\"\n self.config = config\n self.client = None\n self.database = None\n\n async def connect(self, opsdroid):\n \"\"\"Connect to database service and store the connection object.\n\n This method should connect to the given database using a native\n python library for that database. The library will most likely involve\n a connection object which will be used by the put and get methods.\n This object should be stored in self.\n\n Args:\n opsdroid (OpsDroid): An instance of the opsdroid core.\n\n \"\"\"\n raise NotImplementedError\n\n async def disconnect(self, opsdroid):\n \"\"\"Disconnect from the database.\n\n This method should disconnect from the given database using a native\n python library for that database.\n\n \"\"\"\n pass\n\n async def put(self, key, data):\n \"\"\"Store the data object in a database against the key.\n\n The data object will need to be serialised in a sensible way which\n suits the database being used and allows for reconstruction of the\n object.\n\n Args:\n key (string): The key to store the data object under.\n data (object): The data object to store.\n\n Returns:\n bool: True for data successfully stored, False otherwise.\n\n \"\"\"\n raise NotImplementedError\n\n async def get(self, key):\n \"\"\"Return a data object for a given key.\n\n Args:\n key (string): The key to lookup in the database.\n\n Returns:\n object or None: The data object stored for that key, or None if no\n object found for that key.\n\n \"\"\"\n raise NotImplementedError\n", "path": "opsdroid/database/__init__.py"}]}
| 1,238 | 105 |
gh_patches_debug_32631
|
rasdani/github-patches
|
git_diff
|
pallets__werkzeug-2493
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Upgrading to 2.2.x results in type errors when importing from werkzeug.routing
After upgrading to werkzeug 2.2.1 importing any class from `werkzeug.routing` results in an error from mypy if `no_implicit_reexport=True`. This was not the case in previous versions as `werkzeug.routing` was a single file submodule.
### Reproduction
Given `eg.py`:
```python
from werkzeug.routing import Rule
```
With `werkzeug==2.2.1`
```shell
$ mypy eg.py --strict
eg.py:1: error: Module "werkzeug.routing" does not explicitly export attribute "Rule"; implicit reexport disabled [attr-defined]
Found 1 error in 1 file (checked 1 source file)
```
With `werkzeug==2.1.0`
```shell
$ mypy eg.py --strict
Success: no issues found in 1 source file```
```
### Environment:
- Python version: 3.10
- Werkzeug version: 2.2.1
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/werkzeug/routing/__init__.py`
Content:
```
1 """When it comes to combining multiple controller or view functions
2 (however you want to call them) you need a dispatcher. A simple way
3 would be applying regular expression tests on the ``PATH_INFO`` and
4 calling registered callback functions that return the value then.
5
6 This module implements a much more powerful system than simple regular
7 expression matching because it can also convert values in the URLs and
8 build URLs.
9
10 Here a simple example that creates a URL map for an application with
11 two subdomains (www and kb) and some URL rules:
12
13 .. code-block:: python
14
15 m = Map([
16 # Static URLs
17 Rule('/', endpoint='static/index'),
18 Rule('/about', endpoint='static/about'),
19 Rule('/help', endpoint='static/help'),
20 # Knowledge Base
21 Subdomain('kb', [
22 Rule('/', endpoint='kb/index'),
23 Rule('/browse/', endpoint='kb/browse'),
24 Rule('/browse/<int:id>/', endpoint='kb/browse'),
25 Rule('/browse/<int:id>/<int:page>', endpoint='kb/browse')
26 ])
27 ], default_subdomain='www')
28
29 If the application doesn't use subdomains it's perfectly fine to not set
30 the default subdomain and not use the `Subdomain` rule factory. The
31 endpoint in the rules can be anything, for example import paths or
32 unique identifiers. The WSGI application can use those endpoints to get the
33 handler for that URL. It doesn't have to be a string at all but it's
34 recommended.
35
36 Now it's possible to create a URL adapter for one of the subdomains and
37 build URLs:
38
39 .. code-block:: python
40
41 c = m.bind('example.com')
42
43 c.build("kb/browse", dict(id=42))
44 'http://kb.example.com/browse/42/'
45
46 c.build("kb/browse", dict())
47 'http://kb.example.com/browse/'
48
49 c.build("kb/browse", dict(id=42, page=3))
50 'http://kb.example.com/browse/42/3'
51
52 c.build("static/about")
53 '/about'
54
55 c.build("static/index", force_external=True)
56 'http://www.example.com/'
57
58 c = m.bind('example.com', subdomain='kb')
59
60 c.build("static/about")
61 'http://www.example.com/about'
62
63 The first argument to bind is the server name *without* the subdomain.
64 Per default it will assume that the script is mounted on the root, but
65 often that's not the case so you can provide the real mount point as
66 second argument:
67
68 .. code-block:: python
69
70 c = m.bind('example.com', '/applications/example')
71
72 The third argument can be the subdomain, if not given the default
73 subdomain is used. For more details about binding have a look at the
74 documentation of the `MapAdapter`.
75
76 And here is how you can match URLs:
77
78 .. code-block:: python
79
80 c = m.bind('example.com')
81
82 c.match("/")
83 ('static/index', {})
84
85 c.match("/about")
86 ('static/about', {})
87
88 c = m.bind('example.com', '/', 'kb')
89
90 c.match("/")
91 ('kb/index', {})
92
93 c.match("/browse/42/23")
94 ('kb/browse', {'id': 42, 'page': 23})
95
96 If matching fails you get a ``NotFound`` exception, if the rule thinks
97 it's a good idea to redirect (for example because the URL was defined
98 to have a slash at the end but the request was missing that slash) it
99 will raise a ``RequestRedirect`` exception. Both are subclasses of
100 ``HTTPException`` so you can use those errors as responses in the
101 application.
102
103 If matching succeeded but the URL rule was incompatible to the given
104 method (for example there were only rules for ``GET`` and ``HEAD`` but
105 routing tried to match a ``POST`` request) a ``MethodNotAllowed``
106 exception is raised.
107 """
108 from .converters import AnyConverter
109 from .converters import BaseConverter
110 from .converters import FloatConverter
111 from .converters import IntegerConverter
112 from .converters import PathConverter
113 from .converters import UnicodeConverter
114 from .converters import UUIDConverter
115 from .converters import ValidationError
116 from .exceptions import BuildError
117 from .exceptions import NoMatch
118 from .exceptions import RequestAliasRedirect
119 from .exceptions import RequestPath
120 from .exceptions import RequestRedirect
121 from .exceptions import RoutingException
122 from .exceptions import WebsocketMismatch
123 from .map import Map
124 from .map import MapAdapter
125 from .matcher import StateMachineMatcher
126 from .rules import EndpointPrefix
127 from .rules import parse_converter_args
128 from .rules import Rule
129 from .rules import RuleFactory
130 from .rules import RuleTemplate
131 from .rules import RuleTemplateFactory
132 from .rules import Subdomain
133 from .rules import Submount
134
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/werkzeug/routing/__init__.py b/src/werkzeug/routing/__init__.py
--- a/src/werkzeug/routing/__init__.py
+++ b/src/werkzeug/routing/__init__.py
@@ -105,29 +105,29 @@
routing tried to match a ``POST`` request) a ``MethodNotAllowed``
exception is raised.
"""
-from .converters import AnyConverter
-from .converters import BaseConverter
-from .converters import FloatConverter
-from .converters import IntegerConverter
-from .converters import PathConverter
-from .converters import UnicodeConverter
-from .converters import UUIDConverter
-from .converters import ValidationError
-from .exceptions import BuildError
-from .exceptions import NoMatch
-from .exceptions import RequestAliasRedirect
-from .exceptions import RequestPath
-from .exceptions import RequestRedirect
-from .exceptions import RoutingException
-from .exceptions import WebsocketMismatch
-from .map import Map
-from .map import MapAdapter
-from .matcher import StateMachineMatcher
-from .rules import EndpointPrefix
-from .rules import parse_converter_args
-from .rules import Rule
-from .rules import RuleFactory
-from .rules import RuleTemplate
-from .rules import RuleTemplateFactory
-from .rules import Subdomain
-from .rules import Submount
+from .converters import AnyConverter as AnyConverter
+from .converters import BaseConverter as BaseConverter
+from .converters import FloatConverter as FloatConverter
+from .converters import IntegerConverter as IntegerConverter
+from .converters import PathConverter as PathConverter
+from .converters import UnicodeConverter as UnicodeConverter
+from .converters import UUIDConverter as UUIDConverter
+from .converters import ValidationError as ValidationError
+from .exceptions import BuildError as BuildError
+from .exceptions import NoMatch as NoMatch
+from .exceptions import RequestAliasRedirect as RequestAliasRedirect
+from .exceptions import RequestPath as RequestPath
+from .exceptions import RequestRedirect as RequestRedirect
+from .exceptions import RoutingException as RoutingException
+from .exceptions import WebsocketMismatch as WebsocketMismatch
+from .map import Map as Map
+from .map import MapAdapter as MapAdapter
+from .matcher import StateMachineMatcher as StateMachineMatcher
+from .rules import EndpointPrefix as EndpointPrefix
+from .rules import parse_converter_args as parse_converter_args
+from .rules import Rule as Rule
+from .rules import RuleFactory as RuleFactory
+from .rules import RuleTemplate as RuleTemplate
+from .rules import RuleTemplateFactory as RuleTemplateFactory
+from .rules import Subdomain as Subdomain
+from .rules import Submount as Submount
|
{"golden_diff": "diff --git a/src/werkzeug/routing/__init__.py b/src/werkzeug/routing/__init__.py\n--- a/src/werkzeug/routing/__init__.py\n+++ b/src/werkzeug/routing/__init__.py\n@@ -105,29 +105,29 @@\n routing tried to match a ``POST`` request) a ``MethodNotAllowed``\n exception is raised.\n \"\"\"\n-from .converters import AnyConverter\n-from .converters import BaseConverter\n-from .converters import FloatConverter\n-from .converters import IntegerConverter\n-from .converters import PathConverter\n-from .converters import UnicodeConverter\n-from .converters import UUIDConverter\n-from .converters import ValidationError\n-from .exceptions import BuildError\n-from .exceptions import NoMatch\n-from .exceptions import RequestAliasRedirect\n-from .exceptions import RequestPath\n-from .exceptions import RequestRedirect\n-from .exceptions import RoutingException\n-from .exceptions import WebsocketMismatch\n-from .map import Map\n-from .map import MapAdapter\n-from .matcher import StateMachineMatcher\n-from .rules import EndpointPrefix\n-from .rules import parse_converter_args\n-from .rules import Rule\n-from .rules import RuleFactory\n-from .rules import RuleTemplate\n-from .rules import RuleTemplateFactory\n-from .rules import Subdomain\n-from .rules import Submount\n+from .converters import AnyConverter as AnyConverter\n+from .converters import BaseConverter as BaseConverter\n+from .converters import FloatConverter as FloatConverter\n+from .converters import IntegerConverter as IntegerConverter\n+from .converters import PathConverter as PathConverter\n+from .converters import UnicodeConverter as UnicodeConverter\n+from .converters import UUIDConverter as UUIDConverter\n+from .converters import ValidationError as ValidationError\n+from .exceptions import BuildError as BuildError\n+from .exceptions import NoMatch as NoMatch\n+from .exceptions import RequestAliasRedirect as RequestAliasRedirect\n+from .exceptions import RequestPath as RequestPath\n+from .exceptions import RequestRedirect as RequestRedirect\n+from .exceptions import RoutingException as RoutingException\n+from .exceptions import WebsocketMismatch as WebsocketMismatch\n+from .map import Map as Map\n+from .map import MapAdapter as MapAdapter\n+from .matcher import StateMachineMatcher as StateMachineMatcher\n+from .rules import EndpointPrefix as EndpointPrefix\n+from .rules import parse_converter_args as parse_converter_args\n+from .rules import Rule as Rule\n+from .rules import RuleFactory as RuleFactory\n+from .rules import RuleTemplate as RuleTemplate\n+from .rules import RuleTemplateFactory as RuleTemplateFactory\n+from .rules import Subdomain as Subdomain\n+from .rules import Submount as Submount\n", "issue": "Upgrading to 2.2.x results in type errors when importing from werkzeug.routing\nAfter upgrading to werkzeug 2.2.1 importing any class from `werkzeug.routing` results in an error from mypy if `no_implicit_reexport=True`. This was not the case in previous versions as `werkzeug.routing` was a single file submodule. 
\r\n\r\n\r\n### Reproduction\r\nGiven `eg.py`:\r\n```python\r\nfrom werkzeug.routing import Rule\r\n```\r\nWith `werkzeug==2.2.1`\r\n```shell\r\n$ mypy eg.py --strict\r\neg.py:1: error: Module \"werkzeug.routing\" does not explicitly export attribute \"Rule\"; implicit reexport disabled [attr-defined]\r\nFound 1 error in 1 file (checked 1 source file)\r\n```\r\n\r\nWith `werkzeug==2.1.0`\r\n```shell\r\n$ mypy eg.py --strict\r\nSuccess: no issues found in 1 source file```\r\n```\r\n\r\n### Environment:\r\n\r\n- Python version: 3.10\r\n- Werkzeug version: 2.2.1\r\n\n", "before_files": [{"content": "\"\"\"When it comes to combining multiple controller or view functions\n(however you want to call them) you need a dispatcher. A simple way\nwould be applying regular expression tests on the ``PATH_INFO`` and\ncalling registered callback functions that return the value then.\n\nThis module implements a much more powerful system than simple regular\nexpression matching because it can also convert values in the URLs and\nbuild URLs.\n\nHere a simple example that creates a URL map for an application with\ntwo subdomains (www and kb) and some URL rules:\n\n.. code-block:: python\n\n m = Map([\n # Static URLs\n Rule('/', endpoint='static/index'),\n Rule('/about', endpoint='static/about'),\n Rule('/help', endpoint='static/help'),\n # Knowledge Base\n Subdomain('kb', [\n Rule('/', endpoint='kb/index'),\n Rule('/browse/', endpoint='kb/browse'),\n Rule('/browse/<int:id>/', endpoint='kb/browse'),\n Rule('/browse/<int:id>/<int:page>', endpoint='kb/browse')\n ])\n ], default_subdomain='www')\n\nIf the application doesn't use subdomains it's perfectly fine to not set\nthe default subdomain and not use the `Subdomain` rule factory. The\nendpoint in the rules can be anything, for example import paths or\nunique identifiers. The WSGI application can use those endpoints to get the\nhandler for that URL. It doesn't have to be a string at all but it's\nrecommended.\n\nNow it's possible to create a URL adapter for one of the subdomains and\nbuild URLs:\n\n.. code-block:: python\n\n c = m.bind('example.com')\n\n c.build(\"kb/browse\", dict(id=42))\n 'http://kb.example.com/browse/42/'\n\n c.build(\"kb/browse\", dict())\n 'http://kb.example.com/browse/'\n\n c.build(\"kb/browse\", dict(id=42, page=3))\n 'http://kb.example.com/browse/42/3'\n\n c.build(\"static/about\")\n '/about'\n\n c.build(\"static/index\", force_external=True)\n 'http://www.example.com/'\n\n c = m.bind('example.com', subdomain='kb')\n\n c.build(\"static/about\")\n 'http://www.example.com/about'\n\nThe first argument to bind is the server name *without* the subdomain.\nPer default it will assume that the script is mounted on the root, but\noften that's not the case so you can provide the real mount point as\nsecond argument:\n\n.. code-block:: python\n\n c = m.bind('example.com', '/applications/example')\n\nThe third argument can be the subdomain, if not given the default\nsubdomain is used. For more details about binding have a look at the\ndocumentation of the `MapAdapter`.\n\nAnd here is how you can match URLs:\n\n.. 
code-block:: python\n\n c = m.bind('example.com')\n\n c.match(\"/\")\n ('static/index', {})\n\n c.match(\"/about\")\n ('static/about', {})\n\n c = m.bind('example.com', '/', 'kb')\n\n c.match(\"/\")\n ('kb/index', {})\n\n c.match(\"/browse/42/23\")\n ('kb/browse', {'id': 42, 'page': 23})\n\nIf matching fails you get a ``NotFound`` exception, if the rule thinks\nit's a good idea to redirect (for example because the URL was defined\nto have a slash at the end but the request was missing that slash) it\nwill raise a ``RequestRedirect`` exception. Both are subclasses of\n``HTTPException`` so you can use those errors as responses in the\napplication.\n\nIf matching succeeded but the URL rule was incompatible to the given\nmethod (for example there were only rules for ``GET`` and ``HEAD`` but\nrouting tried to match a ``POST`` request) a ``MethodNotAllowed``\nexception is raised.\n\"\"\"\nfrom .converters import AnyConverter\nfrom .converters import BaseConverter\nfrom .converters import FloatConverter\nfrom .converters import IntegerConverter\nfrom .converters import PathConverter\nfrom .converters import UnicodeConverter\nfrom .converters import UUIDConverter\nfrom .converters import ValidationError\nfrom .exceptions import BuildError\nfrom .exceptions import NoMatch\nfrom .exceptions import RequestAliasRedirect\nfrom .exceptions import RequestPath\nfrom .exceptions import RequestRedirect\nfrom .exceptions import RoutingException\nfrom .exceptions import WebsocketMismatch\nfrom .map import Map\nfrom .map import MapAdapter\nfrom .matcher import StateMachineMatcher\nfrom .rules import EndpointPrefix\nfrom .rules import parse_converter_args\nfrom .rules import Rule\nfrom .rules import RuleFactory\nfrom .rules import RuleTemplate\nfrom .rules import RuleTemplateFactory\nfrom .rules import Subdomain\nfrom .rules import Submount\n", "path": "src/werkzeug/routing/__init__.py"}], "after_files": [{"content": "\"\"\"When it comes to combining multiple controller or view functions\n(however you want to call them) you need a dispatcher. A simple way\nwould be applying regular expression tests on the ``PATH_INFO`` and\ncalling registered callback functions that return the value then.\n\nThis module implements a much more powerful system than simple regular\nexpression matching because it can also convert values in the URLs and\nbuild URLs.\n\nHere a simple example that creates a URL map for an application with\ntwo subdomains (www and kb) and some URL rules:\n\n.. code-block:: python\n\n m = Map([\n # Static URLs\n Rule('/', endpoint='static/index'),\n Rule('/about', endpoint='static/about'),\n Rule('/help', endpoint='static/help'),\n # Knowledge Base\n Subdomain('kb', [\n Rule('/', endpoint='kb/index'),\n Rule('/browse/', endpoint='kb/browse'),\n Rule('/browse/<int:id>/', endpoint='kb/browse'),\n Rule('/browse/<int:id>/<int:page>', endpoint='kb/browse')\n ])\n ], default_subdomain='www')\n\nIf the application doesn't use subdomains it's perfectly fine to not set\nthe default subdomain and not use the `Subdomain` rule factory. The\nendpoint in the rules can be anything, for example import paths or\nunique identifiers. The WSGI application can use those endpoints to get the\nhandler for that URL. It doesn't have to be a string at all but it's\nrecommended.\n\nNow it's possible to create a URL adapter for one of the subdomains and\nbuild URLs:\n\n.. 
code-block:: python\n\n c = m.bind('example.com')\n\n c.build(\"kb/browse\", dict(id=42))\n 'http://kb.example.com/browse/42/'\n\n c.build(\"kb/browse\", dict())\n 'http://kb.example.com/browse/'\n\n c.build(\"kb/browse\", dict(id=42, page=3))\n 'http://kb.example.com/browse/42/3'\n\n c.build(\"static/about\")\n '/about'\n\n c.build(\"static/index\", force_external=True)\n 'http://www.example.com/'\n\n c = m.bind('example.com', subdomain='kb')\n\n c.build(\"static/about\")\n 'http://www.example.com/about'\n\nThe first argument to bind is the server name *without* the subdomain.\nPer default it will assume that the script is mounted on the root, but\noften that's not the case so you can provide the real mount point as\nsecond argument:\n\n.. code-block:: python\n\n c = m.bind('example.com', '/applications/example')\n\nThe third argument can be the subdomain, if not given the default\nsubdomain is used. For more details about binding have a look at the\ndocumentation of the `MapAdapter`.\n\nAnd here is how you can match URLs:\n\n.. code-block:: python\n\n c = m.bind('example.com')\n\n c.match(\"/\")\n ('static/index', {})\n\n c.match(\"/about\")\n ('static/about', {})\n\n c = m.bind('example.com', '/', 'kb')\n\n c.match(\"/\")\n ('kb/index', {})\n\n c.match(\"/browse/42/23\")\n ('kb/browse', {'id': 42, 'page': 23})\n\nIf matching fails you get a ``NotFound`` exception, if the rule thinks\nit's a good idea to redirect (for example because the URL was defined\nto have a slash at the end but the request was missing that slash) it\nwill raise a ``RequestRedirect`` exception. Both are subclasses of\n``HTTPException`` so you can use those errors as responses in the\napplication.\n\nIf matching succeeded but the URL rule was incompatible to the given\nmethod (for example there were only rules for ``GET`` and ``HEAD`` but\nrouting tried to match a ``POST`` request) a ``MethodNotAllowed``\nexception is raised.\n\"\"\"\nfrom .converters import AnyConverter as AnyConverter\nfrom .converters import BaseConverter as BaseConverter\nfrom .converters import FloatConverter as FloatConverter\nfrom .converters import IntegerConverter as IntegerConverter\nfrom .converters import PathConverter as PathConverter\nfrom .converters import UnicodeConverter as UnicodeConverter\nfrom .converters import UUIDConverter as UUIDConverter\nfrom .converters import ValidationError as ValidationError\nfrom .exceptions import BuildError as BuildError\nfrom .exceptions import NoMatch as NoMatch\nfrom .exceptions import RequestAliasRedirect as RequestAliasRedirect\nfrom .exceptions import RequestPath as RequestPath\nfrom .exceptions import RequestRedirect as RequestRedirect\nfrom .exceptions import RoutingException as RoutingException\nfrom .exceptions import WebsocketMismatch as WebsocketMismatch\nfrom .map import Map as Map\nfrom .map import MapAdapter as MapAdapter\nfrom .matcher import StateMachineMatcher as StateMachineMatcher\nfrom .rules import EndpointPrefix as EndpointPrefix\nfrom .rules import parse_converter_args as parse_converter_args\nfrom .rules import Rule as Rule\nfrom .rules import RuleFactory as RuleFactory\nfrom .rules import RuleTemplate as RuleTemplate\nfrom .rules import RuleTemplateFactory as RuleTemplateFactory\nfrom .rules import Subdomain as Subdomain\nfrom .rules import Submount as Submount\n", "path": "src/werkzeug/routing/__init__.py"}]}
| 1,825 | 578 |
gh_patches_debug_37224
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-5254
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Checkov Managed Disk Encryption check in Bicep IaC failing
**Describe the issue**
Checkov Managed Disk Encryption check will fail despite having the required check in Bicep code. It will only be successful if both checks are in the code, but need to be hashed out.
**Examples**
```
resource Disks 'Microsoft.Compute/disks@2022-07-02' = [for (disk, i) in dataDisks: {
name: disk.diskName
location: location
tags: tags
sku: {
name: disk.storageAccountType
}
zones: [
avZone
]
properties: {
creationData: {
createOption: 'Empty'
}
diskSizeGB: disk.diskSizeGB
// encryption: {
// type: 'EncryptionAtRestWithCustomerKey'
// diskEncryptionSetId: diskEncryptionSetId
// }
encryption: {
type: 'EncryptionAtRestWithCustomerKey'
diskEncryptionSetId: diskEncryptionSetId
}
// encryptionSettingsCollection: {
// enabled: true
// encryptionSettings: [
// {
// diskEncryptionKey: {
// secretUrl: keyURL
// sourceVault: {
// id: keyVaultId
// }
// }
// }
// ]
// }
}
}]
```
**Version :**
- Latest
**Additional context**
Even if I remove the commented out sections, the check will fail. If I have the "encryptionSettingsCollection" block, the check will fail. It will only work if it is formatted like the above.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/arm/checks/resource/AzureManagedDiscEncryption.py`
Content:
```
1 from __future__ import annotations
2
3 from typing import Any
4
5 from checkov.common.models.enums import CheckResult, CheckCategories
6 from checkov.arm.base_resource_check import BaseResourceCheck
7
8
9 class AzureManagedDiscEncryption(BaseResourceCheck):
10 def __init__(self) -> None:
11 name = "Ensure Azure managed disk have encryption enabled"
12 id = "CKV_AZURE_2"
13 supported_resources = ("Microsoft.Compute/disks",)
14 categories = (CheckCategories.ENCRYPTION,)
15 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
16
17 def scan_resource_conf(self, conf: dict[str, Any]) -> CheckResult:
18 if "properties" in conf:
19 if "encryptionSettingsCollection" in conf["properties"]:
20 if "enabled" in conf["properties"]["encryptionSettingsCollection"]:
21 if str(conf["properties"]["encryptionSettingsCollection"]["enabled"]).lower() == "true":
22 return CheckResult.PASSED
23 elif "encryptionSettings" in conf["properties"]:
24 if "enabled" in conf["properties"]["encryptionSettings"]:
25 if str(conf["properties"]["encryptionSettings"]["enabled"]).lower() == "true":
26 return CheckResult.PASSED
27 return CheckResult.FAILED
28
29
30 check = AzureManagedDiscEncryption()
31
```
Path: `checkov/arm/base_resource_check.py`
Content:
```
1 from __future__ import annotations
2
3 from abc import abstractmethod
4 from collections.abc import Iterable
5 from typing import Any, Callable
6
7 from checkov.arm.registry import arm_resource_registry
8 from checkov.bicep.checks.resource.registry import registry as bicep_registry
9 from checkov.common.checks.base_check import BaseCheck
10 from checkov.common.models.enums import CheckCategories, CheckResult
11 from checkov.common.multi_signature import multi_signature
12
13
14 class BaseResourceCheck(BaseCheck):
15 def __init__(
16 self,
17 name: str,
18 id: str,
19 categories: "Iterable[CheckCategories]",
20 supported_resources: "Iterable[str]",
21 guideline: str | None = None,
22 ) -> None:
23 super().__init__(
24 name=name,
25 id=id,
26 categories=categories,
27 supported_entities=supported_resources,
28 block_type="resource",
29 guideline=guideline,
30 )
31 self.supported_resources = supported_resources
32 arm_resource_registry.register(self)
33 # leverage ARM checks to use with bicep runner
34 bicep_registry.register(self)
35
36 def scan_entity_conf(self, conf: dict[str, Any], entity_type: str) -> CheckResult: # type:ignore[override] # it's ok
37 self.entity_type = entity_type
38
39 # the "existing" key indicates a Bicep resource
40 if "existing" in conf:
41 if conf["existing"] is True:
42 # the existing keyword is used to retrieve information about an already deployed resource
43 return CheckResult.UNKNOWN
44
45 self.api_version = conf["api_version"]
46 conf["config"]["apiVersion"] = conf["api_version"] # set for better reusability of existing ARM checks
47
48 return self.scan_resource_conf(conf["config"], entity_type) # type:ignore[no-any-return] # issue with multi_signature annotation
49
50 self.api_version = None
51
52 return self.scan_resource_conf(conf, entity_type) # type:ignore[no-any-return] # issue with multi_signature annotation
53
54 @multi_signature()
55 @abstractmethod
56 def scan_resource_conf(self, conf: dict[str, Any], entity_type: str) -> CheckResult:
57 raise NotImplementedError()
58
59 @classmethod
60 @scan_resource_conf.add_signature(args=["self", "conf"])
61 def _scan_resource_conf_self_conf(cls, wrapped: Callable[..., CheckResult]) -> Callable[..., CheckResult]:
62 def wrapper(self: BaseCheck, conf: dict[str, Any], entity_type: str | None = None) -> CheckResult:
63 # keep default argument for entity_type so old code, that doesn't set it, will work.
64 return wrapped(self, conf)
65
66 return wrapper
67
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/checkov/arm/base_resource_check.py b/checkov/arm/base_resource_check.py
--- a/checkov/arm/base_resource_check.py
+++ b/checkov/arm/base_resource_check.py
@@ -45,7 +45,12 @@
self.api_version = conf["api_version"]
conf["config"]["apiVersion"] = conf["api_version"] # set for better reusability of existing ARM checks
- return self.scan_resource_conf(conf["config"], entity_type) # type:ignore[no-any-return] # issue with multi_signature annotation
+ resource_conf = conf["config"]
+ if "loop_type" in resource_conf:
+ # this means the whole resource block is surrounded by a for loop
+ resource_conf = resource_conf["config"]
+
+ return self.scan_resource_conf(resource_conf, entity_type) # type:ignore[no-any-return] # issue with multi_signature annotation
self.api_version = None
diff --git a/checkov/arm/checks/resource/AzureManagedDiscEncryption.py b/checkov/arm/checks/resource/AzureManagedDiscEncryption.py
--- a/checkov/arm/checks/resource/AzureManagedDiscEncryption.py
+++ b/checkov/arm/checks/resource/AzureManagedDiscEncryption.py
@@ -4,6 +4,7 @@
from checkov.common.models.enums import CheckResult, CheckCategories
from checkov.arm.base_resource_check import BaseResourceCheck
+from checkov.common.util.data_structures_utils import find_in_dict
class AzureManagedDiscEncryption(BaseResourceCheck):
@@ -15,15 +16,21 @@
super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
def scan_resource_conf(self, conf: dict[str, Any]) -> CheckResult:
- if "properties" in conf:
- if "encryptionSettingsCollection" in conf["properties"]:
- if "enabled" in conf["properties"]["encryptionSettingsCollection"]:
- if str(conf["properties"]["encryptionSettingsCollection"]["enabled"]).lower() == "true":
- return CheckResult.PASSED
- elif "encryptionSettings" in conf["properties"]:
- if "enabled" in conf["properties"]["encryptionSettings"]:
- if str(conf["properties"]["encryptionSettings"]["enabled"]).lower() == "true":
- return CheckResult.PASSED
+ properties = conf.get("properties")
+ if properties:
+ encryption = properties.get("encryption")
+ if encryption:
+ # if the block exists, then it is enabled
+ return CheckResult.PASSED
+
+ encryption_enabled = find_in_dict(input_dict=properties, key_path="encryptionSettingsCollection/enabled")
+ if str(encryption_enabled).lower() == "true":
+ return CheckResult.PASSED
+
+ encryption_enabled = find_in_dict(input_dict=properties, key_path="encryptionSettings/enabled")
+ if str(encryption_enabled).lower() == "true":
+ return CheckResult.PASSED
+
return CheckResult.FAILED
|
{"golden_diff": "diff --git a/checkov/arm/base_resource_check.py b/checkov/arm/base_resource_check.py\n--- a/checkov/arm/base_resource_check.py\n+++ b/checkov/arm/base_resource_check.py\n@@ -45,7 +45,12 @@\n self.api_version = conf[\"api_version\"]\n conf[\"config\"][\"apiVersion\"] = conf[\"api_version\"] # set for better reusability of existing ARM checks\n \n- return self.scan_resource_conf(conf[\"config\"], entity_type) # type:ignore[no-any-return] # issue with multi_signature annotation\n+ resource_conf = conf[\"config\"]\n+ if \"loop_type\" in resource_conf:\n+ # this means the whole resource block is surrounded by a for loop\n+ resource_conf = resource_conf[\"config\"]\n+\n+ return self.scan_resource_conf(resource_conf, entity_type) # type:ignore[no-any-return] # issue with multi_signature annotation\n \n self.api_version = None\n \ndiff --git a/checkov/arm/checks/resource/AzureManagedDiscEncryption.py b/checkov/arm/checks/resource/AzureManagedDiscEncryption.py\n--- a/checkov/arm/checks/resource/AzureManagedDiscEncryption.py\n+++ b/checkov/arm/checks/resource/AzureManagedDiscEncryption.py\n@@ -4,6 +4,7 @@\n \n from checkov.common.models.enums import CheckResult, CheckCategories\n from checkov.arm.base_resource_check import BaseResourceCheck\n+from checkov.common.util.data_structures_utils import find_in_dict\n \n \n class AzureManagedDiscEncryption(BaseResourceCheck):\n@@ -15,15 +16,21 @@\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n \n def scan_resource_conf(self, conf: dict[str, Any]) -> CheckResult:\n- if \"properties\" in conf:\n- if \"encryptionSettingsCollection\" in conf[\"properties\"]:\n- if \"enabled\" in conf[\"properties\"][\"encryptionSettingsCollection\"]:\n- if str(conf[\"properties\"][\"encryptionSettingsCollection\"][\"enabled\"]).lower() == \"true\":\n- return CheckResult.PASSED\n- elif \"encryptionSettings\" in conf[\"properties\"]:\n- if \"enabled\" in conf[\"properties\"][\"encryptionSettings\"]:\n- if str(conf[\"properties\"][\"encryptionSettings\"][\"enabled\"]).lower() == \"true\":\n- return CheckResult.PASSED\n+ properties = conf.get(\"properties\")\n+ if properties:\n+ encryption = properties.get(\"encryption\")\n+ if encryption:\n+ # if the block exists, then it is enabled\n+ return CheckResult.PASSED\n+\n+ encryption_enabled = find_in_dict(input_dict=properties, key_path=\"encryptionSettingsCollection/enabled\")\n+ if str(encryption_enabled).lower() == \"true\":\n+ return CheckResult.PASSED\n+\n+ encryption_enabled = find_in_dict(input_dict=properties, key_path=\"encryptionSettings/enabled\")\n+ if str(encryption_enabled).lower() == \"true\":\n+ return CheckResult.PASSED\n+\n return CheckResult.FAILED\n", "issue": "Checkov Managed Disk Encryption check in Bicep IaC failing\n**Describe the issue**\r\nCheckov Managed Disk Encryption check will fail despite having the required check in Bicep code. 
It will only be successful if both checks are in the code, but need to be hashed out.\r\n\r\n**Examples**\r\n```\r\nresource Disks 'Microsoft.Compute/disks@2022-07-02' = [for (disk, i) in dataDisks: {\r\n name: disk.diskName\r\n location: location\r\n tags: tags\r\n sku: {\r\n name: disk.storageAccountType\r\n }\r\n zones: [\r\n avZone\r\n ]\r\n properties: {\r\n creationData: {\r\n createOption: 'Empty'\r\n }\r\n diskSizeGB: disk.diskSizeGB\r\n // encryption: {\r\n // type: 'EncryptionAtRestWithCustomerKey'\r\n // diskEncryptionSetId: diskEncryptionSetId\r\n // }\r\n encryption: {\r\n type: 'EncryptionAtRestWithCustomerKey'\r\n diskEncryptionSetId: diskEncryptionSetId\r\n }\r\n // encryptionSettingsCollection: {\r\n // enabled: true\r\n // encryptionSettings: [\r\n // {\r\n // diskEncryptionKey: {\r\n // secretUrl: keyURL\r\n // sourceVault: {\r\n // id: keyVaultId\r\n // }\r\n // }\r\n // }\r\n // ]\r\n // }\r\n }\r\n}]\r\n```\r\n\r\n**Version :**\r\n - Latest\r\n\r\n**Additional context**\r\nEven if I remove the commented out sections, the check will fail. If I have the \"encryptionSettingsCollection\" block, the check will fail. It will only work if it is formatted like the above.\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import Any\n\nfrom checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.arm.base_resource_check import BaseResourceCheck\n\n\nclass AzureManagedDiscEncryption(BaseResourceCheck):\n def __init__(self) -> None:\n name = \"Ensure Azure managed disk have encryption enabled\"\n id = \"CKV_AZURE_2\"\n supported_resources = (\"Microsoft.Compute/disks\",)\n categories = (CheckCategories.ENCRYPTION,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf: dict[str, Any]) -> CheckResult:\n if \"properties\" in conf:\n if \"encryptionSettingsCollection\" in conf[\"properties\"]:\n if \"enabled\" in conf[\"properties\"][\"encryptionSettingsCollection\"]:\n if str(conf[\"properties\"][\"encryptionSettingsCollection\"][\"enabled\"]).lower() == \"true\":\n return CheckResult.PASSED\n elif \"encryptionSettings\" in conf[\"properties\"]:\n if \"enabled\" in conf[\"properties\"][\"encryptionSettings\"]:\n if str(conf[\"properties\"][\"encryptionSettings\"][\"enabled\"]).lower() == \"true\":\n return CheckResult.PASSED\n return CheckResult.FAILED\n\n\ncheck = AzureManagedDiscEncryption()\n", "path": "checkov/arm/checks/resource/AzureManagedDiscEncryption.py"}, {"content": "from __future__ import annotations\n\nfrom abc import abstractmethod\nfrom collections.abc import Iterable\nfrom typing import Any, Callable\n\nfrom checkov.arm.registry import arm_resource_registry\nfrom checkov.bicep.checks.resource.registry import registry as bicep_registry\nfrom checkov.common.checks.base_check import BaseCheck\nfrom checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.common.multi_signature import multi_signature\n\n\nclass BaseResourceCheck(BaseCheck):\n def __init__(\n self,\n name: str,\n id: str,\n categories: \"Iterable[CheckCategories]\",\n supported_resources: \"Iterable[str]\",\n guideline: str | None = None,\n ) -> None:\n super().__init__(\n name=name,\n id=id,\n categories=categories,\n supported_entities=supported_resources,\n block_type=\"resource\",\n guideline=guideline,\n )\n self.supported_resources = supported_resources\n arm_resource_registry.register(self)\n # leverage ARM checks to use with bicep 
runner\n bicep_registry.register(self)\n\n def scan_entity_conf(self, conf: dict[str, Any], entity_type: str) -> CheckResult: # type:ignore[override] # it's ok\n self.entity_type = entity_type\n\n # the \"existing\" key indicates a Bicep resource\n if \"existing\" in conf:\n if conf[\"existing\"] is True:\n # the existing keyword is used to retrieve information about an already deployed resource\n return CheckResult.UNKNOWN\n\n self.api_version = conf[\"api_version\"]\n conf[\"config\"][\"apiVersion\"] = conf[\"api_version\"] # set for better reusability of existing ARM checks\n\n return self.scan_resource_conf(conf[\"config\"], entity_type) # type:ignore[no-any-return] # issue with multi_signature annotation\n\n self.api_version = None\n\n return self.scan_resource_conf(conf, entity_type) # type:ignore[no-any-return] # issue with multi_signature annotation\n\n @multi_signature()\n @abstractmethod\n def scan_resource_conf(self, conf: dict[str, Any], entity_type: str) -> CheckResult:\n raise NotImplementedError()\n\n @classmethod\n @scan_resource_conf.add_signature(args=[\"self\", \"conf\"])\n def _scan_resource_conf_self_conf(cls, wrapped: Callable[..., CheckResult]) -> Callable[..., CheckResult]:\n def wrapper(self: BaseCheck, conf: dict[str, Any], entity_type: str | None = None) -> CheckResult:\n # keep default argument for entity_type so old code, that doesn't set it, will work.\n return wrapped(self, conf)\n\n return wrapper\n", "path": "checkov/arm/base_resource_check.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom typing import Any\n\nfrom checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.arm.base_resource_check import BaseResourceCheck\nfrom checkov.common.util.data_structures_utils import find_in_dict\n\n\nclass AzureManagedDiscEncryption(BaseResourceCheck):\n def __init__(self) -> None:\n name = \"Ensure Azure managed disk have encryption enabled\"\n id = \"CKV_AZURE_2\"\n supported_resources = (\"Microsoft.Compute/disks\",)\n categories = (CheckCategories.ENCRYPTION,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf: dict[str, Any]) -> CheckResult:\n properties = conf.get(\"properties\")\n if properties:\n encryption = properties.get(\"encryption\")\n if encryption:\n # if the block exists, then it is enabled\n return CheckResult.PASSED\n\n encryption_enabled = find_in_dict(input_dict=properties, key_path=\"encryptionSettingsCollection/enabled\")\n if str(encryption_enabled).lower() == \"true\":\n return CheckResult.PASSED\n\n encryption_enabled = find_in_dict(input_dict=properties, key_path=\"encryptionSettings/enabled\")\n if str(encryption_enabled).lower() == \"true\":\n return CheckResult.PASSED\n\n return CheckResult.FAILED\n\n\ncheck = AzureManagedDiscEncryption()\n", "path": "checkov/arm/checks/resource/AzureManagedDiscEncryption.py"}, {"content": "from __future__ import annotations\n\nfrom abc import abstractmethod\nfrom collections.abc import Iterable\nfrom typing import Any, Callable\n\nfrom checkov.arm.registry import arm_resource_registry\nfrom checkov.bicep.checks.resource.registry import registry as bicep_registry\nfrom checkov.common.checks.base_check import BaseCheck\nfrom checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.common.multi_signature import multi_signature\n\n\nclass BaseResourceCheck(BaseCheck):\n def __init__(\n self,\n name: str,\n id: str,\n categories: 
\"Iterable[CheckCategories]\",\n supported_resources: \"Iterable[str]\",\n guideline: str | None = None,\n ) -> None:\n super().__init__(\n name=name,\n id=id,\n categories=categories,\n supported_entities=supported_resources,\n block_type=\"resource\",\n guideline=guideline,\n )\n self.supported_resources = supported_resources\n arm_resource_registry.register(self)\n # leverage ARM checks to use with bicep runner\n bicep_registry.register(self)\n\n def scan_entity_conf(self, conf: dict[str, Any], entity_type: str) -> CheckResult: # type:ignore[override] # it's ok\n self.entity_type = entity_type\n\n # the \"existing\" key indicates a Bicep resource\n if \"existing\" in conf:\n if conf[\"existing\"] is True:\n # the existing keyword is used to retrieve information about an already deployed resource\n return CheckResult.UNKNOWN\n\n self.api_version = conf[\"api_version\"]\n conf[\"config\"][\"apiVersion\"] = conf[\"api_version\"] # set for better reusability of existing ARM checks\n\n resource_conf = conf[\"config\"]\n if \"loop_type\" in resource_conf:\n # this means the whole resource block is surrounded by a for loop\n resource_conf = resource_conf[\"config\"]\n\n return self.scan_resource_conf(resource_conf, entity_type) # type:ignore[no-any-return] # issue with multi_signature annotation\n\n self.api_version = None\n\n return self.scan_resource_conf(conf, entity_type) # type:ignore[no-any-return] # issue with multi_signature annotation\n\n @multi_signature()\n @abstractmethod\n def scan_resource_conf(self, conf: dict[str, Any], entity_type: str) -> CheckResult:\n raise NotImplementedError()\n\n @classmethod\n @scan_resource_conf.add_signature(args=[\"self\", \"conf\"])\n def _scan_resource_conf_self_conf(cls, wrapped: Callable[..., CheckResult]) -> Callable[..., CheckResult]:\n def wrapper(self: BaseCheck, conf: dict[str, Any], entity_type: str | None = None) -> CheckResult:\n # keep default argument for entity_type so old code, that doesn't set it, will work.\n return wrapped(self, conf)\n\n return wrapper\n", "path": "checkov/arm/base_resource_check.py"}]}
| 1,676 | 652 |
gh_patches_debug_23225
|
rasdani/github-patches
|
git_diff
|
replicate__cog-843
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Set python package version explicitly and expose in package
The cog python package sets version metadata but this has never been updated:
```python
In [1]: from importlib.metadata import version
In [2]: version('cog')
Out[2]: '0.0.1'
```
In addition, there's no `__version__` property on the package. This isn't essential but it would be nice to have this too:
```python
In [3]: import cog
In [4]: cog.__version__
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In [4], line 1
----> 1 cog.__version__
AttributeError: module 'cog' has no attribute '__version__'
```
It would be really nice to do this in a way that:
- returns the same version from both of the above
- returns the tagged version in tagged builds (e.g. `0.3.4`)
- appends git metadata when not on a tagged build (e.g. `0.3.4-dev+630e696`)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/cog/__init__.py`
Content:
```
1 from pydantic import BaseModel
2
3 from .predictor import BasePredictor
4 from .types import File, Input, Path
5
6 __all__ = [
7 "BaseModel",
8 "BasePredictor",
9 "File",
10 "Input",
11 "Path",
12 ]
13
```
Path: `python/setup.py`
Content:
```
1 import setuptools
2
3 with open("../README.md", "r", encoding="utf-8") as fh:
4 long_description = fh.read()
5
6
7 setuptools.setup(
8 name="cog",
9 version="0.0.1",
10 author_email="[email protected]",
11 description="Containers for machine learning",
12 long_description=long_description,
13 long_description_content_type="text/markdown",
14 url="https://github.com/replicate/cog",
15 license="Apache License 2.0",
16 python_requires=">=3.6.0",
17 install_requires=[
18 # intentionally loose. perhaps these should be vendored to not collide with user code?
19 "attrs>=20.1,<23",
20 "fastapi>=0.75.2,<1",
21 "opentelemetry-exporter-otlp>=1.11.1,<2",
22 "opentelemetry-sdk>=1.11.1,<2",
23 "protobuf<=3.20.3",
24 "pydantic>=1,<2",
25 "PyYAML",
26 "redis>=4,<5",
27 "requests>=2,<3",
28 "typing_extensions>=4.1.0",
29 "uvicorn[standard]>=0.12,<1",
30 ],
31 packages=setuptools.find_packages(),
32 )
33
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/python/cog/__init__.py b/python/cog/__init__.py
--- a/python/cog/__init__.py
+++ b/python/cog/__init__.py
@@ -3,7 +3,14 @@
from .predictor import BasePredictor
from .types import File, Input, Path
+try:
+ from ._version import __version__
+except ImportError:
+ __version__ = "0.0.0+unknown"
+
+
__all__ = [
+ "__version__",
"BaseModel",
"BasePredictor",
"File",
diff --git a/python/setup.py b/python/setup.py
deleted file mode 100644
--- a/python/setup.py
+++ /dev/null
@@ -1,32 +0,0 @@
-import setuptools
-
-with open("../README.md", "r", encoding="utf-8") as fh:
- long_description = fh.read()
-
-
-setuptools.setup(
- name="cog",
- version="0.0.1",
- author_email="[email protected]",
- description="Containers for machine learning",
- long_description=long_description,
- long_description_content_type="text/markdown",
- url="https://github.com/replicate/cog",
- license="Apache License 2.0",
- python_requires=">=3.6.0",
- install_requires=[
- # intentionally loose. perhaps these should be vendored to not collide with user code?
- "attrs>=20.1,<23",
- "fastapi>=0.75.2,<1",
- "opentelemetry-exporter-otlp>=1.11.1,<2",
- "opentelemetry-sdk>=1.11.1,<2",
- "protobuf<=3.20.3",
- "pydantic>=1,<2",
- "PyYAML",
- "redis>=4,<5",
- "requests>=2,<3",
- "typing_extensions>=4.1.0",
- "uvicorn[standard]>=0.12,<1",
- ],
- packages=setuptools.find_packages(),
-)
|
{"golden_diff": "diff --git a/python/cog/__init__.py b/python/cog/__init__.py\n--- a/python/cog/__init__.py\n+++ b/python/cog/__init__.py\n@@ -3,7 +3,14 @@\n from .predictor import BasePredictor\n from .types import File, Input, Path\n \n+try:\n+ from ._version import __version__\n+except ImportError:\n+ __version__ = \"0.0.0+unknown\"\n+\n+\n __all__ = [\n+ \"__version__\",\n \"BaseModel\",\n \"BasePredictor\",\n \"File\",\ndiff --git a/python/setup.py b/python/setup.py\ndeleted file mode 100644\n--- a/python/setup.py\n+++ /dev/null\n@@ -1,32 +0,0 @@\n-import setuptools\n-\n-with open(\"../README.md\", \"r\", encoding=\"utf-8\") as fh:\n- long_description = fh.read()\n-\n-\n-setuptools.setup(\n- name=\"cog\",\n- version=\"0.0.1\",\n- author_email=\"[email protected]\",\n- description=\"Containers for machine learning\",\n- long_description=long_description,\n- long_description_content_type=\"text/markdown\",\n- url=\"https://github.com/replicate/cog\",\n- license=\"Apache License 2.0\",\n- python_requires=\">=3.6.0\",\n- install_requires=[\n- # intentionally loose. perhaps these should be vendored to not collide with user code?\n- \"attrs>=20.1,<23\",\n- \"fastapi>=0.75.2,<1\",\n- \"opentelemetry-exporter-otlp>=1.11.1,<2\",\n- \"opentelemetry-sdk>=1.11.1,<2\",\n- \"protobuf<=3.20.3\",\n- \"pydantic>=1,<2\",\n- \"PyYAML\",\n- \"redis>=4,<5\",\n- \"requests>=2,<3\",\n- \"typing_extensions>=4.1.0\",\n- \"uvicorn[standard]>=0.12,<1\",\n- ],\n- packages=setuptools.find_packages(),\n-)\n", "issue": "Set python package version explicitly and expose in package\nThe cog python package sets version metadata but this has never been updated:\r\n\r\n```python\r\nIn [1]: from importlib.metadata import version\r\n\r\nIn [2]: version('cog')\r\nOut[2]: '0.0.1'\r\n```\r\n\r\nIn addition, there's no `__version__` property on the package. This isn't essential but it would be nice to have this too:\r\n\r\n```python\r\nIn [3]: import cog\r\n\r\nIn [4]: cog.__version__\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\nCell In [4], line 1\r\n----> 1 cog.__version__\r\n\r\nAttributeError: module 'cog' has no attribute '__version__'\r\n```\r\n\r\nIt would be really nice to do this in a way that:\r\n\r\n- returns the same version from both of the above\r\n- returns the tagged version in tagged builds (e.g. `0.3.4`)\r\n- appends git metadata when not on a tagged build (e.g. `0.3.4-dev+630e696`)\r\n\r\n\n", "before_files": [{"content": "from pydantic import BaseModel\n\nfrom .predictor import BasePredictor\nfrom .types import File, Input, Path\n\n__all__ = [\n \"BaseModel\",\n \"BasePredictor\",\n \"File\",\n \"Input\",\n \"Path\",\n]\n", "path": "python/cog/__init__.py"}, {"content": "import setuptools\n\nwith open(\"../README.md\", \"r\", encoding=\"utf-8\") as fh:\n long_description = fh.read()\n\n\nsetuptools.setup(\n name=\"cog\",\n version=\"0.0.1\",\n author_email=\"[email protected]\",\n description=\"Containers for machine learning\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/replicate/cog\",\n license=\"Apache License 2.0\",\n python_requires=\">=3.6.0\",\n install_requires=[\n # intentionally loose. 
perhaps these should be vendored to not collide with user code?\n \"attrs>=20.1,<23\",\n \"fastapi>=0.75.2,<1\",\n \"opentelemetry-exporter-otlp>=1.11.1,<2\",\n \"opentelemetry-sdk>=1.11.1,<2\",\n \"protobuf<=3.20.3\",\n \"pydantic>=1,<2\",\n \"PyYAML\",\n \"redis>=4,<5\",\n \"requests>=2,<3\",\n \"typing_extensions>=4.1.0\",\n \"uvicorn[standard]>=0.12,<1\",\n ],\n packages=setuptools.find_packages(),\n)\n", "path": "python/setup.py"}], "after_files": [{"content": "from pydantic import BaseModel\n\nfrom .predictor import BasePredictor\nfrom .types import File, Input, Path\n\ntry:\n from ._version import __version__\nexcept ImportError:\n __version__ = \"0.0.0+unknown\"\n\n\n__all__ = [\n \"__version__\",\n \"BaseModel\",\n \"BasePredictor\",\n \"File\",\n \"Input\",\n \"Path\",\n]\n", "path": "python/cog/__init__.py"}, {"content": null, "path": "python/setup.py"}]}
| 920 | 481 |
gh_patches_debug_28138
|
rasdani/github-patches
|
git_diff
|
open-mmlab__mmdetection-1050
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
When resize_keep_ratio is False, rescaling for masks does not work.
Thanks for your error report and we appreciate it a lot.
**Checklist**
1. I have searched related issues but cannot get the expected help.
2. The bug has not been fixed in the latest version.
**Describe the bug**
When `resize_keep_ratio=False`, rescaling for masks in loading the dataset will not work. The error is:
```
Scale must be a number or tuple of int, but got <class 'numpy.ndarray'>
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mmdet/datasets/transforms.py`
Content:
```
1 import mmcv
2 import numpy as np
3 import torch
4
5 __all__ = [
6 'ImageTransform', 'BboxTransform', 'MaskTransform', 'SegMapTransform',
7 'Numpy2Tensor'
8 ]
9
10
11 class ImageTransform(object):
12 """Preprocess an image.
13
14 1. rescale the image to expected size
15 2. normalize the image
16 3. flip the image (if needed)
17 4. pad the image (if needed)
18 5. transpose to (c, h, w)
19 """
20
21 def __init__(self,
22 mean=(0, 0, 0),
23 std=(1, 1, 1),
24 to_rgb=True,
25 size_divisor=None):
26 self.mean = np.array(mean, dtype=np.float32)
27 self.std = np.array(std, dtype=np.float32)
28 self.to_rgb = to_rgb
29 self.size_divisor = size_divisor
30
31 def __call__(self, img, scale, flip=False, keep_ratio=True):
32 if keep_ratio:
33 img, scale_factor = mmcv.imrescale(img, scale, return_scale=True)
34 else:
35 img, w_scale, h_scale = mmcv.imresize(
36 img, scale, return_scale=True)
37 scale_factor = np.array(
38 [w_scale, h_scale, w_scale, h_scale], dtype=np.float32)
39 img_shape = img.shape
40 img = mmcv.imnormalize(img, self.mean, self.std, self.to_rgb)
41 if flip:
42 img = mmcv.imflip(img)
43 if self.size_divisor is not None:
44 img = mmcv.impad_to_multiple(img, self.size_divisor)
45 pad_shape = img.shape
46 else:
47 pad_shape = img_shape
48 img = img.transpose(2, 0, 1)
49 return img, img_shape, pad_shape, scale_factor
50
51
52 def bbox_flip(bboxes, img_shape):
53 """Flip bboxes horizontally.
54
55 Args:
56 bboxes(ndarray): shape (..., 4*k)
57 img_shape(tuple): (height, width)
58 """
59 assert bboxes.shape[-1] % 4 == 0
60 w = img_shape[1]
61 flipped = bboxes.copy()
62 flipped[..., 0::4] = w - bboxes[..., 2::4] - 1
63 flipped[..., 2::4] = w - bboxes[..., 0::4] - 1
64 return flipped
65
66
67 class BboxTransform(object):
68 """Preprocess gt bboxes.
69
70 1. rescale bboxes according to image size
71 2. flip bboxes (if needed)
72 3. pad the first dimension to `max_num_gts`
73 """
74
75 def __init__(self, max_num_gts=None):
76 self.max_num_gts = max_num_gts
77
78 def __call__(self, bboxes, img_shape, scale_factor, flip=False):
79 gt_bboxes = bboxes * scale_factor
80 if flip:
81 gt_bboxes = bbox_flip(gt_bboxes, img_shape)
82 gt_bboxes[:, 0::2] = np.clip(gt_bboxes[:, 0::2], 0, img_shape[1] - 1)
83 gt_bboxes[:, 1::2] = np.clip(gt_bboxes[:, 1::2], 0, img_shape[0] - 1)
84 if self.max_num_gts is None:
85 return gt_bboxes
86 else:
87 num_gts = gt_bboxes.shape[0]
88 padded_bboxes = np.zeros((self.max_num_gts, 4), dtype=np.float32)
89 padded_bboxes[:num_gts, :] = gt_bboxes
90 return padded_bboxes
91
92
93 class MaskTransform(object):
94 """Preprocess masks.
95
96 1. resize masks to expected size and stack to a single array
97 2. flip the masks (if needed)
98 3. pad the masks (if needed)
99 """
100
101 def __call__(self, masks, pad_shape, scale_factor, flip=False):
102 masks = [
103 mmcv.imrescale(mask, scale_factor, interpolation='nearest')
104 for mask in masks
105 ]
106 if flip:
107 masks = [mask[:, ::-1] for mask in masks]
108 padded_masks = [
109 mmcv.impad(mask, pad_shape[:2], pad_val=0) for mask in masks
110 ]
111 padded_masks = np.stack(padded_masks, axis=0)
112 return padded_masks
113
114
115 class SegMapTransform(object):
116 """Preprocess semantic segmentation maps.
117
118 1. rescale the segmentation map to expected size
119 3. flip the image (if needed)
120 4. pad the image (if needed)
121 """
122
123 def __init__(self, size_divisor=None):
124 self.size_divisor = size_divisor
125
126 def __call__(self, img, scale, flip=False, keep_ratio=True):
127 if keep_ratio:
128 img = mmcv.imrescale(img, scale, interpolation='nearest')
129 else:
130 img = mmcv.imresize(img, scale, interpolation='nearest')
131 if flip:
132 img = mmcv.imflip(img)
133 if self.size_divisor is not None:
134 img = mmcv.impad_to_multiple(img, self.size_divisor)
135 return img
136
137
138 class Numpy2Tensor(object):
139
140 def __init__(self):
141 pass
142
143 def __call__(self, *args):
144 if len(args) == 1:
145 return torch.from_numpy(args[0])
146 else:
147 return tuple([torch.from_numpy(np.array(array)) for array in args])
148
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mmdet/datasets/transforms.py b/mmdet/datasets/transforms.py
--- a/mmdet/datasets/transforms.py
+++ b/mmdet/datasets/transforms.py
@@ -34,8 +34,8 @@
else:
img, w_scale, h_scale = mmcv.imresize(
img, scale, return_scale=True)
- scale_factor = np.array(
- [w_scale, h_scale, w_scale, h_scale], dtype=np.float32)
+ scale_factor = np.array([w_scale, h_scale, w_scale, h_scale],
+ dtype=np.float32)
img_shape = img.shape
img = mmcv.imnormalize(img, self.mean, self.std, self.to_rgb)
if flip:
@@ -99,10 +99,24 @@
"""
def __call__(self, masks, pad_shape, scale_factor, flip=False):
- masks = [
- mmcv.imrescale(mask, scale_factor, interpolation='nearest')
- for mask in masks
- ]
+ # aspect ratio unchanged
+ if isinstance(scale_factor, float):
+ masks = [
+ mmcv.imrescale(mask, scale_factor, interpolation='nearest')
+ for mask in masks
+ ]
+ # aspect ratio changed
+ else:
+ w_ratio, h_ratio = scale_factor[:2]
+ if masks:
+ h, w = masks[0].shape[:2]
+ new_h = int(np.round(h * h_ratio))
+ new_w = int(np.round(w * w_ratio))
+ new_size = (new_w, new_h)
+ masks = [
+ mmcv.imresize(mask, new_size, interpolation='nearest')
+ for mask in masks
+ ]
if flip:
masks = [mask[:, ::-1] for mask in masks]
padded_masks = [
|
{"golden_diff": "diff --git a/mmdet/datasets/transforms.py b/mmdet/datasets/transforms.py\n--- a/mmdet/datasets/transforms.py\n+++ b/mmdet/datasets/transforms.py\n@@ -34,8 +34,8 @@\n else:\n img, w_scale, h_scale = mmcv.imresize(\n img, scale, return_scale=True)\n- scale_factor = np.array(\n- [w_scale, h_scale, w_scale, h_scale], dtype=np.float32)\n+ scale_factor = np.array([w_scale, h_scale, w_scale, h_scale],\n+ dtype=np.float32)\n img_shape = img.shape\n img = mmcv.imnormalize(img, self.mean, self.std, self.to_rgb)\n if flip:\n@@ -99,10 +99,24 @@\n \"\"\"\n \n def __call__(self, masks, pad_shape, scale_factor, flip=False):\n- masks = [\n- mmcv.imrescale(mask, scale_factor, interpolation='nearest')\n- for mask in masks\n- ]\n+ # aspect ratio unchanged\n+ if isinstance(scale_factor, float):\n+ masks = [\n+ mmcv.imrescale(mask, scale_factor, interpolation='nearest')\n+ for mask in masks\n+ ]\n+ # aspect ratio changed\n+ else:\n+ w_ratio, h_ratio = scale_factor[:2]\n+ if masks:\n+ h, w = masks[0].shape[:2]\n+ new_h = int(np.round(h * h_ratio))\n+ new_w = int(np.round(w * w_ratio))\n+ new_size = (new_w, new_h)\n+ masks = [\n+ mmcv.imresize(mask, new_size, interpolation='nearest')\n+ for mask in masks\n+ ]\n if flip:\n masks = [mask[:, ::-1] for mask in masks]\n padded_masks = [\n", "issue": "When resize_keep_ratio is False, rescaling for masks does not work.\nThanks for your error report and we appreciate it a lot.\r\n\r\n**Checklist**\r\n1. I have searched related issues but cannot get the expected help.\r\n2. The bug has not been fixed in the latest version.\r\n\r\n**Describe the bug**\r\n\r\nWhen `resize_keep_ratio=False`, rescaling for masks in loading the dataset will not work. The error is:\r\n```\r\nScale must be a number or tuple of int, but got <class 'numpy.ndarray'>\r\n```\r\n\n", "before_files": [{"content": "import mmcv\nimport numpy as np\nimport torch\n\n__all__ = [\n 'ImageTransform', 'BboxTransform', 'MaskTransform', 'SegMapTransform',\n 'Numpy2Tensor'\n]\n\n\nclass ImageTransform(object):\n \"\"\"Preprocess an image.\n\n 1. rescale the image to expected size\n 2. normalize the image\n 3. flip the image (if needed)\n 4. pad the image (if needed)\n 5. 
transpose to (c, h, w)\n \"\"\"\n\n def __init__(self,\n mean=(0, 0, 0),\n std=(1, 1, 1),\n to_rgb=True,\n size_divisor=None):\n self.mean = np.array(mean, dtype=np.float32)\n self.std = np.array(std, dtype=np.float32)\n self.to_rgb = to_rgb\n self.size_divisor = size_divisor\n\n def __call__(self, img, scale, flip=False, keep_ratio=True):\n if keep_ratio:\n img, scale_factor = mmcv.imrescale(img, scale, return_scale=True)\n else:\n img, w_scale, h_scale = mmcv.imresize(\n img, scale, return_scale=True)\n scale_factor = np.array(\n [w_scale, h_scale, w_scale, h_scale], dtype=np.float32)\n img_shape = img.shape\n img = mmcv.imnormalize(img, self.mean, self.std, self.to_rgb)\n if flip:\n img = mmcv.imflip(img)\n if self.size_divisor is not None:\n img = mmcv.impad_to_multiple(img, self.size_divisor)\n pad_shape = img.shape\n else:\n pad_shape = img_shape\n img = img.transpose(2, 0, 1)\n return img, img_shape, pad_shape, scale_factor\n\n\ndef bbox_flip(bboxes, img_shape):\n \"\"\"Flip bboxes horizontally.\n\n Args:\n bboxes(ndarray): shape (..., 4*k)\n img_shape(tuple): (height, width)\n \"\"\"\n assert bboxes.shape[-1] % 4 == 0\n w = img_shape[1]\n flipped = bboxes.copy()\n flipped[..., 0::4] = w - bboxes[..., 2::4] - 1\n flipped[..., 2::4] = w - bboxes[..., 0::4] - 1\n return flipped\n\n\nclass BboxTransform(object):\n \"\"\"Preprocess gt bboxes.\n\n 1. rescale bboxes according to image size\n 2. flip bboxes (if needed)\n 3. pad the first dimension to `max_num_gts`\n \"\"\"\n\n def __init__(self, max_num_gts=None):\n self.max_num_gts = max_num_gts\n\n def __call__(self, bboxes, img_shape, scale_factor, flip=False):\n gt_bboxes = bboxes * scale_factor\n if flip:\n gt_bboxes = bbox_flip(gt_bboxes, img_shape)\n gt_bboxes[:, 0::2] = np.clip(gt_bboxes[:, 0::2], 0, img_shape[1] - 1)\n gt_bboxes[:, 1::2] = np.clip(gt_bboxes[:, 1::2], 0, img_shape[0] - 1)\n if self.max_num_gts is None:\n return gt_bboxes\n else:\n num_gts = gt_bboxes.shape[0]\n padded_bboxes = np.zeros((self.max_num_gts, 4), dtype=np.float32)\n padded_bboxes[:num_gts, :] = gt_bboxes\n return padded_bboxes\n\n\nclass MaskTransform(object):\n \"\"\"Preprocess masks.\n\n 1. resize masks to expected size and stack to a single array\n 2. flip the masks (if needed)\n 3. pad the masks (if needed)\n \"\"\"\n\n def __call__(self, masks, pad_shape, scale_factor, flip=False):\n masks = [\n mmcv.imrescale(mask, scale_factor, interpolation='nearest')\n for mask in masks\n ]\n if flip:\n masks = [mask[:, ::-1] for mask in masks]\n padded_masks = [\n mmcv.impad(mask, pad_shape[:2], pad_val=0) for mask in masks\n ]\n padded_masks = np.stack(padded_masks, axis=0)\n return padded_masks\n\n\nclass SegMapTransform(object):\n \"\"\"Preprocess semantic segmentation maps.\n\n 1. rescale the segmentation map to expected size\n 3. flip the image (if needed)\n 4. 
pad the image (if needed)\n \"\"\"\n\n def __init__(self, size_divisor=None):\n self.size_divisor = size_divisor\n\n def __call__(self, img, scale, flip=False, keep_ratio=True):\n if keep_ratio:\n img = mmcv.imrescale(img, scale, interpolation='nearest')\n else:\n img = mmcv.imresize(img, scale, interpolation='nearest')\n if flip:\n img = mmcv.imflip(img)\n if self.size_divisor is not None:\n img = mmcv.impad_to_multiple(img, self.size_divisor)\n return img\n\n\nclass Numpy2Tensor(object):\n\n def __init__(self):\n pass\n\n def __call__(self, *args):\n if len(args) == 1:\n return torch.from_numpy(args[0])\n else:\n return tuple([torch.from_numpy(np.array(array)) for array in args])\n", "path": "mmdet/datasets/transforms.py"}], "after_files": [{"content": "import mmcv\nimport numpy as np\nimport torch\n\n__all__ = [\n 'ImageTransform', 'BboxTransform', 'MaskTransform', 'SegMapTransform',\n 'Numpy2Tensor'\n]\n\n\nclass ImageTransform(object):\n \"\"\"Preprocess an image.\n\n 1. rescale the image to expected size\n 2. normalize the image\n 3. flip the image (if needed)\n 4. pad the image (if needed)\n 5. transpose to (c, h, w)\n \"\"\"\n\n def __init__(self,\n mean=(0, 0, 0),\n std=(1, 1, 1),\n to_rgb=True,\n size_divisor=None):\n self.mean = np.array(mean, dtype=np.float32)\n self.std = np.array(std, dtype=np.float32)\n self.to_rgb = to_rgb\n self.size_divisor = size_divisor\n\n def __call__(self, img, scale, flip=False, keep_ratio=True):\n if keep_ratio:\n img, scale_factor = mmcv.imrescale(img, scale, return_scale=True)\n else:\n img, w_scale, h_scale = mmcv.imresize(\n img, scale, return_scale=True)\n scale_factor = np.array([w_scale, h_scale, w_scale, h_scale],\n dtype=np.float32)\n img_shape = img.shape\n img = mmcv.imnormalize(img, self.mean, self.std, self.to_rgb)\n if flip:\n img = mmcv.imflip(img)\n if self.size_divisor is not None:\n img = mmcv.impad_to_multiple(img, self.size_divisor)\n pad_shape = img.shape\n else:\n pad_shape = img_shape\n img = img.transpose(2, 0, 1)\n return img, img_shape, pad_shape, scale_factor\n\n\ndef bbox_flip(bboxes, img_shape):\n \"\"\"Flip bboxes horizontally.\n\n Args:\n bboxes(ndarray): shape (..., 4*k)\n img_shape(tuple): (height, width)\n \"\"\"\n assert bboxes.shape[-1] % 4 == 0\n w = img_shape[1]\n flipped = bboxes.copy()\n flipped[..., 0::4] = w - bboxes[..., 2::4] - 1\n flipped[..., 2::4] = w - bboxes[..., 0::4] - 1\n return flipped\n\n\nclass BboxTransform(object):\n \"\"\"Preprocess gt bboxes.\n\n 1. rescale bboxes according to image size\n 2. flip bboxes (if needed)\n 3. pad the first dimension to `max_num_gts`\n \"\"\"\n\n def __init__(self, max_num_gts=None):\n self.max_num_gts = max_num_gts\n\n def __call__(self, bboxes, img_shape, scale_factor, flip=False):\n gt_bboxes = bboxes * scale_factor\n if flip:\n gt_bboxes = bbox_flip(gt_bboxes, img_shape)\n gt_bboxes[:, 0::2] = np.clip(gt_bboxes[:, 0::2], 0, img_shape[1] - 1)\n gt_bboxes[:, 1::2] = np.clip(gt_bboxes[:, 1::2], 0, img_shape[0] - 1)\n if self.max_num_gts is None:\n return gt_bboxes\n else:\n num_gts = gt_bboxes.shape[0]\n padded_bboxes = np.zeros((self.max_num_gts, 4), dtype=np.float32)\n padded_bboxes[:num_gts, :] = gt_bboxes\n return padded_bboxes\n\n\nclass MaskTransform(object):\n \"\"\"Preprocess masks.\n\n 1. resize masks to expected size and stack to a single array\n 2. flip the masks (if needed)\n 3. 
pad the masks (if needed)\n \"\"\"\n\n def __call__(self, masks, pad_shape, scale_factor, flip=False):\n # aspect ratio unchanged\n if isinstance(scale_factor, float):\n masks = [\n mmcv.imrescale(mask, scale_factor, interpolation='nearest')\n for mask in masks\n ]\n # aspect ratio changed\n else:\n w_ratio, h_ratio = scale_factor[:2]\n if masks:\n h, w = masks[0].shape[:2]\n new_h = int(np.round(h * h_ratio))\n new_w = int(np.round(w * w_ratio))\n new_size = (new_w, new_h)\n masks = [\n mmcv.imresize(mask, new_size, interpolation='nearest')\n for mask in masks\n ]\n if flip:\n masks = [mask[:, ::-1] for mask in masks]\n padded_masks = [\n mmcv.impad(mask, pad_shape[:2], pad_val=0) for mask in masks\n ]\n padded_masks = np.stack(padded_masks, axis=0)\n return padded_masks\n\n\nclass SegMapTransform(object):\n \"\"\"Preprocess semantic segmentation maps.\n\n 1. rescale the segmentation map to expected size\n 3. flip the image (if needed)\n 4. pad the image (if needed)\n \"\"\"\n\n def __init__(self, size_divisor=None):\n self.size_divisor = size_divisor\n\n def __call__(self, img, scale, flip=False, keep_ratio=True):\n if keep_ratio:\n img = mmcv.imrescale(img, scale, interpolation='nearest')\n else:\n img = mmcv.imresize(img, scale, interpolation='nearest')\n if flip:\n img = mmcv.imflip(img)\n if self.size_divisor is not None:\n img = mmcv.impad_to_multiple(img, self.size_divisor)\n return img\n\n\nclass Numpy2Tensor(object):\n\n def __init__(self):\n pass\n\n def __call__(self, *args):\n if len(args) == 1:\n return torch.from_numpy(args[0])\n else:\n return tuple([torch.from_numpy(np.array(array)) for array in args])\n", "path": "mmdet/datasets/transforms.py"}]}
| 1,933 | 417 |
gh_patches_debug_656
|
rasdani/github-patches
|
git_diff
|
pex-tool__pex-2081
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.126
On the docket:
+ [x] Resolve sdist builds can race and fail. #2078
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.125"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.125"
+__version__ = "2.1.126"
|
{"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.125\"\n+__version__ = \"2.1.126\"\n", "issue": "Release 2.1.126\nOn the docket:\r\n+ [x] Resolve sdist builds can race and fail. #2078 \n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.125\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.126\"\n", "path": "pex/version.py"}]}
| 342 | 98 |
gh_patches_debug_22628
|
rasdani/github-patches
|
git_diff
|
beetbox__beets-5160
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`unimported`'s `ignore_subdirectories` doesn't work
### Problem
```sh
beet unimported
```
Leads to directories specified in `ignore_subdirectories` still being listed
### Setup
* OS: Arch Linux
* Python version: 3.11.7
* beets version: 1.6.1
* Turning off plugins made problem go away (yes/no): n/a
My configuration (output of `beet config`) is:
```yaml
unimported:
ignore_extensions: jpg png txt md org mod
ignore_subdirectories: Unsorted import
```
`ignore_extensions` works as expected though
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `beetsplug/unimported.py`
Content:
```
1 # This file is part of beets.
2 # Copyright 2019, Joris Jensen
3 #
4 # Permission is hereby granted, free of charge, to any person obtaining
5 # a copy of this software and associated documentation files (the
6 # "Software"), to deal in the Software without restriction, including
7 # without limitation the rights to use, copy, modify, merge, publish,
8 # distribute, sublicense, and/or sell copies of the Software, and to
9 # permit persons to whom the Software is furnished to do so, subject to
10 # the following conditions:
11 #
12 # The above copyright notice and this permission notice shall be
13 # included in all copies or substantial portions of the Software.
14
15 """
16 List all files in the library folder which are not listed in the
17 beets library database, including art files
18 """
19
20 import os
21
22 from beets import util
23 from beets.plugins import BeetsPlugin
24 from beets.ui import Subcommand, print_
25
26 __author__ = "https://github.com/MrNuggelz"
27
28
29 class Unimported(BeetsPlugin):
30 def __init__(self):
31 super().__init__()
32 self.config.add({"ignore_extensions": [], "ignore_subdirectories": []})
33
34 def commands(self):
35 def print_unimported(lib, opts, args):
36 ignore_exts = [
37 ("." + x).encode()
38 for x in self.config["ignore_extensions"].as_str_seq()
39 ]
40 ignore_dirs = [
41 os.path.join(lib.directory, x.encode())
42 for x in self.config["ignore_subdirectories"].as_str_seq()
43 ]
44 in_folder = {
45 os.path.join(r, file)
46 for r, d, f in os.walk(lib.directory)
47 for file in f
48 if not any(
49 [file.endswith(ext) for ext in ignore_exts]
50 + [r in ignore_dirs]
51 )
52 }
53 in_library = {x.path for x in lib.items()}
54 art_files = {x.artpath for x in lib.albums()}
55 for f in in_folder - in_library - art_files:
56 print_(util.displayable_path(f))
57
58 unimported = Subcommand(
59 "unimported",
60 help="list all files in the library folder which are not listed"
61 " in the beets library database",
62 )
63 unimported.func = print_unimported
64 return [unimported]
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/beetsplug/unimported.py b/beetsplug/unimported.py
--- a/beetsplug/unimported.py
+++ b/beetsplug/unimported.py
@@ -41,15 +41,17 @@
os.path.join(lib.directory, x.encode())
for x in self.config["ignore_subdirectories"].as_str_seq()
]
- in_folder = {
- os.path.join(r, file)
- for r, d, f in os.walk(lib.directory)
- for file in f
- if not any(
- [file.endswith(ext) for ext in ignore_exts]
- + [r in ignore_dirs]
- )
- }
+ in_folder = set()
+ for root, _, files in os.walk(lib.directory):
+ # do not traverse if root is a child of an ignored directory
+ if any(root.startswith(ignored) for ignored in ignore_dirs):
+ continue
+ for file in files:
+ # ignore files with ignored extensions
+ if any(file.endswith(ext) for ext in ignore_exts):
+ continue
+ in_folder.add(os.path.join(root, file))
+
in_library = {x.path for x in lib.items()}
art_files = {x.artpath for x in lib.albums()}
for f in in_folder - in_library - art_files:
|
{"golden_diff": "diff --git a/beetsplug/unimported.py b/beetsplug/unimported.py\n--- a/beetsplug/unimported.py\n+++ b/beetsplug/unimported.py\n@@ -41,15 +41,17 @@\n os.path.join(lib.directory, x.encode())\n for x in self.config[\"ignore_subdirectories\"].as_str_seq()\n ]\n- in_folder = {\n- os.path.join(r, file)\n- for r, d, f in os.walk(lib.directory)\n- for file in f\n- if not any(\n- [file.endswith(ext) for ext in ignore_exts]\n- + [r in ignore_dirs]\n- )\n- }\n+ in_folder = set()\n+ for root, _, files in os.walk(lib.directory):\n+ # do not traverse if root is a child of an ignored directory\n+ if any(root.startswith(ignored) for ignored in ignore_dirs):\n+ continue\n+ for file in files:\n+ # ignore files with ignored extensions\n+ if any(file.endswith(ext) for ext in ignore_exts):\n+ continue\n+ in_folder.add(os.path.join(root, file))\n+\n in_library = {x.path for x in lib.items()}\n art_files = {x.artpath for x in lib.albums()}\n for f in in_folder - in_library - art_files:\n", "issue": "`unimported`'s `ignore_subdirectories` doesn't work\n### Problem\r\n\r\n\r\n```sh\r\nbeet unimported\r\n```\r\n\r\nLeads to directories specified in `ignore_subdirectories` still being listed\r\n\r\n### Setup\r\n\r\n* OS: Arch Linux\r\n* Python version: 3.11.7 \r\n* beets version: 1.6.1\r\n* Turning off plugins made problem go away (yes/no): n/a\r\n\r\nMy configuration (output of `beet config`) is:\r\n\r\n```yaml\r\nunimported:\r\n ignore_extensions: jpg png txt md org mod\r\n ignore_subdirectories: Unsorted import\r\n```\r\n`ignore_extensions` works as expected though\r\n\r\n\n", "before_files": [{"content": "# This file is part of beets.\n# Copyright 2019, Joris Jensen\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"\nList all files in the library folder which are not listed in the\n beets library database, including art files\n\"\"\"\n\nimport os\n\nfrom beets import util\nfrom beets.plugins import BeetsPlugin\nfrom beets.ui import Subcommand, print_\n\n__author__ = \"https://github.com/MrNuggelz\"\n\n\nclass Unimported(BeetsPlugin):\n def __init__(self):\n super().__init__()\n self.config.add({\"ignore_extensions\": [], \"ignore_subdirectories\": []})\n\n def commands(self):\n def print_unimported(lib, opts, args):\n ignore_exts = [\n (\".\" + x).encode()\n for x in self.config[\"ignore_extensions\"].as_str_seq()\n ]\n ignore_dirs = [\n os.path.join(lib.directory, x.encode())\n for x in self.config[\"ignore_subdirectories\"].as_str_seq()\n ]\n in_folder = {\n os.path.join(r, file)\n for r, d, f in os.walk(lib.directory)\n for file in f\n if not any(\n [file.endswith(ext) for ext in ignore_exts]\n + [r in ignore_dirs]\n )\n }\n in_library = {x.path for x in lib.items()}\n art_files = {x.artpath for x in lib.albums()}\n for f in in_folder - in_library - art_files:\n print_(util.displayable_path(f))\n\n unimported = Subcommand(\n \"unimported\",\n help=\"list all files in the library folder which are not listed\"\n \" in the beets library database\",\n 
)\n unimported.func = print_unimported\n return [unimported]\n", "path": "beetsplug/unimported.py"}], "after_files": [{"content": "# This file is part of beets.\n# Copyright 2019, Joris Jensen\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"\nList all files in the library folder which are not listed in the\n beets library database, including art files\n\"\"\"\n\nimport os\n\nfrom beets import util\nfrom beets.plugins import BeetsPlugin\nfrom beets.ui import Subcommand, print_\n\n__author__ = \"https://github.com/MrNuggelz\"\n\n\nclass Unimported(BeetsPlugin):\n def __init__(self):\n super().__init__()\n self.config.add({\"ignore_extensions\": [], \"ignore_subdirectories\": []})\n\n def commands(self):\n def print_unimported(lib, opts, args):\n ignore_exts = [\n (\".\" + x).encode()\n for x in self.config[\"ignore_extensions\"].as_str_seq()\n ]\n ignore_dirs = [\n os.path.join(lib.directory, x.encode())\n for x in self.config[\"ignore_subdirectories\"].as_str_seq()\n ]\n in_folder = set()\n for root, _, files in os.walk(lib.directory):\n # do not traverse if root is a child of an ignored directory\n if any(root.startswith(ignored) for ignored in ignore_dirs):\n continue\n for file in files:\n # ignore files with ignored extensions\n if any(file.endswith(ext) for ext in ignore_exts):\n continue\n in_folder.add(os.path.join(root, file))\n\n in_library = {x.path for x in lib.items()}\n art_files = {x.artpath for x in lib.albums()}\n for f in in_folder - in_library - art_files:\n print_(util.displayable_path(f))\n\n unimported = Subcommand(\n \"unimported\",\n help=\"list all files in the library folder which are not listed\"\n \" in the beets library database\",\n )\n unimported.func = print_unimported\n return [unimported]\n", "path": "beetsplug/unimported.py"}]}
| 1,032 | 296 |
gh_patches_debug_51699
|
rasdani/github-patches
|
git_diff
|
comic__grand-challenge.org-2885
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The Phase (algorithm) input and output selects are annoying to use in the admin
A select 2 widget would be better.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/grandchallenge/evaluation/admin.py`
Content:
```
1 from django.contrib import admin
2 from django.core.exceptions import ObjectDoesNotExist, ValidationError
3 from django.forms import ModelForm
4
5 from grandchallenge.challenges.models import ChallengeRequest
6 from grandchallenge.components.admin import (
7 ComponentImageAdmin,
8 cancel_jobs,
9 deprovision_jobs,
10 requeue_jobs,
11 )
12 from grandchallenge.core.admin import (
13 GroupObjectPermissionAdmin,
14 UserObjectPermissionAdmin,
15 )
16 from grandchallenge.core.templatetags.remove_whitespace import oxford_comma
17 from grandchallenge.evaluation.models import (
18 Evaluation,
19 EvaluationGroupObjectPermission,
20 EvaluationUserObjectPermission,
21 Method,
22 MethodGroupObjectPermission,
23 MethodUserObjectPermission,
24 Phase,
25 PhaseGroupObjectPermission,
26 PhaseUserObjectPermission,
27 Submission,
28 SubmissionGroupObjectPermission,
29 SubmissionUserObjectPermission,
30 )
31 from grandchallenge.evaluation.tasks import create_evaluation
32 from grandchallenge.evaluation.utils import SubmissionKindChoices
33
34
35 class PhaseAdminForm(ModelForm):
36 class Meta:
37 model = Phase
38 fields = "__all__"
39
40 def clean(self):
41 cleaned_data = super().clean()
42
43 duplicate_interfaces = {
44 *cleaned_data.get("algorithm_inputs", [])
45 }.intersection({*cleaned_data.get("algorithm_outputs", [])})
46
47 if duplicate_interfaces:
48 raise ValidationError(
49 f"The sets of Algorithm Inputs and Algorithm Outputs must be unique: "
50 f"{oxford_comma(duplicate_interfaces)} present in both"
51 )
52
53 submission_kind = cleaned_data["submission_kind"]
54 total_number_of_submissions_allowed = cleaned_data[
55 "total_number_of_submissions_allowed"
56 ]
57
58 if (
59 submission_kind == SubmissionKindChoices.ALGORITHM
60 and not total_number_of_submissions_allowed
61 ):
62 try:
63 request = ChallengeRequest.objects.get(
64 short_name=self.instance.challenge.short_name
65 )
66 error_addition = f"The corresponding challenge request lists the following limits: Preliminary phase: {request.phase_1_number_of_submissions_per_team * request.expected_number_of_teams} Final test phase: {request.phase_2_number_of_submissions_per_team * request.expected_number_of_teams}. Set the limits according to the phase type. "
67 except ObjectDoesNotExist:
68 error_addition = "There is no corresponding challenge request."
69 raise ValidationError(
70 "For phases that take an algorithm as submission input, "
71 "the total_number_of_submissions_allowed needs to be set. "
72 + error_addition
73 )
74
75 return cleaned_data
76
77
78 @admin.register(Phase)
79 class PhaseAdmin(admin.ModelAdmin):
80 ordering = ("challenge",)
81 list_display = (
82 "slug",
83 "title",
84 "challenge",
85 "submission_kind",
86 "open_for_submissions",
87 "submissions_open_at",
88 "submissions_close_at",
89 "submissions_limit_per_user_per_period",
90 )
91 search_fields = ("pk", "title", "slug", "challenge__short_name")
92 list_filter = (
93 "submission_kind",
94 "challenge__short_name",
95 )
96 form = PhaseAdminForm
97
98 @admin.display(boolean=True)
99 def open_for_submissions(self, instance):
100 return instance.open_for_submissions
101
102
103 @admin.action(
104 description="Reevaluate selected submissions",
105 permissions=("change",),
106 )
107 def reevaluate_submissions(modeladmin, request, queryset):
108 """Creates a new evaluation for an existing submission"""
109 for submission in queryset:
110 create_evaluation.apply_async(
111 kwargs={"submission_pk": str(submission.pk)}
112 )
113
114
115 @admin.register(Submission)
116 class SubmissionAdmin(admin.ModelAdmin):
117 ordering = ("-created",)
118 list_display = ("pk", "created", "phase", "creator")
119 list_filter = ("phase__challenge__short_name",)
120 search_fields = ("pk", "creator__username", "phase__slug")
121 readonly_fields = (
122 "creator",
123 "phase",
124 "predictions_file",
125 "algorithm_image",
126 )
127 actions = (reevaluate_submissions,)
128
129
130 @admin.register(Evaluation)
131 class EvaluationAdmin(admin.ModelAdmin):
132 ordering = ("-created",)
133 list_display = ("pk", "created", "submission", "status", "error_message")
134 list_filter = ("submission__phase__challenge__short_name", "status")
135 list_select_related = (
136 "submission__phase__challenge",
137 "submission__creator",
138 )
139 search_fields = (
140 "pk",
141 "submission__pk",
142 "submission__phase__challenge__short_name",
143 "submission__creator__username",
144 )
145 readonly_fields = (
146 "status",
147 "submission",
148 "method",
149 "inputs",
150 "outputs",
151 "attempt",
152 "stdout",
153 "stderr",
154 "error_message",
155 "input_prefixes",
156 "task_on_success",
157 "task_on_failure",
158 "runtime_metrics",
159 )
160 actions = (requeue_jobs, cancel_jobs, deprovision_jobs)
161
162
163 admin.site.register(PhaseUserObjectPermission, UserObjectPermissionAdmin)
164 admin.site.register(PhaseGroupObjectPermission, GroupObjectPermissionAdmin)
165 admin.site.register(Method, ComponentImageAdmin)
166 admin.site.register(MethodUserObjectPermission, UserObjectPermissionAdmin)
167 admin.site.register(MethodGroupObjectPermission, GroupObjectPermissionAdmin)
168 admin.site.register(SubmissionUserObjectPermission, UserObjectPermissionAdmin)
169 admin.site.register(
170 SubmissionGroupObjectPermission, GroupObjectPermissionAdmin
171 )
172 admin.site.register(EvaluationUserObjectPermission, UserObjectPermissionAdmin)
173 admin.site.register(
174 EvaluationGroupObjectPermission, GroupObjectPermissionAdmin
175 )
176
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/app/grandchallenge/evaluation/admin.py b/app/grandchallenge/evaluation/admin.py
--- a/app/grandchallenge/evaluation/admin.py
+++ b/app/grandchallenge/evaluation/admin.py
@@ -93,6 +93,13 @@
"submission_kind",
"challenge__short_name",
)
+ autocomplete_fields = (
+ "inputs",
+ "outputs",
+ "algorithm_inputs",
+ "algorithm_outputs",
+ "archive",
+ )
form = PhaseAdminForm
@admin.display(boolean=True)
|
{"golden_diff": "diff --git a/app/grandchallenge/evaluation/admin.py b/app/grandchallenge/evaluation/admin.py\n--- a/app/grandchallenge/evaluation/admin.py\n+++ b/app/grandchallenge/evaluation/admin.py\n@@ -93,6 +93,13 @@\n \"submission_kind\",\n \"challenge__short_name\",\n )\n+ autocomplete_fields = (\n+ \"inputs\",\n+ \"outputs\",\n+ \"algorithm_inputs\",\n+ \"algorithm_outputs\",\n+ \"archive\",\n+ )\n form = PhaseAdminForm\n \n @admin.display(boolean=True)\n", "issue": "The Phase (algorithm) input and output selects are annoying to use in the admin\nA select 2 widget would be better.\n", "before_files": [{"content": "from django.contrib import admin\nfrom django.core.exceptions import ObjectDoesNotExist, ValidationError\nfrom django.forms import ModelForm\n\nfrom grandchallenge.challenges.models import ChallengeRequest\nfrom grandchallenge.components.admin import (\n ComponentImageAdmin,\n cancel_jobs,\n deprovision_jobs,\n requeue_jobs,\n)\nfrom grandchallenge.core.admin import (\n GroupObjectPermissionAdmin,\n UserObjectPermissionAdmin,\n)\nfrom grandchallenge.core.templatetags.remove_whitespace import oxford_comma\nfrom grandchallenge.evaluation.models import (\n Evaluation,\n EvaluationGroupObjectPermission,\n EvaluationUserObjectPermission,\n Method,\n MethodGroupObjectPermission,\n MethodUserObjectPermission,\n Phase,\n PhaseGroupObjectPermission,\n PhaseUserObjectPermission,\n Submission,\n SubmissionGroupObjectPermission,\n SubmissionUserObjectPermission,\n)\nfrom grandchallenge.evaluation.tasks import create_evaluation\nfrom grandchallenge.evaluation.utils import SubmissionKindChoices\n\n\nclass PhaseAdminForm(ModelForm):\n class Meta:\n model = Phase\n fields = \"__all__\"\n\n def clean(self):\n cleaned_data = super().clean()\n\n duplicate_interfaces = {\n *cleaned_data.get(\"algorithm_inputs\", [])\n }.intersection({*cleaned_data.get(\"algorithm_outputs\", [])})\n\n if duplicate_interfaces:\n raise ValidationError(\n f\"The sets of Algorithm Inputs and Algorithm Outputs must be unique: \"\n f\"{oxford_comma(duplicate_interfaces)} present in both\"\n )\n\n submission_kind = cleaned_data[\"submission_kind\"]\n total_number_of_submissions_allowed = cleaned_data[\n \"total_number_of_submissions_allowed\"\n ]\n\n if (\n submission_kind == SubmissionKindChoices.ALGORITHM\n and not total_number_of_submissions_allowed\n ):\n try:\n request = ChallengeRequest.objects.get(\n short_name=self.instance.challenge.short_name\n )\n error_addition = f\"The corresponding challenge request lists the following limits: Preliminary phase: {request.phase_1_number_of_submissions_per_team * request.expected_number_of_teams} Final test phase: {request.phase_2_number_of_submissions_per_team * request.expected_number_of_teams}. Set the limits according to the phase type. \"\n except ObjectDoesNotExist:\n error_addition = \"There is no corresponding challenge request.\"\n raise ValidationError(\n \"For phases that take an algorithm as submission input, \"\n \"the total_number_of_submissions_allowed needs to be set. 
\"\n + error_addition\n )\n\n return cleaned_data\n\n\[email protected](Phase)\nclass PhaseAdmin(admin.ModelAdmin):\n ordering = (\"challenge\",)\n list_display = (\n \"slug\",\n \"title\",\n \"challenge\",\n \"submission_kind\",\n \"open_for_submissions\",\n \"submissions_open_at\",\n \"submissions_close_at\",\n \"submissions_limit_per_user_per_period\",\n )\n search_fields = (\"pk\", \"title\", \"slug\", \"challenge__short_name\")\n list_filter = (\n \"submission_kind\",\n \"challenge__short_name\",\n )\n form = PhaseAdminForm\n\n @admin.display(boolean=True)\n def open_for_submissions(self, instance):\n return instance.open_for_submissions\n\n\[email protected](\n description=\"Reevaluate selected submissions\",\n permissions=(\"change\",),\n)\ndef reevaluate_submissions(modeladmin, request, queryset):\n \"\"\"Creates a new evaluation for an existing submission\"\"\"\n for submission in queryset:\n create_evaluation.apply_async(\n kwargs={\"submission_pk\": str(submission.pk)}\n )\n\n\[email protected](Submission)\nclass SubmissionAdmin(admin.ModelAdmin):\n ordering = (\"-created\",)\n list_display = (\"pk\", \"created\", \"phase\", \"creator\")\n list_filter = (\"phase__challenge__short_name\",)\n search_fields = (\"pk\", \"creator__username\", \"phase__slug\")\n readonly_fields = (\n \"creator\",\n \"phase\",\n \"predictions_file\",\n \"algorithm_image\",\n )\n actions = (reevaluate_submissions,)\n\n\[email protected](Evaluation)\nclass EvaluationAdmin(admin.ModelAdmin):\n ordering = (\"-created\",)\n list_display = (\"pk\", \"created\", \"submission\", \"status\", \"error_message\")\n list_filter = (\"submission__phase__challenge__short_name\", \"status\")\n list_select_related = (\n \"submission__phase__challenge\",\n \"submission__creator\",\n )\n search_fields = (\n \"pk\",\n \"submission__pk\",\n \"submission__phase__challenge__short_name\",\n \"submission__creator__username\",\n )\n readonly_fields = (\n \"status\",\n \"submission\",\n \"method\",\n \"inputs\",\n \"outputs\",\n \"attempt\",\n \"stdout\",\n \"stderr\",\n \"error_message\",\n \"input_prefixes\",\n \"task_on_success\",\n \"task_on_failure\",\n \"runtime_metrics\",\n )\n actions = (requeue_jobs, cancel_jobs, deprovision_jobs)\n\n\nadmin.site.register(PhaseUserObjectPermission, UserObjectPermissionAdmin)\nadmin.site.register(PhaseGroupObjectPermission, GroupObjectPermissionAdmin)\nadmin.site.register(Method, ComponentImageAdmin)\nadmin.site.register(MethodUserObjectPermission, UserObjectPermissionAdmin)\nadmin.site.register(MethodGroupObjectPermission, GroupObjectPermissionAdmin)\nadmin.site.register(SubmissionUserObjectPermission, UserObjectPermissionAdmin)\nadmin.site.register(\n SubmissionGroupObjectPermission, GroupObjectPermissionAdmin\n)\nadmin.site.register(EvaluationUserObjectPermission, UserObjectPermissionAdmin)\nadmin.site.register(\n EvaluationGroupObjectPermission, GroupObjectPermissionAdmin\n)\n", "path": "app/grandchallenge/evaluation/admin.py"}], "after_files": [{"content": "from django.contrib import admin\nfrom django.core.exceptions import ObjectDoesNotExist, ValidationError\nfrom django.forms import ModelForm\n\nfrom grandchallenge.challenges.models import ChallengeRequest\nfrom grandchallenge.components.admin import (\n ComponentImageAdmin,\n cancel_jobs,\n deprovision_jobs,\n requeue_jobs,\n)\nfrom grandchallenge.core.admin import (\n GroupObjectPermissionAdmin,\n UserObjectPermissionAdmin,\n)\nfrom grandchallenge.core.templatetags.remove_whitespace import oxford_comma\nfrom 
grandchallenge.evaluation.models import (\n Evaluation,\n EvaluationGroupObjectPermission,\n EvaluationUserObjectPermission,\n Method,\n MethodGroupObjectPermission,\n MethodUserObjectPermission,\n Phase,\n PhaseGroupObjectPermission,\n PhaseUserObjectPermission,\n Submission,\n SubmissionGroupObjectPermission,\n SubmissionUserObjectPermission,\n)\nfrom grandchallenge.evaluation.tasks import create_evaluation\nfrom grandchallenge.evaluation.utils import SubmissionKindChoices\n\n\nclass PhaseAdminForm(ModelForm):\n class Meta:\n model = Phase\n fields = \"__all__\"\n\n def clean(self):\n cleaned_data = super().clean()\n\n duplicate_interfaces = {\n *cleaned_data.get(\"algorithm_inputs\", [])\n }.intersection({*cleaned_data.get(\"algorithm_outputs\", [])})\n\n if duplicate_interfaces:\n raise ValidationError(\n f\"The sets of Algorithm Inputs and Algorithm Outputs must be unique: \"\n f\"{oxford_comma(duplicate_interfaces)} present in both\"\n )\n\n submission_kind = cleaned_data[\"submission_kind\"]\n total_number_of_submissions_allowed = cleaned_data[\n \"total_number_of_submissions_allowed\"\n ]\n\n if (\n submission_kind == SubmissionKindChoices.ALGORITHM\n and not total_number_of_submissions_allowed\n ):\n try:\n request = ChallengeRequest.objects.get(\n short_name=self.instance.challenge.short_name\n )\n error_addition = f\"The corresponding challenge request lists the following limits: Preliminary phase: {request.phase_1_number_of_submissions_per_team * request.expected_number_of_teams} Final test phase: {request.phase_2_number_of_submissions_per_team * request.expected_number_of_teams}. Set the limits according to the phase type. \"\n except ObjectDoesNotExist:\n error_addition = \"There is no corresponding challenge request.\"\n raise ValidationError(\n \"For phases that take an algorithm as submission input, \"\n \"the total_number_of_submissions_allowed needs to be set. 
\"\n + error_addition\n )\n\n return cleaned_data\n\n\[email protected](Phase)\nclass PhaseAdmin(admin.ModelAdmin):\n ordering = (\"challenge\",)\n list_display = (\n \"slug\",\n \"title\",\n \"challenge\",\n \"submission_kind\",\n \"open_for_submissions\",\n \"submissions_open_at\",\n \"submissions_close_at\",\n \"submissions_limit_per_user_per_period\",\n )\n search_fields = (\"pk\", \"title\", \"slug\", \"challenge__short_name\")\n list_filter = (\n \"submission_kind\",\n \"challenge__short_name\",\n )\n autocomplete_fields = (\n \"inputs\",\n \"outputs\",\n \"algorithm_inputs\",\n \"algorithm_outputs\",\n \"archive\",\n )\n form = PhaseAdminForm\n\n @admin.display(boolean=True)\n def open_for_submissions(self, instance):\n return instance.open_for_submissions\n\n\[email protected](\n description=\"Reevaluate selected submissions\",\n permissions=(\"change\",),\n)\ndef reevaluate_submissions(modeladmin, request, queryset):\n \"\"\"Creates a new evaluation for an existing submission\"\"\"\n for submission in queryset:\n create_evaluation.apply_async(\n kwargs={\"submission_pk\": str(submission.pk)}\n )\n\n\[email protected](Submission)\nclass SubmissionAdmin(admin.ModelAdmin):\n ordering = (\"-created\",)\n list_display = (\"pk\", \"created\", \"phase\", \"creator\")\n list_filter = (\"phase__challenge__short_name\",)\n search_fields = (\"pk\", \"creator__username\", \"phase__slug\")\n readonly_fields = (\n \"creator\",\n \"phase\",\n \"predictions_file\",\n \"algorithm_image\",\n )\n actions = (reevaluate_submissions,)\n\n\[email protected](Evaluation)\nclass EvaluationAdmin(admin.ModelAdmin):\n ordering = (\"-created\",)\n list_display = (\"pk\", \"created\", \"submission\", \"status\", \"error_message\")\n list_filter = (\"submission__phase__challenge__short_name\", \"status\")\n list_select_related = (\n \"submission__phase__challenge\",\n \"submission__creator\",\n )\n search_fields = (\n \"pk\",\n \"submission__pk\",\n \"submission__phase__challenge__short_name\",\n \"submission__creator__username\",\n )\n readonly_fields = (\n \"status\",\n \"submission\",\n \"method\",\n \"inputs\",\n \"outputs\",\n \"attempt\",\n \"stdout\",\n \"stderr\",\n \"error_message\",\n \"input_prefixes\",\n \"task_on_success\",\n \"task_on_failure\",\n \"runtime_metrics\",\n )\n actions = (requeue_jobs, cancel_jobs, deprovision_jobs)\n\n\nadmin.site.register(PhaseUserObjectPermission, UserObjectPermissionAdmin)\nadmin.site.register(PhaseGroupObjectPermission, GroupObjectPermissionAdmin)\nadmin.site.register(Method, ComponentImageAdmin)\nadmin.site.register(MethodUserObjectPermission, UserObjectPermissionAdmin)\nadmin.site.register(MethodGroupObjectPermission, GroupObjectPermissionAdmin)\nadmin.site.register(SubmissionUserObjectPermission, UserObjectPermissionAdmin)\nadmin.site.register(\n SubmissionGroupObjectPermission, GroupObjectPermissionAdmin\n)\nadmin.site.register(EvaluationUserObjectPermission, UserObjectPermissionAdmin)\nadmin.site.register(\n EvaluationGroupObjectPermission, GroupObjectPermissionAdmin\n)\n", "path": "app/grandchallenge/evaluation/admin.py"}]}
| 1,868 | 121 |
gh_patches_debug_31966
|
rasdani/github-patches
|
git_diff
|
nilearn__nilearn-3739
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[DOC] improve instructions in "Default Mode Network extraction of ADHD dataset" example
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe your proposed suggestion in detail.
It seems the instructions in [this example](https://nilearn.github.io/dev/auto_examples/04_glm_first_level/plot_adhd_dmn.html#default-mode-network-extraction-of-adhd-dataset) need some improvement. There was a confusion mentioned on [NeuroStar](https://neurostars.org/t/why-is-there-glm-for-resting-state-data/25841). After discussing with @Remi-Gau, we concluded that maybe we can add one or two lines saying that in this example we extract the activity of a seed region and then use the extracted signal as regressor in a GLM and this will yield the correlation of each region with the seed region.
### List any pages that would be impacted.
https://nilearn.github.io/dev/auto_examples/04_glm_first_level/plot_adhd_dmn.html#default-mode-network-extraction-of-adhd-dataset
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/04_glm_first_level/plot_adhd_dmn.py`
Content:
```
1 """
2 Default Mode Network extraction of ADHD dataset
3 ===============================================
4
5 This example shows a full step-by-step workflow of fitting a GLM to data
6 extracted from a seed on the Posterior Cingulate Cortex and saving the results.
7
8 More specifically:
9
10 1. A sequence of fMRI volumes are loaded.
11 2. A design matrix with the Posterior Cingulate Cortex seed is defined.
12 3. A GLM is applied to the dataset (effect/covariance,
13 then contrast estimation).
14 4. The Default Mode Network is displayed.
15
16 .. include:: ../../../examples/masker_note.rst
17
18 """
19 import numpy as np
20 from nilearn import datasets, plotting
21 from nilearn.glm.first_level import (
22 FirstLevelModel,
23 make_first_level_design_matrix,
24 )
25 from nilearn.maskers import NiftiSpheresMasker
26
27 #########################################################################
28 # Prepare data and analysis parameters
29 # ------------------------------------
30 # Prepare the data.
31 adhd_dataset = datasets.fetch_adhd(n_subjects=1)
32
33 # Prepare timing
34 t_r = 2.0
35 slice_time_ref = 0.0
36 n_scans = 176
37
38 # Prepare seed
39 pcc_coords = (0, -53, 26)
40
41 #########################################################################
42 # Estimate contrasts
43 # ------------------
44 # Specify the contrasts.
45 seed_masker = NiftiSpheresMasker(
46 [pcc_coords],
47 radius=10,
48 detrend=True,
49 standardize="zscore_sample",
50 low_pass=0.1,
51 high_pass=0.01,
52 t_r=2.0,
53 memory="nilearn_cache",
54 memory_level=1,
55 verbose=0,
56 )
57 seed_time_series = seed_masker.fit_transform(adhd_dataset.func[0])
58 frametimes = np.linspace(0, (n_scans - 1) * t_r, n_scans)
59 design_matrix = make_first_level_design_matrix(
60 frametimes,
61 hrf_model="spm",
62 add_regs=seed_time_series,
63 add_reg_names=["pcc_seed"],
64 )
65 dmn_contrast = np.array([1] + [0] * (design_matrix.shape[1] - 1))
66 contrasts = {"seed_based_glm": dmn_contrast}
67
68 #########################################################################
69 # Perform first level analysis
70 # ----------------------------
71 # Setup and fit GLM.
72 first_level_model = FirstLevelModel(t_r=t_r, slice_time_ref=slice_time_ref)
73 first_level_model = first_level_model.fit(
74 run_imgs=adhd_dataset.func[0], design_matrices=design_matrix
75 )
76
77 #########################################################################
78 # Estimate the contrast.
79 print("Contrast seed_based_glm computed.")
80 z_map = first_level_model.compute_contrast(
81 contrasts["seed_based_glm"], output_type="z_score"
82 )
83
84 # Saving snapshots of the contrasts
85 filename = "dmn_z_map.png"
86 display = plotting.plot_stat_map(
87 z_map, threshold=3.0, title="Seed based GLM", cut_coords=pcc_coords
88 )
89 display.add_markers(
90 marker_coords=[pcc_coords], marker_color="g", marker_size=300
91 )
92 display.savefig(filename)
93 print(f"Save z-map in '{filename}'.")
94
95 ###########################################################################
96 # Generating a report
97 # -------------------
98 # It can be useful to quickly generate a
99 # portable, ready-to-view report with most of the pertinent information.
100 # This is easy to do if you have a fitted model and the list of contrasts,
101 # which we do here.
102 from nilearn.reporting import make_glm_report
103
104 report = make_glm_report(
105 first_level_model,
106 contrasts=contrasts,
107 title="ADHD DMN Report",
108 cluster_threshold=15,
109 min_distance=8.0,
110 plot_type="glass",
111 )
112
113 #########################################################################
114 # We have several ways to access the report:
115
116 # report # This report can be viewed in a notebook
117 # report.save_as_html('report.html')
118 # report.open_in_browser()
119
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/examples/04_glm_first_level/plot_adhd_dmn.py b/examples/04_glm_first_level/plot_adhd_dmn.py
--- a/examples/04_glm_first_level/plot_adhd_dmn.py
+++ b/examples/04_glm_first_level/plot_adhd_dmn.py
@@ -2,8 +2,11 @@
Default Mode Network extraction of ADHD dataset
===============================================
-This example shows a full step-by-step workflow of fitting a GLM to data
+This example shows a full step-by-step workflow of fitting a GLM to signal
extracted from a seed on the Posterior Cingulate Cortex and saving the results.
+More precisely, this example shows how to use a signal extracted from a
+seed region as the regressor in a GLM to determine the correlation
+of each region in the dataset with the seed region.
More specifically:
@@ -39,9 +42,9 @@
pcc_coords = (0, -53, 26)
#########################################################################
-# Estimate contrasts
-# ------------------
-# Specify the contrasts.
+# Extract the seed region's time course
+# -------------------------------------
+# Extract the time course of the seed region.
seed_masker = NiftiSpheresMasker(
[pcc_coords],
radius=10,
@@ -56,6 +59,22 @@
)
seed_time_series = seed_masker.fit_transform(adhd_dataset.func[0])
frametimes = np.linspace(0, (n_scans - 1) * t_r, n_scans)
+
+#########################################################################
+# Plot the time course of the seed region.
+import matplotlib.pyplot as plt
+
+fig = plt.figure(figsize=(9, 3))
+ax = fig.add_subplot(111)
+ax.plot(frametimes, seed_time_series, linewidth=2, label="seed region")
+ax.legend(loc=2)
+ax.set_title("Time course of the seed region")
+plt.show()
+
+#########################################################################
+# Estimate contrasts
+# ------------------
+# Specify the contrasts.
design_matrix = make_first_level_design_matrix(
frametimes,
hrf_model="spm",
|
{"golden_diff": "diff --git a/examples/04_glm_first_level/plot_adhd_dmn.py b/examples/04_glm_first_level/plot_adhd_dmn.py\n--- a/examples/04_glm_first_level/plot_adhd_dmn.py\n+++ b/examples/04_glm_first_level/plot_adhd_dmn.py\n@@ -2,8 +2,11 @@\n Default Mode Network extraction of ADHD dataset\n ===============================================\n \n-This example shows a full step-by-step workflow of fitting a GLM to data\n+This example shows a full step-by-step workflow of fitting a GLM to signal\n extracted from a seed on the Posterior Cingulate Cortex and saving the results.\n+More precisely, this example shows how to use a signal extracted from a\n+seed region as the regressor in a GLM to determine the correlation\n+of each region in the dataset with the seed region.\n \n More specifically:\n \n@@ -39,9 +42,9 @@\n pcc_coords = (0, -53, 26)\n \n #########################################################################\n-# Estimate contrasts\n-# ------------------\n-# Specify the contrasts.\n+# Extract the seed region's time course\n+# -------------------------------------\n+# Extract the time course of the seed region.\n seed_masker = NiftiSpheresMasker(\n [pcc_coords],\n radius=10,\n@@ -56,6 +59,22 @@\n )\n seed_time_series = seed_masker.fit_transform(adhd_dataset.func[0])\n frametimes = np.linspace(0, (n_scans - 1) * t_r, n_scans)\n+\n+#########################################################################\n+# Plot the time course of the seed region.\n+import matplotlib.pyplot as plt\n+\n+fig = plt.figure(figsize=(9, 3))\n+ax = fig.add_subplot(111)\n+ax.plot(frametimes, seed_time_series, linewidth=2, label=\"seed region\")\n+ax.legend(loc=2)\n+ax.set_title(\"Time course of the seed region\")\n+plt.show()\n+\n+#########################################################################\n+# Estimate contrasts\n+# ------------------\n+# Specify the contrasts.\n design_matrix = make_first_level_design_matrix(\n frametimes,\n hrf_model=\"spm\",\n", "issue": "[DOC] improve instructions in \"Default Mode Network extraction of ADHD dataset\" example\n### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Describe your proposed suggestion in detail.\r\n\r\nIt seems the instructions in [this example](https://nilearn.github.io/dev/auto_examples/04_glm_first_level/plot_adhd_dmn.html#default-mode-network-extraction-of-adhd-dataset) need some improvement. There was a confusion mentioned on [NeuroStar](https://neurostars.org/t/why-is-there-glm-for-resting-state-data/25841). After discussing with @Remi-Gau, we concluded that maybe we can add one or two lines saying that in this example we extract the activity of a seed region and then use the extracted signal as regressor in a GLM and this will yield the correlation of each region with the seed region.\r\n\r\n### List any pages that would be impacted.\r\n\r\nhttps://nilearn.github.io/dev/auto_examples/04_glm_first_level/plot_adhd_dmn.html#default-mode-network-extraction-of-adhd-dataset\n", "before_files": [{"content": "\"\"\"\nDefault Mode Network extraction of ADHD dataset\n===============================================\n\nThis example shows a full step-by-step workflow of fitting a GLM to data\nextracted from a seed on the Posterior Cingulate Cortex and saving the results.\n\nMore specifically:\n\n1. A sequence of fMRI volumes are loaded.\n2. A design matrix with the Posterior Cingulate Cortex seed is defined.\n3. A GLM is applied to the dataset (effect/covariance,\n then contrast estimation).\n4. 
The Default Mode Network is displayed.\n\n.. include:: ../../../examples/masker_note.rst\n\n\"\"\"\nimport numpy as np\nfrom nilearn import datasets, plotting\nfrom nilearn.glm.first_level import (\n FirstLevelModel,\n make_first_level_design_matrix,\n)\nfrom nilearn.maskers import NiftiSpheresMasker\n\n#########################################################################\n# Prepare data and analysis parameters\n# ------------------------------------\n# Prepare the data.\nadhd_dataset = datasets.fetch_adhd(n_subjects=1)\n\n# Prepare timing\nt_r = 2.0\nslice_time_ref = 0.0\nn_scans = 176\n\n# Prepare seed\npcc_coords = (0, -53, 26)\n\n#########################################################################\n# Estimate contrasts\n# ------------------\n# Specify the contrasts.\nseed_masker = NiftiSpheresMasker(\n [pcc_coords],\n radius=10,\n detrend=True,\n standardize=\"zscore_sample\",\n low_pass=0.1,\n high_pass=0.01,\n t_r=2.0,\n memory=\"nilearn_cache\",\n memory_level=1,\n verbose=0,\n)\nseed_time_series = seed_masker.fit_transform(adhd_dataset.func[0])\nframetimes = np.linspace(0, (n_scans - 1) * t_r, n_scans)\ndesign_matrix = make_first_level_design_matrix(\n frametimes,\n hrf_model=\"spm\",\n add_regs=seed_time_series,\n add_reg_names=[\"pcc_seed\"],\n)\ndmn_contrast = np.array([1] + [0] * (design_matrix.shape[1] - 1))\ncontrasts = {\"seed_based_glm\": dmn_contrast}\n\n#########################################################################\n# Perform first level analysis\n# ----------------------------\n# Setup and fit GLM.\nfirst_level_model = FirstLevelModel(t_r=t_r, slice_time_ref=slice_time_ref)\nfirst_level_model = first_level_model.fit(\n run_imgs=adhd_dataset.func[0], design_matrices=design_matrix\n)\n\n#########################################################################\n# Estimate the contrast.\nprint(\"Contrast seed_based_glm computed.\")\nz_map = first_level_model.compute_contrast(\n contrasts[\"seed_based_glm\"], output_type=\"z_score\"\n)\n\n# Saving snapshots of the contrasts\nfilename = \"dmn_z_map.png\"\ndisplay = plotting.plot_stat_map(\n z_map, threshold=3.0, title=\"Seed based GLM\", cut_coords=pcc_coords\n)\ndisplay.add_markers(\n marker_coords=[pcc_coords], marker_color=\"g\", marker_size=300\n)\ndisplay.savefig(filename)\nprint(f\"Save z-map in '{filename}'.\")\n\n###########################################################################\n# Generating a report\n# -------------------\n# It can be useful to quickly generate a\n# portable, ready-to-view report with most of the pertinent information.\n# This is easy to do if you have a fitted model and the list of contrasts,\n# which we do here.\nfrom nilearn.reporting import make_glm_report\n\nreport = make_glm_report(\n first_level_model,\n contrasts=contrasts,\n title=\"ADHD DMN Report\",\n cluster_threshold=15,\n min_distance=8.0,\n plot_type=\"glass\",\n)\n\n#########################################################################\n# We have several ways to access the report:\n\n# report # This report can be viewed in a notebook\n# report.save_as_html('report.html')\n# report.open_in_browser()\n", "path": "examples/04_glm_first_level/plot_adhd_dmn.py"}], "after_files": [{"content": "\"\"\"\nDefault Mode Network extraction of ADHD dataset\n===============================================\n\nThis example shows a full step-by-step workflow of fitting a GLM to signal\nextracted from a seed on the Posterior Cingulate Cortex and saving the results.\nMore precisely, this example shows how to use a signal 
extracted from a\nseed region as the regressor in a GLM to determine the correlation\nof each region in the dataset with the seed region.\n\nMore specifically:\n\n1. A sequence of fMRI volumes are loaded.\n2. A design matrix with the Posterior Cingulate Cortex seed is defined.\n3. A GLM is applied to the dataset (effect/covariance,\n then contrast estimation).\n4. The Default Mode Network is displayed.\n\n.. include:: ../../../examples/masker_note.rst\n\n\"\"\"\nimport numpy as np\nfrom nilearn import datasets, plotting\nfrom nilearn.glm.first_level import (\n FirstLevelModel,\n make_first_level_design_matrix,\n)\nfrom nilearn.maskers import NiftiSpheresMasker\n\n#########################################################################\n# Prepare data and analysis parameters\n# ------------------------------------\n# Prepare the data.\nadhd_dataset = datasets.fetch_adhd(n_subjects=1)\n\n# Prepare timing\nt_r = 2.0\nslice_time_ref = 0.0\nn_scans = 176\n\n# Prepare seed\npcc_coords = (0, -53, 26)\n\n#########################################################################\n# Extract the seed region's time course\n# -------------------------------------\n# Extract the time course of the seed region.\nseed_masker = NiftiSpheresMasker(\n [pcc_coords],\n radius=10,\n detrend=True,\n standardize=\"zscore_sample\",\n low_pass=0.1,\n high_pass=0.01,\n t_r=2.0,\n memory=\"nilearn_cache\",\n memory_level=1,\n verbose=0,\n)\nseed_time_series = seed_masker.fit_transform(adhd_dataset.func[0])\nframetimes = np.linspace(0, (n_scans - 1) * t_r, n_scans)\n\n#########################################################################\n# Plot the time course of the seed region.\nimport matplotlib.pyplot as plt\n\nfig = plt.figure(figsize=(9, 3))\nax = fig.add_subplot(111)\nax.plot(frametimes, seed_time_series, linewidth=2, label=\"seed region\")\nax.legend(loc=2)\nax.set_title(\"Time course of the seed region\")\nplt.show()\n\n#########################################################################\n# Estimate contrasts\n# ------------------\n# Specify the contrasts.\ndesign_matrix = make_first_level_design_matrix(\n frametimes,\n hrf_model=\"spm\",\n add_regs=seed_time_series,\n add_reg_names=[\"pcc_seed\"],\n)\ndmn_contrast = np.array([1] + [0] * (design_matrix.shape[1] - 1))\ncontrasts = {\"seed_based_glm\": dmn_contrast}\n\n#########################################################################\n# Perform first level analysis\n# ----------------------------\n# Setup and fit GLM.\nfirst_level_model = FirstLevelModel(t_r=t_r, slice_time_ref=slice_time_ref)\nfirst_level_model = first_level_model.fit(\n run_imgs=adhd_dataset.func[0], design_matrices=design_matrix\n)\n\n#########################################################################\n# Estimate the contrast.\nprint(\"Contrast seed_based_glm computed.\")\nz_map = first_level_model.compute_contrast(\n contrasts[\"seed_based_glm\"], output_type=\"z_score\"\n)\n\n# Saving snapshots of the contrasts\nfilename = \"dmn_z_map.png\"\ndisplay = plotting.plot_stat_map(\n z_map, threshold=3.0, title=\"Seed based GLM\", cut_coords=pcc_coords\n)\ndisplay.add_markers(\n marker_coords=[pcc_coords], marker_color=\"g\", marker_size=300\n)\ndisplay.savefig(filename)\nprint(f\"Save z-map in '{filename}'.\")\n\n###########################################################################\n# Generating a report\n# -------------------\n# It can be useful to quickly generate a\n# portable, ready-to-view report with most of the pertinent information.\n# This is easy to do if 
you have a fitted model and the list of contrasts,\n# which we do here.\nfrom nilearn.reporting import make_glm_report\n\nreport = make_glm_report(\n first_level_model,\n contrasts=contrasts,\n title=\"ADHD DMN Report\",\n cluster_threshold=15,\n min_distance=8.0,\n plot_type=\"glass\",\n)\n\n#########################################################################\n# We have several ways to access the report:\n\n# report # This report can be viewed in a notebook\n# report.save_as_html('report.html')\n# report.open_in_browser()\n", "path": "examples/04_glm_first_level/plot_adhd_dmn.py"}]}
| 1,591 | 465 |
gh_patches_debug_3953
|
rasdani/github-patches
|
git_diff
|
chainer__chainer-7178
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Emit warning if compiled in Debug mode
In debug mode ChainerX runs significantly slower.
However, sometimes it's difficult to notice that.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `chainerx/__init__.py`
Content:
```
1 import os
2 import sys
3
4
5 if sys.version_info[0] < 3:
6 _available = False
7 else:
8 try:
9 from chainerx import _core
10 _available = True
11 except Exception:
12 _available = False
13
14
15 if _available:
16 from numpy import dtype # NOQA
17 from numpy import (
18 bool_, int8, int16, int32, int64, uint8, float16, float32, float64) # NOQA
19 all_dtypes = (
20 bool_, int8, int16, int32, int64, uint8, float16, float32, float64)
21
22 from chainerx._core import * # NOQA
23 from chainerx._core import _to_cupy # NOQA
24
25 from builtins import bool, int, float # NOQA
26
27 from chainerx import _device # NOQA
28
29 from chainerx.creation.from_data import asanyarray # NOQA
30 from chainerx.creation.from_data import fromfile # NOQA
31 from chainerx.creation.from_data import fromfunction # NOQA
32 from chainerx.creation.from_data import fromiter # NOQA
33 from chainerx.creation.from_data import fromstring # NOQA
34 from chainerx.creation.from_data import loadtxt # NOQA
35
36 from chainerx.manipulation.shape import ravel # NOQA
37
38 from chainerx.math.misc import clip # NOQA
39
40 from chainerx import random # NOQA
41
42 _global_context = _core.Context()
43 _core.set_global_default_context(_global_context)
44
45 # Implements ndarray methods in Python
46 from chainerx import _ndarray
47 _ndarray.populate()
48
49 # Temporary workaround implementations that fall back to NumPy/CuPy's
50 # respective functions.
51 from chainerx import _fallback_workarounds
52 _fallback_workarounds.populate()
53
54 # Dynamically inject docstrings
55 from chainerx import _docs
56 _docs.set_docs()
57
58 from chainerx import _cuda
59 # Share memory pool with CuPy.
60 if bool(int(os.getenv('CHAINERX_CUDA_CUPY_SHARE_ALLOCATOR', '0'))):
61 _cuda.cupy_share_allocator()
62 else:
63 class ndarray(object):
64
65 """Dummy class for type testing."""
66
67 def __init__(self, *args, **kwargs):
68 raise RuntimeError('chainerx is not available.')
69
70
71 def is_available():
72 return _available
73
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/chainerx/__init__.py b/chainerx/__init__.py
--- a/chainerx/__init__.py
+++ b/chainerx/__init__.py
@@ -1,5 +1,6 @@
import os
import sys
+import warnings
if sys.version_info[0] < 3:
@@ -70,3 +71,9 @@
def is_available():
return _available
+
+
+if _available and _core._is_debug():
+ # Warn if the ChainerX core binary is built in debug mode
+ warnings.warn(
+ 'ChainerX core binary is built in debug mode.', stacklevel=2)
|
{"golden_diff": "diff --git a/chainerx/__init__.py b/chainerx/__init__.py\n--- a/chainerx/__init__.py\n+++ b/chainerx/__init__.py\n@@ -1,5 +1,6 @@\n import os\n import sys\n+import warnings\n \n \n if sys.version_info[0] < 3:\n@@ -70,3 +71,9 @@\n \n def is_available():\n return _available\n+\n+\n+if _available and _core._is_debug():\n+ # Warn if the ChainerX core binary is built in debug mode\n+ warnings.warn(\n+ 'ChainerX core binary is built in debug mode.', stacklevel=2)\n", "issue": "Emit warning if compiled in Debug mode\nIn debug mode ChainerX runs significantly slower.\r\nHowever sometimes it's difficult notice that.\n", "before_files": [{"content": "import os\nimport sys\n\n\nif sys.version_info[0] < 3:\n _available = False\nelse:\n try:\n from chainerx import _core\n _available = True\n except Exception:\n _available = False\n\n\nif _available:\n from numpy import dtype # NOQA\n from numpy import (\n bool_, int8, int16, int32, int64, uint8, float16, float32, float64) # NOQA\n all_dtypes = (\n bool_, int8, int16, int32, int64, uint8, float16, float32, float64)\n\n from chainerx._core import * # NOQA\n from chainerx._core import _to_cupy # NOQA\n\n from builtins import bool, int, float # NOQA\n\n from chainerx import _device # NOQA\n\n from chainerx.creation.from_data import asanyarray # NOQA\n from chainerx.creation.from_data import fromfile # NOQA\n from chainerx.creation.from_data import fromfunction # NOQA\n from chainerx.creation.from_data import fromiter # NOQA\n from chainerx.creation.from_data import fromstring # NOQA\n from chainerx.creation.from_data import loadtxt # NOQA\n\n from chainerx.manipulation.shape import ravel # NOQA\n\n from chainerx.math.misc import clip # NOQA\n\n from chainerx import random # NOQA\n\n _global_context = _core.Context()\n _core.set_global_default_context(_global_context)\n\n # Implements ndarray methods in Python\n from chainerx import _ndarray\n _ndarray.populate()\n\n # Temporary workaround implementations that fall back to NumPy/CuPy's\n # respective functions.\n from chainerx import _fallback_workarounds\n _fallback_workarounds.populate()\n\n # Dynamically inject docstrings\n from chainerx import _docs\n _docs.set_docs()\n\n from chainerx import _cuda\n # Share memory pool with CuPy.\n if bool(int(os.getenv('CHAINERX_CUDA_CUPY_SHARE_ALLOCATOR', '0'))):\n _cuda.cupy_share_allocator()\nelse:\n class ndarray(object):\n\n \"\"\"Dummy class for type testing.\"\"\"\n\n def __init__(self, *args, **kwargs):\n raise RuntimeError('chainerx is not available.')\n\n\ndef is_available():\n return _available\n", "path": "chainerx/__init__.py"}], "after_files": [{"content": "import os\nimport sys\nimport warnings\n\n\nif sys.version_info[0] < 3:\n _available = False\nelse:\n try:\n from chainerx import _core\n _available = True\n except Exception:\n _available = False\n\n\nif _available:\n from numpy import dtype # NOQA\n from numpy import (\n bool_, int8, int16, int32, int64, uint8, float16, float32, float64) # NOQA\n all_dtypes = (\n bool_, int8, int16, int32, int64, uint8, float16, float32, float64)\n\n from chainerx._core import * # NOQA\n from chainerx._core import _to_cupy # NOQA\n\n from builtins import bool, int, float # NOQA\n\n from chainerx import _device # NOQA\n\n from chainerx.creation.from_data import asanyarray # NOQA\n from chainerx.creation.from_data import fromfile # NOQA\n from chainerx.creation.from_data import fromfunction # NOQA\n from chainerx.creation.from_data import fromiter # NOQA\n from chainerx.creation.from_data import 
fromstring # NOQA\n from chainerx.creation.from_data import loadtxt # NOQA\n\n from chainerx.manipulation.shape import ravel # NOQA\n\n from chainerx.math.misc import clip # NOQA\n\n from chainerx import random # NOQA\n\n _global_context = _core.Context()\n _core.set_global_default_context(_global_context)\n\n # Implements ndarray methods in Python\n from chainerx import _ndarray\n _ndarray.populate()\n\n # Temporary workaround implementations that fall back to NumPy/CuPy's\n # respective functions.\n from chainerx import _fallback_workarounds\n _fallback_workarounds.populate()\n\n # Dynamically inject docstrings\n from chainerx import _docs\n _docs.set_docs()\n\n from chainerx import _cuda\n # Share memory pool with CuPy.\n if bool(int(os.getenv('CHAINERX_CUDA_CUPY_SHARE_ALLOCATOR', '0'))):\n _cuda.cupy_share_allocator()\nelse:\n class ndarray(object):\n\n \"\"\"Dummy class for type testing.\"\"\"\n\n def __init__(self, *args, **kwargs):\n raise RuntimeError('chainerx is not available.')\n\n\ndef is_available():\n return _available\n\n\nif _available and _core._is_debug():\n # Warn if the ChainerX core binary is built in debug mode\n warnings.warn(\n 'ChainerX core binary is built in debug mode.', stacklevel=2)\n", "path": "chainerx/__init__.py"}]}
| 997 | 148 |
gh_patches_debug_41016
|
rasdani/github-patches
|
git_diff
|
kubeflow__pipelines-3486
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Component] AutoML Tables component should show link as an artifact
/cc @jessiezcc
/cc @jingzhang36
/assign @Ark-kun
It will be helpful if components in
https://github.com/kubeflow/pipelines/tree/b89aabbce5d48fca10817c3ed3ecc2acf6c0066a/components/gcp/automl can show the related AutoML Tables URL as markdown artifacts.
e.g.
> We would like to be able to click on a link that would take us from the component’s page to an AutoML Tables models page
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `components/gcp/automl/create_model_for_tables/component.py`
Content:
```
1 # Copyright 2019 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import NamedTuple
16
17
18 def automl_create_model_for_tables(
19 gcp_project_id: str,
20 gcp_region: str,
21 display_name: str,
22 dataset_id: str,
23 target_column_path: str = None,
24 input_feature_column_paths: list = None,
25 optimization_objective: str = 'MAXIMIZE_AU_PRC',
26 train_budget_milli_node_hours: int = 1000,
27 ) -> NamedTuple('Outputs', [('model_path', str), ('model_id', str)]):
28 import sys
29 import subprocess
30 subprocess.run([sys.executable, '-m', 'pip', 'install', 'google-cloud-automl==0.4.0', '--quiet', '--no-warn-script-location'], env={'PIP_DISABLE_PIP_VERSION_CHECK': '1'}, check=True)
31
32 from google.cloud import automl
33 client = automl.AutoMlClient()
34
35 location_path = client.location_path(gcp_project_id, gcp_region)
36 model_dict = {
37 'display_name': display_name,
38 'dataset_id': dataset_id,
39 'tables_model_metadata': {
40 'target_column_spec': automl.types.ColumnSpec(name=target_column_path),
41 'input_feature_column_specs': [automl.types.ColumnSpec(name=path) for path in input_feature_column_paths] if input_feature_column_paths else None,
42 'optimization_objective': optimization_objective,
43 'train_budget_milli_node_hours': train_budget_milli_node_hours,
44 },
45 }
46
47 create_model_response = client.create_model(location_path, model_dict)
48 print('Create model operation: {}'.format(create_model_response.operation))
49 result = create_model_response.result()
50 print(result)
51 model_name = result.name
52 model_id = model_name.rsplit('/', 1)[-1]
53 return (model_name, model_id)
54
55
56 if __name__ == '__main__':
57 import kfp
58 kfp.components.func_to_container_op(automl_create_model_for_tables, output_component_file='component.yaml', base_image='python:3.7')
59
```
Path: `components/gcp/automl/create_dataset_for_tables/component.py`
Content:
```
1 # Copyright 2019 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import NamedTuple
16
17
18 def automl_create_dataset_for_tables(
19 gcp_project_id: str,
20 gcp_region: str,
21 display_name: str,
22 description: str = None,
23 tables_dataset_metadata: dict = {},
24 retry=None, #=google.api_core.gapic_v1.method.DEFAULT,
25 timeout: float = None, #=google.api_core.gapic_v1.method.DEFAULT,
26 metadata: dict = None,
27 ) -> NamedTuple('Outputs', [('dataset_path', str), ('create_time', str), ('dataset_id', str)]):
28 '''automl_create_dataset_for_tables creates an empty Dataset for AutoML tables
29 '''
30 import sys
31 import subprocess
32 subprocess.run([sys.executable, '-m', 'pip', 'install', 'google-cloud-automl==0.4.0', '--quiet', '--no-warn-script-location'], env={'PIP_DISABLE_PIP_VERSION_CHECK': '1'}, check=True)
33
34 import google
35 from google.cloud import automl
36 client = automl.AutoMlClient()
37
38 location_path = client.location_path(gcp_project_id, gcp_region)
39 dataset_dict = {
40 'display_name': display_name,
41 'description': description,
42 'tables_dataset_metadata': tables_dataset_metadata,
43 }
44 dataset = client.create_dataset(
45 location_path,
46 dataset_dict,
47 retry or google.api_core.gapic_v1.method.DEFAULT,
48 timeout or google.api_core.gapic_v1.method.DEFAULT,
49 metadata,
50 )
51 print(dataset)
52 dataset_id = dataset.name.rsplit('/', 1)[-1]
53 return (dataset.name, dataset.create_time, dataset_id)
54
55
56 if __name__ == '__main__':
57 import kfp
58 kfp.components.func_to_container_op(automl_create_dataset_for_tables, output_component_file='component.yaml', base_image='python:3.7')
59
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/components/gcp/automl/create_dataset_for_tables/component.py b/components/gcp/automl/create_dataset_for_tables/component.py
--- a/components/gcp/automl/create_dataset_for_tables/component.py
+++ b/components/gcp/automl/create_dataset_for_tables/component.py
@@ -24,13 +24,9 @@
retry=None, #=google.api_core.gapic_v1.method.DEFAULT,
timeout: float = None, #=google.api_core.gapic_v1.method.DEFAULT,
metadata: dict = None,
-) -> NamedTuple('Outputs', [('dataset_path', str), ('create_time', str), ('dataset_id', str)]):
+) -> NamedTuple('Outputs', [('dataset_path', str), ('create_time', str), ('dataset_id', str), ('dataset_url', 'URI')]):
'''automl_create_dataset_for_tables creates an empty Dataset for AutoML tables
'''
- import sys
- import subprocess
- subprocess.run([sys.executable, '-m', 'pip', 'install', 'google-cloud-automl==0.4.0', '--quiet', '--no-warn-script-location'], env={'PIP_DISABLE_PIP_VERSION_CHECK': '1'}, check=True)
-
import google
from google.cloud import automl
client = automl.AutoMlClient()
@@ -50,9 +46,19 @@
)
print(dataset)
dataset_id = dataset.name.rsplit('/', 1)[-1]
- return (dataset.name, dataset.create_time, dataset_id)
+ dataset_url = 'https://console.cloud.google.com/automl-tables/locations/{region}/datasets/{dataset_id}/schemav2?project={project_id}'.format(
+ project_id=gcp_project_id,
+ region=gcp_region,
+ dataset_id=dataset_id,
+ )
+ return (dataset.name, dataset.create_time, dataset_id, dataset_url)
if __name__ == '__main__':
import kfp
- kfp.components.func_to_container_op(automl_create_dataset_for_tables, output_component_file='component.yaml', base_image='python:3.7')
+ kfp.components.func_to_container_op(
+ automl_create_dataset_for_tables,
+ output_component_file='component.yaml',
+ base_image='python:3.7',
+ packages_to_install=['google-cloud-automl==0.4.0']
+ )
diff --git a/components/gcp/automl/create_model_for_tables/component.py b/components/gcp/automl/create_model_for_tables/component.py
--- a/components/gcp/automl/create_model_for_tables/component.py
+++ b/components/gcp/automl/create_model_for_tables/component.py
@@ -24,11 +24,7 @@
input_feature_column_paths: list = None,
optimization_objective: str = 'MAXIMIZE_AU_PRC',
train_budget_milli_node_hours: int = 1000,
-) -> NamedTuple('Outputs', [('model_path', str), ('model_id', str)]):
- import sys
- import subprocess
- subprocess.run([sys.executable, '-m', 'pip', 'install', 'google-cloud-automl==0.4.0', '--quiet', '--no-warn-script-location'], env={'PIP_DISABLE_PIP_VERSION_CHECK': '1'}, check=True)
-
+) -> NamedTuple('Outputs', [('model_path', str), ('model_id', str), ('model_page_url', 'URI'),]):
from google.cloud import automl
client = automl.AutoMlClient()
@@ -50,9 +46,21 @@
print(result)
model_name = result.name
model_id = model_name.rsplit('/', 1)[-1]
- return (model_name, model_id)
+ model_url = 'https://console.cloud.google.com/automl-tables/locations/{region}/datasets/{dataset_id};modelId={model_id};task=basic/train?project={project_id}'.format(
+ project_id=gcp_project_id,
+ region=gcp_region,
+ dataset_id=dataset_id,
+ model_id=model_id,
+ )
+
+ return (model_name, model_id, model_url)
if __name__ == '__main__':
import kfp
- kfp.components.func_to_container_op(automl_create_model_for_tables, output_component_file='component.yaml', base_image='python:3.7')
+ kfp.components.func_to_container_op(
+ automl_create_model_for_tables,
+ output_component_file='component.yaml',
+ base_image='python:3.7',
+ packages_to_install=['google-cloud-automl==0.4.0']
+ )
|
{"golden_diff": "diff --git a/components/gcp/automl/create_dataset_for_tables/component.py b/components/gcp/automl/create_dataset_for_tables/component.py\n--- a/components/gcp/automl/create_dataset_for_tables/component.py\n+++ b/components/gcp/automl/create_dataset_for_tables/component.py\n@@ -24,13 +24,9 @@\n retry=None, #=google.api_core.gapic_v1.method.DEFAULT,\n timeout: float = None, #=google.api_core.gapic_v1.method.DEFAULT,\n metadata: dict = None,\n-) -> NamedTuple('Outputs', [('dataset_path', str), ('create_time', str), ('dataset_id', str)]):\n+) -> NamedTuple('Outputs', [('dataset_path', str), ('create_time', str), ('dataset_id', str), ('dataset_url', 'URI')]):\n '''automl_create_dataset_for_tables creates an empty Dataset for AutoML tables\n '''\n- import sys\n- import subprocess\n- subprocess.run([sys.executable, '-m', 'pip', 'install', 'google-cloud-automl==0.4.0', '--quiet', '--no-warn-script-location'], env={'PIP_DISABLE_PIP_VERSION_CHECK': '1'}, check=True)\n-\n import google\n from google.cloud import automl\n client = automl.AutoMlClient()\n@@ -50,9 +46,19 @@\n )\n print(dataset)\n dataset_id = dataset.name.rsplit('/', 1)[-1]\n- return (dataset.name, dataset.create_time, dataset_id)\n+ dataset_url = 'https://console.cloud.google.com/automl-tables/locations/{region}/datasets/{dataset_id}/schemav2?project={project_id}'.format(\n+ project_id=gcp_project_id,\n+ region=gcp_region,\n+ dataset_id=dataset_id,\n+ )\n+ return (dataset.name, dataset.create_time, dataset_id, dataset_url)\n \n \n if __name__ == '__main__':\n import kfp\n- kfp.components.func_to_container_op(automl_create_dataset_for_tables, output_component_file='component.yaml', base_image='python:3.7')\n+ kfp.components.func_to_container_op(\n+ automl_create_dataset_for_tables,\n+ output_component_file='component.yaml',\n+ base_image='python:3.7',\n+ packages_to_install=['google-cloud-automl==0.4.0']\n+ )\ndiff --git a/components/gcp/automl/create_model_for_tables/component.py b/components/gcp/automl/create_model_for_tables/component.py\n--- a/components/gcp/automl/create_model_for_tables/component.py\n+++ b/components/gcp/automl/create_model_for_tables/component.py\n@@ -24,11 +24,7 @@\n input_feature_column_paths: list = None,\n optimization_objective: str = 'MAXIMIZE_AU_PRC',\n train_budget_milli_node_hours: int = 1000,\n-) -> NamedTuple('Outputs', [('model_path', str), ('model_id', str)]):\n- import sys\n- import subprocess\n- subprocess.run([sys.executable, '-m', 'pip', 'install', 'google-cloud-automl==0.4.0', '--quiet', '--no-warn-script-location'], env={'PIP_DISABLE_PIP_VERSION_CHECK': '1'}, check=True)\n-\n+) -> NamedTuple('Outputs', [('model_path', str), ('model_id', str), ('model_page_url', 'URI'),]):\n from google.cloud import automl\n client = automl.AutoMlClient()\n \n@@ -50,9 +46,21 @@\n print(result)\n model_name = result.name\n model_id = model_name.rsplit('/', 1)[-1]\n- return (model_name, model_id)\n+ model_url = 'https://console.cloud.google.com/automl-tables/locations/{region}/datasets/{dataset_id};modelId={model_id};task=basic/train?project={project_id}'.format(\n+ project_id=gcp_project_id,\n+ region=gcp_region,\n+ dataset_id=dataset_id,\n+ model_id=model_id,\n+ )\n+\n+ return (model_name, model_id, model_url)\n \n \n if __name__ == '__main__':\n import kfp\n- kfp.components.func_to_container_op(automl_create_model_for_tables, output_component_file='component.yaml', base_image='python:3.7')\n+ kfp.components.func_to_container_op(\n+ automl_create_model_for_tables,\n+ 
output_component_file='component.yaml',\n+ base_image='python:3.7',\n+ packages_to_install=['google-cloud-automl==0.4.0']\n+ )\n", "issue": "[Component] AutoML Tables component should show link as an artifact\n/cc @jessiezcc \r\n/cc @jingzhang36 \r\n/assign @Ark-kun \r\n\r\nIt will be helpful if components in \r\nhttps://github.com/kubeflow/pipelines/tree/b89aabbce5d48fca10817c3ed3ecc2acf6c0066a/components/gcp/automl can show related AutoML tables url as markdown artifacts.\r\n\r\ne.g.\r\n> We would like to be able to click on a link that would take us from the component\u2019s page to an AutoML Tables models page\n", "before_files": [{"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import NamedTuple\n\n\ndef automl_create_model_for_tables(\n gcp_project_id: str,\n gcp_region: str,\n display_name: str,\n dataset_id: str,\n target_column_path: str = None,\n input_feature_column_paths: list = None,\n optimization_objective: str = 'MAXIMIZE_AU_PRC',\n train_budget_milli_node_hours: int = 1000,\n) -> NamedTuple('Outputs', [('model_path', str), ('model_id', str)]):\n import sys\n import subprocess\n subprocess.run([sys.executable, '-m', 'pip', 'install', 'google-cloud-automl==0.4.0', '--quiet', '--no-warn-script-location'], env={'PIP_DISABLE_PIP_VERSION_CHECK': '1'}, check=True)\n\n from google.cloud import automl\n client = automl.AutoMlClient()\n\n location_path = client.location_path(gcp_project_id, gcp_region)\n model_dict = {\n 'display_name': display_name,\n 'dataset_id': dataset_id,\n 'tables_model_metadata': {\n 'target_column_spec': automl.types.ColumnSpec(name=target_column_path),\n 'input_feature_column_specs': [automl.types.ColumnSpec(name=path) for path in input_feature_column_paths] if input_feature_column_paths else None,\n 'optimization_objective': optimization_objective,\n 'train_budget_milli_node_hours': train_budget_milli_node_hours,\n },\n }\n\n create_model_response = client.create_model(location_path, model_dict)\n print('Create model operation: {}'.format(create_model_response.operation))\n result = create_model_response.result()\n print(result)\n model_name = result.name\n model_id = model_name.rsplit('/', 1)[-1]\n return (model_name, model_id)\n\n\nif __name__ == '__main__':\n import kfp\n kfp.components.func_to_container_op(automl_create_model_for_tables, output_component_file='component.yaml', base_image='python:3.7')\n", "path": "components/gcp/automl/create_model_for_tables/component.py"}, {"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific 
language governing permissions and\n# limitations under the License.\n\nfrom typing import NamedTuple\n\n\ndef automl_create_dataset_for_tables(\n gcp_project_id: str,\n gcp_region: str,\n display_name: str,\n description: str = None,\n tables_dataset_metadata: dict = {},\n retry=None, #=google.api_core.gapic_v1.method.DEFAULT,\n timeout: float = None, #=google.api_core.gapic_v1.method.DEFAULT,\n metadata: dict = None,\n) -> NamedTuple('Outputs', [('dataset_path', str), ('create_time', str), ('dataset_id', str)]):\n '''automl_create_dataset_for_tables creates an empty Dataset for AutoML tables\n '''\n import sys\n import subprocess\n subprocess.run([sys.executable, '-m', 'pip', 'install', 'google-cloud-automl==0.4.0', '--quiet', '--no-warn-script-location'], env={'PIP_DISABLE_PIP_VERSION_CHECK': '1'}, check=True)\n\n import google\n from google.cloud import automl\n client = automl.AutoMlClient()\n\n location_path = client.location_path(gcp_project_id, gcp_region)\n dataset_dict = {\n 'display_name': display_name,\n 'description': description,\n 'tables_dataset_metadata': tables_dataset_metadata,\n }\n dataset = client.create_dataset(\n location_path,\n dataset_dict,\n retry or google.api_core.gapic_v1.method.DEFAULT,\n timeout or google.api_core.gapic_v1.method.DEFAULT,\n metadata,\n )\n print(dataset)\n dataset_id = dataset.name.rsplit('/', 1)[-1]\n return (dataset.name, dataset.create_time, dataset_id)\n\n\nif __name__ == '__main__':\n import kfp\n kfp.components.func_to_container_op(automl_create_dataset_for_tables, output_component_file='component.yaml', base_image='python:3.7')\n", "path": "components/gcp/automl/create_dataset_for_tables/component.py"}], "after_files": [{"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import NamedTuple\n\n\ndef automl_create_model_for_tables(\n gcp_project_id: str,\n gcp_region: str,\n display_name: str,\n dataset_id: str,\n target_column_path: str = None,\n input_feature_column_paths: list = None,\n optimization_objective: str = 'MAXIMIZE_AU_PRC',\n train_budget_milli_node_hours: int = 1000,\n) -> NamedTuple('Outputs', [('model_path', str), ('model_id', str), ('model_page_url', 'URI'),]):\n from google.cloud import automl\n client = automl.AutoMlClient()\n\n location_path = client.location_path(gcp_project_id, gcp_region)\n model_dict = {\n 'display_name': display_name,\n 'dataset_id': dataset_id,\n 'tables_model_metadata': {\n 'target_column_spec': automl.types.ColumnSpec(name=target_column_path),\n 'input_feature_column_specs': [automl.types.ColumnSpec(name=path) for path in input_feature_column_paths] if input_feature_column_paths else None,\n 'optimization_objective': optimization_objective,\n 'train_budget_milli_node_hours': train_budget_milli_node_hours,\n },\n }\n\n create_model_response = client.create_model(location_path, model_dict)\n print('Create model operation: {}'.format(create_model_response.operation))\n result = create_model_response.result()\n print(result)\n model_name = 
result.name\n model_id = model_name.rsplit('/', 1)[-1]\n model_url = 'https://console.cloud.google.com/automl-tables/locations/{region}/datasets/{dataset_id};modelId={model_id};task=basic/train?project={project_id}'.format(\n project_id=gcp_project_id,\n region=gcp_region,\n dataset_id=dataset_id,\n model_id=model_id,\n )\n\n return (model_name, model_id, model_url)\n\n\nif __name__ == '__main__':\n import kfp\n kfp.components.func_to_container_op(\n automl_create_model_for_tables,\n output_component_file='component.yaml',\n base_image='python:3.7',\n packages_to_install=['google-cloud-automl==0.4.0']\n )\n", "path": "components/gcp/automl/create_model_for_tables/component.py"}, {"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import NamedTuple\n\n\ndef automl_create_dataset_for_tables(\n gcp_project_id: str,\n gcp_region: str,\n display_name: str,\n description: str = None,\n tables_dataset_metadata: dict = {},\n retry=None, #=google.api_core.gapic_v1.method.DEFAULT,\n timeout: float = None, #=google.api_core.gapic_v1.method.DEFAULT,\n metadata: dict = None,\n) -> NamedTuple('Outputs', [('dataset_path', str), ('create_time', str), ('dataset_id', str), ('dataset_url', 'URI')]):\n '''automl_create_dataset_for_tables creates an empty Dataset for AutoML tables\n '''\n import google\n from google.cloud import automl\n client = automl.AutoMlClient()\n\n location_path = client.location_path(gcp_project_id, gcp_region)\n dataset_dict = {\n 'display_name': display_name,\n 'description': description,\n 'tables_dataset_metadata': tables_dataset_metadata,\n }\n dataset = client.create_dataset(\n location_path,\n dataset_dict,\n retry or google.api_core.gapic_v1.method.DEFAULT,\n timeout or google.api_core.gapic_v1.method.DEFAULT,\n metadata,\n )\n print(dataset)\n dataset_id = dataset.name.rsplit('/', 1)[-1]\n dataset_url = 'https://console.cloud.google.com/automl-tables/locations/{region}/datasets/{dataset_id}/schemav2?project={project_id}'.format(\n project_id=gcp_project_id,\n region=gcp_region,\n dataset_id=dataset_id,\n )\n return (dataset.name, dataset.create_time, dataset_id, dataset_url)\n\n\nif __name__ == '__main__':\n import kfp\n kfp.components.func_to_container_op(\n automl_create_dataset_for_tables,\n output_component_file='component.yaml',\n base_image='python:3.7',\n packages_to_install=['google-cloud-automl==0.4.0']\n )\n", "path": "components/gcp/automl/create_dataset_for_tables/component.py"}]}
| 1,742 | 1,017 |
gh_patches_debug_12879
|
rasdani/github-patches
|
git_diff
|
apluslms__a-plus-771
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
HTML Plugin admin interface does not show relevant information
Sometimes we have to copy plugins from previous course instances to the current instance. However, it is difficult to know which plugin belongs to the course we want.
**Current view**

**Proposed view**

HTML Plugin admin interface does not show relevant information
Sometimes we have to copy plugins from previous course instances to the current instance. However, it is difficult to know which plugin belongs to the course we want.
**Current view**

**Proposed view**

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/admin.py`
Content:
```
1 from django.contrib import admin
2
3 from .models import (
4 BaseTab,
5 HTMLTab,
6 ExternalEmbeddedTab,
7 ExternalIFrameTab,
8 BasePlugin,
9 RSSPlugin,
10 HTMLPlugin,
11 ExternalIFramePlugin,
12 )
13
14
15 admin.site.register(BaseTab)
16 admin.site.register(HTMLTab)
17 admin.site.register(ExternalEmbeddedTab)
18 admin.site.register(ExternalIFrameTab)
19 admin.site.register(BasePlugin)
20 admin.site.register(RSSPlugin)
21 admin.site.register(HTMLPlugin)
22 admin.site.register(ExternalIFramePlugin)
23
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/apps/admin.py b/apps/admin.py
--- a/apps/admin.py
+++ b/apps/admin.py
@@ -11,6 +11,12 @@
ExternalIFramePlugin,
)
+class HTMLPluginAdmin(admin.ModelAdmin):
+ list_display_links = ["title"]
+ list_display = ["title", "course_instance_id", "container_type", "views"]
+
+ def course_instance_id(self, obj):
+ return obj.container_pk
admin.site.register(BaseTab)
admin.site.register(HTMLTab)
@@ -18,5 +24,5 @@
admin.site.register(ExternalIFrameTab)
admin.site.register(BasePlugin)
admin.site.register(RSSPlugin)
-admin.site.register(HTMLPlugin)
+admin.site.register(HTMLPlugin, HTMLPluginAdmin)
admin.site.register(ExternalIFramePlugin)
|
{"golden_diff": "diff --git a/apps/admin.py b/apps/admin.py\n--- a/apps/admin.py\n+++ b/apps/admin.py\n@@ -11,6 +11,12 @@\n ExternalIFramePlugin,\n )\n \n+class HTMLPluginAdmin(admin.ModelAdmin):\n+ list_display_links = [\"title\"]\n+ list_display = [\"title\", \"course_instance_id\", \"container_type\", \"views\"]\n+\n+ def course_instance_id(self, obj):\n+ return obj.container_pk\n \n admin.site.register(BaseTab)\n admin.site.register(HTMLTab)\n@@ -18,5 +24,5 @@\n admin.site.register(ExternalIFrameTab)\n admin.site.register(BasePlugin)\n admin.site.register(RSSPlugin)\n-admin.site.register(HTMLPlugin)\n+admin.site.register(HTMLPlugin, HTMLPluginAdmin)\n admin.site.register(ExternalIFramePlugin)\n", "issue": "HTML Plugin admin interface does not show relevant information\nSome times we have to copy plugins from previous course instances to the current instance. However, it is difficult to know which plugin belongs to the course we want.\r\n\r\n**Current view**\r\n\r\n\r\n**Proposed view**\r\n\r\n\nHTML Plugin admin interface does not show relevant information\nSome times we have to copy plugins from previous course instances to the current instance. However, it is difficult to know which plugin belongs to the course we want.\r\n\r\n**Current view**\r\n\r\n\r\n**Proposed view**\r\n\r\n\n", "before_files": [{"content": "from django.contrib import admin\n\nfrom .models import (\n BaseTab,\n HTMLTab,\n ExternalEmbeddedTab,\n ExternalIFrameTab,\n BasePlugin,\n RSSPlugin,\n HTMLPlugin,\n ExternalIFramePlugin,\n)\n\n\nadmin.site.register(BaseTab)\nadmin.site.register(HTMLTab)\nadmin.site.register(ExternalEmbeddedTab)\nadmin.site.register(ExternalIFrameTab)\nadmin.site.register(BasePlugin)\nadmin.site.register(RSSPlugin)\nadmin.site.register(HTMLPlugin)\nadmin.site.register(ExternalIFramePlugin)\n", "path": "apps/admin.py"}], "after_files": [{"content": "from django.contrib import admin\n\nfrom .models import (\n BaseTab,\n HTMLTab,\n ExternalEmbeddedTab,\n ExternalIFrameTab,\n BasePlugin,\n RSSPlugin,\n HTMLPlugin,\n ExternalIFramePlugin,\n)\n\nclass HTMLPluginAdmin(admin.ModelAdmin):\n list_display_links = [\"title\"]\n list_display = [\"title\", \"course_instance_id\", \"container_type\", \"views\"]\n\n def course_instance_id(self, obj):\n return obj.container_pk\n\nadmin.site.register(BaseTab)\nadmin.site.register(HTMLTab)\nadmin.site.register(ExternalEmbeddedTab)\nadmin.site.register(ExternalIFrameTab)\nadmin.site.register(BasePlugin)\nadmin.site.register(RSSPlugin)\nadmin.site.register(HTMLPlugin, HTMLPluginAdmin)\nadmin.site.register(ExternalIFramePlugin)\n", "path": "apps/admin.py"}]}
| 762 | 175 |
gh_patches_debug_21253
|
rasdani/github-patches
|
git_diff
|
streamlit__streamlit-7061
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Some labels in Altair charts are hard to see in dark mode
### Summary
Streamlit has an awesome feature where it changes the label colors of Altair charts when you switch to dark mode. Sweet!
However, it seems that some labels were omitted and thus remain almost illegibly dark in dark mode.
### Steps to reproduce
Run this code snippet [taken from the Altair documentation](https://altair-viz.github.io/gallery/grouped_bar_chart.html):
```python
import altair as alt
import streamlit as st
from vega_datasets import data
st.subheader("barley example")
source = data.barley()
st.write(source)
st.write(
alt.Chart(source)
.mark_bar()
.encode(x="year:O", y="sum(yield):Q", color="year:N", column="site:N")
)
```
### Expected vs actual behavior
In light mode it displays properly:

but in dark mode some of the labels have remained black and are almost impossible to read:

**Note:** I have marked the errors in red.
### Is this a regression?
Not sure.
### Debug info
- Streamlit version: `Streamlit, version 0.82.0`
- Python version: `Python 3.8.5`
- PipEnv: `pipenv, version 2020.11.15`
- OS version: `Ubuntu 20.04.2 LTS`
- Browser version: `Version 91.0.4472.77 (Official Build) (x86_64)`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `e2e/scripts/st_arrow_altair_chart.py`
Content:
```
1 # Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. (2022)
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import altair as alt
16 import numpy as np
17 import pandas as pd
18
19 import streamlit as st
20
21 np.random.seed(0)
22
23 data = np.random.randn(200, 3)
24 df = pd.DataFrame(data, columns=["a", "b", "c"])
25 chart = alt.Chart(df).mark_circle().encode(x="a", y="b", size="c", color="c")
26 st._arrow_altair_chart(chart, theme=None)
27
28 st.write("Show default vega lite theme:")
29 st._arrow_altair_chart(chart, theme=None)
30
31 st.write("Show streamlit theme:")
32 st._arrow_altair_chart(chart, theme="streamlit")
33
34 st.write("Overwrite theme config:")
35 chart = (
36 alt.Chart(df, usermeta={"embedOptions": {"theme": None}})
37 .mark_circle()
38 .encode(x="a", y="b", size="c", color="c")
39 )
40 st._arrow_altair_chart(chart, theme="streamlit")
41
42 data = pd.DataFrame(
43 {
44 "a": ["A", "B", "C", "D", "E", "F", "G", "H", "I"],
45 "b": [28, 55, 43, 91, 81, 53, 19, 87, 52],
46 }
47 )
48
49 chart = alt.Chart(data).mark_bar().encode(x="a", y="b")
50
51 st.write("Bar chart with default theme:")
52 st._arrow_altair_chart(chart)
53
54 st.write("Bar chart with streamlit theme:")
55 st._arrow_altair_chart(chart, theme="streamlit")
56
57 st.write("Bar chart with overwritten theme props:")
58 st._arrow_altair_chart(chart.configure_mark(color="black"), theme="streamlit")
59
60 # mark_arc was added in 4.2, but we have to support altair 4.0-4.1, so we
61 # have to skip this part of the test when testing min versions.
62 major, minor, patch = alt.__version__.split(".")
63 if not (major == "4" and minor < "2"):
64
65 source = pd.DataFrame(
66 {"category": [1, 2, 3, 4, 5, 6], "value": [4, 6, 10, 3, 7, 8]}
67 )
68
69 chart = (
70 alt.Chart(source)
71 .mark_arc(innerRadius=50)
72 .encode(
73 theta=alt.Theta(field="value", type="quantitative"),
74 color=alt.Color(field="category", type="nominal"),
75 )
76 )
77
78 st.write("Pie Chart with more than 4 Legend items")
79 st._arrow_altair_chart(chart, theme="streamlit")
80
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/e2e/scripts/st_arrow_altair_chart.py b/e2e/scripts/st_arrow_altair_chart.py
--- a/e2e/scripts/st_arrow_altair_chart.py
+++ b/e2e/scripts/st_arrow_altair_chart.py
@@ -48,12 +48,6 @@
chart = alt.Chart(data).mark_bar().encode(x="a", y="b")
-st.write("Bar chart with default theme:")
-st._arrow_altair_chart(chart)
-
-st.write("Bar chart with streamlit theme:")
-st._arrow_altair_chart(chart, theme="streamlit")
-
st.write("Bar chart with overwritten theme props:")
st._arrow_altair_chart(chart.configure_mark(color="black"), theme="streamlit")
@@ -77,3 +71,20 @@
st.write("Pie Chart with more than 4 Legend items")
st._arrow_altair_chart(chart, theme="streamlit")
+
+# taken from vega_datasets barley example
+barley = alt.UrlData(
+ "https://cdn.jsdelivr.net/npm/[email protected]/data/barley.json"
+)
+
+barley_chart = (
+ alt.Chart(barley)
+ .mark_bar()
+ .encode(x="year:O", y="sum(yield):Q", color="year:N", column="site:N")
+)
+
+st.write("Grouped Bar Chart with default theme:")
+st.altair_chart(barley_chart, theme=None)
+
+st.write("Grouped Bar Chart with streamlit theme:")
+st.altair_chart(barley_chart, theme="streamlit")
|
{"golden_diff": "diff --git a/e2e/scripts/st_arrow_altair_chart.py b/e2e/scripts/st_arrow_altair_chart.py\n--- a/e2e/scripts/st_arrow_altair_chart.py\n+++ b/e2e/scripts/st_arrow_altair_chart.py\n@@ -48,12 +48,6 @@\n \n chart = alt.Chart(data).mark_bar().encode(x=\"a\", y=\"b\")\n \n-st.write(\"Bar chart with default theme:\")\n-st._arrow_altair_chart(chart)\n-\n-st.write(\"Bar chart with streamlit theme:\")\n-st._arrow_altair_chart(chart, theme=\"streamlit\")\n-\n st.write(\"Bar chart with overwritten theme props:\")\n st._arrow_altair_chart(chart.configure_mark(color=\"black\"), theme=\"streamlit\")\n \n@@ -77,3 +71,20 @@\n \n st.write(\"Pie Chart with more than 4 Legend items\")\n st._arrow_altair_chart(chart, theme=\"streamlit\")\n+\n+# taken from vega_datasets barley example\n+barley = alt.UrlData(\n+ \"https://cdn.jsdelivr.net/npm/[email protected]/data/barley.json\"\n+)\n+\n+barley_chart = (\n+ alt.Chart(barley)\n+ .mark_bar()\n+ .encode(x=\"year:O\", y=\"sum(yield):Q\", color=\"year:N\", column=\"site:N\")\n+)\n+\n+st.write(\"Grouped Bar Chart with default theme:\")\n+st.altair_chart(barley_chart, theme=None)\n+\n+st.write(\"Grouped Bar Chart with streamlit theme:\")\n+st.altair_chart(barley_chart, theme=\"streamlit\")\n", "issue": "Some labels in Altair charts are hard to see in dark mode\n### Summary\r\n\r\nStreamlit has an awesome feature where it changes the label colors of Altair charts when you switch to dark mode. Sweet!\r\n\r\nHowever, it seems that some labels were omitted and thus remain almost illegibly dark in dark mode.\r\n\r\n### Steps to reproduce\r\n\r\nRun this code snippet [taken from the Altair documentation](https://altair-viz.github.io/gallery/grouped_bar_chart.html):\r\n\r\n```python\r\nfrom vega_datasets import data\r\n\r\nst.subheader(\"barley example\")\r\nsource = data.barley()\r\nst.write(source)\r\nst.write(\r\n alt.Chart(source)\r\n .mark_bar()\r\n .encode(x=\"year:O\", y=\"sum(yield):Q\", color=\"year:N\", column=\"site:N\")\r\n)\r\n```\r\n\r\n### Expected vs actual behavior\r\n\r\nIn light mode it displays properly:\r\n\r\n\r\n\r\nbut in dark mode some of the labels have remained black and are almost impossible to read:\r\n\r\n\r\n\r\n**Note:** I have marked the errors in red.\r\n\r\n### Is this a regression?\r\n\r\nNot sure.\r\n\r\n### Debug info\r\n\r\n- Streamlit version: `Streamlit, version 0.82.0`\r\n- Python version: `Python 3.8.5`\r\n- PipEnv: `pipenv, version 2020.11.15`\r\n- OS version: `Ubuntu 20.04.2 LTS`\r\n- Browser version: `Version 91.0.4472.77 (Official Build) (x86_64)`\r\n\n", "before_files": [{"content": "# Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. 
(2022)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport altair as alt\nimport numpy as np\nimport pandas as pd\n\nimport streamlit as st\n\nnp.random.seed(0)\n\ndata = np.random.randn(200, 3)\ndf = pd.DataFrame(data, columns=[\"a\", \"b\", \"c\"])\nchart = alt.Chart(df).mark_circle().encode(x=\"a\", y=\"b\", size=\"c\", color=\"c\")\nst._arrow_altair_chart(chart, theme=None)\n\nst.write(\"Show default vega lite theme:\")\nst._arrow_altair_chart(chart, theme=None)\n\nst.write(\"Show streamlit theme:\")\nst._arrow_altair_chart(chart, theme=\"streamlit\")\n\nst.write(\"Overwrite theme config:\")\nchart = (\n alt.Chart(df, usermeta={\"embedOptions\": {\"theme\": None}})\n .mark_circle()\n .encode(x=\"a\", y=\"b\", size=\"c\", color=\"c\")\n)\nst._arrow_altair_chart(chart, theme=\"streamlit\")\n\ndata = pd.DataFrame(\n {\n \"a\": [\"A\", \"B\", \"C\", \"D\", \"E\", \"F\", \"G\", \"H\", \"I\"],\n \"b\": [28, 55, 43, 91, 81, 53, 19, 87, 52],\n }\n)\n\nchart = alt.Chart(data).mark_bar().encode(x=\"a\", y=\"b\")\n\nst.write(\"Bar chart with default theme:\")\nst._arrow_altair_chart(chart)\n\nst.write(\"Bar chart with streamlit theme:\")\nst._arrow_altair_chart(chart, theme=\"streamlit\")\n\nst.write(\"Bar chart with overwritten theme props:\")\nst._arrow_altair_chart(chart.configure_mark(color=\"black\"), theme=\"streamlit\")\n\n# mark_arc was added in 4.2, but we have to support altair 4.0-4.1, so we\n# have to skip this part of the test when testing min versions.\nmajor, minor, patch = alt.__version__.split(\".\")\nif not (major == \"4\" and minor < \"2\"):\n\n source = pd.DataFrame(\n {\"category\": [1, 2, 3, 4, 5, 6], \"value\": [4, 6, 10, 3, 7, 8]}\n )\n\n chart = (\n alt.Chart(source)\n .mark_arc(innerRadius=50)\n .encode(\n theta=alt.Theta(field=\"value\", type=\"quantitative\"),\n color=alt.Color(field=\"category\", type=\"nominal\"),\n )\n )\n\n st.write(\"Pie Chart with more than 4 Legend items\")\n st._arrow_altair_chart(chart, theme=\"streamlit\")\n", "path": "e2e/scripts/st_arrow_altair_chart.py"}], "after_files": [{"content": "# Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. 
(2022)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport altair as alt\nimport numpy as np\nimport pandas as pd\n\nimport streamlit as st\n\nnp.random.seed(0)\n\ndata = np.random.randn(200, 3)\ndf = pd.DataFrame(data, columns=[\"a\", \"b\", \"c\"])\nchart = alt.Chart(df).mark_circle().encode(x=\"a\", y=\"b\", size=\"c\", color=\"c\")\nst._arrow_altair_chart(chart, theme=None)\n\nst.write(\"Show default vega lite theme:\")\nst._arrow_altair_chart(chart, theme=None)\n\nst.write(\"Show streamlit theme:\")\nst._arrow_altair_chart(chart, theme=\"streamlit\")\n\nst.write(\"Overwrite theme config:\")\nchart = (\n alt.Chart(df, usermeta={\"embedOptions\": {\"theme\": None}})\n .mark_circle()\n .encode(x=\"a\", y=\"b\", size=\"c\", color=\"c\")\n)\nst._arrow_altair_chart(chart, theme=\"streamlit\")\n\ndata = pd.DataFrame(\n {\n \"a\": [\"A\", \"B\", \"C\", \"D\", \"E\", \"F\", \"G\", \"H\", \"I\"],\n \"b\": [28, 55, 43, 91, 81, 53, 19, 87, 52],\n }\n)\n\nchart = alt.Chart(data).mark_bar().encode(x=\"a\", y=\"b\")\n\nst.write(\"Bar chart with overwritten theme props:\")\nst._arrow_altair_chart(chart.configure_mark(color=\"black\"), theme=\"streamlit\")\n\n# mark_arc was added in 4.2, but we have to support altair 4.0-4.1, so we\n# have to skip this part of the test when testing min versions.\nmajor, minor, patch = alt.__version__.split(\".\")\nif not (major == \"4\" and minor < \"2\"):\n\n source = pd.DataFrame(\n {\"category\": [1, 2, 3, 4, 5, 6], \"value\": [4, 6, 10, 3, 7, 8]}\n )\n\n chart = (\n alt.Chart(source)\n .mark_arc(innerRadius=50)\n .encode(\n theta=alt.Theta(field=\"value\", type=\"quantitative\"),\n color=alt.Color(field=\"category\", type=\"nominal\"),\n )\n )\n\n st.write(\"Pie Chart with more than 4 Legend items\")\n st._arrow_altair_chart(chart, theme=\"streamlit\")\n\n# taken from vega_datasets barley example\nbarley = alt.UrlData(\n \"https://cdn.jsdelivr.net/npm/[email protected]/data/barley.json\"\n)\n\nbarley_chart = (\n alt.Chart(barley)\n .mark_bar()\n .encode(x=\"year:O\", y=\"sum(yield):Q\", color=\"year:N\", column=\"site:N\")\n)\n\nst.write(\"Grouped Bar Chart with default theme:\")\nst.altair_chart(barley_chart, theme=None)\n\nst.write(\"Grouped Bar Chart with streamlit theme:\")\nst.altair_chart(barley_chart, theme=\"streamlit\")\n", "path": "e2e/scripts/st_arrow_altair_chart.py"}]}
| 1,606 | 347 |
gh_patches_debug_16987
|
rasdani/github-patches
|
git_diff
|
strawberry-graphql__strawberry-1233
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The export-schema command fails when trying to import local modules
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `strawberry/cli/commands/export_schema.py`
Content:
```
1 import click
2
3 from strawberry import Schema
4 from strawberry.printer import print_schema
5 from strawberry.utils.importer import import_module_symbol
6
7
8 @click.command(short_help="Exports the schema")
9 @click.argument("schema", type=str)
10 def export_schema(schema: str):
11 try:
12 schema_symbol = import_module_symbol(schema, default_symbol_name="schema")
13 except (ImportError, AttributeError) as exc:
14 message = str(exc)
15 raise click.BadArgumentUsage(message)
16 if not isinstance(schema_symbol, Schema):
17 message = "The `schema` must be an instance of strawberry.Schema"
18 raise click.BadArgumentUsage(message)
19 print(print_schema(schema_symbol))
20
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/strawberry/cli/commands/export_schema.py b/strawberry/cli/commands/export_schema.py
--- a/strawberry/cli/commands/export_schema.py
+++ b/strawberry/cli/commands/export_schema.py
@@ -1,3 +1,5 @@
+import sys
+
import click
from strawberry import Schema
@@ -7,7 +9,20 @@
@click.command(short_help="Exports the schema")
@click.argument("schema", type=str)
-def export_schema(schema: str):
[email protected](
+ "--app-dir",
+ default=".",
+ type=str,
+ show_default=True,
+ help=(
+ "Look for the module in the specified directory, by adding this to the "
+ "PYTHONPATH. Defaults to the current working directory. "
+ "Works the same as `--app-dir` in uvicorn."
+ ),
+)
+def export_schema(schema: str, app_dir):
+ sys.path.insert(0, app_dir)
+
try:
schema_symbol = import_module_symbol(schema, default_symbol_name="schema")
except (ImportError, AttributeError) as exc:
|
{"golden_diff": "diff --git a/strawberry/cli/commands/export_schema.py b/strawberry/cli/commands/export_schema.py\n--- a/strawberry/cli/commands/export_schema.py\n+++ b/strawberry/cli/commands/export_schema.py\n@@ -1,3 +1,5 @@\n+import sys\n+\n import click\n \n from strawberry import Schema\n@@ -7,7 +9,20 @@\n \n @click.command(short_help=\"Exports the schema\")\n @click.argument(\"schema\", type=str)\n-def export_schema(schema: str):\[email protected](\n+ \"--app-dir\",\n+ default=\".\",\n+ type=str,\n+ show_default=True,\n+ help=(\n+ \"Look for the module in the specified directory, by adding this to the \"\n+ \"PYTHONPATH. Defaults to the current working directory. \"\n+ \"Works the same as `--app-dir` in uvicorn.\"\n+ ),\n+)\n+def export_schema(schema: str, app_dir):\n+ sys.path.insert(0, app_dir)\n+\n try:\n schema_symbol = import_module_symbol(schema, default_symbol_name=\"schema\")\n except (ImportError, AttributeError) as exc:\n", "issue": "The export-schema command fails when trying to import local modules\n\n", "before_files": [{"content": "import click\n\nfrom strawberry import Schema\nfrom strawberry.printer import print_schema\nfrom strawberry.utils.importer import import_module_symbol\n\n\[email protected](short_help=\"Exports the schema\")\[email protected](\"schema\", type=str)\ndef export_schema(schema: str):\n try:\n schema_symbol = import_module_symbol(schema, default_symbol_name=\"schema\")\n except (ImportError, AttributeError) as exc:\n message = str(exc)\n raise click.BadArgumentUsage(message)\n if not isinstance(schema_symbol, Schema):\n message = \"The `schema` must be an instance of strawberry.Schema\"\n raise click.BadArgumentUsage(message)\n print(print_schema(schema_symbol))\n", "path": "strawberry/cli/commands/export_schema.py"}], "after_files": [{"content": "import sys\n\nimport click\n\nfrom strawberry import Schema\nfrom strawberry.printer import print_schema\nfrom strawberry.utils.importer import import_module_symbol\n\n\[email protected](short_help=\"Exports the schema\")\[email protected](\"schema\", type=str)\[email protected](\n \"--app-dir\",\n default=\".\",\n type=str,\n show_default=True,\n help=(\n \"Look for the module in the specified directory, by adding this to the \"\n \"PYTHONPATH. Defaults to the current working directory. \"\n \"Works the same as `--app-dir` in uvicorn.\"\n ),\n)\ndef export_schema(schema: str, app_dir):\n sys.path.insert(0, app_dir)\n\n try:\n schema_symbol = import_module_symbol(schema, default_symbol_name=\"schema\")\n except (ImportError, AttributeError) as exc:\n message = str(exc)\n raise click.BadArgumentUsage(message)\n if not isinstance(schema_symbol, Schema):\n message = \"The `schema` must be an instance of strawberry.Schema\"\n raise click.BadArgumentUsage(message)\n print(print_schema(schema_symbol))\n", "path": "strawberry/cli/commands/export_schema.py"}]}
| 444 | 249 |
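A short usage sketch of what the patched command enables; the project layout (a `./src/app/schema.py` module exposing a module-level `schema`) and the CLI invocation in the comment are illustrative assumptions, not taken from the strawberry docs.

```python
# Rough equivalent of `strawberry export-schema app.schema --app-dir ./src`
# after the patch: the app dir is put on sys.path before the schema import.
import sys

from strawberry.utils.importer import import_module_symbol

sys.path.insert(0, "./src")  # what --app-dir ./src now does inside export_schema
schema = import_module_symbol("app.schema", default_symbol_name="schema")
print(schema)  # the strawberry.Schema instance whose SDL the command prints
```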
gh_patches_debug_12871
|
rasdani/github-patches
|
git_diff
|
archlinux__archinstall-2069
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
locale language only en_US
archlinux-2023.09.01-x86_64.iso
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `archinstall/lib/locale/locale.py`
Content:
```
1 from typing import Iterator, List
2
3 from ..exceptions import ServiceException, SysCallError
4 from ..general import SysCommand
5 from ..output import error
6
7
8 def list_keyboard_languages() -> Iterator[str]:
9 for line in SysCommand("localectl --no-pager list-keymaps", environment_vars={'SYSTEMD_COLORS': '0'}):
10 yield line.decode('UTF-8').strip()
11
12
13 def list_locales() -> List[str]:
14 with open('/etc/locale.gen', 'r') as fp:
15 locales = []
16 # before the list of locales begins there's an empty line with a '#' in front
17 # so we'll collect the localels from bottom up and halt when we're donw
18 entries = fp.readlines()
19 entries.reverse()
20
21 for entry in entries:
22 text = entry.replace('#', '').strip()
23 if text == '':
24 break
25 locales.append(text)
26
27 locales.reverse()
28 return locales
29
30
31 def list_x11_keyboard_languages() -> Iterator[str]:
32 for line in SysCommand("localectl --no-pager list-x11-keymap-layouts", environment_vars={'SYSTEMD_COLORS': '0'}):
33 yield line.decode('UTF-8').strip()
34
35
36 def verify_keyboard_layout(layout :str) -> bool:
37 for language in list_keyboard_languages():
38 if layout.lower() == language.lower():
39 return True
40 return False
41
42
43 def verify_x11_keyboard_layout(layout :str) -> bool:
44 for language in list_x11_keyboard_languages():
45 if layout.lower() == language.lower():
46 return True
47 return False
48
49
50 def set_kb_layout(locale :str) -> bool:
51 if len(locale.strip()):
52 if not verify_keyboard_layout(locale):
53 error(f"Invalid keyboard locale specified: {locale}")
54 return False
55
56 try:
57 SysCommand(f'localectl set-keymap {locale}')
58 except SysCallError as err:
59 raise ServiceException(f"Unable to set locale '{locale}' for console: {err}")
60
61 return True
62
63 return False
64
65
66 def list_timezones() -> Iterator[str]:
67 for line in SysCommand("timedatectl --no-pager list-timezones", environment_vars={'SYSTEMD_COLORS': '0'}):
68 yield line.decode('UTF-8').strip()
69
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/archinstall/lib/locale/locale.py b/archinstall/lib/locale/locale.py
--- a/archinstall/lib/locale/locale.py
+++ b/archinstall/lib/locale/locale.py
@@ -11,21 +11,14 @@
def list_locales() -> List[str]:
- with open('/etc/locale.gen', 'r') as fp:
- locales = []
- # before the list of locales begins there's an empty line with a '#' in front
- # so we'll collect the localels from bottom up and halt when we're donw
- entries = fp.readlines()
- entries.reverse()
-
- for entry in entries:
- text = entry.replace('#', '').strip()
- if text == '':
- break
- locales.append(text)
-
- locales.reverse()
- return locales
+ locales = []
+
+ with open('/usr/share/i18n/SUPPORTED') as file:
+ for line in file:
+ if line != 'C.UTF-8 UTF-8\n':
+ locales.append(line.rstrip())
+
+ return locales
def list_x11_keyboard_languages() -> Iterator[str]:
|
{"golden_diff": "diff --git a/archinstall/lib/locale/locale.py b/archinstall/lib/locale/locale.py\n--- a/archinstall/lib/locale/locale.py\n+++ b/archinstall/lib/locale/locale.py\n@@ -11,21 +11,14 @@\n \n \n def list_locales() -> List[str]:\n-\twith open('/etc/locale.gen', 'r') as fp:\n-\t\tlocales = []\n-\t\t# before the list of locales begins there's an empty line with a '#' in front\n-\t\t# so we'll collect the localels from bottom up and halt when we're donw\n-\t\tentries = fp.readlines()\n-\t\tentries.reverse()\n-\n-\t\tfor entry in entries:\n-\t\t\ttext = entry.replace('#', '').strip()\n-\t\t\tif text == '':\n-\t\t\t\tbreak\n-\t\t\tlocales.append(text)\n-\n-\t\tlocales.reverse()\n-\t\treturn locales\n+\tlocales = []\n+\n+\twith open('/usr/share/i18n/SUPPORTED') as file:\n+\t\tfor line in file:\n+\t\t\tif line != 'C.UTF-8 UTF-8\\n':\n+\t\t\t\tlocales.append(line.rstrip())\n+\n+\treturn locales\n \n \n def list_x11_keyboard_languages() -> Iterator[str]:\n", "issue": "locale language only en_US\narchlinux-2023.09.01-x86_64.iso\n", "before_files": [{"content": "from typing import Iterator, List\n\nfrom ..exceptions import ServiceException, SysCallError\nfrom ..general import SysCommand\nfrom ..output import error\n\n\ndef list_keyboard_languages() -> Iterator[str]:\n\tfor line in SysCommand(\"localectl --no-pager list-keymaps\", environment_vars={'SYSTEMD_COLORS': '0'}):\n\t\tyield line.decode('UTF-8').strip()\n\n\ndef list_locales() -> List[str]:\n\twith open('/etc/locale.gen', 'r') as fp:\n\t\tlocales = []\n\t\t# before the list of locales begins there's an empty line with a '#' in front\n\t\t# so we'll collect the localels from bottom up and halt when we're donw\n\t\tentries = fp.readlines()\n\t\tentries.reverse()\n\n\t\tfor entry in entries:\n\t\t\ttext = entry.replace('#', '').strip()\n\t\t\tif text == '':\n\t\t\t\tbreak\n\t\t\tlocales.append(text)\n\n\t\tlocales.reverse()\n\t\treturn locales\n\n\ndef list_x11_keyboard_languages() -> Iterator[str]:\n\tfor line in SysCommand(\"localectl --no-pager list-x11-keymap-layouts\", environment_vars={'SYSTEMD_COLORS': '0'}):\n\t\tyield line.decode('UTF-8').strip()\n\n\ndef verify_keyboard_layout(layout :str) -> bool:\n\tfor language in list_keyboard_languages():\n\t\tif layout.lower() == language.lower():\n\t\t\treturn True\n\treturn False\n\n\ndef verify_x11_keyboard_layout(layout :str) -> bool:\n\tfor language in list_x11_keyboard_languages():\n\t\tif layout.lower() == language.lower():\n\t\t\treturn True\n\treturn False\n\n\ndef set_kb_layout(locale :str) -> bool:\n\tif len(locale.strip()):\n\t\tif not verify_keyboard_layout(locale):\n\t\t\terror(f\"Invalid keyboard locale specified: {locale}\")\n\t\t\treturn False\n\n\t\ttry:\n\t\t\tSysCommand(f'localectl set-keymap {locale}')\n\t\texcept SysCallError as err:\n\t\t\traise ServiceException(f\"Unable to set locale '{locale}' for console: {err}\")\n\n\t\treturn True\n\n\treturn False\n\n\ndef list_timezones() -> Iterator[str]:\n\tfor line in SysCommand(\"timedatectl --no-pager list-timezones\", environment_vars={'SYSTEMD_COLORS': '0'}):\n\t\tyield line.decode('UTF-8').strip()\n", "path": "archinstall/lib/locale/locale.py"}], "after_files": [{"content": "from typing import Iterator, List\n\nfrom ..exceptions import ServiceException, SysCallError\nfrom ..general import SysCommand\nfrom ..output import error\n\n\ndef list_keyboard_languages() -> Iterator[str]:\n\tfor line in SysCommand(\"localectl --no-pager list-keymaps\", environment_vars={'SYSTEMD_COLORS': '0'}):\n\t\tyield 
line.decode('UTF-8').strip()\n\n\ndef list_locales() -> List[str]:\n\tlocales = []\n\n\twith open('/usr/share/i18n/SUPPORTED') as file:\n\t\tfor line in file:\n\t\t\tif line != 'C.UTF-8 UTF-8\\n':\n\t\t\t\tlocales.append(line.rstrip())\n\n\treturn locales\n\n\ndef list_x11_keyboard_languages() -> Iterator[str]:\n\tfor line in SysCommand(\"localectl --no-pager list-x11-keymap-layouts\", environment_vars={'SYSTEMD_COLORS': '0'}):\n\t\tyield line.decode('UTF-8').strip()\n\n\ndef verify_keyboard_layout(layout :str) -> bool:\n\tfor language in list_keyboard_languages():\n\t\tif layout.lower() == language.lower():\n\t\t\treturn True\n\treturn False\n\n\ndef verify_x11_keyboard_layout(layout :str) -> bool:\n\tfor language in list_x11_keyboard_languages():\n\t\tif layout.lower() == language.lower():\n\t\t\treturn True\n\treturn False\n\n\ndef set_kb_layout(locale :str) -> bool:\n\tif len(locale.strip()):\n\t\tif not verify_keyboard_layout(locale):\n\t\t\terror(f\"Invalid keyboard locale specified: {locale}\")\n\t\t\treturn False\n\n\t\ttry:\n\t\t\tSysCommand(f'localectl set-keymap {locale}')\n\t\texcept SysCallError as err:\n\t\t\traise ServiceException(f\"Unable to set locale '{locale}' for console: {err}\")\n\n\t\treturn True\n\n\treturn False\n\n\ndef list_timezones() -> Iterator[str]:\n\tfor line in SysCommand(\"timedatectl --no-pager list-timezones\", environment_vars={'SYSTEMD_COLORS': '0'}):\n\t\tyield line.decode('UTF-8').strip()\n", "path": "archinstall/lib/locale/locale.py"}]}
| 916 | 255 |
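A standalone sketch of the patched `list_locales()` so the change is easy to try outside the installer; the `supported_file` parameter is added here for illustration only — the patch itself hard-codes the path.

```python
from typing import List

def list_locales(supported_file: str = "/usr/share/i18n/SUPPORTED") -> List[str]:
    # Read glibc's full list of supported locales instead of /etc/locale.gen,
    # skipping the built-in C.UTF-8 entry, exactly as the patch does.
    locales = []
    with open(supported_file) as file:
        for line in file:
            if line != "C.UTF-8 UTF-8\n":
                locales.append(line.rstrip())
    return locales

# Entries come back in the form "en_US.UTF-8 UTF-8", "de_DE.UTF-8 UTF-8", ...
print(len(list_locales()))
```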
gh_patches_debug_37742
|
rasdani/github-patches
|
git_diff
|
NVIDIA__NVFlare-196
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
if/elif statement without else clause in `FullModelShareableGenerator`
It would be helpful to add an else statement with a warning message that this DataKind is not supported. I ran into this issue when sending a DataKind.COLLECTION with the shareable by mistake.
See https://github.com/NVIDIA/NVFlare/blob/b3ff7844a9bef746218527ccd07601feb66fd94c/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py#L61
In the same class, when sending a DXO instead of Shareable type, I got this error
```
Traceback (most recent call last):
File "/home/hroth/Code/nvflare/hroth-agglib/nvflare/app_common/workflows/scatter_and_gather.py", line 202, in control_flow
self._global_weights = self.shareable_gen.shareable_to_learnable(aggr_result, fl_ctx)
File "/home/hroth/Code/nvflare/hroth-agglib/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py", line 54, in shareable_to_learnable
dxo = from_shareable(shareable)
File "/home/hroth/Code/nvflare/hroth-agglib/nvflare/apis/dxo.py", line 120, in from_shareable
content_type = s.get_header(ReservedHeaderKey.CONTENT_TYPE)
AttributeError: 'DXO' object has no attribute 'get_header'
```
There should be an instance check here https://github.com/NVIDIA/NVFlare/blob/b3ff7844a9bef746218527ccd07601feb66fd94c/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py#L54
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nvflare/app_common/shareablegenerators/full_model_shareable_generator.py`
Content:
```
1 # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from nvflare.apis.dxo import DataKind, from_shareable
16 from nvflare.apis.fl_context import FLContext
17 from nvflare.apis.shareable import Shareable
18 from nvflare.app_common.abstract.model import ModelLearnable, ModelLearnableKey, model_learnable_to_dxo
19 from nvflare.app_common.abstract.shareable_generator import ShareableGenerator
20 from nvflare.app_common.app_constant import AppConstants
21
22
23 class FullModelShareableGenerator(ShareableGenerator):
24 def learnable_to_shareable(self, ml: ModelLearnable, fl_ctx: FLContext) -> Shareable:
25 """Convert Learnable to Shareable.
26
27 Args:
28 model (Learnable): model to be converted
29 fl_ctx (FLContext): FL context
30
31 Returns:
32 Shareable: a shareable containing a DXO object,
33 """
34 dxo = model_learnable_to_dxo(ml)
35 return dxo.to_shareable()
36
37 def shareable_to_learnable(self, shareable: Shareable, fl_ctx: FLContext) -> ModelLearnable:
38 """Convert Shareable to Learnable.
39
40 Supporting TYPE == TYPE_WEIGHT_DIFF or TYPE_WEIGHTS
41
42 Args:
43 shareable (Shareable): Shareable that contains a DXO object
44 fl_ctx (FLContext): FL context
45
46 Returns: a ModelLearnable object
47 """
48 base_model = fl_ctx.get_prop(AppConstants.GLOBAL_MODEL)
49 if not base_model:
50 self.system_panic(reason="No global base model!", fl_ctx=fl_ctx)
51 return base_model
52
53 weights = base_model[ModelLearnableKey.WEIGHTS]
54 dxo = from_shareable(shareable)
55
56 if dxo.data_kind == DataKind.WEIGHT_DIFF:
57 if dxo.data is not None:
58 model_diff = dxo.data
59 for v_name, v_value in model_diff.items():
60 weights[v_name] = weights[v_name] + v_value
61 elif dxo.data_kind == DataKind.WEIGHTS:
62 weights = dxo.data
63 if not weights:
64 self.log_info(fl_ctx, "No model weights found. Model will not be updated.")
65 else:
66 base_model[ModelLearnableKey.WEIGHTS] = weights
67
68 base_model[ModelLearnableKey.META] = dxo.get_meta_props()
69 return base_model
70
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py b/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py
--- a/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py
+++ b/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py
@@ -21,21 +21,21 @@
class FullModelShareableGenerator(ShareableGenerator):
- def learnable_to_shareable(self, ml: ModelLearnable, fl_ctx: FLContext) -> Shareable:
- """Convert Learnable to Shareable.
+ def learnable_to_shareable(self, model_learnable: ModelLearnable, fl_ctx: FLContext) -> Shareable:
+ """Convert ModelLearnable to Shareable.
Args:
- model (Learnable): model to be converted
+ model_learnable (ModelLearnable): model to be converted
fl_ctx (FLContext): FL context
Returns:
- Shareable: a shareable containing a DXO object,
+ Shareable: a shareable containing a DXO object.
"""
- dxo = model_learnable_to_dxo(ml)
+ dxo = model_learnable_to_dxo(model_learnable)
return dxo.to_shareable()
def shareable_to_learnable(self, shareable: Shareable, fl_ctx: FLContext) -> ModelLearnable:
- """Convert Shareable to Learnable.
+ """Convert Shareable to ModelLearnable.
Supporting TYPE == TYPE_WEIGHT_DIFF or TYPE_WEIGHTS
@@ -43,8 +43,16 @@
shareable (Shareable): Shareable that contains a DXO object
fl_ctx (FLContext): FL context
- Returns: a ModelLearnable object
+ Returns:
+ A ModelLearnable object
+
+ Raises:
+ TypeError: if shareable is not of type shareable
+ ValueError: if data_kind is not `DataKind.WEIGHTS` and is not `DataKind.WEIGHT_DIFF`
"""
+ if not isinstance(shareable, Shareable):
+ raise TypeError("shareable must be Shareable, but got {}.".format(type(shareable)))
+
base_model = fl_ctx.get_prop(AppConstants.GLOBAL_MODEL)
if not base_model:
self.system_panic(reason="No global base model!", fl_ctx=fl_ctx)
@@ -64,6 +72,10 @@
self.log_info(fl_ctx, "No model weights found. Model will not be updated.")
else:
base_model[ModelLearnableKey.WEIGHTS] = weights
+ else:
+ raise ValueError(
+ "data_kind should be either DataKind.WEIGHTS or DataKind.WEIGHT_DIFF, but got {}".format(dxo.data_kind)
+ )
base_model[ModelLearnableKey.META] = dxo.get_meta_props()
return base_model
|
{"golden_diff": "diff --git a/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py b/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py\n--- a/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py\n+++ b/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py\n@@ -21,21 +21,21 @@\n \n \n class FullModelShareableGenerator(ShareableGenerator):\n- def learnable_to_shareable(self, ml: ModelLearnable, fl_ctx: FLContext) -> Shareable:\n- \"\"\"Convert Learnable to Shareable.\n+ def learnable_to_shareable(self, model_learnable: ModelLearnable, fl_ctx: FLContext) -> Shareable:\n+ \"\"\"Convert ModelLearnable to Shareable.\n \n Args:\n- model (Learnable): model to be converted\n+ model_learnable (ModelLearnable): model to be converted\n fl_ctx (FLContext): FL context\n \n Returns:\n- Shareable: a shareable containing a DXO object,\n+ Shareable: a shareable containing a DXO object.\n \"\"\"\n- dxo = model_learnable_to_dxo(ml)\n+ dxo = model_learnable_to_dxo(model_learnable)\n return dxo.to_shareable()\n \n def shareable_to_learnable(self, shareable: Shareable, fl_ctx: FLContext) -> ModelLearnable:\n- \"\"\"Convert Shareable to Learnable.\n+ \"\"\"Convert Shareable to ModelLearnable.\n \n Supporting TYPE == TYPE_WEIGHT_DIFF or TYPE_WEIGHTS\n \n@@ -43,8 +43,16 @@\n shareable (Shareable): Shareable that contains a DXO object\n fl_ctx (FLContext): FL context\n \n- Returns: a ModelLearnable object\n+ Returns:\n+ A ModelLearnable object\n+\n+ Raises:\n+ TypeError: if shareable is not of type shareable\n+ ValueError: if data_kind is not `DataKind.WEIGHTS` and is not `DataKind.WEIGHT_DIFF`\n \"\"\"\n+ if not isinstance(shareable, Shareable):\n+ raise TypeError(\"shareable must be Shareable, but got {}.\".format(type(shareable)))\n+\n base_model = fl_ctx.get_prop(AppConstants.GLOBAL_MODEL)\n if not base_model:\n self.system_panic(reason=\"No global base model!\", fl_ctx=fl_ctx)\n@@ -64,6 +72,10 @@\n self.log_info(fl_ctx, \"No model weights found. Model will not be updated.\")\n else:\n base_model[ModelLearnableKey.WEIGHTS] = weights\n+ else:\n+ raise ValueError(\n+ \"data_kind should be either DataKind.WEIGHTS or DataKind.WEIGHT_DIFF, but got {}\".format(dxo.data_kind)\n+ )\n \n base_model[ModelLearnableKey.META] = dxo.get_meta_props()\n return base_model\n", "issue": "if/elif statement without else clause in `FullModelShareableGenerator`\nIt would be helpful to add an else statement with a warning message that this DataKind is not supported. 
I ran into this issue when sending a DataKind.COLLECTION with the shareable by mistake.\r\n\r\nSee https://github.com/NVIDIA/NVFlare/blob/b3ff7844a9bef746218527ccd07601feb66fd94c/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py#L61\r\n\r\nIn the same class, when sending a DXO instead of Shareable type, I got this error\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/hroth/Code/nvflare/hroth-agglib/nvflare/app_common/workflows/scatter_and_gather.py\", line 202, in control_flow\r\n self._global_weights = self.shareable_gen.shareable_to_learnable(aggr_result, fl_ctx)\r\n File \"/home/hroth/Code/nvflare/hroth-agglib/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py\", line 54, in shareable_to_learnable\r\n dxo = from_shareable(shareable)\r\n File \"/home/hroth/Code/nvflare/hroth-agglib/nvflare/apis/dxo.py\", line 120, in from_shareable\r\n content_type = s.get_header(ReservedHeaderKey.CONTENT_TYPE)\r\nAttributeError: 'DXO' object has no attribute 'get_header'\r\n```\r\nThere should be an instance check here https://github.com/NVIDIA/NVFlare/blob/b3ff7844a9bef746218527ccd07601feb66fd94c/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py#L54\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom nvflare.apis.dxo import DataKind, from_shareable\nfrom nvflare.apis.fl_context import FLContext\nfrom nvflare.apis.shareable import Shareable\nfrom nvflare.app_common.abstract.model import ModelLearnable, ModelLearnableKey, model_learnable_to_dxo\nfrom nvflare.app_common.abstract.shareable_generator import ShareableGenerator\nfrom nvflare.app_common.app_constant import AppConstants\n\n\nclass FullModelShareableGenerator(ShareableGenerator):\n def learnable_to_shareable(self, ml: ModelLearnable, fl_ctx: FLContext) -> Shareable:\n \"\"\"Convert Learnable to Shareable.\n\n Args:\n model (Learnable): model to be converted\n fl_ctx (FLContext): FL context\n\n Returns:\n Shareable: a shareable containing a DXO object,\n \"\"\"\n dxo = model_learnable_to_dxo(ml)\n return dxo.to_shareable()\n\n def shareable_to_learnable(self, shareable: Shareable, fl_ctx: FLContext) -> ModelLearnable:\n \"\"\"Convert Shareable to Learnable.\n\n Supporting TYPE == TYPE_WEIGHT_DIFF or TYPE_WEIGHTS\n\n Args:\n shareable (Shareable): Shareable that contains a DXO object\n fl_ctx (FLContext): FL context\n\n Returns: a ModelLearnable object\n \"\"\"\n base_model = fl_ctx.get_prop(AppConstants.GLOBAL_MODEL)\n if not base_model:\n self.system_panic(reason=\"No global base model!\", fl_ctx=fl_ctx)\n return base_model\n\n weights = base_model[ModelLearnableKey.WEIGHTS]\n dxo = from_shareable(shareable)\n\n if dxo.data_kind == DataKind.WEIGHT_DIFF:\n if dxo.data is not None:\n model_diff = dxo.data\n for v_name, v_value in model_diff.items():\n weights[v_name] = weights[v_name] + v_value\n elif dxo.data_kind == DataKind.WEIGHTS:\n weights = 
dxo.data\n if not weights:\n self.log_info(fl_ctx, \"No model weights found. Model will not be updated.\")\n else:\n base_model[ModelLearnableKey.WEIGHTS] = weights\n\n base_model[ModelLearnableKey.META] = dxo.get_meta_props()\n return base_model\n", "path": "nvflare/app_common/shareablegenerators/full_model_shareable_generator.py"}], "after_files": [{"content": "# Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom nvflare.apis.dxo import DataKind, from_shareable\nfrom nvflare.apis.fl_context import FLContext\nfrom nvflare.apis.shareable import Shareable\nfrom nvflare.app_common.abstract.model import ModelLearnable, ModelLearnableKey, model_learnable_to_dxo\nfrom nvflare.app_common.abstract.shareable_generator import ShareableGenerator\nfrom nvflare.app_common.app_constant import AppConstants\n\n\nclass FullModelShareableGenerator(ShareableGenerator):\n def learnable_to_shareable(self, model_learnable: ModelLearnable, fl_ctx: FLContext) -> Shareable:\n \"\"\"Convert ModelLearnable to Shareable.\n\n Args:\n model_learnable (ModelLearnable): model to be converted\n fl_ctx (FLContext): FL context\n\n Returns:\n Shareable: a shareable containing a DXO object.\n \"\"\"\n dxo = model_learnable_to_dxo(model_learnable)\n return dxo.to_shareable()\n\n def shareable_to_learnable(self, shareable: Shareable, fl_ctx: FLContext) -> ModelLearnable:\n \"\"\"Convert Shareable to ModelLearnable.\n\n Supporting TYPE == TYPE_WEIGHT_DIFF or TYPE_WEIGHTS\n\n Args:\n shareable (Shareable): Shareable that contains a DXO object\n fl_ctx (FLContext): FL context\n\n Returns:\n A ModelLearnable object\n\n Raises:\n TypeError: if shareable is not of type shareable\n ValueError: if data_kind is not `DataKind.WEIGHTS` and is not `DataKind.WEIGHT_DIFF`\n \"\"\"\n if not isinstance(shareable, Shareable):\n raise TypeError(\"shareable must be Shareable, but got {}.\".format(type(shareable)))\n\n base_model = fl_ctx.get_prop(AppConstants.GLOBAL_MODEL)\n if not base_model:\n self.system_panic(reason=\"No global base model!\", fl_ctx=fl_ctx)\n return base_model\n\n weights = base_model[ModelLearnableKey.WEIGHTS]\n dxo = from_shareable(shareable)\n\n if dxo.data_kind == DataKind.WEIGHT_DIFF:\n if dxo.data is not None:\n model_diff = dxo.data\n for v_name, v_value in model_diff.items():\n weights[v_name] = weights[v_name] + v_value\n elif dxo.data_kind == DataKind.WEIGHTS:\n weights = dxo.data\n if not weights:\n self.log_info(fl_ctx, \"No model weights found. Model will not be updated.\")\n else:\n base_model[ModelLearnableKey.WEIGHTS] = weights\n else:\n raise ValueError(\n \"data_kind should be either DataKind.WEIGHTS or DataKind.WEIGHT_DIFF, but got {}\".format(dxo.data_kind)\n )\n\n base_model[ModelLearnableKey.META] = dxo.get_meta_props()\n return base_model\n", "path": "nvflare/app_common/shareablegenerators/full_model_shareable_generator.py"}]}
| 1,459 | 645 |
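A small sketch of the fail-fast behaviour the patch introduces, assuming the usual NVFlare `DXO(data_kind=..., data=...)` constructor; the objects built below are illustrative, not taken from the NVFlare tests.

```python
from nvflare.apis.dxo import DXO, DataKind
from nvflare.apis.shareable import Shareable

def check_shareable(obj):
    # Same guard the patch adds at the top of shareable_to_learnable().
    if not isinstance(obj, Shareable):
        raise TypeError("shareable must be Shareable, but got {}.".format(type(obj)))

dxo = DXO(data_kind=DataKind.WEIGHTS, data={"w": 0.0})

try:
    check_shareable(dxo)               # a bare DXO is now rejected up front...
except TypeError as err:
    print(err)                         # ...instead of failing inside from_shareable()

check_shareable(dxo.to_shareable())    # converting to a Shareable first passes the guard
```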
gh_patches_debug_4572
|
rasdani/github-patches
|
git_diff
|
cltk__cltk-533
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
External punctuation stopped working on Latin sent tokenizer
Recently reviewing the tokenizer, and it is not capturing exclamation points. I'll look to see the NLTK has changed anything.
``` python
In [12]: text = """quam penitus maestas exedit cura medullas! ut tibi tunc toto
...: pectore sollicitae sensibus ereptis mens excidit! at ego certe cognoram
...: a parva virgine magnanimam. Mam. Aemilius ad castra venit."""
In [13]: tokenizer.tokenize_sentences(text)
Out[13]:
['quam penitus maestas exedit cura medullas! ut tibi tunc toto pectore sollicitae sensibus ereptis mens excidit! at ego certe cognoram a parva virgine magnanimam.',
'Mam. Aemilius ad castra venit.']
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cltk/tokenize/sentence.py`
Content:
```
1 """Tokenize sentences."""
2
3 __author__ = 'Kyle P. Johnson <[email protected]>'
4 __license__ = 'MIT License. See LICENSE.'
5
6
7 from cltk.utils.file_operations import open_pickle
8 from nltk.tokenize.punkt import PunktLanguageVars
9 from nltk.tokenize.punkt import PunktSentenceTokenizer
10 import os
11
12
13 PUNCTUATION = {'greek':
14 {'external': ('.', ';'),
15 'internal': (',', '·'),
16 'file': 'greek.pickle', },
17 'latin':
18 {'external': ('.', '?', ':'),
19 'internal': (',', ';'),
20 'file': 'latin.pickle', }}
21
22
23 class TokenizeSentence(): # pylint: disable=R0903
24 """Tokenize sentences for the language given as argument, e.g.,
25 ``TokenizeSentence('greek')``.
26 """
27
28 def __init__(self: object, language: str):
29 """Lower incoming language name and assemble variables.
30 :type language: str
31 :param language : Language for sentence tokenization.
32 """
33 self.language = language.lower()
34 self.internal_punctuation, self.external_punctuation, self.tokenizer_path = \
35 self._setup_language_variables(self.language)
36
37 def _setup_language_variables(self, lang: str):
38 """Check for language availability and presence of tokenizer file,
39 then read punctuation characters for language and build tokenizer file
40 path.
41 :param lang: The language argument given to the class.
42 :type lang: str
43 :rtype (str, str, str)
44 """
45 assert lang in PUNCTUATION.keys(), \
46 'Sentence tokenizer not available for {0} language.'.format(lang)
47 internal_punctuation = PUNCTUATION[lang]['internal']
48 external_punctuation = PUNCTUATION[lang]['external']
49 file = PUNCTUATION[lang]['file']
50 rel_path = os.path.join('~/cltk_data',
51 lang,
52 'model/' + lang + '_models_cltk/tokenizers/sentence') # pylint: disable=C0301
53 path = os.path.expanduser(rel_path)
54 tokenizer_path = os.path.join(path, file)
55 assert os.path.isfile(tokenizer_path), \
56 'CLTK linguistics data not found for language {0}'.format(lang)
57 return internal_punctuation, external_punctuation, tokenizer_path
58
59 def _setup_tokenizer(self, tokenizer: object):
60 """Add tokenizer and punctuation variables.
61 :type tokenizer: object
62 :param tokenizer : Unpickled tokenizer object.
63 :rtype : object
64 """
65 language_punkt_vars = PunktLanguageVars
66 language_punkt_vars.sent_end_chars = self.external_punctuation
67 language_punkt_vars.internal_punctuation = self.internal_punctuation
68 tokenizer.INCLUDE_ALL_COLLOCS = True
69 tokenizer.INCLUDE_ABBREV_COLLOCS = True
70 params = tokenizer.get_params()
71 return PunktSentenceTokenizer(params)
72
73 def tokenize_sentences(self: object, untokenized_string: str):
74 """Tokenize sentences by reading trained tokenizer and invoking
75 ``PunktSentenceTokenizer()``.
76 :type untokenized_string: str
77 :param untokenized_string: A string containing one of more sentences.
78 :rtype : list of strings
79 """
80 # load tokenizer
81 assert isinstance(untokenized_string, str), \
82 'Incoming argument must be a string.'
83 tokenizer = open_pickle(self.tokenizer_path)
84 tokenizer = self._setup_tokenizer(tokenizer)
85
86 # mk list of tokenized sentences
87 tokenized_sentences = []
88 for sentence in tokenizer.sentences_from_text(untokenized_string, realign_boundaries=True): # pylint: disable=C0301
89 tokenized_sentences.append(sentence)
90 return tokenized_sentences
91
92 def tokenize(self: object, untokenized_string: str):
93 # NLTK's PlaintextCorpusReader needs a function called tokenize
94 # in functions used as a parameter for sentence tokenization.
95 # So this is an alias for tokenize_sentences().
96 return self.tokenize_sentences(untokenized_string)
97
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/cltk/tokenize/sentence.py b/cltk/tokenize/sentence.py
--- a/cltk/tokenize/sentence.py
+++ b/cltk/tokenize/sentence.py
@@ -15,7 +15,7 @@
'internal': (',', '·'),
'file': 'greek.pickle', },
'latin':
- {'external': ('.', '?', ':'),
+ {'external': ('.', '?', '!', ':'),
'internal': (',', ';'),
'file': 'latin.pickle', }}
|
{"golden_diff": "diff --git a/cltk/tokenize/sentence.py b/cltk/tokenize/sentence.py\n--- a/cltk/tokenize/sentence.py\n+++ b/cltk/tokenize/sentence.py\n@@ -15,7 +15,7 @@\n 'internal': (',', '\u00b7'),\n 'file': 'greek.pickle', },\n 'latin':\n- {'external': ('.', '?', ':'),\n+ {'external': ('.', '?', '!', ':'),\n 'internal': (',', ';'),\n 'file': 'latin.pickle', }}\n", "issue": "External punctuation stopped working on Latin sent tokenizer\nRecently reviewing the tokenizer, and it is not capturing exclamation points. I'll look to see the NLTK has changed anything.\r\n``` python\r\nIn [12]: text = \"\"\"quam penitus maestas exedit cura medullas! ut tibi tunc toto \r\n ...: pectore sollicitae sensibus ereptis mens excidit! at ego certe cognoram\r\n ...: a parva virgine magnanimam. Mam. Aemilius ad castra venit.\"\"\"\r\n\r\nIn [13]: tokenizer.tokenize_sentences(text)\r\nOut[13]: \r\n['quam penitus maestas exedit cura medullas! ut tibi tunc toto pectore sollicitae sensibus ereptis mens excidit! at ego certe cognoram a parva virgine magnanimam.',\r\n 'Mam. Aemilius ad castra venit.']\r\n```\n", "before_files": [{"content": "\"\"\"Tokenize sentences.\"\"\"\n\n__author__ = 'Kyle P. Johnson <[email protected]>'\n__license__ = 'MIT License. See LICENSE.'\n\n\nfrom cltk.utils.file_operations import open_pickle\nfrom nltk.tokenize.punkt import PunktLanguageVars\nfrom nltk.tokenize.punkt import PunktSentenceTokenizer\nimport os\n\n\nPUNCTUATION = {'greek':\n {'external': ('.', ';'),\n 'internal': (',', '\u00b7'),\n 'file': 'greek.pickle', },\n 'latin':\n {'external': ('.', '?', ':'),\n 'internal': (',', ';'),\n 'file': 'latin.pickle', }}\n\n\nclass TokenizeSentence(): # pylint: disable=R0903\n \"\"\"Tokenize sentences for the language given as argument, e.g.,\n ``TokenizeSentence('greek')``.\n \"\"\"\n\n def __init__(self: object, language: str):\n \"\"\"Lower incoming language name and assemble variables.\n :type language: str\n :param language : Language for sentence tokenization.\n \"\"\"\n self.language = language.lower()\n self.internal_punctuation, self.external_punctuation, self.tokenizer_path = \\\n self._setup_language_variables(self.language)\n\n def _setup_language_variables(self, lang: str):\n \"\"\"Check for language availability and presence of tokenizer file,\n then read punctuation characters for language and build tokenizer file\n path.\n :param lang: The language argument given to the class.\n :type lang: str\n :rtype (str, str, str)\n \"\"\"\n assert lang in PUNCTUATION.keys(), \\\n 'Sentence tokenizer not available for {0} language.'.format(lang)\n internal_punctuation = PUNCTUATION[lang]['internal']\n external_punctuation = PUNCTUATION[lang]['external']\n file = PUNCTUATION[lang]['file']\n rel_path = os.path.join('~/cltk_data',\n lang,\n 'model/' + lang + '_models_cltk/tokenizers/sentence') # pylint: disable=C0301\n path = os.path.expanduser(rel_path)\n tokenizer_path = os.path.join(path, file)\n assert os.path.isfile(tokenizer_path), \\\n 'CLTK linguistics data not found for language {0}'.format(lang)\n return internal_punctuation, external_punctuation, tokenizer_path\n\n def _setup_tokenizer(self, tokenizer: object):\n \"\"\"Add tokenizer and punctuation variables.\n :type tokenizer: object\n :param tokenizer : Unpickled tokenizer object.\n :rtype : object\n \"\"\"\n language_punkt_vars = PunktLanguageVars\n language_punkt_vars.sent_end_chars = self.external_punctuation\n language_punkt_vars.internal_punctuation = self.internal_punctuation\n tokenizer.INCLUDE_ALL_COLLOCS = True\n 
tokenizer.INCLUDE_ABBREV_COLLOCS = True\n params = tokenizer.get_params()\n return PunktSentenceTokenizer(params)\n\n def tokenize_sentences(self: object, untokenized_string: str):\n \"\"\"Tokenize sentences by reading trained tokenizer and invoking\n ``PunktSentenceTokenizer()``.\n :type untokenized_string: str\n :param untokenized_string: A string containing one of more sentences.\n :rtype : list of strings\n \"\"\"\n # load tokenizer\n assert isinstance(untokenized_string, str), \\\n 'Incoming argument must be a string.'\n tokenizer = open_pickle(self.tokenizer_path)\n tokenizer = self._setup_tokenizer(tokenizer)\n\n # mk list of tokenized sentences\n tokenized_sentences = []\n for sentence in tokenizer.sentences_from_text(untokenized_string, realign_boundaries=True): # pylint: disable=C0301\n tokenized_sentences.append(sentence)\n return tokenized_sentences\n \n def tokenize(self: object, untokenized_string: str):\n # NLTK's PlaintextCorpusReader needs a function called tokenize\n # in functions used as a parameter for sentence tokenization.\n # So this is an alias for tokenize_sentences().\n return self.tokenize_sentences(untokenized_string)\n", "path": "cltk/tokenize/sentence.py"}], "after_files": [{"content": "\"\"\"Tokenize sentences.\"\"\"\n\n__author__ = 'Kyle P. Johnson <[email protected]>'\n__license__ = 'MIT License. See LICENSE.'\n\n\nfrom cltk.utils.file_operations import open_pickle\nfrom nltk.tokenize.punkt import PunktLanguageVars\nfrom nltk.tokenize.punkt import PunktSentenceTokenizer\nimport os\n\n\nPUNCTUATION = {'greek':\n {'external': ('.', ';'),\n 'internal': (',', '\u00b7'),\n 'file': 'greek.pickle', },\n 'latin':\n {'external': ('.', '?', '!', ':'),\n 'internal': (',', ';'),\n 'file': 'latin.pickle', }}\n\n\nclass TokenizeSentence(): # pylint: disable=R0903\n \"\"\"Tokenize sentences for the language given as argument, e.g.,\n ``TokenizeSentence('greek')``.\n \"\"\"\n\n def __init__(self: object, language: str):\n \"\"\"Lower incoming language name and assemble variables.\n :type language: str\n :param language : Language for sentence tokenization.\n \"\"\"\n self.language = language.lower()\n self.internal_punctuation, self.external_punctuation, self.tokenizer_path = \\\n self._setup_language_variables(self.language)\n\n def _setup_language_variables(self, lang: str):\n \"\"\"Check for language availability and presence of tokenizer file,\n then read punctuation characters for language and build tokenizer file\n path.\n :param lang: The language argument given to the class.\n :type lang: str\n :rtype (str, str, str)\n \"\"\"\n assert lang in PUNCTUATION.keys(), \\\n 'Sentence tokenizer not available for {0} language.'.format(lang)\n internal_punctuation = PUNCTUATION[lang]['internal']\n external_punctuation = PUNCTUATION[lang]['external']\n file = PUNCTUATION[lang]['file']\n rel_path = os.path.join('~/cltk_data',\n lang,\n 'model/' + lang + '_models_cltk/tokenizers/sentence') # pylint: disable=C0301\n path = os.path.expanduser(rel_path)\n tokenizer_path = os.path.join(path, file)\n assert os.path.isfile(tokenizer_path), \\\n 'CLTK linguistics data not found for language {0}'.format(lang)\n return internal_punctuation, external_punctuation, tokenizer_path\n\n def _setup_tokenizer(self, tokenizer: object):\n \"\"\"Add tokenizer and punctuation variables.\n :type tokenizer: object\n :param tokenizer : Unpickled tokenizer object.\n :rtype : object\n \"\"\"\n language_punkt_vars = PunktLanguageVars\n language_punkt_vars.sent_end_chars = self.external_punctuation\n 
language_punkt_vars.internal_punctuation = self.internal_punctuation\n tokenizer.INCLUDE_ALL_COLLOCS = True\n tokenizer.INCLUDE_ABBREV_COLLOCS = True\n params = tokenizer.get_params()\n return PunktSentenceTokenizer(params)\n\n def tokenize_sentences(self: object, untokenized_string: str):\n \"\"\"Tokenize sentences by reading trained tokenizer and invoking\n ``PunktSentenceTokenizer()``.\n :type untokenized_string: str\n :param untokenized_string: A string containing one of more sentences.\n :rtype : list of strings\n \"\"\"\n # load tokenizer\n assert isinstance(untokenized_string, str), \\\n 'Incoming argument must be a string.'\n tokenizer = open_pickle(self.tokenizer_path)\n tokenizer = self._setup_tokenizer(tokenizer)\n\n # mk list of tokenized sentences\n tokenized_sentences = []\n for sentence in tokenizer.sentences_from_text(untokenized_string, realign_boundaries=True): # pylint: disable=C0301\n tokenized_sentences.append(sentence)\n return tokenized_sentences\n \n def tokenize(self: object, untokenized_string: str):\n # NLTK's PlaintextCorpusReader needs a function called tokenize\n # in functions used as a parameter for sentence tokenization.\n # So this is an alias for tokenize_sentences().\n return self.tokenize_sentences(untokenized_string)\n", "path": "cltk/tokenize/sentence.py"}]}
| 1,538 | 118 |
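A quick way to exercise the fix, reusing the sentences from the issue; this assumes the CLTK Latin sentence-tokenizer model has already been downloaded to `~/cltk_data`, as the class requires.

```python
from cltk.tokenize.sentence import TokenizeSentence

tokenizer = TokenizeSentence('latin')
text = ("quam penitus maestas exedit cura medullas! "
        "ut tibi tunc toto pectore sollicitae sensibus ereptis mens excidit! "
        "at ego certe cognoram a parva virgine magnanimam. "
        "Mam. Aemilius ad castra venit.")

# With '!' added to the Latin external punctuation, the two exclamations are
# returned as separate sentences instead of being merged into the next one.
for sentence in tokenizer.tokenize_sentences(text):
    print(sentence)
```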
gh_patches_debug_7773
|
rasdani/github-patches
|
git_diff
|
alltheplaces__alltheplaces-3339
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Spider jbhifi is broken
During the global build at 2021-06-16-14-42-20, spider **jbhifi** failed with **78 features** and **1 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/logs/jbhifi.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/output/jbhifi.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/output/jbhifi.geojson))
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/jbhifi.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import scrapy
3 import json
4 from locations.items import GeojsonPointItem
5 from locations.hours import OpeningHours
6
7 DAYS = ['Su', 'Mo', 'Tu', "We", 'Th', 'Fr', 'Sa']
8
9 class JbHifiSpider(scrapy.Spider):
10 name = "jbhifi"
11 allowed_domains = ["algolia.net"]
12
13 def start_requests(self):
14 headers = {"Content-Type": "application/json",
15 "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:81.0) Gecko/20100101 Firefox/81.0",
16 "Origin": "https://www.jbhifi.com.au",
17 "Referer": "https://www.jbhifi.com.au/pages/store-finder",
18 "Accept": "*/*",
19 'Accept-Encoding': 'gzip, deflate'
20
21 }
22 yield scrapy.http.Request(
23 url="https://vtvkm5urpx-dsn.algolia.net/1/indexes/shopify_store_locations/query?x-algolia-agent=Algolia for JavaScript (3.35.1); Browser (lite)&x-algolia-application-id=VTVKM5URPX&x-algolia-api-key=a0c0108d737ad5ab54a0e2da900bf040",
24 method="POST",
25 headers=headers,
26 body='{"params":"query=&hitsPerPage=1000&filters=displayOnWeb%3Ap"}')
27
28 def process_trading_hours(self, store_hours):
29 opening_hours = OpeningHours()
30 for day in store_hours:
31 opening_hours.add_range(DAYS[day['DayOfWeek']], day['OpeningTime'], day['ClosingTime'])
32
33 return opening_hours.as_opening_hours()
34
35 def parse(self, response):
36 stores = json.loads(response.body)
37
38 for store in stores['hits']:
39 properties = {
40 'ref': store['shopId'],
41 'name': store['storeName'],
42 'addr_full': f"{store['storeAddress']['Line1']} {store['storeAddress'].get('Line2','')} {store['storeAddress'].get('Line3','')}".strip(),
43 'city': store['storeAddress']['Suburb'],
44 'state': store['storeAddress']['State'],
45 'postcode': store['storeAddress']['Postcode'],
46 'country': 'AU',
47 'lat': store['_geoloc']['lat'],
48 'lon': store['_geoloc']['lng'],
49 'phone': store['phone'],
50 'opening_hours': self.process_trading_hours(store['normalTradingHours'])
51 }
52
53 yield GeojsonPointItem(**properties)
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/locations/spiders/jbhifi.py b/locations/spiders/jbhifi.py
--- a/locations/spiders/jbhifi.py
+++ b/locations/spiders/jbhifi.py
@@ -28,7 +28,8 @@
def process_trading_hours(self, store_hours):
opening_hours = OpeningHours()
for day in store_hours:
- opening_hours.add_range(DAYS[day['DayOfWeek']], day['OpeningTime'], day['ClosingTime'])
+ if 'NULL' not in day['OpeningTime'] and 'NULL' not in day['ClosingTime']:
+ opening_hours.add_range(DAYS[day['DayOfWeek']], day['OpeningTime'], day['ClosingTime'])
return opening_hours.as_opening_hours()
|
{"golden_diff": "diff --git a/locations/spiders/jbhifi.py b/locations/spiders/jbhifi.py\n--- a/locations/spiders/jbhifi.py\n+++ b/locations/spiders/jbhifi.py\n@@ -28,7 +28,8 @@\n def process_trading_hours(self, store_hours):\n opening_hours = OpeningHours()\n for day in store_hours:\n- opening_hours.add_range(DAYS[day['DayOfWeek']], day['OpeningTime'], day['ClosingTime'])\n+ if 'NULL' not in day['OpeningTime'] and 'NULL' not in day['ClosingTime']:\n+ opening_hours.add_range(DAYS[day['DayOfWeek']], day['OpeningTime'], day['ClosingTime'])\n \n return opening_hours.as_opening_hours()\n", "issue": "Spider jbhifi is broken\nDuring the global build at 2021-06-16-14-42-20, spider **jbhifi** failed with **78 features** and **1 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/logs/jbhifi.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/output/jbhifi.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/output/jbhifi.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport json\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\nDAYS = ['Su', 'Mo', 'Tu', \"We\", 'Th', 'Fr', 'Sa']\n\nclass JbHifiSpider(scrapy.Spider):\n name = \"jbhifi\"\n allowed_domains = [\"algolia.net\"]\n \n def start_requests(self):\n headers = {\"Content-Type\": \"application/json\",\n \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:81.0) Gecko/20100101 Firefox/81.0\",\n \"Origin\": \"https://www.jbhifi.com.au\",\n \"Referer\": \"https://www.jbhifi.com.au/pages/store-finder\",\n \"Accept\": \"*/*\",\n 'Accept-Encoding': 'gzip, deflate'\n\n }\n yield scrapy.http.Request(\n url=\"https://vtvkm5urpx-dsn.algolia.net/1/indexes/shopify_store_locations/query?x-algolia-agent=Algolia for JavaScript (3.35.1); Browser (lite)&x-algolia-application-id=VTVKM5URPX&x-algolia-api-key=a0c0108d737ad5ab54a0e2da900bf040\",\n method=\"POST\",\n headers=headers,\n body='{\"params\":\"query=&hitsPerPage=1000&filters=displayOnWeb%3Ap\"}')\n\n def process_trading_hours(self, store_hours):\n opening_hours = OpeningHours()\n for day in store_hours:\n opening_hours.add_range(DAYS[day['DayOfWeek']], day['OpeningTime'], day['ClosingTime'])\n \n return opening_hours.as_opening_hours()\n\n def parse(self, response):\n stores = json.loads(response.body)\n\n for store in stores['hits']:\n properties = {\n 'ref': store['shopId'],\n 'name': store['storeName'],\n 'addr_full': f\"{store['storeAddress']['Line1']} {store['storeAddress'].get('Line2','')} {store['storeAddress'].get('Line3','')}\".strip(),\n 'city': store['storeAddress']['Suburb'],\n 'state': store['storeAddress']['State'],\n 'postcode': store['storeAddress']['Postcode'],\n 'country': 'AU',\n 'lat': store['_geoloc']['lat'],\n 'lon': store['_geoloc']['lng'],\n 'phone': store['phone'],\n 'opening_hours': self.process_trading_hours(store['normalTradingHours'])\n }\n\n yield GeojsonPointItem(**properties)", "path": "locations/spiders/jbhifi.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport json\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\nDAYS = ['Su', 'Mo', 'Tu', \"We\", 'Th', 'Fr', 'Sa']\n\nclass JbHifiSpider(scrapy.Spider):\n name = \"jbhifi\"\n allowed_domains = [\"algolia.net\"]\n \n def start_requests(self):\n headers = {\"Content-Type\": \"application/json\",\n \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; 
Win64; x64; rv:81.0) Gecko/20100101 Firefox/81.0\",\n \"Origin\": \"https://www.jbhifi.com.au\",\n \"Referer\": \"https://www.jbhifi.com.au/pages/store-finder\",\n \"Accept\": \"*/*\",\n 'Accept-Encoding': 'gzip, deflate'\n\n }\n yield scrapy.http.Request(\n url=\"https://vtvkm5urpx-dsn.algolia.net/1/indexes/shopify_store_locations/query?x-algolia-agent=Algolia for JavaScript (3.35.1); Browser (lite)&x-algolia-application-id=VTVKM5URPX&x-algolia-api-key=a0c0108d737ad5ab54a0e2da900bf040\",\n method=\"POST\",\n headers=headers,\n body='{\"params\":\"query=&hitsPerPage=1000&filters=displayOnWeb%3Ap\"}')\n\n def process_trading_hours(self, store_hours):\n opening_hours = OpeningHours()\n for day in store_hours:\n if 'NULL' not in day['OpeningTime'] and 'NULL' not in day['ClosingTime']:\n opening_hours.add_range(DAYS[day['DayOfWeek']], day['OpeningTime'], day['ClosingTime'])\n \n return opening_hours.as_opening_hours()\n\n def parse(self, response):\n stores = json.loads(response.body)\n\n for store in stores['hits']:\n properties = {\n 'ref': store['shopId'],\n 'name': store['storeName'],\n 'addr_full': f\"{store['storeAddress']['Line1']} {store['storeAddress'].get('Line2','')} {store['storeAddress'].get('Line3','')}\".strip(),\n 'city': store['storeAddress']['Suburb'],\n 'state': store['storeAddress']['State'],\n 'postcode': store['storeAddress']['Postcode'],\n 'country': 'AU',\n 'lat': store['_geoloc']['lat'],\n 'lon': store['_geoloc']['lng'],\n 'phone': store['phone'],\n 'opening_hours': self.process_trading_hours(store['normalTradingHours'])\n }\n\n yield GeojsonPointItem(**properties)", "path": "locations/spiders/jbhifi.py"}]}
| 1,138 | 164 |
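A standalone sketch of the guard the patch adds to `process_trading_hours`, run against an invented payload that mirrors the shape the spider receives from the API.

```python
from locations.hours import OpeningHours

DAYS = ['Su', 'Mo', 'Tu', 'We', 'Th', 'Fr', 'Sa']

store_hours = [
    {'DayOfWeek': 1, 'OpeningTime': '09:00', 'ClosingTime': '17:30'},
    {'DayOfWeek': 0, 'OpeningTime': 'NULL', 'ClosingTime': 'NULL'},
]

opening_hours = OpeningHours()
for day in store_hours:
    # The patch skips entries whose times contain 'NULL' instead of passing
    # them to add_range, which is what made the spider error out.
    if 'NULL' not in day['OpeningTime'] and 'NULL' not in day['ClosingTime']:
        opening_hours.add_range(DAYS[day['DayOfWeek']], day['OpeningTime'], day['ClosingTime'])

print(opening_hours.as_opening_hours())  # -> something like "Mo 09:00-17:30"
```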
gh_patches_debug_1878
|
rasdani/github-patches
|
git_diff
|
googleapis__google-cloud-python-5856
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Request to release GCS Python library
Hi,
Is it possible to release the Storage client library for Python?
I'd like the new method `get_service_account_email` to be available. Unless there exist concerns.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `storage/setup.py`
Content:
```
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16 import os
17
18 import setuptools
19
20
21 # Package metadata.
22
23 name = 'google-cloud-storage'
24 description = 'Google Cloud Storage API client library'
25 version = '1.10.0'
26 # Should be one of:
27 # 'Development Status :: 3 - Alpha'
28 # 'Development Status :: 4 - Beta'
29 # 'Development Status :: 5 - Production/Stable'
30 release_status = 'Development Status :: 5 - Production/Stable'
31 dependencies = [
32 'google-cloud-core<0.29dev,>=0.28.0',
33 'google-api-core<2.0.0dev,>=0.1.1',
34 'google-resumable-media>=0.3.1',
35 ]
36 extras = {
37 }
38
39
40 # Setup boilerplate below this line.
41
42 package_root = os.path.abspath(os.path.dirname(__file__))
43
44 readme_filename = os.path.join(package_root, 'README.rst')
45 with io.open(readme_filename, encoding='utf-8') as readme_file:
46 readme = readme_file.read()
47
48 # Only include packages under the 'google' namespace. Do not include tests,
49 # benchmarks, etc.
50 packages = [
51 package for package in setuptools.find_packages()
52 if package.startswith('google')]
53
54 # Determine which namespaces are needed.
55 namespaces = ['google']
56 if 'google.cloud' in packages:
57 namespaces.append('google.cloud')
58
59
60 setuptools.setup(
61 name=name,
62 version=version,
63 description=description,
64 long_description=readme,
65 author='Google LLC',
66 author_email='[email protected]',
67 license='Apache 2.0',
68 url='https://github.com/GoogleCloudPlatform/google-cloud-python',
69 classifiers=[
70 release_status,
71 'Intended Audience :: Developers',
72 'License :: OSI Approved :: Apache Software License',
73 'Programming Language :: Python',
74 'Programming Language :: Python :: 2',
75 'Programming Language :: Python :: 2.7',
76 'Programming Language :: Python :: 3',
77 'Programming Language :: Python :: 3.4',
78 'Programming Language :: Python :: 3.5',
79 'Programming Language :: Python :: 3.6',
80 'Operating System :: OS Independent',
81 'Topic :: Internet',
82 ],
83 platforms='Posix; MacOS X; Windows',
84 packages=packages,
85 namespace_packages=namespaces,
86 install_requires=dependencies,
87 extras_require=extras,
88 include_package_data=True,
89 zip_safe=False,
90 )
91
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/storage/setup.py b/storage/setup.py
--- a/storage/setup.py
+++ b/storage/setup.py
@@ -22,7 +22,7 @@
name = 'google-cloud-storage'
description = 'Google Cloud Storage API client library'
-version = '1.10.0'
+version = '1.11.0'
# Should be one of:
# 'Development Status :: 3 - Alpha'
# 'Development Status :: 4 - Beta'
|
{"golden_diff": "diff --git a/storage/setup.py b/storage/setup.py\n--- a/storage/setup.py\n+++ b/storage/setup.py\n@@ -22,7 +22,7 @@\n \n name = 'google-cloud-storage'\n description = 'Google Cloud Storage API client library'\n-version = '1.10.0'\n+version = '1.11.0'\n # Should be one of:\n # 'Development Status :: 3 - Alpha'\n # 'Development Status :: 4 - Beta'\n", "issue": "Request to release GCS Python library\nHi,\r\n\r\nIs it possible to release the Storage client library for Python?\r\n\r\nI'd like the new method `get_service_account_email` to be available. Unless there exist concerns.\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\n\n# Package metadata.\n\nname = 'google-cloud-storage'\ndescription = 'Google Cloud Storage API client library'\nversion = '1.10.0'\n# Should be one of:\n# 'Development Status :: 3 - Alpha'\n# 'Development Status :: 4 - Beta'\n# 'Development Status :: 5 - Production/Stable'\nrelease_status = 'Development Status :: 5 - Production/Stable'\ndependencies = [\n 'google-cloud-core<0.29dev,>=0.28.0',\n 'google-api-core<2.0.0dev,>=0.1.1',\n 'google-resumable-media>=0.3.1',\n]\nextras = {\n}\n\n\n# Setup boilerplate below this line.\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, 'README.rst')\nwith io.open(readme_filename, encoding='utf-8') as readme_file:\n readme = readme_file.read()\n\n# Only include packages under the 'google' namespace. 
Do not include tests,\n# benchmarks, etc.\npackages = [\n package for package in setuptools.find_packages()\n if package.startswith('google')]\n\n# Determine which namespaces are needed.\nnamespaces = ['google']\nif 'google.cloud' in packages:\n namespaces.append('google.cloud')\n\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author='Google LLC',\n author_email='[email protected]',\n license='Apache 2.0',\n url='https://github.com/GoogleCloudPlatform/google-cloud-python',\n classifiers=[\n release_status,\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Operating System :: OS Independent',\n 'Topic :: Internet',\n ],\n platforms='Posix; MacOS X; Windows',\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "storage/setup.py"}], "after_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\n\n# Package metadata.\n\nname = 'google-cloud-storage'\ndescription = 'Google Cloud Storage API client library'\nversion = '1.11.0'\n# Should be one of:\n# 'Development Status :: 3 - Alpha'\n# 'Development Status :: 4 - Beta'\n# 'Development Status :: 5 - Production/Stable'\nrelease_status = 'Development Status :: 5 - Production/Stable'\ndependencies = [\n 'google-cloud-core<0.29dev,>=0.28.0',\n 'google-api-core<2.0.0dev,>=0.1.1',\n 'google-resumable-media>=0.3.1',\n]\nextras = {\n}\n\n\n# Setup boilerplate below this line.\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, 'README.rst')\nwith io.open(readme_filename, encoding='utf-8') as readme_file:\n readme = readme_file.read()\n\n# Only include packages under the 'google' namespace. 
Do not include tests,\n# benchmarks, etc.\npackages = [\n package for package in setuptools.find_packages()\n if package.startswith('google')]\n\n# Determine which namespaces are needed.\nnamespaces = ['google']\nif 'google.cloud' in packages:\n namespaces.append('google.cloud')\n\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author='Google LLC',\n author_email='[email protected]',\n license='Apache 2.0',\n url='https://github.com/GoogleCloudPlatform/google-cloud-python',\n classifiers=[\n release_status,\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Operating System :: OS Independent',\n 'Topic :: Internet',\n ],\n platforms='Posix; MacOS X; Windows',\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "storage/setup.py"}]}
| 1,124 | 101 |
gh_patches_debug_67229 | rasdani/github-patches | git_diff | pypi__warehouse-434 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Redirect a slash-less URL to the slashed variant
We have urls like `/project/foobar/`, if someone enters `/project/foobar` we should redirect that to `/project/foobar/`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `warehouse/config.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import fs.opener
14 import transaction
15
16 from pyramid.config import Configurator
17 from tzf.pyramid_yml import config_defaults
18
19 from warehouse.utils.static import WarehouseCacheBuster
20
21
22 def content_security_policy_tween_factory(handler, registry):
23 policy = registry.settings.get("csp", {})
24 policy = "; ".join([" ".join([k] + v) for k, v in sorted(policy.items())])
25
26 def content_security_policy_tween(request):
27 resp = handler(request)
28
29 # We don't want to apply our Content Security Policy to the debug
30 # toolbar, that's not part of our application and it doesn't work with
31 # our restrictive CSP.
32 if not request.path.startswith("/_debug_toolbar/"):
33 resp.headers["Content-Security-Policy"] = \
34 policy.format(request=request)
35
36 return resp
37
38 return content_security_policy_tween
39
40
41 def configure(settings=None):
42 if settings is None:
43 settings = {}
44
45 config = Configurator(settings=settings)
46
47 # Set our yml.location so that it contains all of our settings files
48 config_defaults(config, ["warehouse:etc"])
49
50 # We want to load configuration from YAML files
51 config.include("tzf.pyramid_yml")
52
53 # We'll want to use Jinja2 as our template system.
54 config.include("pyramid_jinja2")
55
56 # We also want to use Jinja2 for .html templates as well, because we just
57 # assume that all templates will be using Jinja.
58 config.add_jinja2_renderer(".html")
59
60 # We'll want to configure some filters for Jinja2 as well.
61 filters = config.get_settings().setdefault("jinja2.filters", {})
62 filters.setdefault("readme", "warehouse.filters:readme_renderer")
63 filters.setdefault("shorten_number", "warehouse.filters:shorten_number")
64
65 # We also want to register some global functions for Jinja
66 jglobals = config.get_settings().setdefault("jinja2.globals", {})
67 jglobals.setdefault("gravatar", "warehouse.utils.gravatar:gravatar")
68
69 # We'll store all of our templates in one location, warehouse/templates
70 # so we'll go ahead and add that to the Jinja2 search path.
71 config.add_jinja2_search_path("warehouse:templates", name=".html")
72
73 # Configure our transaction handling so that each request gets it's own
74 # transaction handler and the lifetime of the transaction is tied to the
75 # lifetime of the request.
76 config.add_settings({
77 "tm.manager_hook": lambda request: transaction.TransactionManager(),
78 })
79 config.include("pyramid_tm")
80
81 # Register support for services
82 config.include("pyramid_services")
83
84 # Register support for internationalization and localization
85 config.include(".i18n")
86
87 # Register the configuration for the PostgreSQL database.
88 config.include(".db")
89
90 # Register our session support
91 config.include(".sessions")
92
93 # Register our support for http and origin caching
94 config.include(".cache.http")
95 config.include(".cache.origin")
96
97 # Register our CSRF support
98 config.include(".csrf")
99
100 # Register our authentication support.
101 config.include(".accounts")
102
103 # Allow the packaging app to register any services it has.
104 config.include(".packaging")
105
106 # Register all our URL routes for Warehouse.
107 config.include(".routes")
108
109 # Enable a Content Security Policy
110 config.add_settings({
111 "csp": {
112 "default-src": ["'none'"],
113 "frame-ancestors": ["'none'"],
114 "img-src": [
115 "'self'",
116 config.registry.settings["camo.url"],
117 "https://secure.gravatar.com",
118 ],
119 "referrer": ["cross-origin"],
120 "reflected-xss": ["block"],
121 "script-src": ["'self'"],
122 "style-src": ["'self'"],
123 },
124 })
125 config.add_tween("warehouse.config.content_security_policy_tween_factory")
126
127 # Configure the filesystems we use.
128 config.registry["filesystems"] = {}
129 for key, path in {
130 k[5:]: v
131 for k, v in config.registry.settings.items()
132 if k.startswith("dirs.")}.items():
133 config.registry["filesystems"][key] = \
134 fs.opener.fsopendir(path, create_dir=True)
135
136 # Enable Warehouse to service our static files
137 config.add_static_view(
138 name="static",
139 path="warehouse:static",
140 cachebust=WarehouseCacheBuster(
141 "warehouse:static/manifest.json",
142 cache=not config.registry.settings["pyramid.reload_assets"],
143 ),
144 )
145
146 # Scan everything for configuration
147 config.scan(ignore=["warehouse.migrations.env"])
148
149 return config
150
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/warehouse/config.py b/warehouse/config.py
--- a/warehouse/config.py
+++ b/warehouse/config.py
@@ -124,6 +124,10 @@
})
config.add_tween("warehouse.config.content_security_policy_tween_factory")
+ # If a route matches with a slash appended to it, redirect to that route
+ # instead of returning a HTTPNotFound.
+ config.add_notfound_view(append_slash=True)
+
# Configure the filesystems we use.
config.registry["filesystems"] = {}
for key, path in {
|
{"golden_diff": "diff --git a/warehouse/config.py b/warehouse/config.py\n--- a/warehouse/config.py\n+++ b/warehouse/config.py\n@@ -124,6 +124,10 @@\n })\n config.add_tween(\"warehouse.config.content_security_policy_tween_factory\")\n \n+ # If a route matches with a slash appended to it, redirect to that route\n+ # instead of returning a HTTPNotFound.\n+ config.add_notfound_view(append_slash=True)\n+\n # Configure the filesystems we use.\n config.registry[\"filesystems\"] = {}\n for key, path in {\n", "issue": "Redirect a slash-less URL to the slashed variant\nWe have urls like `/project/foobar/`, if someone enters `/project/foobar` we should redirect that to `/project/foobar/`.\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport fs.opener\nimport transaction\n\nfrom pyramid.config import Configurator\nfrom tzf.pyramid_yml import config_defaults\n\nfrom warehouse.utils.static import WarehouseCacheBuster\n\n\ndef content_security_policy_tween_factory(handler, registry):\n policy = registry.settings.get(\"csp\", {})\n policy = \"; \".join([\" \".join([k] + v) for k, v in sorted(policy.items())])\n\n def content_security_policy_tween(request):\n resp = handler(request)\n\n # We don't want to apply our Content Security Policy to the debug\n # toolbar, that's not part of our application and it doesn't work with\n # our restrictive CSP.\n if not request.path.startswith(\"/_debug_toolbar/\"):\n resp.headers[\"Content-Security-Policy\"] = \\\n policy.format(request=request)\n\n return resp\n\n return content_security_policy_tween\n\n\ndef configure(settings=None):\n if settings is None:\n settings = {}\n\n config = Configurator(settings=settings)\n\n # Set our yml.location so that it contains all of our settings files\n config_defaults(config, [\"warehouse:etc\"])\n\n # We want to load configuration from YAML files\n config.include(\"tzf.pyramid_yml\")\n\n # We'll want to use Jinja2 as our template system.\n config.include(\"pyramid_jinja2\")\n\n # We also want to use Jinja2 for .html templates as well, because we just\n # assume that all templates will be using Jinja.\n config.add_jinja2_renderer(\".html\")\n\n # We'll want to configure some filters for Jinja2 as well.\n filters = config.get_settings().setdefault(\"jinja2.filters\", {})\n filters.setdefault(\"readme\", \"warehouse.filters:readme_renderer\")\n filters.setdefault(\"shorten_number\", \"warehouse.filters:shorten_number\")\n\n # We also want to register some global functions for Jinja\n jglobals = config.get_settings().setdefault(\"jinja2.globals\", {})\n jglobals.setdefault(\"gravatar\", \"warehouse.utils.gravatar:gravatar\")\n\n # We'll store all of our templates in one location, warehouse/templates\n # so we'll go ahead and add that to the Jinja2 search path.\n config.add_jinja2_search_path(\"warehouse:templates\", name=\".html\")\n\n # Configure our transaction handling so that each request gets it's own\n # transaction handler and the lifetime of the transaction is tied to the\n # lifetime of 
the request.\n config.add_settings({\n \"tm.manager_hook\": lambda request: transaction.TransactionManager(),\n })\n config.include(\"pyramid_tm\")\n\n # Register support for services\n config.include(\"pyramid_services\")\n\n # Register support for internationalization and localization\n config.include(\".i18n\")\n\n # Register the configuration for the PostgreSQL database.\n config.include(\".db\")\n\n # Register our session support\n config.include(\".sessions\")\n\n # Register our support for http and origin caching\n config.include(\".cache.http\")\n config.include(\".cache.origin\")\n\n # Register our CSRF support\n config.include(\".csrf\")\n\n # Register our authentication support.\n config.include(\".accounts\")\n\n # Allow the packaging app to register any services it has.\n config.include(\".packaging\")\n\n # Register all our URL routes for Warehouse.\n config.include(\".routes\")\n\n # Enable a Content Security Policy\n config.add_settings({\n \"csp\": {\n \"default-src\": [\"'none'\"],\n \"frame-ancestors\": [\"'none'\"],\n \"img-src\": [\n \"'self'\",\n config.registry.settings[\"camo.url\"],\n \"https://secure.gravatar.com\",\n ],\n \"referrer\": [\"cross-origin\"],\n \"reflected-xss\": [\"block\"],\n \"script-src\": [\"'self'\"],\n \"style-src\": [\"'self'\"],\n },\n })\n config.add_tween(\"warehouse.config.content_security_policy_tween_factory\")\n\n # Configure the filesystems we use.\n config.registry[\"filesystems\"] = {}\n for key, path in {\n k[5:]: v\n for k, v in config.registry.settings.items()\n if k.startswith(\"dirs.\")}.items():\n config.registry[\"filesystems\"][key] = \\\n fs.opener.fsopendir(path, create_dir=True)\n\n # Enable Warehouse to service our static files\n config.add_static_view(\n name=\"static\",\n path=\"warehouse:static\",\n cachebust=WarehouseCacheBuster(\n \"warehouse:static/manifest.json\",\n cache=not config.registry.settings[\"pyramid.reload_assets\"],\n ),\n )\n\n # Scan everything for configuration\n config.scan(ignore=[\"warehouse.migrations.env\"])\n\n return config\n", "path": "warehouse/config.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport fs.opener\nimport transaction\n\nfrom pyramid.config import Configurator\nfrom tzf.pyramid_yml import config_defaults\n\nfrom warehouse.utils.static import WarehouseCacheBuster\n\n\ndef content_security_policy_tween_factory(handler, registry):\n policy = registry.settings.get(\"csp\", {})\n policy = \"; \".join([\" \".join([k] + v) for k, v in sorted(policy.items())])\n\n def content_security_policy_tween(request):\n resp = handler(request)\n\n # We don't want to apply our Content Security Policy to the debug\n # toolbar, that's not part of our application and it doesn't work with\n # our restrictive CSP.\n if not request.path.startswith(\"/_debug_toolbar/\"):\n resp.headers[\"Content-Security-Policy\"] = \\\n policy.format(request=request)\n\n return resp\n\n return content_security_policy_tween\n\n\ndef configure(settings=None):\n if settings is None:\n 
settings = {}\n\n config = Configurator(settings=settings)\n\n # Set our yml.location so that it contains all of our settings files\n config_defaults(config, [\"warehouse:etc\"])\n\n # We want to load configuration from YAML files\n config.include(\"tzf.pyramid_yml\")\n\n # We'll want to use Jinja2 as our template system.\n config.include(\"pyramid_jinja2\")\n\n # We also want to use Jinja2 for .html templates as well, because we just\n # assume that all templates will be using Jinja.\n config.add_jinja2_renderer(\".html\")\n\n # We'll want to configure some filters for Jinja2 as well.\n filters = config.get_settings().setdefault(\"jinja2.filters\", {})\n filters.setdefault(\"readme\", \"warehouse.filters:readme_renderer\")\n filters.setdefault(\"shorten_number\", \"warehouse.filters:shorten_number\")\n\n # We also want to register some global functions for Jinja\n jglobals = config.get_settings().setdefault(\"jinja2.globals\", {})\n jglobals.setdefault(\"gravatar\", \"warehouse.utils.gravatar:gravatar\")\n\n # We'll store all of our templates in one location, warehouse/templates\n # so we'll go ahead and add that to the Jinja2 search path.\n config.add_jinja2_search_path(\"warehouse:templates\", name=\".html\")\n\n # Configure our transaction handling so that each request gets it's own\n # transaction handler and the lifetime of the transaction is tied to the\n # lifetime of the request.\n config.add_settings({\n \"tm.manager_hook\": lambda request: transaction.TransactionManager(),\n })\n config.include(\"pyramid_tm\")\n\n # Register support for services\n config.include(\"pyramid_services\")\n\n # Register support for internationalization and localization\n config.include(\".i18n\")\n\n # Register the configuration for the PostgreSQL database.\n config.include(\".db\")\n\n # Register our session support\n config.include(\".sessions\")\n\n # Register our support for http and origin caching\n config.include(\".cache.http\")\n config.include(\".cache.origin\")\n\n # Register our CSRF support\n config.include(\".csrf\")\n\n # Register our authentication support.\n config.include(\".accounts\")\n\n # Allow the packaging app to register any services it has.\n config.include(\".packaging\")\n\n # Register all our URL routes for Warehouse.\n config.include(\".routes\")\n\n # Enable a Content Security Policy\n config.add_settings({\n \"csp\": {\n \"default-src\": [\"'none'\"],\n \"frame-ancestors\": [\"'none'\"],\n \"img-src\": [\n \"'self'\",\n config.registry.settings[\"camo.url\"],\n \"https://secure.gravatar.com\",\n ],\n \"referrer\": [\"cross-origin\"],\n \"reflected-xss\": [\"block\"],\n \"script-src\": [\"'self'\"],\n \"style-src\": [\"'self'\"],\n },\n })\n config.add_tween(\"warehouse.config.content_security_policy_tween_factory\")\n\n # If a route matches with a slash appended to it, redirect to that route\n # instead of returning a HTTPNotFound.\n config.add_notfound_view(append_slash=True)\n\n # Configure the filesystems we use.\n config.registry[\"filesystems\"] = {}\n for key, path in {\n k[5:]: v\n for k, v in config.registry.settings.items()\n if k.startswith(\"dirs.\")}.items():\n config.registry[\"filesystems\"][key] = \\\n fs.opener.fsopendir(path, create_dir=True)\n\n # Enable Warehouse to service our static files\n config.add_static_view(\n name=\"static\",\n path=\"warehouse:static\",\n cachebust=WarehouseCacheBuster(\n \"warehouse:static/manifest.json\",\n cache=not config.registry.settings[\"pyramid.reload_assets\"],\n ),\n )\n\n # Scan everything for configuration\n 
config.scan(ignore=[\"warehouse.migrations.env\"])\n\n return config\n", "path": "warehouse/config.py"}]}
| 1,781 | 129 |
gh_patches_debug_1456 | rasdani/github-patches | git_diff | arviz-devs__arviz-596 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Installing arviz breaks pymc3 installation
**Describe the bug**
Installing Arviz breaks a pymc3 installation, which is unfortunate because they're built to be compatible. After installation, importing pymc3 throws the following error.
> WARNING (theano.tensor.blas): Using NumPy C-API based implementation for BLAS functions.
The reason is because arviz installation requires numpy==1.15 rather than numpy>=1.15. If you have 1.16, it uninstalls it and re-installs 1.15. It's annoying to fix. I ended up having to scrap the whole virtual environment and start over.
**To Reproduce**
Install arviz if you have any version of numpy other than 1.15, then import pymc3.
**Expected behavior**
Do not force downgrade of numpy.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `arviz/__init__.py`
Content:
```
1 # pylint: disable=wildcard-import,invalid-name,wrong-import-position
2 """ArviZ is a library for exploratory analysis of Bayesian models."""
3 __version__ = "0.3.2"
4
5 import os
6 import logging
7 from matplotlib.pyplot import style
8
9 # add ArviZ's styles to matplotlib's styles
10 arviz_style_path = os.path.join(os.path.dirname(__file__), "plots", "styles")
11 style.core.USER_LIBRARY_PATHS.append(arviz_style_path)
12 style.core.reload_library()
13
14 # Configure logging before importing arviz internals
15 _log = logging.getLogger("arviz")
16
17 if not logging.root.handlers:
18 handler = logging.StreamHandler()
19 _log.setLevel(logging.INFO)
20 _log.addHandler(handler)
21
22 from .data import *
23 from .plots import *
24 from .stats import *
25
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/arviz/__init__.py b/arviz/__init__.py
--- a/arviz/__init__.py
+++ b/arviz/__init__.py
@@ -1,6 +1,6 @@
# pylint: disable=wildcard-import,invalid-name,wrong-import-position
"""ArviZ is a library for exploratory analysis of Bayesian models."""
-__version__ = "0.3.2"
+__version__ = "0.3.3"
import os
import logging
|
{"golden_diff": "diff --git a/arviz/__init__.py b/arviz/__init__.py\n--- a/arviz/__init__.py\n+++ b/arviz/__init__.py\n@@ -1,6 +1,6 @@\n # pylint: disable=wildcard-import,invalid-name,wrong-import-position\n \"\"\"ArviZ is a library for exploratory analysis of Bayesian models.\"\"\"\n-__version__ = \"0.3.2\"\n+__version__ = \"0.3.3\"\n \n import os\n import logging\n", "issue": "Installing arviz breaks pymc3 installation\n**Describe the bug**\r\nInstalling Arviz breaks a pymc3 installation, which is unfortunate because they're built to be compatible. After installation, importing pymc3 throws the following error. \r\n\r\n> WARNING (theano.tensor.blas): Using NumPy C-API based implementation for BLAS functions.\r\n\r\nThe reason is because arviz installation requires numpy==1.15 rather than numpy>=1.15. If you have 1.16, it uninstalls it and re-installs 1.15. It's annoying to fix. I ended up having to scrap the whole virtual environment and start over.\r\n\r\n**To Reproduce**\r\nInstall arviz if you have any version of numpy other than 1.15, then import pymc3. \r\n\r\n**Expected behavior**\r\nDo not force downgrade of numpy. \n", "before_files": [{"content": "# pylint: disable=wildcard-import,invalid-name,wrong-import-position\n\"\"\"ArviZ is a library for exploratory analysis of Bayesian models.\"\"\"\n__version__ = \"0.3.2\"\n\nimport os\nimport logging\nfrom matplotlib.pyplot import style\n\n# add ArviZ's styles to matplotlib's styles\narviz_style_path = os.path.join(os.path.dirname(__file__), \"plots\", \"styles\")\nstyle.core.USER_LIBRARY_PATHS.append(arviz_style_path)\nstyle.core.reload_library()\n\n# Configure logging before importing arviz internals\n_log = logging.getLogger(\"arviz\")\n\nif not logging.root.handlers:\n handler = logging.StreamHandler()\n _log.setLevel(logging.INFO)\n _log.addHandler(handler)\n\nfrom .data import *\nfrom .plots import *\nfrom .stats import *\n", "path": "arviz/__init__.py"}], "after_files": [{"content": "# pylint: disable=wildcard-import,invalid-name,wrong-import-position\n\"\"\"ArviZ is a library for exploratory analysis of Bayesian models.\"\"\"\n__version__ = \"0.3.3\"\n\nimport os\nimport logging\nfrom matplotlib.pyplot import style\n\n# add ArviZ's styles to matplotlib's styles\narviz_style_path = os.path.join(os.path.dirname(__file__), \"plots\", \"styles\")\nstyle.core.USER_LIBRARY_PATHS.append(arviz_style_path)\nstyle.core.reload_library()\n\n# Configure logging before importing arviz internals\n_log = logging.getLogger(\"arviz\")\n\nif not logging.root.handlers:\n handler = logging.StreamHandler()\n _log.setLevel(logging.INFO)\n _log.addHandler(handler)\n\nfrom .data import *\nfrom .plots import *\nfrom .stats import *\n", "path": "arviz/__init__.py"}]}
| 646 | 108 |
gh_patches_debug_1030 | rasdani/github-patches | git_diff | scikit-image__scikit-image-1820 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Deprecate Python 2.6 after release of 0.12
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `skimage/__init__.py`
Content:
```
1 """Image Processing SciKit (Toolbox for SciPy)
2
3 ``scikit-image`` (a.k.a. ``skimage``) is a collection of algorithms for image
4 processing and computer vision.
5
6 The main package of ``skimage`` only provides a few utilities for converting
7 between image data types; for most features, you need to import one of the
8 following subpackages:
9
10 Subpackages
11 -----------
12 color
13 Color space conversion.
14 data
15 Test images and example data.
16 draw
17 Drawing primitives (lines, text, etc.) that operate on NumPy arrays.
18 exposure
19 Image intensity adjustment, e.g., histogram equalization, etc.
20 feature
21 Feature detection and extraction, e.g., texture analysis corners, etc.
22 filters
23 Sharpening, edge finding, rank filters, thresholding, etc.
24 graph
25 Graph-theoretic operations, e.g., shortest paths.
26 io
27 Reading, saving, and displaying images and video.
28 measure
29 Measurement of image properties, e.g., similarity and contours.
30 morphology
31 Morphological operations, e.g., opening or skeletonization.
32 novice
33 Simplified interface for teaching purposes.
34 restoration
35 Restoration algorithms, e.g., deconvolution algorithms, denoising, etc.
36 segmentation
37 Partitioning an image into multiple regions.
38 transform
39 Geometric and other transforms, e.g., rotation or the Radon transform.
40 util
41 Generic utilities.
42 viewer
43 A simple graphical user interface for visualizing results and exploring
44 parameters.
45
46 Utility Functions
47 -----------------
48 img_as_float
49 Convert an image to floating point format, with values in [0, 1].
50 img_as_uint
51 Convert an image to unsigned integer format, with values in [0, 65535].
52 img_as_int
53 Convert an image to signed integer format, with values in [-32768, 32767].
54 img_as_ubyte
55 Convert an image to unsigned byte format, with values in [0, 255].
56
57 """
58
59 import os.path as osp
60 import imp
61 import functools
62 import warnings
63 import sys
64
65 pkg_dir = osp.abspath(osp.dirname(__file__))
66 data_dir = osp.join(pkg_dir, 'data')
67
68 __version__ = '0.12dev'
69
70 try:
71 imp.find_module('nose')
72 except ImportError:
73 def _test(doctest=False, verbose=False):
74 """This would run all unit tests, but nose couldn't be
75 imported so the test suite can not run.
76 """
77 raise ImportError("Could not load nose. Unit tests not available.")
78
79 else:
80 def _test(doctest=False, verbose=False):
81 """Run all unit tests."""
82 import nose
83 args = ['', pkg_dir, '--exe', '--ignore-files=^_test']
84 if verbose:
85 args.extend(['-v', '-s'])
86 if doctest:
87 args.extend(['--with-doctest', '--ignore-files=^\.',
88 '--ignore-files=^setup\.py$$', '--ignore-files=test'])
89 # Make sure warnings do not break the doc tests
90 with warnings.catch_warnings():
91 warnings.simplefilter("ignore")
92 success = nose.run('skimage', argv=args)
93 else:
94 success = nose.run('skimage', argv=args)
95 # Return sys.exit code
96 if success:
97 return 0
98 else:
99 return 1
100
101
102 # do not use `test` as function name as this leads to a recursion problem with
103 # the nose test suite
104 test = _test
105 test_verbose = functools.partial(test, verbose=True)
106 test_verbose.__doc__ = test.__doc__
107 doctest = functools.partial(test, doctest=True)
108 doctest.__doc__ = doctest.__doc__
109 doctest_verbose = functools.partial(test, doctest=True, verbose=True)
110 doctest_verbose.__doc__ = doctest.__doc__
111
112
113 # Logic for checking for improper install and importing while in the source
114 # tree when package has not been installed inplace.
115 # Code adapted from scikit-learn's __check_build module.
116 _INPLACE_MSG = """
117 It appears that you are importing a local scikit-image source tree. For
118 this, you need to have an inplace install. Maybe you are in the source
119 directory and you need to try from another location."""
120
121 _STANDARD_MSG = """
122 Your install of scikit-image appears to be broken.
123 Try re-installing the package following the instructions at:
124 http://scikit-image.org/docs/stable/install.html """
125
126
127 def _raise_build_error(e):
128 # Raise a comprehensible error
129 local_dir = osp.split(__file__)[0]
130 msg = _STANDARD_MSG
131 if local_dir == "skimage":
132 # Picking up the local install: this will work only if the
133 # install is an 'inplace build'
134 msg = _INPLACE_MSG
135 raise ImportError("""%s
136 It seems that scikit-image has not been built correctly.
137 %s""" % (e, msg))
138
139 try:
140 # This variable is injected in the __builtins__ by the build
141 # process. It used to enable importing subpackages of skimage when
142 # the binaries are not built
143 __SKIMAGE_SETUP__
144 except NameError:
145 __SKIMAGE_SETUP__ = False
146
147 if __SKIMAGE_SETUP__:
148 sys.stderr.write('Partial import of skimage during the build process.\n')
149 # We are not importing the rest of the scikit during the build
150 # process, as it may not be compiled yet
151 else:
152 try:
153 from ._shared import geometry
154 del geometry
155 except ImportError as e:
156 _raise_build_error(e)
157 from .util.dtype import *
158
159 del warnings, functools, osp, imp, sys
160
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/skimage/__init__.py b/skimage/__init__.py
--- a/skimage/__init__.py
+++ b/skimage/__init__.py
@@ -156,4 +156,9 @@
_raise_build_error(e)
from .util.dtype import *
+
+if sys.version.startswith('2.6'):
+ warnings.warn("Python 2.6 is deprecated and will not be supported in scikit-image 0.13+")
+
+
del warnings, functools, osp, imp, sys
|
{"golden_diff": "diff --git a/skimage/__init__.py b/skimage/__init__.py\n--- a/skimage/__init__.py\n+++ b/skimage/__init__.py\n@@ -156,4 +156,9 @@\n _raise_build_error(e)\n from .util.dtype import *\n \n+\n+if sys.version.startswith('2.6'):\n+ warnings.warn(\"Python 2.6 is deprecated and will not be supported in scikit-image 0.13+\")\n+\n+\n del warnings, functools, osp, imp, sys\n", "issue": "Deprecate Python 2.6 after release of 0.12\n\n", "before_files": [{"content": "\"\"\"Image Processing SciKit (Toolbox for SciPy)\n\n``scikit-image`` (a.k.a. ``skimage``) is a collection of algorithms for image\nprocessing and computer vision.\n\nThe main package of ``skimage`` only provides a few utilities for converting\nbetween image data types; for most features, you need to import one of the\nfollowing subpackages:\n\nSubpackages\n-----------\ncolor\n Color space conversion.\ndata\n Test images and example data.\ndraw\n Drawing primitives (lines, text, etc.) that operate on NumPy arrays.\nexposure\n Image intensity adjustment, e.g., histogram equalization, etc.\nfeature\n Feature detection and extraction, e.g., texture analysis corners, etc.\nfilters\n Sharpening, edge finding, rank filters, thresholding, etc.\ngraph\n Graph-theoretic operations, e.g., shortest paths.\nio\n Reading, saving, and displaying images and video.\nmeasure\n Measurement of image properties, e.g., similarity and contours.\nmorphology\n Morphological operations, e.g., opening or skeletonization.\nnovice\n Simplified interface for teaching purposes.\nrestoration\n Restoration algorithms, e.g., deconvolution algorithms, denoising, etc.\nsegmentation\n Partitioning an image into multiple regions.\ntransform\n Geometric and other transforms, e.g., rotation or the Radon transform.\nutil\n Generic utilities.\nviewer\n A simple graphical user interface for visualizing results and exploring\n parameters.\n\nUtility Functions\n-----------------\nimg_as_float\n Convert an image to floating point format, with values in [0, 1].\nimg_as_uint\n Convert an image to unsigned integer format, with values in [0, 65535].\nimg_as_int\n Convert an image to signed integer format, with values in [-32768, 32767].\nimg_as_ubyte\n Convert an image to unsigned byte format, with values in [0, 255].\n\n\"\"\"\n\nimport os.path as osp\nimport imp\nimport functools\nimport warnings\nimport sys\n\npkg_dir = osp.abspath(osp.dirname(__file__))\ndata_dir = osp.join(pkg_dir, 'data')\n\n__version__ = '0.12dev'\n\ntry:\n imp.find_module('nose')\nexcept ImportError:\n def _test(doctest=False, verbose=False):\n \"\"\"This would run all unit tests, but nose couldn't be\n imported so the test suite can not run.\n \"\"\"\n raise ImportError(\"Could not load nose. 
Unit tests not available.\")\n\nelse:\n def _test(doctest=False, verbose=False):\n \"\"\"Run all unit tests.\"\"\"\n import nose\n args = ['', pkg_dir, '--exe', '--ignore-files=^_test']\n if verbose:\n args.extend(['-v', '-s'])\n if doctest:\n args.extend(['--with-doctest', '--ignore-files=^\\.',\n '--ignore-files=^setup\\.py$$', '--ignore-files=test'])\n # Make sure warnings do not break the doc tests\n with warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n success = nose.run('skimage', argv=args)\n else:\n success = nose.run('skimage', argv=args)\n # Return sys.exit code\n if success:\n return 0\n else:\n return 1\n\n\n# do not use `test` as function name as this leads to a recursion problem with\n# the nose test suite\ntest = _test\ntest_verbose = functools.partial(test, verbose=True)\ntest_verbose.__doc__ = test.__doc__\ndoctest = functools.partial(test, doctest=True)\ndoctest.__doc__ = doctest.__doc__\ndoctest_verbose = functools.partial(test, doctest=True, verbose=True)\ndoctest_verbose.__doc__ = doctest.__doc__\n\n\n# Logic for checking for improper install and importing while in the source\n# tree when package has not been installed inplace.\n# Code adapted from scikit-learn's __check_build module.\n_INPLACE_MSG = \"\"\"\nIt appears that you are importing a local scikit-image source tree. For\nthis, you need to have an inplace install. Maybe you are in the source\ndirectory and you need to try from another location.\"\"\"\n\n_STANDARD_MSG = \"\"\"\nYour install of scikit-image appears to be broken.\nTry re-installing the package following the instructions at:\nhttp://scikit-image.org/docs/stable/install.html \"\"\"\n\n\ndef _raise_build_error(e):\n # Raise a comprehensible error\n local_dir = osp.split(__file__)[0]\n msg = _STANDARD_MSG\n if local_dir == \"skimage\":\n # Picking up the local install: this will work only if the\n # install is an 'inplace build'\n msg = _INPLACE_MSG\n raise ImportError(\"\"\"%s\nIt seems that scikit-image has not been built correctly.\n%s\"\"\" % (e, msg))\n\ntry:\n # This variable is injected in the __builtins__ by the build\n # process. It used to enable importing subpackages of skimage when\n # the binaries are not built\n __SKIMAGE_SETUP__\nexcept NameError:\n __SKIMAGE_SETUP__ = False\n\nif __SKIMAGE_SETUP__:\n sys.stderr.write('Partial import of skimage during the build process.\\n')\n # We are not importing the rest of the scikit during the build\n # process, as it may not be compiled yet\nelse:\n try:\n from ._shared import geometry\n del geometry\n except ImportError as e:\n _raise_build_error(e)\n from .util.dtype import *\n\ndel warnings, functools, osp, imp, sys\n", "path": "skimage/__init__.py"}], "after_files": [{"content": "\"\"\"Image Processing SciKit (Toolbox for SciPy)\n\n``scikit-image`` (a.k.a. ``skimage``) is a collection of algorithms for image\nprocessing and computer vision.\n\nThe main package of ``skimage`` only provides a few utilities for converting\nbetween image data types; for most features, you need to import one of the\nfollowing subpackages:\n\nSubpackages\n-----------\ncolor\n Color space conversion.\ndata\n Test images and example data.\ndraw\n Drawing primitives (lines, text, etc.) 
that operate on NumPy arrays.\nexposure\n Image intensity adjustment, e.g., histogram equalization, etc.\nfeature\n Feature detection and extraction, e.g., texture analysis corners, etc.\nfilters\n Sharpening, edge finding, rank filters, thresholding, etc.\ngraph\n Graph-theoretic operations, e.g., shortest paths.\nio\n Reading, saving, and displaying images and video.\nmeasure\n Measurement of image properties, e.g., similarity and contours.\nmorphology\n Morphological operations, e.g., opening or skeletonization.\nnovice\n Simplified interface for teaching purposes.\nrestoration\n Restoration algorithms, e.g., deconvolution algorithms, denoising, etc.\nsegmentation\n Partitioning an image into multiple regions.\ntransform\n Geometric and other transforms, e.g., rotation or the Radon transform.\nutil\n Generic utilities.\nviewer\n A simple graphical user interface for visualizing results and exploring\n parameters.\n\nUtility Functions\n-----------------\nimg_as_float\n Convert an image to floating point format, with values in [0, 1].\nimg_as_uint\n Convert an image to unsigned integer format, with values in [0, 65535].\nimg_as_int\n Convert an image to signed integer format, with values in [-32768, 32767].\nimg_as_ubyte\n Convert an image to unsigned byte format, with values in [0, 255].\n\n\"\"\"\n\nimport os.path as osp\nimport imp\nimport functools\nimport warnings\nimport sys\n\npkg_dir = osp.abspath(osp.dirname(__file__))\ndata_dir = osp.join(pkg_dir, 'data')\n\n__version__ = '0.12dev'\n\ntry:\n imp.find_module('nose')\nexcept ImportError:\n def _test(doctest=False, verbose=False):\n \"\"\"This would run all unit tests, but nose couldn't be\n imported so the test suite can not run.\n \"\"\"\n raise ImportError(\"Could not load nose. Unit tests not available.\")\n\nelse:\n def _test(doctest=False, verbose=False):\n \"\"\"Run all unit tests.\"\"\"\n import nose\n args = ['', pkg_dir, '--exe', '--ignore-files=^_test']\n if verbose:\n args.extend(['-v', '-s'])\n if doctest:\n args.extend(['--with-doctest', '--ignore-files=^\\.',\n '--ignore-files=^setup\\.py$$', '--ignore-files=test'])\n # Make sure warnings do not break the doc tests\n with warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n success = nose.run('skimage', argv=args)\n else:\n success = nose.run('skimage', argv=args)\n # Return sys.exit code\n if success:\n return 0\n else:\n return 1\n\n\n# do not use `test` as function name as this leads to a recursion problem with\n# the nose test suite\ntest = _test\ntest_verbose = functools.partial(test, verbose=True)\ntest_verbose.__doc__ = test.__doc__\ndoctest = functools.partial(test, doctest=True)\ndoctest.__doc__ = doctest.__doc__\ndoctest_verbose = functools.partial(test, doctest=True, verbose=True)\ndoctest_verbose.__doc__ = doctest.__doc__\n\n\n# Logic for checking for improper install and importing while in the source\n# tree when package has not been installed inplace.\n# Code adapted from scikit-learn's __check_build module.\n_INPLACE_MSG = \"\"\"\nIt appears that you are importing a local scikit-image source tree. For\nthis, you need to have an inplace install. 
Maybe you are in the source\ndirectory and you need to try from another location.\"\"\"\n\n_STANDARD_MSG = \"\"\"\nYour install of scikit-image appears to be broken.\nTry re-installing the package following the instructions at:\nhttp://scikit-image.org/docs/stable/install.html \"\"\"\n\n\ndef _raise_build_error(e):\n # Raise a comprehensible error\n local_dir = osp.split(__file__)[0]\n msg = _STANDARD_MSG\n if local_dir == \"skimage\":\n # Picking up the local install: this will work only if the\n # install is an 'inplace build'\n msg = _INPLACE_MSG\n raise ImportError(\"\"\"%s\nIt seems that scikit-image has not been built correctly.\n%s\"\"\" % (e, msg))\n\ntry:\n # This variable is injected in the __builtins__ by the build\n # process. It used to enable importing subpackages of skimage when\n # the binaries are not built\n __SKIMAGE_SETUP__\nexcept NameError:\n __SKIMAGE_SETUP__ = False\n\nif __SKIMAGE_SETUP__:\n sys.stderr.write('Partial import of skimage during the build process.\\n')\n # We are not importing the rest of the scikit during the build\n # process, as it may not be compiled yet\nelse:\n try:\n from ._shared import geometry\n del geometry\n except ImportError as e:\n _raise_build_error(e)\n from .util.dtype import *\n\n\nif sys.version.startswith('2.6'):\n warnings.warn(\"Python 2.6 is deprecated and will not be supported in scikit-image 0.13+\")\n\n\ndel warnings, functools, osp, imp, sys\n", "path": "skimage/__init__.py"}]}
| 1,864 | 121 |
gh_patches_debug_14545 | rasdani/github-patches | git_diff | sql-machine-learning__elasticdl-332 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Failed generating cifar10 dataset when building dev image
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticdl/recordio_ds_gen/cifar10/show_data.py`
Content:
```
1 from recordio import File
2 from elasticdl.recordio_ds_gen.mnist import record
3 import sys
4 import argparse
5
6 # TODO: share code with MNIST dataset.
7 def main(argv):
8 print(argv)
9 parser = argparse.ArgumentParser(
10 description="Show same data from CIFAR10 recordio"
11 )
12 parser.add_argument("file", help="RecordIo file to read")
13 parser.add_argument(
14 "--start", default=0, type=int, help="Start record number"
15 )
16 parser.add_argument("--step", default=1, type=int, help="Step")
17 parser.add_argument(
18 "--n", default=20, type=int, help="How many record to show"
19 )
20 args = parser.parse_args(argv)
21
22 with File(args.file, "r") as f:
23 for i in range(
24 args.start, args.start + (args.n * args.step), args.step
25 ):
26 print("-" * 10)
27 print("record:", i)
28 record.show(*record.decode(f.get(i)))
29
30
31 if __name__ == "__main__":
32 main(sys.argv[1:])
33
```
Path: `elasticdl/recordio_ds_gen/cifar10/gen_data.py`
Content:
```
1 #!/usr/bin/env python
2
3 """
4 Download and transform CIFAR10 data to RecordIO format.
5 """
6
7 import itertools
8 import argparse
9 import os
10 import sys
11 from recordio import File
12 from tensorflow.python.keras import backend
13 from tensorflow.python.keras.datasets import cifar10
14 from elasticdl.recordio_ds_gen.mnist import record
15
16 # TODO: This function can be shared with MNIST dataset
17 def gen(file_dir, data, label, *, chunk_size, record_per_file):
18 assert len(data) == len(label) and len(data) > 0
19 os.makedirs(file_dir)
20 it = zip(data, label)
21 try:
22 for i in itertools.count():
23 file_name = file_dir + "/data-%04d" % i
24 print("writing:", file_name)
25 with File(file_name, "w", max_chunk_size=chunk_size) as f:
26 for _ in range(record_per_file):
27 row = next(it)
28 f.write(record.encode(row[0], row[1]))
29 except StopIteration:
30 pass
31
32
33 def main(argv):
34 parser = argparse.ArgumentParser(
35 description="Generate CIFAR10 datasets in RecordIO format."
36 )
37 parser.add_argument("dir", help="Output directory")
38 parser.add_argument(
39 "--num_record_per_chunk",
40 default=1024,
41 type=int,
42 help="Approximate number of records in a chunk.",
43 )
44 parser.add_argument(
45 "--num_chunk",
46 default=16,
47 type=int,
48 help="Number of chunks in a RecordIO file",
49 )
50 args = parser.parse_args(argv)
51 # one uncompressed record has size 3 * 32 * 32 + 1 bytes.
52 # Also add some slack for safety.
53 chunk_size = args.num_record_per_chunk * (3 * 32 * 32 + 1) + 100
54 record_per_file = args.num_record_per_chunk * args.num_chunk
55 backend.set_image_data_format("channels_first")
56
57 (x_train, y_train), (x_test, y_test) = cifar10.load_data()
58 gen(
59 args.dir + "/cifar10/train",
60 x_train,
61 y_train,
62 chunk_size=chunk_size,
63 record_per_file=record_per_file,
64 )
65
66 # Work around a bug in cifar10.load_data() where y_test is not converted
67 # to uint8
68 y_test = y_test.astype("uint8")
69 gen(
70 args.dir + "/cifar10/test",
71 x_test,
72 y_test,
73 chunk_size=chunk_size,
74 record_per_file=record_per_file,
75 )
76
77
78 if __name__ == "__main__":
79 main(sys.argv[1:])
80
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/elasticdl/recordio_ds_gen/cifar10/gen_data.py b/elasticdl/recordio_ds_gen/cifar10/gen_data.py
--- a/elasticdl/recordio_ds_gen/cifar10/gen_data.py
+++ b/elasticdl/recordio_ds_gen/cifar10/gen_data.py
@@ -11,7 +11,7 @@
from recordio import File
from tensorflow.python.keras import backend
from tensorflow.python.keras.datasets import cifar10
-from elasticdl.recordio_ds_gen.mnist import record
+from elasticdl.recordio_ds_gen.cifar10 import record
# TODO: This function can be shared with MNIST dataset
def gen(file_dir, data, label, *, chunk_size, record_per_file):
diff --git a/elasticdl/recordio_ds_gen/cifar10/show_data.py b/elasticdl/recordio_ds_gen/cifar10/show_data.py
--- a/elasticdl/recordio_ds_gen/cifar10/show_data.py
+++ b/elasticdl/recordio_ds_gen/cifar10/show_data.py
@@ -1,5 +1,5 @@
from recordio import File
-from elasticdl.recordio_ds_gen.mnist import record
+from elasticdl.recordio_ds_gen.cifar10 import record
import sys
import argparse
|
{"golden_diff": "diff --git a/elasticdl/recordio_ds_gen/cifar10/gen_data.py b/elasticdl/recordio_ds_gen/cifar10/gen_data.py\n--- a/elasticdl/recordio_ds_gen/cifar10/gen_data.py\n+++ b/elasticdl/recordio_ds_gen/cifar10/gen_data.py\n@@ -11,7 +11,7 @@\n from recordio import File\n from tensorflow.python.keras import backend\n from tensorflow.python.keras.datasets import cifar10\n-from elasticdl.recordio_ds_gen.mnist import record\n+from elasticdl.recordio_ds_gen.cifar10 import record\n \n # TODO: This function can be shared with MNIST dataset\n def gen(file_dir, data, label, *, chunk_size, record_per_file):\ndiff --git a/elasticdl/recordio_ds_gen/cifar10/show_data.py b/elasticdl/recordio_ds_gen/cifar10/show_data.py\n--- a/elasticdl/recordio_ds_gen/cifar10/show_data.py\n+++ b/elasticdl/recordio_ds_gen/cifar10/show_data.py\n@@ -1,5 +1,5 @@\n from recordio import File\n-from elasticdl.recordio_ds_gen.mnist import record\n+from elasticdl.recordio_ds_gen.cifar10 import record\n import sys\n import argparse\n", "issue": "Failed generating cifar10 dataset when building dev image\n\n", "before_files": [{"content": "from recordio import File\nfrom elasticdl.recordio_ds_gen.mnist import record\nimport sys\nimport argparse\n\n# TODO: share code with MNIST dataset.\ndef main(argv):\n print(argv)\n parser = argparse.ArgumentParser(\n description=\"Show same data from CIFAR10 recordio\"\n )\n parser.add_argument(\"file\", help=\"RecordIo file to read\")\n parser.add_argument(\n \"--start\", default=0, type=int, help=\"Start record number\"\n )\n parser.add_argument(\"--step\", default=1, type=int, help=\"Step\")\n parser.add_argument(\n \"--n\", default=20, type=int, help=\"How many record to show\"\n )\n args = parser.parse_args(argv)\n\n with File(args.file, \"r\") as f:\n for i in range(\n args.start, args.start + (args.n * args.step), args.step\n ):\n print(\"-\" * 10)\n print(\"record:\", i)\n record.show(*record.decode(f.get(i)))\n\n\nif __name__ == \"__main__\":\n main(sys.argv[1:])\n", "path": "elasticdl/recordio_ds_gen/cifar10/show_data.py"}, {"content": "#!/usr/bin/env python\n\n\"\"\"\nDownload and transform CIFAR10 data to RecordIO format.\n\"\"\"\n\nimport itertools\nimport argparse\nimport os\nimport sys\nfrom recordio import File\nfrom tensorflow.python.keras import backend\nfrom tensorflow.python.keras.datasets import cifar10\nfrom elasticdl.recordio_ds_gen.mnist import record\n\n# TODO: This function can be shared with MNIST dataset\ndef gen(file_dir, data, label, *, chunk_size, record_per_file):\n assert len(data) == len(label) and len(data) > 0\n os.makedirs(file_dir)\n it = zip(data, label)\n try:\n for i in itertools.count():\n file_name = file_dir + \"/data-%04d\" % i\n print(\"writing:\", file_name)\n with File(file_name, \"w\", max_chunk_size=chunk_size) as f:\n for _ in range(record_per_file):\n row = next(it)\n f.write(record.encode(row[0], row[1]))\n except StopIteration:\n pass\n\n\ndef main(argv):\n parser = argparse.ArgumentParser(\n description=\"Generate CIFAR10 datasets in RecordIO format.\"\n )\n parser.add_argument(\"dir\", help=\"Output directory\")\n parser.add_argument(\n \"--num_record_per_chunk\",\n default=1024,\n type=int,\n help=\"Approximate number of records in a chunk.\",\n )\n parser.add_argument(\n \"--num_chunk\",\n default=16,\n type=int,\n help=\"Number of chunks in a RecordIO file\",\n )\n args = parser.parse_args(argv)\n # one uncompressed record has size 3 * 32 * 32 + 1 bytes.\n # Also add some slack for safety.\n chunk_size = args.num_record_per_chunk 
* (3 * 32 * 32 + 1) + 100\n record_per_file = args.num_record_per_chunk * args.num_chunk\n backend.set_image_data_format(\"channels_first\")\n\n (x_train, y_train), (x_test, y_test) = cifar10.load_data()\n gen(\n args.dir + \"/cifar10/train\",\n x_train,\n y_train,\n chunk_size=chunk_size,\n record_per_file=record_per_file,\n )\n\n # Work around a bug in cifar10.load_data() where y_test is not converted\n # to uint8\n y_test = y_test.astype(\"uint8\")\n gen(\n args.dir + \"/cifar10/test\",\n x_test,\n y_test,\n chunk_size=chunk_size,\n record_per_file=record_per_file,\n )\n\n\nif __name__ == \"__main__\":\n main(sys.argv[1:])\n", "path": "elasticdl/recordio_ds_gen/cifar10/gen_data.py"}], "after_files": [{"content": "from recordio import File\nfrom elasticdl.recordio_ds_gen.cifar10 import record\nimport sys\nimport argparse\n\n# TODO: share code with MNIST dataset.\ndef main(argv):\n print(argv)\n parser = argparse.ArgumentParser(\n description=\"Show same data from CIFAR10 recordio\"\n )\n parser.add_argument(\"file\", help=\"RecordIo file to read\")\n parser.add_argument(\n \"--start\", default=0, type=int, help=\"Start record number\"\n )\n parser.add_argument(\"--step\", default=1, type=int, help=\"Step\")\n parser.add_argument(\n \"--n\", default=20, type=int, help=\"How many record to show\"\n )\n args = parser.parse_args(argv)\n\n with File(args.file, \"r\") as f:\n for i in range(\n args.start, args.start + (args.n * args.step), args.step\n ):\n print(\"-\" * 10)\n print(\"record:\", i)\n record.show(*record.decode(f.get(i)))\n\n\nif __name__ == \"__main__\":\n main(sys.argv[1:])\n", "path": "elasticdl/recordio_ds_gen/cifar10/show_data.py"}, {"content": "#!/usr/bin/env python\n\n\"\"\"\nDownload and transform CIFAR10 data to RecordIO format.\n\"\"\"\n\nimport itertools\nimport argparse\nimport os\nimport sys\nfrom recordio import File\nfrom tensorflow.python.keras import backend\nfrom tensorflow.python.keras.datasets import cifar10\nfrom elasticdl.recordio_ds_gen.cifar10 import record\n\n# TODO: This function can be shared with MNIST dataset\ndef gen(file_dir, data, label, *, chunk_size, record_per_file):\n assert len(data) == len(label) and len(data) > 0\n os.makedirs(file_dir)\n it = zip(data, label)\n try:\n for i in itertools.count():\n file_name = file_dir + \"/data-%04d\" % i\n print(\"writing:\", file_name)\n with File(file_name, \"w\", max_chunk_size=chunk_size) as f:\n for _ in range(record_per_file):\n row = next(it)\n f.write(record.encode(row[0], row[1]))\n except StopIteration:\n pass\n\n\ndef main(argv):\n parser = argparse.ArgumentParser(\n description=\"Generate CIFAR10 datasets in RecordIO format.\"\n )\n parser.add_argument(\"dir\", help=\"Output directory\")\n parser.add_argument(\n \"--num_record_per_chunk\",\n default=1024,\n type=int,\n help=\"Approximate number of records in a chunk.\",\n )\n parser.add_argument(\n \"--num_chunk\",\n default=16,\n type=int,\n help=\"Number of chunks in a RecordIO file\",\n )\n args = parser.parse_args(argv)\n # one uncompressed record has size 3 * 32 * 32 + 1 bytes.\n # Also add some slack for safety.\n chunk_size = args.num_record_per_chunk * (3 * 32 * 32 + 1) + 100\n record_per_file = args.num_record_per_chunk * args.num_chunk\n backend.set_image_data_format(\"channels_first\")\n\n (x_train, y_train), (x_test, y_test) = cifar10.load_data()\n gen(\n args.dir + \"/cifar10/train\",\n x_train,\n y_train,\n chunk_size=chunk_size,\n record_per_file=record_per_file,\n )\n\n # Work around a bug in cifar10.load_data() where y_test is 
not converted\n # to uint8\n y_test = y_test.astype(\"uint8\")\n gen(\n args.dir + \"/cifar10/test\",\n x_test,\n y_test,\n chunk_size=chunk_size,\n record_per_file=record_per_file,\n )\n\n\nif __name__ == \"__main__\":\n main(sys.argv[1:])\n", "path": "elasticdl/recordio_ds_gen/cifar10/gen_data.py"}]}
| 1,341 | 286 |
gh_patches_debug_29462
|
rasdani/github-patches
|
git_diff
|
akvo__akvo-rsr-2566
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Search in EUTF akvo site
Partner team had a training and workshop with EUTF last week and discovered that search terms in EUTF akvo site returned unrelated results.
Search for tombouctou shows up a project of SNV in EUTF akvo page, which is confusing for the partner as they expect to see their own projects only on their akvo site.
<img width="1070" alt="screen shot 2017-02-06 at 15 56 41" src="https://cloud.githubusercontent.com/assets/21127166/22652066/45bdf606-ec85-11e6-9c05-25d421b329c1.png">
What the partner expects is to see just projects where they are one of the participating partners.
If the search does not match any of their projects, it should then not return anything.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `akvo/rest/views/typeahead.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 """Akvo RSR is covered by the GNU Affero General Public License.
4 See more details in the license.txt file located at the root folder of the
5 Akvo RSR module. For additional details on the GNU license please
6 see < http://www.gnu.org/licenses/agpl.html >.
7 """
8
9 from akvo.rest.serializers import (TypeaheadCountrySerializer,
10 TypeaheadOrganisationSerializer,
11 TypeaheadProjectSerializer,
12 TypeaheadProjectUpdateSerializer)
13
14 from akvo.codelists.models import Country, Version
15 from akvo.rsr.models import Organisation, Project, ProjectUpdate
16 from akvo.rsr.views.project import _project_directory_coll
17
18 from django.conf import settings
19
20 from rest_framework.decorators import api_view
21 from rest_framework.response import Response
22
23
24 def rejig(queryset, serializer):
25 """Rearrange & add queryset count to the response data."""
26 return {
27 'count': queryset.count(),
28 'results': serializer.data
29 }
30
31
32 @api_view(['GET'])
33 def typeahead_country(request):
34 iati_version = Version.objects.get(code=settings.IATI_VERSION)
35 countries = Country.objects.filter(version=iati_version)
36 return Response(
37 rejig(countries, TypeaheadCountrySerializer(countries, many=True))
38 )
39
40
41 @api_view(['GET'])
42 def typeahead_organisation(request):
43 organisations = Organisation.objects.all()
44 return Response(
45 rejig(organisations, TypeaheadOrganisationSerializer(organisations,
46 many=True))
47 )
48
49
50 @api_view(['GET'])
51 def typeahead_user_organisations(request):
52 user = request.user
53 is_admin = user.is_active and (user.is_superuser or user.is_admin)
54 organisations = user.approved_organisations() if not is_admin else Organisation.objects.all()
55 return Response(
56 rejig(organisations, TypeaheadOrganisationSerializer(organisations,
57 many=True))
58 )
59
60
61 @api_view(['GET'])
62 def typeahead_project(request):
63 """Return the typeaheads for projects.
64
65 Without any query parameters, it returns the info for all the projects in
66 the current context -- changes depending on whether we are on a partner
67 site, or the RSR site.
68
69 If a project query parameter with a project id is passed, the info for all
70 projects associated with partners for the specified project is returned.
71
72 NOTE: The unauthenticated user gets information about all the projects when
73 using this API endpoint. More permission checking will need to be added,
74 if the amount of data being returned is changed.
75
76 """
77 project_id = request.GET.get('project', None)
78 if project_id is None:
79 project = None
80
81 else:
82 try:
83 project = Project.objects.get(id=project_id)
84 except Project.DoesNotExist:
85 project = None
86
87 if project is None:
88 # Search bar - organization projects, published
89 projects = _project_directory_coll(request)
90
91 else:
92 # Project editor - all projects of partners for this project
93 projects = Project.objects.of_partners(project.partners.distinct()).distinct()
94
95 projects = projects.exclude(title='')
96 return Response(
97 rejig(projects, TypeaheadProjectSerializer(projects, many=True))
98 )
99
100
101 @api_view(['GET'])
102 def typeahead_user_projects(request):
103 user = request.user
104 is_admin = user.is_active and (user.is_superuser or user.is_admin)
105 if is_admin:
106 projects = Project.objects.all()
107 else:
108 projects = user.approved_organisations().all_projects()
109 projects = projects.exclude(title='')
110 return Response(
111 rejig(projects, TypeaheadProjectSerializer(projects, many=True))
112 )
113
114
115 @api_view(['GET'])
116 def typeahead_impact_projects(request):
117 user = request.user
118 projects = Project.objects.all() if user.is_admin or user.is_superuser else user.my_projects()
119 projects = projects.published().filter(is_impact_project=True).order_by('title')
120
121 return Response(
122 rejig(projects, TypeaheadProjectSerializer(projects, many=True))
123 )
124
125
126 @api_view(['GET'])
127 def typeahead_projectupdate(request):
128 updates = ProjectUpdate.objects.all()
129 return Response(
130 rejig(updates, TypeaheadProjectUpdateSerializer(updates, many=True))
131 )
132
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/akvo/rest/views/typeahead.py b/akvo/rest/views/typeahead.py
--- a/akvo/rest/views/typeahead.py
+++ b/akvo/rest/views/typeahead.py
@@ -66,32 +66,22 @@
the current context -- changes depending on whether we are on a partner
site, or the RSR site.
- If a project query parameter with a project id is passed, the info for all
- projects associated with partners for the specified project is returned.
+ If a published query parameter is passed, only projects that have been
+ published are returned.
NOTE: The unauthenticated user gets information about all the projects when
using this API endpoint. More permission checking will need to be added,
if the amount of data being returned is changed.
"""
- project_id = request.GET.get('project', None)
- if project_id is None:
- project = None
-
+ if request.GET.get('published', '0') == '0':
+ # Project editor - organization projects, all
+ page = request.rsr_page
+ projects = page.organisation.all_projects() if page else Project.objects.all()
else:
- try:
- project = Project.objects.get(id=project_id)
- except Project.DoesNotExist:
- project = None
-
- if project is None:
# Search bar - organization projects, published
projects = _project_directory_coll(request)
- else:
- # Project editor - all projects of partners for this project
- projects = Project.objects.of_partners(project.partners.distinct()).distinct()
-
projects = projects.exclude(title='')
return Response(
rejig(projects, TypeaheadProjectSerializer(projects, many=True))
|
{"golden_diff": "diff --git a/akvo/rest/views/typeahead.py b/akvo/rest/views/typeahead.py\n--- a/akvo/rest/views/typeahead.py\n+++ b/akvo/rest/views/typeahead.py\n@@ -66,32 +66,22 @@\n the current context -- changes depending on whether we are on a partner\n site, or the RSR site.\n \n- If a project query parameter with a project id is passed, the info for all\n- projects associated with partners for the specified project is returned.\n+ If a published query parameter is passed, only projects that have been\n+ published are returned.\n \n NOTE: The unauthenticated user gets information about all the projects when\n using this API endpoint. More permission checking will need to be added,\n if the amount of data being returned is changed.\n \n \"\"\"\n- project_id = request.GET.get('project', None)\n- if project_id is None:\n- project = None\n-\n+ if request.GET.get('published', '0') == '0':\n+ # Project editor - organization projects, all\n+ page = request.rsr_page\n+ projects = page.organisation.all_projects() if page else Project.objects.all()\n else:\n- try:\n- project = Project.objects.get(id=project_id)\n- except Project.DoesNotExist:\n- project = None\n-\n- if project is None:\n # Search bar - organization projects, published\n projects = _project_directory_coll(request)\n \n- else:\n- # Project editor - all projects of partners for this project\n- projects = Project.objects.of_partners(project.partners.distinct()).distinct()\n-\n projects = projects.exclude(title='')\n return Response(\n rejig(projects, TypeaheadProjectSerializer(projects, many=True))\n", "issue": "Search in EUTF akvo site\nPartner team had a training and workshop with EUTF last week and discovered that search terms in EUTF akvo site returned unrelated results.\r\n\r\nSearch for tombouctou shows up a project of SNV in EUTF akvo page, which is confusing for the partner as they expect to see their own projects only on their akvo site. \r\n\r\n<img width=\"1070\" alt=\"screen shot 2017-02-06 at 15 56 41\" src=\"https://cloud.githubusercontent.com/assets/21127166/22652066/45bdf606-ec85-11e6-9c05-25d421b329c1.png\">\r\n\r\nWhat the partner expects is to see just projects where they are one of the participating partners. \r\nIf the search does not match any of their projects, it should then not return anything. \n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\nSee more details in the license.txt file located at the root folder of the\nAkvo RSR module. 
For additional details on the GNU license please\nsee < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom akvo.rest.serializers import (TypeaheadCountrySerializer,\n TypeaheadOrganisationSerializer,\n TypeaheadProjectSerializer,\n TypeaheadProjectUpdateSerializer)\n\nfrom akvo.codelists.models import Country, Version\nfrom akvo.rsr.models import Organisation, Project, ProjectUpdate\nfrom akvo.rsr.views.project import _project_directory_coll\n\nfrom django.conf import settings\n\nfrom rest_framework.decorators import api_view\nfrom rest_framework.response import Response\n\n\ndef rejig(queryset, serializer):\n \"\"\"Rearrange & add queryset count to the response data.\"\"\"\n return {\n 'count': queryset.count(),\n 'results': serializer.data\n }\n\n\n@api_view(['GET'])\ndef typeahead_country(request):\n iati_version = Version.objects.get(code=settings.IATI_VERSION)\n countries = Country.objects.filter(version=iati_version)\n return Response(\n rejig(countries, TypeaheadCountrySerializer(countries, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_organisation(request):\n organisations = Organisation.objects.all()\n return Response(\n rejig(organisations, TypeaheadOrganisationSerializer(organisations,\n many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_user_organisations(request):\n user = request.user\n is_admin = user.is_active and (user.is_superuser or user.is_admin)\n organisations = user.approved_organisations() if not is_admin else Organisation.objects.all()\n return Response(\n rejig(organisations, TypeaheadOrganisationSerializer(organisations,\n many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_project(request):\n \"\"\"Return the typeaheads for projects.\n\n Without any query parameters, it returns the info for all the projects in\n the current context -- changes depending on whether we are on a partner\n site, or the RSR site.\n\n If a project query parameter with a project id is passed, the info for all\n projects associated with partners for the specified project is returned.\n\n NOTE: The unauthenticated user gets information about all the projects when\n using this API endpoint. 
More permission checking will need to be added,\n if the amount of data being returned is changed.\n\n \"\"\"\n project_id = request.GET.get('project', None)\n if project_id is None:\n project = None\n\n else:\n try:\n project = Project.objects.get(id=project_id)\n except Project.DoesNotExist:\n project = None\n\n if project is None:\n # Search bar - organization projects, published\n projects = _project_directory_coll(request)\n\n else:\n # Project editor - all projects of partners for this project\n projects = Project.objects.of_partners(project.partners.distinct()).distinct()\n\n projects = projects.exclude(title='')\n return Response(\n rejig(projects, TypeaheadProjectSerializer(projects, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_user_projects(request):\n user = request.user\n is_admin = user.is_active and (user.is_superuser or user.is_admin)\n if is_admin:\n projects = Project.objects.all()\n else:\n projects = user.approved_organisations().all_projects()\n projects = projects.exclude(title='')\n return Response(\n rejig(projects, TypeaheadProjectSerializer(projects, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_impact_projects(request):\n user = request.user\n projects = Project.objects.all() if user.is_admin or user.is_superuser else user.my_projects()\n projects = projects.published().filter(is_impact_project=True).order_by('title')\n\n return Response(\n rejig(projects, TypeaheadProjectSerializer(projects, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_projectupdate(request):\n updates = ProjectUpdate.objects.all()\n return Response(\n rejig(updates, TypeaheadProjectUpdateSerializer(updates, many=True))\n )\n", "path": "akvo/rest/views/typeahead.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\nSee more details in the license.txt file located at the root folder of the\nAkvo RSR module. 
For additional details on the GNU license please\nsee < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom akvo.rest.serializers import (TypeaheadCountrySerializer,\n TypeaheadOrganisationSerializer,\n TypeaheadProjectSerializer,\n TypeaheadProjectUpdateSerializer)\n\nfrom akvo.codelists.models import Country, Version\nfrom akvo.rsr.models import Organisation, Project, ProjectUpdate\nfrom akvo.rsr.views.project import _project_directory_coll\n\nfrom django.conf import settings\n\nfrom rest_framework.decorators import api_view\nfrom rest_framework.response import Response\n\n\ndef rejig(queryset, serializer):\n \"\"\"Rearrange & add queryset count to the response data.\"\"\"\n return {\n 'count': queryset.count(),\n 'results': serializer.data\n }\n\n\n@api_view(['GET'])\ndef typeahead_country(request):\n iati_version = Version.objects.get(code=settings.IATI_VERSION)\n countries = Country.objects.filter(version=iati_version)\n return Response(\n rejig(countries, TypeaheadCountrySerializer(countries, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_organisation(request):\n organisations = Organisation.objects.all()\n return Response(\n rejig(organisations, TypeaheadOrganisationSerializer(organisations,\n many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_user_organisations(request):\n user = request.user\n is_admin = user.is_active and (user.is_superuser or user.is_admin)\n organisations = user.approved_organisations() if not is_admin else Organisation.objects.all()\n return Response(\n rejig(organisations, TypeaheadOrganisationSerializer(organisations,\n many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_project(request):\n \"\"\"Return the typeaheads for projects.\n\n Without any query parameters, it returns the info for all the projects in\n the current context -- changes depending on whether we are on a partner\n site, or the RSR site.\n\n If a published query parameter is passed, only projects that have been\n published are returned.\n\n NOTE: The unauthenticated user gets information about all the projects when\n using this API endpoint. 
More permission checking will need to be added,\n if the amount of data being returned is changed.\n\n \"\"\"\n if request.GET.get('published', '0') == '0':\n # Project editor - organization projects, all\n page = request.rsr_page\n projects = page.organisation.all_projects() if page else Project.objects.all()\n else:\n # Search bar - organization projects, published\n projects = _project_directory_coll(request)\n\n projects = projects.exclude(title='')\n return Response(\n rejig(projects, TypeaheadProjectSerializer(projects, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_user_projects(request):\n user = request.user\n is_admin = user.is_active and (user.is_superuser or user.is_admin)\n if is_admin:\n projects = Project.objects.all()\n else:\n projects = user.approved_organisations().all_projects()\n projects = projects.exclude(title='')\n return Response(\n rejig(projects, TypeaheadProjectSerializer(projects, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_impact_projects(request):\n user = request.user\n projects = Project.objects.all() if user.is_admin or user.is_superuser else user.my_projects()\n projects = projects.published().filter(is_impact_project=True).order_by('title')\n\n return Response(\n rejig(projects, TypeaheadProjectSerializer(projects, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_projectupdate(request):\n updates = ProjectUpdate.objects.all()\n return Response(\n rejig(updates, TypeaheadProjectUpdateSerializer(updates, many=True))\n )\n", "path": "akvo/rest/views/typeahead.py"}]}
| 1,674 | 386 |
gh_patches_debug_24894
|
rasdani/github-patches
|
git_diff
|
privacyidea__privacyidea-2130
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Better separation of audit log from privacyidea*
Hi,
python logging may be easily separated using the qualname. However, privacyidea uses the module/class names. Since they all start with "privacyidea.", it is not possible to log the audit to one place and all the rest to a different place (python logging cannot *exclude* qualnames).
To solve this, one could use a custom qualname for the privacyidea audit. I think here:
https://github.com/privacyidea/privacyidea/blob/ea7d9e53d42504288ba3909f7057924fe8d250b0/privacyidea/lib/auditmodules/loggeraudit.py#L62
Best regards,
Henning
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `privacyidea/lib/auditmodules/loggeraudit.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # 2019-11-06 Cornelius Kölbel <[email protected]>
4 # initial code for writing audit information to a file
5 #
6 # This code is free software; you can redistribute it and/or
7 # modify it under the terms of the GNU AFFERO GENERAL PUBLIC LICENSE
8 # License as published by the Free Software Foundation; either
9 # version 3 of the License, or any later version.
10 #
11 # This code is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU AFFERO GENERAL PUBLIC LICENSE for more details.
15 #
16 # You should have received a copy of the GNU Affero General Public
17 # License along with this program. If not, see <http://www.gnu.org/licenses/>.
18 #
19 #
20 __doc__ = """The Logger Audit Module is used to write audit entries to the Python logging module.
21
22 The Logger Audit Module is configured like this:
23
24 PI_AUDIT_MODULE = "privacyidea.lib.auditmodules.loggeraudit"
25 PI_AUDIT_SERVERNAME = "your choice"
26
27 PI_LOGCONFIG = "/etc/privacyidea/logging.cfg"
28
29 The LoggerAudit Class uses the same PI logging config as you could use anyways.
30 To explicitly write audit logs, you need to add something like the following to
31 the logging.cfg
32
33 Example:
34
35 [handlers]
36 keys=file,audit
37
38 [loggers]
39 keys=root,privacyidea,audit
40
41 ...
42
43 [logger_audit]
44 handlers=audit
45 qualname=privacyidea.lib.auditmodules.loggeraudit
46 level=INFO
47
48 [handler_audit]
49 class=logging.handlers.RotatingFileHandler
50 backupCount=14
51 maxBytes=10000000
52 formatter=detail
53 level=INFO
54 args=('/var/log/privacyidea/audit.log',)
55
56 """
57
58 import logging
59 from privacyidea.lib.auditmodules.base import (Audit as AuditBase)
60 import datetime
61
62 log = logging.getLogger(__name__)
63
64
65 class Audit(AuditBase):
66 """
67 This is the LoggerAudit module, which writes the audit entries
68 to the Python logging
69
70 .. note:: This audit module does not provide a *Read* capability.
71 """
72
73 def __init__(self, config=None):
74 super(Audit, self).__init__(config)
75 self.name = "loggeraudit"
76
77 def finalize_log(self):
78 """
79 This method is used to log the data
80 e.g. write the data to a file.
81 """
82 self.audit_data["policies"] = ",".join(self.audit_data.get("policies", []))
83 self.audit_data["timestamp"] = datetime.datetime.utcnow()
84 log.info(u"{0!s}".format(self.audit_data))
85 self.audit_data = {}
86
87
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/privacyidea/lib/auditmodules/loggeraudit.py b/privacyidea/lib/auditmodules/loggeraudit.py
--- a/privacyidea/lib/auditmodules/loggeraudit.py
+++ b/privacyidea/lib/auditmodules/loggeraudit.py
@@ -56,10 +56,9 @@
"""
import logging
+import json
from privacyidea.lib.auditmodules.base import (Audit as AuditBase)
-import datetime
-
-log = logging.getLogger(__name__)
+from datetime import datetime
class Audit(AuditBase):
@@ -73,6 +72,8 @@
def __init__(self, config=None):
super(Audit, self).__init__(config)
self.name = "loggeraudit"
+ self.qualname = self.config.get('PI_AUDIT_LOGGER_QUALNAME', __name__)
+ self.logger = logging.getLogger(self.qualname)
def finalize_log(self):
"""
@@ -80,8 +81,6 @@
e.g. write the data to a file.
"""
self.audit_data["policies"] = ",".join(self.audit_data.get("policies", []))
- self.audit_data["timestamp"] = datetime.datetime.utcnow()
- log.info(u"{0!s}".format(self.audit_data))
+ self.audit_data["timestamp"] = datetime.utcnow().isoformat()
+ self.logger.info("{0!s}".format(json.dumps(self.audit_data, sort_keys=True)))
self.audit_data = {}
-
-
|
{"golden_diff": "diff --git a/privacyidea/lib/auditmodules/loggeraudit.py b/privacyidea/lib/auditmodules/loggeraudit.py\n--- a/privacyidea/lib/auditmodules/loggeraudit.py\n+++ b/privacyidea/lib/auditmodules/loggeraudit.py\n@@ -56,10 +56,9 @@\n \"\"\"\n \n import logging\n+import json\n from privacyidea.lib.auditmodules.base import (Audit as AuditBase)\n-import datetime\n-\n-log = logging.getLogger(__name__)\n+from datetime import datetime\n \n \n class Audit(AuditBase):\n@@ -73,6 +72,8 @@\n def __init__(self, config=None):\n super(Audit, self).__init__(config)\n self.name = \"loggeraudit\"\n+ self.qualname = self.config.get('PI_AUDIT_LOGGER_QUALNAME', __name__)\n+ self.logger = logging.getLogger(self.qualname)\n \n def finalize_log(self):\n \"\"\"\n@@ -80,8 +81,6 @@\n e.g. write the data to a file.\n \"\"\"\n self.audit_data[\"policies\"] = \",\".join(self.audit_data.get(\"policies\", []))\n- self.audit_data[\"timestamp\"] = datetime.datetime.utcnow()\n- log.info(u\"{0!s}\".format(self.audit_data))\n+ self.audit_data[\"timestamp\"] = datetime.utcnow().isoformat()\n+ self.logger.info(\"{0!s}\".format(json.dumps(self.audit_data, sort_keys=True)))\n self.audit_data = {}\n-\n-\n", "issue": "Better separation of audit log from privacyidea*\nHi,\r\n\r\npython logging may be easily separated using the qualname. However, privacyidea uses the module/class names. Since they all start with \"privacyidea.\", it is not possible to log the audit to one place and all the rest to a different place (python logging cannot *exclude* qualnames).\r\n\r\nTo solve this, one could use a custom qualname for the privacyidea audit. I think here:\r\nhttps://github.com/privacyidea/privacyidea/blob/ea7d9e53d42504288ba3909f7057924fe8d250b0/privacyidea/lib/auditmodules/loggeraudit.py#L62\r\n\r\nBest regards,\r\n\r\nHenning\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# 2019-11-06 Cornelius K\u00f6lbel <[email protected]>\n# initial code for writing audit information to a file\n#\n# This code is free software; you can redistribute it and/or\n# modify it under the terms of the GNU AFFERO GENERAL PUBLIC LICENSE\n# License as published by the Free Software Foundation; either\n# version 3 of the License, or any later version.\n#\n# This code is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU AFFERO GENERAL PUBLIC LICENSE for more details.\n#\n# You should have received a copy of the GNU Affero General Public\n# License along with this program. 
If not, see <http://www.gnu.org/licenses/>.\n#\n#\n__doc__ = \"\"\"The Logger Audit Module is used to write audit entries to the Python logging module.\n\nThe Logger Audit Module is configured like this:\n\n PI_AUDIT_MODULE = \"privacyidea.lib.auditmodules.loggeraudit\"\n PI_AUDIT_SERVERNAME = \"your choice\"\n\n PI_LOGCONFIG = \"/etc/privacyidea/logging.cfg\"\n\nThe LoggerAudit Class uses the same PI logging config as you could use anyways.\nTo explicitly write audit logs, you need to add something like the following to\nthe logging.cfg\n\nExample:\n\n[handlers]\nkeys=file,audit\n\n[loggers]\nkeys=root,privacyidea,audit\n\n...\n\n[logger_audit]\nhandlers=audit\nqualname=privacyidea.lib.auditmodules.loggeraudit\nlevel=INFO\n\n[handler_audit]\nclass=logging.handlers.RotatingFileHandler\nbackupCount=14\nmaxBytes=10000000\nformatter=detail\nlevel=INFO\nargs=('/var/log/privacyidea/audit.log',)\n\n\"\"\"\n\nimport logging\nfrom privacyidea.lib.auditmodules.base import (Audit as AuditBase)\nimport datetime\n\nlog = logging.getLogger(__name__)\n\n\nclass Audit(AuditBase):\n \"\"\"\n This is the LoggerAudit module, which writes the audit entries\n to the Python logging\n\n .. note:: This audit module does not provide a *Read* capability.\n \"\"\"\n\n def __init__(self, config=None):\n super(Audit, self).__init__(config)\n self.name = \"loggeraudit\"\n\n def finalize_log(self):\n \"\"\"\n This method is used to log the data\n e.g. write the data to a file.\n \"\"\"\n self.audit_data[\"policies\"] = \",\".join(self.audit_data.get(\"policies\", []))\n self.audit_data[\"timestamp\"] = datetime.datetime.utcnow()\n log.info(u\"{0!s}\".format(self.audit_data))\n self.audit_data = {}\n\n\n", "path": "privacyidea/lib/auditmodules/loggeraudit.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# 2019-11-06 Cornelius K\u00f6lbel <[email protected]>\n# initial code for writing audit information to a file\n#\n# This code is free software; you can redistribute it and/or\n# modify it under the terms of the GNU AFFERO GENERAL PUBLIC LICENSE\n# License as published by the Free Software Foundation; either\n# version 3 of the License, or any later version.\n#\n# This code is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU AFFERO GENERAL PUBLIC LICENSE for more details.\n#\n# You should have received a copy of the GNU Affero General Public\n# License along with this program. 
If not, see <http://www.gnu.org/licenses/>.\n#\n#\n__doc__ = \"\"\"The Logger Audit Module is used to write audit entries to the Python logging module.\n\nThe Logger Audit Module is configured like this:\n\n PI_AUDIT_MODULE = \"privacyidea.lib.auditmodules.loggeraudit\"\n PI_AUDIT_SERVERNAME = \"your choice\"\n\n PI_LOGCONFIG = \"/etc/privacyidea/logging.cfg\"\n\nThe LoggerAudit Class uses the same PI logging config as you could use anyways.\nTo explicitly write audit logs, you need to add something like the following to\nthe logging.cfg\n\nExample:\n\n[handlers]\nkeys=file,audit\n\n[loggers]\nkeys=root,privacyidea,audit\n\n...\n\n[logger_audit]\nhandlers=audit\nqualname=privacyidea.lib.auditmodules.loggeraudit\nlevel=INFO\n\n[handler_audit]\nclass=logging.handlers.RotatingFileHandler\nbackupCount=14\nmaxBytes=10000000\nformatter=detail\nlevel=INFO\nargs=('/var/log/privacyidea/audit.log',)\n\n\"\"\"\n\nimport logging\nimport json\nfrom privacyidea.lib.auditmodules.base import (Audit as AuditBase)\nfrom datetime import datetime\n\n\nclass Audit(AuditBase):\n \"\"\"\n This is the LoggerAudit module, which writes the audit entries\n to the Python logging\n\n .. note:: This audit module does not provide a *Read* capability.\n \"\"\"\n\n def __init__(self, config=None):\n super(Audit, self).__init__(config)\n self.name = \"loggeraudit\"\n self.qualname = self.config.get('PI_AUDIT_LOGGER_QUALNAME', __name__)\n self.logger = logging.getLogger(self.qualname)\n\n def finalize_log(self):\n \"\"\"\n This method is used to log the data\n e.g. write the data to a file.\n \"\"\"\n self.audit_data[\"policies\"] = \",\".join(self.audit_data.get(\"policies\", []))\n self.audit_data[\"timestamp\"] = datetime.utcnow().isoformat()\n self.logger.info(\"{0!s}\".format(json.dumps(self.audit_data, sort_keys=True)))\n self.audit_data = {}\n", "path": "privacyidea/lib/auditmodules/loggeraudit.py"}]}
| 1,209 | 320 |
gh_patches_debug_1290
|
rasdani/github-patches
|
git_diff
|
weecology__retriever-950
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Check MySQL and Postgres credential files
In addition to allowing users to directly provide their MySQL and PostgreSQL credentials, it should also be possible for them to store these credentials in the usual places.
We should check information given by the user to the retriever first, and then fall back on the configuration files for usernames and passwords if they are not provided.
For PostgreSQL this is `~/.pgpass` with the format:
```
hostname:port:database:username:password
```
See: https://wiki.postgresql.org/wiki/Pgpass. `*`s can be used in place of any of the `:` separated values.
For MySQL this is `~/.my.cnf` with the format:
```
[client]
user = root
password = yourpassword
```
See: https://dev.mysql.com/doc/refman/5.5/en/option-files.html. `.my.cnf` can contain a lot of additional configuration information so we'll need to look explicitly for `user =` and `password =`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `retriever/engines/mysql.py`
Content:
```
1 from __future__ import print_function
2 from builtins import str
3 import os
4 from retriever.lib.models import Engine, no_cleanup
5 from retriever import ENCODING
6
7
8 class engine(Engine):
9 """Engine instance for MySQL."""
10 name = "MySQL"
11 abbreviation = "mysql"
12 datatypes = {
13 "auto": "INT(5) NOT NULL AUTO_INCREMENT",
14 "int": "INT",
15 "bigint": "BIGINT",
16 "double": "DOUBLE",
17 "decimal": "DECIMAL",
18 "char": ("TEXT", "VARCHAR"),
19 "bool": "BOOL",
20 }
21 max_int = 4294967295
22 placeholder = "%s"
23 required_opts = [("user",
24 "Enter your MySQL username",
25 "root"),
26 ("password",
27 "Enter your password",
28 ""),
29 ("host",
30 "Enter your MySQL host",
31 "localhost"),
32 ("port",
33 "Enter your MySQL port",
34 3306),
35 ("database_name",
36 "Format of database name",
37 "{db}"),
38 ("table_name",
39 "Format of table name",
40 "{db}.{table}"),
41 ]
42
43 def create_db_statement(self):
44 """Returns a SQL statement to create a database."""
45 createstatement = "CREATE DATABASE IF NOT EXISTS " + self.database_name()
46 return createstatement
47
48 def insert_data_from_file(self, filename):
49 """Calls MySQL "LOAD DATA LOCAL INFILE" statement to perform a bulk
50 insert."""
51
52 mysql_set_autocommit_off = """SET autocommit=0; SET UNIQUE_CHECKS=0; SET FOREIGN_KEY_CHECKS=0; SET sql_log_bin=0;"""
53 mysql_set_autocommit_on = """SET GLOBAL innodb_flush_log_at_trx_commit=1; COMMIT; SET autocommit=1; SET unique_checks=1; SET foreign_key_checks=1;"""
54
55 self.get_cursor()
56 ct = len([True for c in self.table.columns if c[1][0][:3] == "ct-"]) != 0
57 if (self.table.cleanup.function == no_cleanup and
58 not self.table.fixed_width and
59 not ct and
60 (not hasattr(self.table, "do_not_bulk_insert") or not self.table.do_not_bulk_insert)):
61
62 print ("Inserting data from " + os.path.basename(filename) + "...")
63
64 columns = self.table.get_insert_columns()
65 statement = """
66 LOAD DATA LOCAL INFILE '""" + filename.replace("\\", "\\\\") + """'
67 INTO TABLE """ + self.table_name() + """
68 FIELDS TERMINATED BY '""" + self.table.delimiter + """'
69 OPTIONALLY ENCLOSED BY '"'
70 LINES TERMINATED BY '\\n'
71 IGNORE """ + str(self.table.header_rows) + """ LINES
72 (""" + columns + ")"
73 try:
74 self.cursor.execute(mysql_set_autocommit_off)
75 self.cursor.execute(statement)
76
77 self.cursor.execute(mysql_set_autocommit_on)
78 except Exception as e:
79 self.disconnect() # If the execute fails the database connection can get hung up
80 self.cursor.execute(mysql_set_autocommit_on)
81 return Engine.insert_data_from_file(self, filename)
82 else:
83 return Engine.insert_data_from_file(self, filename)
84
85 def table_exists(self, dbname, tablename):
86 """Checks to see if the given table exists"""
87 if not hasattr(self, 'existing_table_names'):
88 self.cursor.execute(
89 "SELECT table_schema, table_name "
90 "FROM information_schema.tables WHERE table_schema NOT IN "
91 "('mysql', 'information_schema', 'performance_schema');")
92 self.existing_table_names = set()
93 for schema, table in self.cursor:
94 self.existing_table_names.add((schema.lower(), table.lower()))
95 return (dbname.lower(), tablename.lower()) in self.existing_table_names
96
97 def set_engine_encoding(self):
98 """Set MySQL database encoding to match data encoding
99
100 Please update the encoding lookup table if the required encoding is not present.
101 """
102 encoding = ENCODING.lower()
103 if self.script.encoding:
104 encoding = self.script.encoding.lower()
105 encoding_lookup = {'iso-8859-1': 'latin1', 'latin-1': 'latin1', 'utf-8': 'utf8'}
106 db_encoding = encoding_lookup.get(encoding)
107 self.execute("SET NAMES '{0}';".format(db_encoding))
108
109 def get_connection(self):
110 """Gets the db connection."""
111 args = {'host': self.opts['host'],
112 'port': int(self.opts['port']),
113 'user': self.opts['user'],
114 'passwd': self.opts['password']}
115 import pymysql as dbapi
116 import pymysql.constants.CLIENT as client
117 args['client_flag'] = client.LOCAL_FILES
118 self.get_input()
119 return dbapi.connect(**args)
120
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/retriever/engines/mysql.py b/retriever/engines/mysql.py
--- a/retriever/engines/mysql.py
+++ b/retriever/engines/mysql.py
@@ -116,4 +116,4 @@
import pymysql.constants.CLIENT as client
args['client_flag'] = client.LOCAL_FILES
self.get_input()
- return dbapi.connect(**args)
+ return dbapi.connect(read_default_file='~/.my.cnf', **args)
|
{"golden_diff": "diff --git a/retriever/engines/mysql.py b/retriever/engines/mysql.py\n--- a/retriever/engines/mysql.py\n+++ b/retriever/engines/mysql.py\n@@ -116,4 +116,4 @@\n import pymysql.constants.CLIENT as client\n args['client_flag'] = client.LOCAL_FILES\n self.get_input()\n- return dbapi.connect(**args)\n+ return dbapi.connect(read_default_file='~/.my.cnf', **args)\n", "issue": "Check MySQL and Postgres credential files\nIn addition to allowing users to directly provide their MySQL and PostgreSQL credentials, it should also be possible for them to store these credentials in the usual places.\n\nWe should check information given by the user to the retriever first, and then fall back on the configuration files for usernames and passwords if they are not provided.\n\nFor PostgreSQL this is `~/.pgpass` with the format:\n\n```\nhostname:port:database:username:password \n```\n\nSee: https://wiki.postgresql.org/wiki/Pgpass. `*`s can be used in place of any of the `:` separated values.\n\nFor MySQL this is `~/.my.cnf` with the format:\n\n```\n[client]\nuser = root\npassword = yourpassword\n```\n\nSee: https://dev.mysql.com/doc/refman/5.5/en/option-files.html. `.my.cnf` can contain a lot of additional configuration information so we'll need to look explicitly for `user =` and `password =`.\n\n", "before_files": [{"content": "from __future__ import print_function\nfrom builtins import str\nimport os\nfrom retriever.lib.models import Engine, no_cleanup\nfrom retriever import ENCODING\n\n\nclass engine(Engine):\n \"\"\"Engine instance for MySQL.\"\"\"\n name = \"MySQL\"\n abbreviation = \"mysql\"\n datatypes = {\n \"auto\": \"INT(5) NOT NULL AUTO_INCREMENT\",\n \"int\": \"INT\",\n \"bigint\": \"BIGINT\",\n \"double\": \"DOUBLE\",\n \"decimal\": \"DECIMAL\",\n \"char\": (\"TEXT\", \"VARCHAR\"),\n \"bool\": \"BOOL\",\n }\n max_int = 4294967295\n placeholder = \"%s\"\n required_opts = [(\"user\",\n \"Enter your MySQL username\",\n \"root\"),\n (\"password\",\n \"Enter your password\",\n \"\"),\n (\"host\",\n \"Enter your MySQL host\",\n \"localhost\"),\n (\"port\",\n \"Enter your MySQL port\",\n 3306),\n (\"database_name\",\n \"Format of database name\",\n \"{db}\"),\n (\"table_name\",\n \"Format of table name\",\n \"{db}.{table}\"),\n ]\n\n def create_db_statement(self):\n \"\"\"Returns a SQL statement to create a database.\"\"\"\n createstatement = \"CREATE DATABASE IF NOT EXISTS \" + self.database_name()\n return createstatement\n\n def insert_data_from_file(self, filename):\n \"\"\"Calls MySQL \"LOAD DATA LOCAL INFILE\" statement to perform a bulk\n insert.\"\"\"\n\n mysql_set_autocommit_off = \"\"\"SET autocommit=0; SET UNIQUE_CHECKS=0; SET FOREIGN_KEY_CHECKS=0; SET sql_log_bin=0;\"\"\"\n mysql_set_autocommit_on = \"\"\"SET GLOBAL innodb_flush_log_at_trx_commit=1; COMMIT; SET autocommit=1; SET unique_checks=1; SET foreign_key_checks=1;\"\"\"\n \n self.get_cursor()\n ct = len([True for c in self.table.columns if c[1][0][:3] == \"ct-\"]) != 0\n if (self.table.cleanup.function == no_cleanup and\n not self.table.fixed_width and\n not ct and\n (not hasattr(self.table, \"do_not_bulk_insert\") or not self.table.do_not_bulk_insert)):\n\n print (\"Inserting data from \" + os.path.basename(filename) + \"...\")\n\n columns = self.table.get_insert_columns()\n statement = \"\"\"\nLOAD DATA LOCAL INFILE '\"\"\" + filename.replace(\"\\\\\", \"\\\\\\\\\") + \"\"\"'\nINTO TABLE \"\"\" + self.table_name() + \"\"\"\nFIELDS TERMINATED BY '\"\"\" + self.table.delimiter + \"\"\"'\nOPTIONALLY ENCLOSED BY 
'\"'\nLINES TERMINATED BY '\\\\n'\nIGNORE \"\"\" + str(self.table.header_rows) + \"\"\" LINES\n(\"\"\" + columns + \")\"\n try:\n self.cursor.execute(mysql_set_autocommit_off)\n self.cursor.execute(statement)\n\n self.cursor.execute(mysql_set_autocommit_on)\n except Exception as e:\n self.disconnect() # If the execute fails the database connection can get hung up\n self.cursor.execute(mysql_set_autocommit_on)\n return Engine.insert_data_from_file(self, filename)\n else:\n return Engine.insert_data_from_file(self, filename)\n\n def table_exists(self, dbname, tablename):\n \"\"\"Checks to see if the given table exists\"\"\"\n if not hasattr(self, 'existing_table_names'):\n self.cursor.execute(\n \"SELECT table_schema, table_name \"\n \"FROM information_schema.tables WHERE table_schema NOT IN \"\n \"('mysql', 'information_schema', 'performance_schema');\")\n self.existing_table_names = set()\n for schema, table in self.cursor:\n self.existing_table_names.add((schema.lower(), table.lower()))\n return (dbname.lower(), tablename.lower()) in self.existing_table_names\n\n def set_engine_encoding(self):\n \"\"\"Set MySQL database encoding to match data encoding\n\n Please update the encoding lookup table if the required encoding is not present.\n \"\"\"\n encoding = ENCODING.lower()\n if self.script.encoding:\n encoding = self.script.encoding.lower()\n encoding_lookup = {'iso-8859-1': 'latin1', 'latin-1': 'latin1', 'utf-8': 'utf8'}\n db_encoding = encoding_lookup.get(encoding)\n self.execute(\"SET NAMES '{0}';\".format(db_encoding))\n\n def get_connection(self):\n \"\"\"Gets the db connection.\"\"\"\n args = {'host': self.opts['host'],\n 'port': int(self.opts['port']),\n 'user': self.opts['user'],\n 'passwd': self.opts['password']}\n import pymysql as dbapi\n import pymysql.constants.CLIENT as client\n args['client_flag'] = client.LOCAL_FILES\n self.get_input()\n return dbapi.connect(**args)\n", "path": "retriever/engines/mysql.py"}], "after_files": [{"content": "from __future__ import print_function\nfrom builtins import str\nimport os\nfrom retriever.lib.models import Engine, no_cleanup\nfrom retriever import ENCODING\n\n\nclass engine(Engine):\n \"\"\"Engine instance for MySQL.\"\"\"\n name = \"MySQL\"\n abbreviation = \"mysql\"\n datatypes = {\n \"auto\": \"INT(5) NOT NULL AUTO_INCREMENT\",\n \"int\": \"INT\",\n \"bigint\": \"BIGINT\",\n \"double\": \"DOUBLE\",\n \"decimal\": \"DECIMAL\",\n \"char\": (\"TEXT\", \"VARCHAR\"),\n \"bool\": \"BOOL\",\n }\n max_int = 4294967295\n placeholder = \"%s\"\n required_opts = [(\"user\",\n \"Enter your MySQL username\",\n \"root\"),\n (\"password\",\n \"Enter your password\",\n \"\"),\n (\"host\",\n \"Enter your MySQL host\",\n \"localhost\"),\n (\"port\",\n \"Enter your MySQL port\",\n 3306),\n (\"database_name\",\n \"Format of database name\",\n \"{db}\"),\n (\"table_name\",\n \"Format of table name\",\n \"{db}.{table}\"),\n ]\n\n def create_db_statement(self):\n \"\"\"Returns a SQL statement to create a database.\"\"\"\n createstatement = \"CREATE DATABASE IF NOT EXISTS \" + self.database_name()\n return createstatement\n\n def insert_data_from_file(self, filename):\n \"\"\"Calls MySQL \"LOAD DATA LOCAL INFILE\" statement to perform a bulk\n insert.\"\"\"\n\n mysql_set_autocommit_off = \"\"\"SET autocommit=0; SET UNIQUE_CHECKS=0; SET FOREIGN_KEY_CHECKS=0; SET sql_log_bin=0;\"\"\"\n mysql_set_autocommit_on = \"\"\"SET GLOBAL innodb_flush_log_at_trx_commit=1; COMMIT; SET autocommit=1; SET unique_checks=1; SET foreign_key_checks=1;\"\"\"\n \n 
self.get_cursor()\n ct = len([True for c in self.table.columns if c[1][0][:3] == \"ct-\"]) != 0\n if (self.table.cleanup.function == no_cleanup and\n not self.table.fixed_width and\n not ct and\n (not hasattr(self.table, \"do_not_bulk_insert\") or not self.table.do_not_bulk_insert)):\n\n print (\"Inserting data from \" + os.path.basename(filename) + \"...\")\n\n columns = self.table.get_insert_columns()\n statement = \"\"\"\nLOAD DATA LOCAL INFILE '\"\"\" + filename.replace(\"\\\\\", \"\\\\\\\\\") + \"\"\"'\nINTO TABLE \"\"\" + self.table_name() + \"\"\"\nFIELDS TERMINATED BY '\"\"\" + self.table.delimiter + \"\"\"'\nOPTIONALLY ENCLOSED BY '\"'\nLINES TERMINATED BY '\\\\n'\nIGNORE \"\"\" + str(self.table.header_rows) + \"\"\" LINES\n(\"\"\" + columns + \")\"\n try:\n self.cursor.execute(mysql_set_autocommit_off)\n self.cursor.execute(statement)\n\n self.cursor.execute(mysql_set_autocommit_on)\n except Exception as e:\n self.disconnect() # If the execute fails the database connection can get hung up\n self.cursor.execute(mysql_set_autocommit_on)\n return Engine.insert_data_from_file(self, filename)\n else:\n return Engine.insert_data_from_file(self, filename)\n\n def table_exists(self, dbname, tablename):\n \"\"\"Checks to see if the given table exists\"\"\"\n if not hasattr(self, 'existing_table_names'):\n self.cursor.execute(\n \"SELECT table_schema, table_name \"\n \"FROM information_schema.tables WHERE table_schema NOT IN \"\n \"('mysql', 'information_schema', 'performance_schema');\")\n self.existing_table_names = set()\n for schema, table in self.cursor:\n self.existing_table_names.add((schema.lower(), table.lower()))\n return (dbname.lower(), tablename.lower()) in self.existing_table_names\n\n def set_engine_encoding(self):\n \"\"\"Set MySQL database encoding to match data encoding\n\n Please update the encoding lookup table if the required encoding is not present.\n \"\"\"\n encoding = ENCODING.lower()\n if self.script.encoding:\n encoding = self.script.encoding.lower()\n encoding_lookup = {'iso-8859-1': 'latin1', 'latin-1': 'latin1', 'utf-8': 'utf8'}\n db_encoding = encoding_lookup.get(encoding)\n self.execute(\"SET NAMES '{0}';\".format(db_encoding))\n\n def get_connection(self):\n \"\"\"Gets the db connection.\"\"\"\n args = {'host': self.opts['host'],\n 'port': int(self.opts['port']),\n 'user': self.opts['user'],\n 'passwd': self.opts['password']}\n import pymysql as dbapi\n import pymysql.constants.CLIENT as client\n args['client_flag'] = client.LOCAL_FILES\n self.get_input()\n return dbapi.connect(read_default_file='~/.my.cnf', **args)\n", "path": "retriever/engines/mysql.py"}]}
| 1,752 | 111 |