problem_id (stringlengths 18-22) | source (stringclasses 1 value) | task_type (stringclasses 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.1k-25.4k) | golden_diff (stringlengths 145-5.13k) | verification_info (stringlengths 582-39.1k) | num_tokens (int64 271-4.1k) | num_tokens_diff (int64 47-1.02k) |
---|---|---|---|---|---|---|---|---|
gh_patches_debug_2627
|
rasdani/github-patches
|
git_diff
|
streamlink__streamlink-2171
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
INE Plugin
## Plugin Issue
<!-- Replace [ ] with [x] in order to check the box -->
- [X] This is a plugin issue and I have read the contribution guidelines.
### Description
The INE plugin doesn't appear to work on any videos I try.
### Reproduction steps / Explicit stream URLs to test
Try to download a video
### Log output
<!--
TEXT LOG OUTPUT IS REQUIRED for a plugin issue!
Use the `--loglevel debug` parameter and avoid using parameters which suppress log output.
https://streamlink.github.io/cli.html#cmdoption-l
Make sure to **remove usernames and passwords**
You can copy the output to https://gist.github.com/ or paste it below.
-->
```
streamlink https://streaming.ine.com/play/419cdc1a-a4a8-4eba-b8b3-5dda324daa94/day-1-part-1#/ --http-cookie laravel_session=<Removed> --loglevel debug
[cli][debug] OS: macOS 10.14.1
[cli][debug] Python: 2.7.10
[cli][debug] Streamlink: 0.14.2
[cli][debug] Requests(2.19.1), Socks(1.6.7), Websocket(0.54.0)
[cli][info] Found matching plugin ine for URL https://streaming.ine.com/play/419cdc1a-a4a8-4eba-b8b3-5dda324daa94/day-1-part-1#/
[plugin.ine][debug] Found video ID: 419cdc1a-a4a8-4eba-b8b3-5dda324daa94
[plugin.ine][debug] Loading player JS: https://content.jwplatform.com/players/yyYIR4k9-p4NBeNN0.js?exp=1543579899&sig=5e0058876669be2e2aafc7e52d067b78
error: Unable to validate result: <_sre.SRE_Match object at 0x106564dc8> does not equal None or Unable to validate key 'playlist': Type of u'//content.jwplatform.com/v2/media/yyYIR4k9?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyZWNvbW1lbmRhdGlvbnNfcGxheWxpc3RfaWQiOiJ5cHQwdDR4aCIsInJlc291cmNlIjoiL3YyL21lZGlhL3l5WUlSNGs5IiwiZXhwIjoxNTQzNTc5OTIwfQ.pHEgoDYzc219-S_slfWRhyEoCsyCZt74BiL8RNs5IJ8' should be 'str' but is 'unicode'
```
### Additional comments, screenshots, etc.
[Love Streamlink? Please consider supporting our collective. Thanks!](https://opencollective.com/streamlink/donate)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/ine.py`
Content:
```
1 from __future__ import print_function
2
3 import json
4 import re
5
6 from streamlink.plugin import Plugin
7 from streamlink.plugin.api import validate
8 from streamlink.stream import HLSStream, HTTPStream
9 from streamlink.utils import update_scheme
10
11
12 class INE(Plugin):
13 url_re = re.compile(r"""https://streaming.ine.com/play\#?/
14 ([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})/?
15 (.*?)""", re.VERBOSE)
16 play_url = "https://streaming.ine.com/play/{vid}/watch"
17 js_re = re.compile(r'''script type="text/javascript" src="(https://content.jwplatform.com/players/.*?)"''')
18 jwplayer_re = re.compile(r'''jwConfig\s*=\s*(\{.*\});''', re.DOTALL)
19 setup_schema = validate.Schema(
20 validate.transform(jwplayer_re.search),
21 validate.any(
22 None,
23 validate.all(
24 validate.get(1),
25 validate.transform(json.loads),
26 {"playlist": str},
27 validate.get("playlist")
28 )
29 )
30 )
31
32 @classmethod
33 def can_handle_url(cls, url):
34 return cls.url_re.match(url) is not None
35
36 def _get_streams(self):
37 vid = self.url_re.match(self.url).group(1)
38 self.logger.debug("Found video ID: {0}", vid)
39
40 page = self.session.http.get(self.play_url.format(vid=vid))
41 js_url_m = self.js_re.search(page.text)
42 if js_url_m:
43 js_url = js_url_m.group(1)
44 self.logger.debug("Loading player JS: {0}", js_url)
45
46 res = self.session.http.get(js_url)
47 metadata_url = update_scheme(self.url, self.setup_schema.validate(res.text))
48 data = self.session.http.json(self.session.http.get(metadata_url))
49
50 for source in data["playlist"][0]["sources"]:
51 if source["type"] == "application/vnd.apple.mpegurl":
52 for s in HLSStream.parse_variant_playlist(self.session, source["file"]).items():
53 yield s
54 elif source["type"] == "video/mp4":
55 yield "{0}p".format(source["height"]), HTTPStream(self.session, source["file"])
56
57
58 __plugin__ = INE
59
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/streamlink/plugins/ine.py b/src/streamlink/plugins/ine.py
--- a/src/streamlink/plugins/ine.py
+++ b/src/streamlink/plugins/ine.py
@@ -23,7 +23,7 @@
validate.all(
validate.get(1),
validate.transform(json.loads),
- {"playlist": str},
+ {"playlist": validate.text},
validate.get("playlist")
)
)
|
{"golden_diff": "diff --git a/src/streamlink/plugins/ine.py b/src/streamlink/plugins/ine.py\n--- a/src/streamlink/plugins/ine.py\n+++ b/src/streamlink/plugins/ine.py\n@@ -23,7 +23,7 @@\n validate.all(\n validate.get(1),\n validate.transform(json.loads),\n- {\"playlist\": str},\n+ {\"playlist\": validate.text},\n validate.get(\"playlist\")\n )\n )\n", "issue": "INE Plugin\n## Plugin Issue\r\n\r\n<!-- Replace [ ] with [x] in order to check the box -->\r\n- [X] This is a plugin issue and I have read the contribution guidelines.\r\n\r\n\r\n### Description\r\n\r\nThe INE plugin doesn't appear to work on any videos I try.\r\n\r\n\r\n### Reproduction steps / Explicit stream URLs to test\r\n\r\nTry do download a video\r\n\r\n### Log output\r\n\r\n<!--\r\nTEXT LOG OUTPUT IS REQUIRED for a plugin issue!\r\nUse the `--loglevel debug` parameter and avoid using parameters which suppress log output.\r\nhttps://streamlink.github.io/cli.html#cmdoption-l\r\n\r\nMake sure to **remove usernames and passwords**\r\nYou can copy the output to https://gist.github.com/ or paste it below.\r\n-->\r\n\r\n```\r\nstreamlink https://streaming.ine.com/play/419cdc1a-a4a8-4eba-b8b3-5dda324daa94/day-1-part-1#/ --http-cookie laravel_session=<Removed> --loglevel debug\r\n[cli][debug] OS: macOS 10.14.1\r\n[cli][debug] Python: 2.7.10\r\n[cli][debug] Streamlink: 0.14.2\r\n[cli][debug] Requests(2.19.1), Socks(1.6.7), Websocket(0.54.0)\r\n[cli][info] Found matching plugin ine for URL https://streaming.ine.com/play/419cdc1a-a4a8-4eba-b8b3-5dda324daa94/day-1-part-1#/\r\n[plugin.ine][debug] Found video ID: 419cdc1a-a4a8-4eba-b8b3-5dda324daa94\r\n[plugin.ine][debug] Loading player JS: https://content.jwplatform.com/players/yyYIR4k9-p4NBeNN0.js?exp=1543579899&sig=5e0058876669be2e2aafc7e52d067b78\r\nerror: Unable to validate result: <_sre.SRE_Match object at 0x106564dc8> does not equal None or Unable to validate key 'playlist': Type of u'//content.jwplatform.com/v2/media/yyYIR4k9?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyZWNvbW1lbmRhdGlvbnNfcGxheWxpc3RfaWQiOiJ5cHQwdDR4aCIsInJlc291cmNlIjoiL3YyL21lZGlhL3l5WUlSNGs5IiwiZXhwIjoxNTQzNTc5OTIwfQ.pHEgoDYzc219-S_slfWRhyEoCsyCZt74BiL8RNs5IJ8' should be 'str' but is 'unicode'\r\n```\r\n\r\n\r\n### Additional comments, screenshots, etc.\r\n\r\n\r\n\r\n[Love Streamlink? Please consider supporting our collective. 
Thanks!](https://opencollective.com/streamlink/donate)\r\n\nINE Plugin\n## Plugin Issue\r\n\r\n<!-- Replace [ ] with [x] in order to check the box -->\r\n- [X] This is a plugin issue and I have read the contribution guidelines.\r\n\r\n\r\n### Description\r\n\r\nThe INE plugin doesn't appear to work on any videos I try.\r\n\r\n\r\n### Reproduction steps / Explicit stream URLs to test\r\n\r\nTry do download a video\r\n\r\n### Log output\r\n\r\n<!--\r\nTEXT LOG OUTPUT IS REQUIRED for a plugin issue!\r\nUse the `--loglevel debug` parameter and avoid using parameters which suppress log output.\r\nhttps://streamlink.github.io/cli.html#cmdoption-l\r\n\r\nMake sure to **remove usernames and passwords**\r\nYou can copy the output to https://gist.github.com/ or paste it below.\r\n-->\r\n\r\n```\r\nstreamlink https://streaming.ine.com/play/419cdc1a-a4a8-4eba-b8b3-5dda324daa94/day-1-part-1#/ --http-cookie laravel_session=<Removed> --loglevel debug\r\n[cli][debug] OS: macOS 10.14.1\r\n[cli][debug] Python: 2.7.10\r\n[cli][debug] Streamlink: 0.14.2\r\n[cli][debug] Requests(2.19.1), Socks(1.6.7), Websocket(0.54.0)\r\n[cli][info] Found matching plugin ine for URL https://streaming.ine.com/play/419cdc1a-a4a8-4eba-b8b3-5dda324daa94/day-1-part-1#/\r\n[plugin.ine][debug] Found video ID: 419cdc1a-a4a8-4eba-b8b3-5dda324daa94\r\n[plugin.ine][debug] Loading player JS: https://content.jwplatform.com/players/yyYIR4k9-p4NBeNN0.js?exp=1543579899&sig=5e0058876669be2e2aafc7e52d067b78\r\nerror: Unable to validate result: <_sre.SRE_Match object at 0x106564dc8> does not equal None or Unable to validate key 'playlist': Type of u'//content.jwplatform.com/v2/media/yyYIR4k9?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyZWNvbW1lbmRhdGlvbnNfcGxheWxpc3RfaWQiOiJ5cHQwdDR4aCIsInJlc291cmNlIjoiL3YyL21lZGlhL3l5WUlSNGs5IiwiZXhwIjoxNTQzNTc5OTIwfQ.pHEgoDYzc219-S_slfWRhyEoCsyCZt74BiL8RNs5IJ8' should be 'str' but is 'unicode'\r\n```\r\n\r\n\r\n### Additional comments, screenshots, etc.\r\n\r\n\r\n\r\n[Love Streamlink? Please consider supporting our collective. 
Thanks!](https://opencollective.com/streamlink/donate)\r\n\n", "before_files": [{"content": "from __future__ import print_function\n\nimport json\nimport re\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream import HLSStream, HTTPStream\nfrom streamlink.utils import update_scheme\n\n\nclass INE(Plugin):\n url_re = re.compile(r\"\"\"https://streaming.ine.com/play\\#?/\n ([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})/?\n (.*?)\"\"\", re.VERBOSE)\n play_url = \"https://streaming.ine.com/play/{vid}/watch\"\n js_re = re.compile(r'''script type=\"text/javascript\" src=\"(https://content.jwplatform.com/players/.*?)\"''')\n jwplayer_re = re.compile(r'''jwConfig\\s*=\\s*(\\{.*\\});''', re.DOTALL)\n setup_schema = validate.Schema(\n validate.transform(jwplayer_re.search),\n validate.any(\n None,\n validate.all(\n validate.get(1),\n validate.transform(json.loads),\n {\"playlist\": str},\n validate.get(\"playlist\")\n )\n )\n )\n\n @classmethod\n def can_handle_url(cls, url):\n return cls.url_re.match(url) is not None\n\n def _get_streams(self):\n vid = self.url_re.match(self.url).group(1)\n self.logger.debug(\"Found video ID: {0}\", vid)\n\n page = self.session.http.get(self.play_url.format(vid=vid))\n js_url_m = self.js_re.search(page.text)\n if js_url_m:\n js_url = js_url_m.group(1)\n self.logger.debug(\"Loading player JS: {0}\", js_url)\n\n res = self.session.http.get(js_url)\n metadata_url = update_scheme(self.url, self.setup_schema.validate(res.text))\n data = self.session.http.json(self.session.http.get(metadata_url))\n\n for source in data[\"playlist\"][0][\"sources\"]:\n if source[\"type\"] == \"application/vnd.apple.mpegurl\":\n for s in HLSStream.parse_variant_playlist(self.session, source[\"file\"]).items():\n yield s\n elif source[\"type\"] == \"video/mp4\":\n yield \"{0}p\".format(source[\"height\"]), HTTPStream(self.session, source[\"file\"])\n\n\n__plugin__ = INE\n", "path": "src/streamlink/plugins/ine.py"}], "after_files": [{"content": "from __future__ import print_function\n\nimport json\nimport re\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream import HLSStream, HTTPStream\nfrom streamlink.utils import update_scheme\n\n\nclass INE(Plugin):\n url_re = re.compile(r\"\"\"https://streaming.ine.com/play\\#?/\n ([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})/?\n (.*?)\"\"\", re.VERBOSE)\n play_url = \"https://streaming.ine.com/play/{vid}/watch\"\n js_re = re.compile(r'''script type=\"text/javascript\" src=\"(https://content.jwplatform.com/players/.*?)\"''')\n jwplayer_re = re.compile(r'''jwConfig\\s*=\\s*(\\{.*\\});''', re.DOTALL)\n setup_schema = validate.Schema(\n validate.transform(jwplayer_re.search),\n validate.any(\n None,\n validate.all(\n validate.get(1),\n validate.transform(json.loads),\n {\"playlist\": validate.text},\n validate.get(\"playlist\")\n )\n )\n )\n\n @classmethod\n def can_handle_url(cls, url):\n return cls.url_re.match(url) is not None\n\n def _get_streams(self):\n vid = self.url_re.match(self.url).group(1)\n self.logger.debug(\"Found video ID: {0}\", vid)\n\n page = self.session.http.get(self.play_url.format(vid=vid))\n js_url_m = self.js_re.search(page.text)\n if js_url_m:\n js_url = js_url_m.group(1)\n self.logger.debug(\"Loading player JS: {0}\", js_url)\n\n res = self.session.http.get(js_url)\n metadata_url = update_scheme(self.url, self.setup_schema.validate(res.text))\n data = 
self.session.http.json(self.session.http.get(metadata_url))\n\n for source in data[\"playlist\"][0][\"sources\"]:\n if source[\"type\"] == \"application/vnd.apple.mpegurl\":\n for s in HLSStream.parse_variant_playlist(self.session, source[\"file\"]).items():\n yield s\n elif source[\"type\"] == \"video/mp4\":\n yield \"{0}p\".format(source[\"height\"]), HTTPStream(self.session, source[\"file\"])\n\n\n__plugin__ = INE\n", "path": "src/streamlink/plugins/ine.py"}]}
| 2,359 | 93 |
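A note on the patch in the row above: the schema rejected the playlist URL because, under Python 2, JSON string values are decoded as `unicode`, which fails an exact `str` type check; the golden diff swaps in `validate.text`, streamlink's Python 2/3-compatible text matcher. The standalone sketch below (it does not use streamlink itself, and the helper names are illustrative) reproduces the same failure mode and fix:

```python
# Illustrative only: mimics the {"playlist": str} check that broke on Python 2
# and the text-type check that the golden diff switches to.
import sys

# `unicode` exists only on Python 2; on Python 3 every str is already text.
text_types = (str, unicode) if sys.version_info[0] == 2 else (str,)  # noqa: F821

def check_playlist(config):
    playlist = config["playlist"]
    # The old schema did the equivalent of isinstance(playlist, str), which is
    # False for u"..." values on Python 2 and produced the validation error in the log.
    if not isinstance(playlist, text_types):
        raise ValueError("Type of {!r} should be text but is {}".format(
            playlist, type(playlist).__name__))
    return playlist

print(check_playlist({"playlist": u"//content.jwplatform.com/v2/media/yyYIR4k9"}))
```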
gh_patches_debug_24965
|
rasdani/github-patches
|
git_diff
|
google__clusterfuzz-2567
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Investigate GitHub login
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/appengine/libs/csp.py`
Content:
```
1 # Copyright 2019 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Helpers used to generate Content Security Policies for pages."""
15 import collections
16
17 from libs import auth
18
19
20 class CSPBuilder(object):
21 """Helper to build a Content Security Policy string."""
22
23 def __init__(self):
24 self.directives = collections.defaultdict(list)
25
26 def add(self, directive, source, quote=False):
27 """Add a source for a given directive."""
28 # Some values for sources are expected to be quoted. No escaping is done
29 # since these are specific literal values that don't require it.
30 if quote:
31 source = '\'{}\''.format(source)
32
33 assert source not in self.directives[directive], (
34 'Duplicate source "{source}" for directive "{directive}"'.format(
35 source=source, directive=directive))
36 self.directives[directive].append(source)
37
38 def add_sourceless(self, directive):
39 assert directive not in self.directives, (
40 'Sourceless directive "{directive}" already exists.'.format(
41 directive=directive))
42
43 self.directives[directive] = []
44
45 def remove(self, directive, source, quote=False):
46 """Remove a source for a given directive."""
47 if quote:
48 source = '\'{}\''.format(source)
49
50 assert source in self.directives[directive], (
51 'Removing nonexistent "{source}" for directive "{directive}"'.format(
52 source=source, directive=directive))
53 self.directives[directive].remove(source)
54
55 def __str__(self):
56 """Convert to a string to send with a Content-Security-Policy header."""
57 parts = []
58
59 # Sort directives for deterministic results.
60 for directive, sources in sorted(self.directives.items()):
61 # Each policy part has the form "directive source1 source2 ...;".
62 parts.append(' '.join([directive] + sources) + ';')
63
64 return ' '.join(parts)
65
66
67 def get_default_builder():
68 """Get a CSPBuilder object for the default policy.
69
70 Can be modified for specific pages if needed."""
71 builder = CSPBuilder()
72
73 # By default, disallow everything. Whitelist only features that are needed.
74 builder.add('default-src', 'none', quote=True)
75
76 # Allow various directives if sourced from self.
77 builder.add('font-src', 'self', quote=True)
78 builder.add('connect-src', 'self', quote=True)
79 builder.add('img-src', 'self', quote=True)
80 builder.add('manifest-src', 'self', quote=True)
81
82 # External scripts. Google analytics, charting libraries.
83 builder.add('script-src', 'www.google-analytics.com')
84 builder.add('script-src', 'www.gstatic.com')
85 builder.add('script-src', 'apis.google.com')
86
87 # Google Analytics also uses connect-src and img-src.
88 builder.add('connect-src', 'www.google-analytics.com')
89 builder.add('img-src', 'www.google-analytics.com')
90
91 # Firebase.
92 builder.add('img-src', 'www.gstatic.com')
93 builder.add('connect-src', 'securetoken.googleapis.com')
94 builder.add('connect-src', 'www.googleapis.com')
95 builder.add('frame-src', auth.auth_domain())
96
97 # External style. Used for fonts, charting libraries.
98 builder.add('style-src', 'fonts.googleapis.com')
99 builder.add('style-src', 'www.gstatic.com')
100
101 # External fonts.
102 builder.add('font-src', 'fonts.gstatic.com')
103
104 # Some upload forms require us to connect to the cloud storage API.
105 builder.add('connect-src', 'storage.googleapis.com')
106
107 # Mixed content is unexpected, but upgrade requests rather than block.
108 builder.add_sourceless('upgrade-insecure-requests')
109
110 # We don't expect object to be used, but it doesn't fall back to default-src.
111 builder.add('object-src', 'none', quote=True)
112
113 # We don't expect workers to be used, but they fall back to script-src.
114 builder.add('worker-src', 'none', quote=True)
115
116 # Add reporting so that violations don't break things silently.
117 builder.add('report-uri', '/report-csp-failure')
118
119 # TODO(mbarbella): Remove Google-specific cases by allowing configuration.
120
121 # Internal authentication.
122 builder.add('manifest-src', 'login.corp.google.com')
123
124 # TODO(mbarbella): Improve the policy by limiting the additions below.
125
126 # Because we use Polymer Bundler to create large files containing all of our
127 # scripts inline, our policy requires this (which weakens CSP significantly).
128 builder.add('script-src', 'unsafe-inline', quote=True)
129
130 # Some of the pages that read responses from json handlers require this.
131 builder.add('script-src', 'unsafe-eval', quote=True)
132
133 # Our Polymer Bundler usage also requires inline style.
134 builder.add('style-src', 'unsafe-inline', quote=True)
135
136 # Some fonts are loaded from data URIs.
137 builder.add('font-src', 'data:')
138
139 return builder
140
141
142 def get_default():
143 """Get the default Content Security Policy as a string."""
144 return str(get_default_builder())
145
```
Path: `src/appengine/libs/auth.py`
Content:
```
1 # Copyright 2019 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Authentication helpers."""
15
16 import collections
17
18 from firebase_admin import auth
19 from google.cloud import ndb
20 from googleapiclient.discovery import build
21 import jwt
22 import requests
23
24 from clusterfuzz._internal.base import memoize
25 from clusterfuzz._internal.base import utils
26 from clusterfuzz._internal.config import local_config
27 from clusterfuzz._internal.datastore import data_types
28 from clusterfuzz._internal.metrics import logs
29 from clusterfuzz._internal.system import environment
30 from libs import request_cache
31
32 User = collections.namedtuple('User', ['email'])
33
34
35 class AuthError(Exception):
36 """Auth error."""
37
38
39 def auth_domain():
40 """Get the auth domain."""
41 domain = local_config.ProjectConfig().get('firebase.auth_domain')
42 if domain:
43 return domain
44
45 return utils.get_application_id() + '.firebaseapp.com'
46
47
48 def is_current_user_admin():
49 """Returns whether or not the current logged in user is an admin."""
50 if environment.is_local_development():
51 return True
52
53 user = get_current_user()
54 if not user:
55 return False
56
57 key = ndb.Key(data_types.Admin, user.email)
58 return bool(key.get())
59
60
61 @memoize.wrap(memoize.FifoInMemory(1))
62 def _project_number_from_id(project_id):
63 """Get the project number from project ID."""
64 resource_manager = build('cloudresourcemanager', 'v1')
65 result = resource_manager.projects().get(projectId=project_id).execute()
66 if 'projectNumber' not in result:
67 raise AuthError('Failed to get project number.')
68
69 return result['projectNumber']
70
71
72 @memoize.wrap(memoize.FifoInMemory(1))
73 def _get_iap_key(key_id):
74 """Retrieves a public key from the list published by Identity-Aware Proxy,
75 re-fetching the key file if necessary.
76 """
77 resp = requests.get('https://www.gstatic.com/iap/verify/public_key')
78 if resp.status_code != 200:
79 raise AuthError('Unable to fetch IAP keys: {} / {} / {}'.format(
80 resp.status_code, resp.headers, resp.text))
81
82 result = resp.json()
83 key = result.get(key_id)
84 if not key:
85 raise AuthError('Key {!r} not found'.format(key_id))
86
87 return key
88
89
90 def _validate_iap_jwt(iap_jwt):
91 """Validate JWT assertion."""
92 project_id = utils.get_application_id()
93 expected_audience = '/projects/{}/apps/{}'.format(
94 _project_number_from_id(project_id), project_id)
95
96 try:
97 key_id = jwt.get_unverified_header(iap_jwt).get('kid')
98 if not key_id:
99 raise AuthError('No key ID.')
100
101 key = _get_iap_key(key_id)
102 decoded_jwt = jwt.decode(
103 iap_jwt,
104 key,
105 algorithms=['ES256'],
106 issuer='https://cloud.google.com/iap',
107 audience=expected_audience)
108 return decoded_jwt['email']
109 except (jwt.exceptions.InvalidTokenError,
110 requests.exceptions.RequestException) as e:
111 raise AuthError('JWT assertion decode error: ' + str(e))
112
113
114 def get_iap_email(current_request):
115 """Get Cloud IAP email."""
116 jwt_assertion = current_request.headers.get('X-Goog-IAP-JWT-Assertion')
117 if not jwt_assertion:
118 return None
119
120 return _validate_iap_jwt(jwt_assertion)
121
122
123 def get_current_user():
124 """Get the current logged in user, or None."""
125 if environment.is_local_development():
126 return User('user@localhost')
127
128 current_request = request_cache.get_current_request()
129 if local_config.AuthConfig().get('enable_loas'):
130 loas_user = current_request.headers.get('X-AppEngine-LOAS-Peer-Username')
131 if loas_user:
132 return User(loas_user + '@google.com')
133
134 iap_email = get_iap_email(current_request)
135 if iap_email:
136 return User(iap_email)
137
138 cache_backing = request_cache.get_cache_backing()
139 oauth_email = getattr(cache_backing, '_oauth_email', None)
140 if oauth_email:
141 return User(oauth_email)
142
143 cached_email = getattr(cache_backing, '_cached_email', None)
144 if cached_email:
145 return User(cached_email)
146
147 session_cookie = get_session_cookie()
148 if not session_cookie:
149 return None
150
151 try:
152 decoded_claims = decode_claims(get_session_cookie())
153 except AuthError:
154 logs.log_warn('Invalid session cookie.')
155 return None
156
157 if not decoded_claims.get('email_verified'):
158 return None
159
160 email = decoded_claims.get('email')
161 if not email:
162 return None
163
164 # We cache the email for this request if we've validated the user to make
165 # subsequent get_current_user() calls fast.
166 setattr(cache_backing, '_cached_email', email)
167 return User(email)
168
169
170 def create_session_cookie(id_token, expires_in):
171 """Create a new session cookie."""
172 try:
173 return auth.create_session_cookie(id_token, expires_in=expires_in)
174 except auth.AuthError:
175 raise AuthError('Failed to create session cookie.')
176
177
178 def get_session_cookie():
179 """Get the current session cookie."""
180 return request_cache.get_current_request().cookies.get('session')
181
182
183 def revoke_session_cookie(session_cookie):
184 """Revoke a session cookie."""
185 decoded_claims = decode_claims(session_cookie)
186 auth.revoke_refresh_tokens(decoded_claims['sub'])
187
188
189 def decode_claims(session_cookie):
190 """Decode the claims for the current session cookie."""
191 try:
192 return auth.verify_session_cookie(session_cookie, check_revoked=True)
193 except (ValueError, auth.AuthError):
194 raise AuthError('Invalid session cookie.')
195
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/appengine/libs/auth.py b/src/appengine/libs/auth.py
--- a/src/appengine/libs/auth.py
+++ b/src/appengine/libs/auth.py
@@ -154,7 +154,21 @@
logs.log_warn('Invalid session cookie.')
return None
- if not decoded_claims.get('email_verified'):
+ allowed_firebase_providers = local_config.ProjectConfig().get(
+ 'firebase.auth_providers', ['google.com'])
+ firebase_info = decoded_claims.get('firebase', {})
+ sign_in_provider = firebase_info.get('sign_in_provider')
+
+ if sign_in_provider not in allowed_firebase_providers:
+ logs.log_error(f'Firebase provider {sign_in_provider} is not enabled.')
+ return None
+
+ # Per https://docs.github.com/en/authentication/
+ # keeping-your-account-and-data-secure/authorizing-oauth-apps
+ # GitHub requires emails to be verified before an OAuth app can be
+ # authorized, so we make an exception.
+ if (not decoded_claims.get('email_verified') and
+ sign_in_provider != 'github.com'):
return None
email = decoded_claims.get('email')
diff --git a/src/appengine/libs/csp.py b/src/appengine/libs/csp.py
--- a/src/appengine/libs/csp.py
+++ b/src/appengine/libs/csp.py
@@ -92,6 +92,7 @@
builder.add('img-src', 'www.gstatic.com')
builder.add('connect-src', 'securetoken.googleapis.com')
builder.add('connect-src', 'www.googleapis.com')
+ builder.add('connect-src', 'identitytoolkit.googleapis.com')
builder.add('frame-src', auth.auth_domain())
# External style. Used for fonts, charting libraries.
|
{"golden_diff": "diff --git a/src/appengine/libs/auth.py b/src/appengine/libs/auth.py\n--- a/src/appengine/libs/auth.py\n+++ b/src/appengine/libs/auth.py\n@@ -154,7 +154,21 @@\n logs.log_warn('Invalid session cookie.')\n return None\n \n- if not decoded_claims.get('email_verified'):\n+ allowed_firebase_providers = local_config.ProjectConfig().get(\n+ 'firebase.auth_providers', ['google.com'])\n+ firebase_info = decoded_claims.get('firebase', {})\n+ sign_in_provider = firebase_info.get('sign_in_provider')\n+\n+ if sign_in_provider not in allowed_firebase_providers:\n+ logs.log_error(f'Firebase provider {sign_in_provider} is not enabled.')\n+ return None\n+\n+ # Per https://docs.github.com/en/authentication/\n+ # keeping-your-account-and-data-secure/authorizing-oauth-apps\n+ # GitHub requires emails to be verified before an OAuth app can be\n+ # authorized, so we make an exception.\n+ if (not decoded_claims.get('email_verified') and\n+ sign_in_provider != 'github.com'):\n return None\n \n email = decoded_claims.get('email')\ndiff --git a/src/appengine/libs/csp.py b/src/appengine/libs/csp.py\n--- a/src/appengine/libs/csp.py\n+++ b/src/appengine/libs/csp.py\n@@ -92,6 +92,7 @@\n builder.add('img-src', 'www.gstatic.com')\n builder.add('connect-src', 'securetoken.googleapis.com')\n builder.add('connect-src', 'www.googleapis.com')\n+ builder.add('connect-src', 'identitytoolkit.googleapis.com')\n builder.add('frame-src', auth.auth_domain())\n \n # External style. Used for fonts, charting libraries.\n", "issue": "Investigate GitHub login\n\n", "before_files": [{"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Helpers used to generate Content Security Policies for pages.\"\"\"\nimport collections\n\nfrom libs import auth\n\n\nclass CSPBuilder(object):\n \"\"\"Helper to build a Content Security Policy string.\"\"\"\n\n def __init__(self):\n self.directives = collections.defaultdict(list)\n\n def add(self, directive, source, quote=False):\n \"\"\"Add a source for a given directive.\"\"\"\n # Some values for sources are expected to be quoted. 
No escaping is done\n # since these are specific literal values that don't require it.\n if quote:\n source = '\\'{}\\''.format(source)\n\n assert source not in self.directives[directive], (\n 'Duplicate source \"{source}\" for directive \"{directive}\"'.format(\n source=source, directive=directive))\n self.directives[directive].append(source)\n\n def add_sourceless(self, directive):\n assert directive not in self.directives, (\n 'Sourceless directive \"{directive}\" already exists.'.format(\n directive=directive))\n\n self.directives[directive] = []\n\n def remove(self, directive, source, quote=False):\n \"\"\"Remove a source for a given directive.\"\"\"\n if quote:\n source = '\\'{}\\''.format(source)\n\n assert source in self.directives[directive], (\n 'Removing nonexistent \"{source}\" for directive \"{directive}\"'.format(\n source=source, directive=directive))\n self.directives[directive].remove(source)\n\n def __str__(self):\n \"\"\"Convert to a string to send with a Content-Security-Policy header.\"\"\"\n parts = []\n\n # Sort directives for deterministic results.\n for directive, sources in sorted(self.directives.items()):\n # Each policy part has the form \"directive source1 source2 ...;\".\n parts.append(' '.join([directive] + sources) + ';')\n\n return ' '.join(parts)\n\n\ndef get_default_builder():\n \"\"\"Get a CSPBuilder object for the default policy.\n\n Can be modified for specific pages if needed.\"\"\"\n builder = CSPBuilder()\n\n # By default, disallow everything. Whitelist only features that are needed.\n builder.add('default-src', 'none', quote=True)\n\n # Allow various directives if sourced from self.\n builder.add('font-src', 'self', quote=True)\n builder.add('connect-src', 'self', quote=True)\n builder.add('img-src', 'self', quote=True)\n builder.add('manifest-src', 'self', quote=True)\n\n # External scripts. Google analytics, charting libraries.\n builder.add('script-src', 'www.google-analytics.com')\n builder.add('script-src', 'www.gstatic.com')\n builder.add('script-src', 'apis.google.com')\n\n # Google Analytics also uses connect-src and img-src.\n builder.add('connect-src', 'www.google-analytics.com')\n builder.add('img-src', 'www.google-analytics.com')\n\n # Firebase.\n builder.add('img-src', 'www.gstatic.com')\n builder.add('connect-src', 'securetoken.googleapis.com')\n builder.add('connect-src', 'www.googleapis.com')\n builder.add('frame-src', auth.auth_domain())\n\n # External style. 
Used for fonts, charting libraries.\n builder.add('style-src', 'fonts.googleapis.com')\n builder.add('style-src', 'www.gstatic.com')\n\n # External fonts.\n builder.add('font-src', 'fonts.gstatic.com')\n\n # Some upload forms require us to connect to the cloud storage API.\n builder.add('connect-src', 'storage.googleapis.com')\n\n # Mixed content is unexpected, but upgrade requests rather than block.\n builder.add_sourceless('upgrade-insecure-requests')\n\n # We don't expect object to be used, but it doesn't fall back to default-src.\n builder.add('object-src', 'none', quote=True)\n\n # We don't expect workers to be used, but they fall back to script-src.\n builder.add('worker-src', 'none', quote=True)\n\n # Add reporting so that violations don't break things silently.\n builder.add('report-uri', '/report-csp-failure')\n\n # TODO(mbarbella): Remove Google-specific cases by allowing configuration.\n\n # Internal authentication.\n builder.add('manifest-src', 'login.corp.google.com')\n\n # TODO(mbarbella): Improve the policy by limiting the additions below.\n\n # Because we use Polymer Bundler to create large files containing all of our\n # scripts inline, our policy requires this (which weakens CSP significantly).\n builder.add('script-src', 'unsafe-inline', quote=True)\n\n # Some of the pages that read responses from json handlers require this.\n builder.add('script-src', 'unsafe-eval', quote=True)\n\n # Our Polymer Bundler usage also requires inline style.\n builder.add('style-src', 'unsafe-inline', quote=True)\n\n # Some fonts are loaded from data URIs.\n builder.add('font-src', 'data:')\n\n return builder\n\n\ndef get_default():\n \"\"\"Get the default Content Security Policy as a string.\"\"\"\n return str(get_default_builder())\n", "path": "src/appengine/libs/csp.py"}, {"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Authentication helpers.\"\"\"\n\nimport collections\n\nfrom firebase_admin import auth\nfrom google.cloud import ndb\nfrom googleapiclient.discovery import build\nimport jwt\nimport requests\n\nfrom clusterfuzz._internal.base import memoize\nfrom clusterfuzz._internal.base import utils\nfrom clusterfuzz._internal.config import local_config\nfrom clusterfuzz._internal.datastore import data_types\nfrom clusterfuzz._internal.metrics import logs\nfrom clusterfuzz._internal.system import environment\nfrom libs import request_cache\n\nUser = collections.namedtuple('User', ['email'])\n\n\nclass AuthError(Exception):\n \"\"\"Auth error.\"\"\"\n\n\ndef auth_domain():\n \"\"\"Get the auth domain.\"\"\"\n domain = local_config.ProjectConfig().get('firebase.auth_domain')\n if domain:\n return domain\n\n return utils.get_application_id() + '.firebaseapp.com'\n\n\ndef is_current_user_admin():\n \"\"\"Returns whether or not the current logged in user is an admin.\"\"\"\n if environment.is_local_development():\n return True\n\n user = get_current_user()\n if not user:\n return False\n\n key = ndb.Key(data_types.Admin, user.email)\n return 
bool(key.get())\n\n\[email protected](memoize.FifoInMemory(1))\ndef _project_number_from_id(project_id):\n \"\"\"Get the project number from project ID.\"\"\"\n resource_manager = build('cloudresourcemanager', 'v1')\n result = resource_manager.projects().get(projectId=project_id).execute()\n if 'projectNumber' not in result:\n raise AuthError('Failed to get project number.')\n\n return result['projectNumber']\n\n\[email protected](memoize.FifoInMemory(1))\ndef _get_iap_key(key_id):\n \"\"\"Retrieves a public key from the list published by Identity-Aware Proxy,\n re-fetching the key file if necessary.\n \"\"\"\n resp = requests.get('https://www.gstatic.com/iap/verify/public_key')\n if resp.status_code != 200:\n raise AuthError('Unable to fetch IAP keys: {} / {} / {}'.format(\n resp.status_code, resp.headers, resp.text))\n\n result = resp.json()\n key = result.get(key_id)\n if not key:\n raise AuthError('Key {!r} not found'.format(key_id))\n\n return key\n\n\ndef _validate_iap_jwt(iap_jwt):\n \"\"\"Validate JWT assertion.\"\"\"\n project_id = utils.get_application_id()\n expected_audience = '/projects/{}/apps/{}'.format(\n _project_number_from_id(project_id), project_id)\n\n try:\n key_id = jwt.get_unverified_header(iap_jwt).get('kid')\n if not key_id:\n raise AuthError('No key ID.')\n\n key = _get_iap_key(key_id)\n decoded_jwt = jwt.decode(\n iap_jwt,\n key,\n algorithms=['ES256'],\n issuer='https://cloud.google.com/iap',\n audience=expected_audience)\n return decoded_jwt['email']\n except (jwt.exceptions.InvalidTokenError,\n requests.exceptions.RequestException) as e:\n raise AuthError('JWT assertion decode error: ' + str(e))\n\n\ndef get_iap_email(current_request):\n \"\"\"Get Cloud IAP email.\"\"\"\n jwt_assertion = current_request.headers.get('X-Goog-IAP-JWT-Assertion')\n if not jwt_assertion:\n return None\n\n return _validate_iap_jwt(jwt_assertion)\n\n\ndef get_current_user():\n \"\"\"Get the current logged in user, or None.\"\"\"\n if environment.is_local_development():\n return User('user@localhost')\n\n current_request = request_cache.get_current_request()\n if local_config.AuthConfig().get('enable_loas'):\n loas_user = current_request.headers.get('X-AppEngine-LOAS-Peer-Username')\n if loas_user:\n return User(loas_user + '@google.com')\n\n iap_email = get_iap_email(current_request)\n if iap_email:\n return User(iap_email)\n\n cache_backing = request_cache.get_cache_backing()\n oauth_email = getattr(cache_backing, '_oauth_email', None)\n if oauth_email:\n return User(oauth_email)\n\n cached_email = getattr(cache_backing, '_cached_email', None)\n if cached_email:\n return User(cached_email)\n\n session_cookie = get_session_cookie()\n if not session_cookie:\n return None\n\n try:\n decoded_claims = decode_claims(get_session_cookie())\n except AuthError:\n logs.log_warn('Invalid session cookie.')\n return None\n\n if not decoded_claims.get('email_verified'):\n return None\n\n email = decoded_claims.get('email')\n if not email:\n return None\n\n # We cache the email for this request if we've validated the user to make\n # subsequent get_current_user() calls fast.\n setattr(cache_backing, '_cached_email', email)\n return User(email)\n\n\ndef create_session_cookie(id_token, expires_in):\n \"\"\"Create a new session cookie.\"\"\"\n try:\n return auth.create_session_cookie(id_token, expires_in=expires_in)\n except auth.AuthError:\n raise AuthError('Failed to create session cookie.')\n\n\ndef get_session_cookie():\n \"\"\"Get the current session cookie.\"\"\"\n return 
request_cache.get_current_request().cookies.get('session')\n\n\ndef revoke_session_cookie(session_cookie):\n \"\"\"Revoke a session cookie.\"\"\"\n decoded_claims = decode_claims(session_cookie)\n auth.revoke_refresh_tokens(decoded_claims['sub'])\n\n\ndef decode_claims(session_cookie):\n \"\"\"Decode the claims for the current session cookie.\"\"\"\n try:\n return auth.verify_session_cookie(session_cookie, check_revoked=True)\n except (ValueError, auth.AuthError):\n raise AuthError('Invalid session cookie.')\n", "path": "src/appengine/libs/auth.py"}], "after_files": [{"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Helpers used to generate Content Security Policies for pages.\"\"\"\nimport collections\n\nfrom libs import auth\n\n\nclass CSPBuilder(object):\n \"\"\"Helper to build a Content Security Policy string.\"\"\"\n\n def __init__(self):\n self.directives = collections.defaultdict(list)\n\n def add(self, directive, source, quote=False):\n \"\"\"Add a source for a given directive.\"\"\"\n # Some values for sources are expected to be quoted. No escaping is done\n # since these are specific literal values that don't require it.\n if quote:\n source = '\\'{}\\''.format(source)\n\n assert source not in self.directives[directive], (\n 'Duplicate source \"{source}\" for directive \"{directive}\"'.format(\n source=source, directive=directive))\n self.directives[directive].append(source)\n\n def add_sourceless(self, directive):\n assert directive not in self.directives, (\n 'Sourceless directive \"{directive}\" already exists.'.format(\n directive=directive))\n\n self.directives[directive] = []\n\n def remove(self, directive, source, quote=False):\n \"\"\"Remove a source for a given directive.\"\"\"\n if quote:\n source = '\\'{}\\''.format(source)\n\n assert source in self.directives[directive], (\n 'Removing nonexistent \"{source}\" for directive \"{directive}\"'.format(\n source=source, directive=directive))\n self.directives[directive].remove(source)\n\n def __str__(self):\n \"\"\"Convert to a string to send with a Content-Security-Policy header.\"\"\"\n parts = []\n\n # Sort directives for deterministic results.\n for directive, sources in sorted(self.directives.items()):\n # Each policy part has the form \"directive source1 source2 ...;\".\n parts.append(' '.join([directive] + sources) + ';')\n\n return ' '.join(parts)\n\n\ndef get_default_builder():\n \"\"\"Get a CSPBuilder object for the default policy.\n\n Can be modified for specific pages if needed.\"\"\"\n builder = CSPBuilder()\n\n # By default, disallow everything. Whitelist only features that are needed.\n builder.add('default-src', 'none', quote=True)\n\n # Allow various directives if sourced from self.\n builder.add('font-src', 'self', quote=True)\n builder.add('connect-src', 'self', quote=True)\n builder.add('img-src', 'self', quote=True)\n builder.add('manifest-src', 'self', quote=True)\n\n # External scripts. 
Google analytics, charting libraries.\n builder.add('script-src', 'www.google-analytics.com')\n builder.add('script-src', 'www.gstatic.com')\n builder.add('script-src', 'apis.google.com')\n\n # Google Analytics also uses connect-src and img-src.\n builder.add('connect-src', 'www.google-analytics.com')\n builder.add('img-src', 'www.google-analytics.com')\n\n # Firebase.\n builder.add('img-src', 'www.gstatic.com')\n builder.add('connect-src', 'securetoken.googleapis.com')\n builder.add('connect-src', 'www.googleapis.com')\n builder.add('connect-src', 'identitytoolkit.googleapis.com')\n builder.add('frame-src', auth.auth_domain())\n\n # External style. Used for fonts, charting libraries.\n builder.add('style-src', 'fonts.googleapis.com')\n builder.add('style-src', 'www.gstatic.com')\n\n # External fonts.\n builder.add('font-src', 'fonts.gstatic.com')\n\n # Some upload forms require us to connect to the cloud storage API.\n builder.add('connect-src', 'storage.googleapis.com')\n\n # Mixed content is unexpected, but upgrade requests rather than block.\n builder.add_sourceless('upgrade-insecure-requests')\n\n # We don't expect object to be used, but it doesn't fall back to default-src.\n builder.add('object-src', 'none', quote=True)\n\n # We don't expect workers to be used, but they fall back to script-src.\n builder.add('worker-src', 'none', quote=True)\n\n # Add reporting so that violations don't break things silently.\n builder.add('report-uri', '/report-csp-failure')\n\n # TODO(mbarbella): Remove Google-specific cases by allowing configuration.\n\n # Internal authentication.\n builder.add('manifest-src', 'login.corp.google.com')\n\n # TODO(mbarbella): Improve the policy by limiting the additions below.\n\n # Because we use Polymer Bundler to create large files containing all of our\n # scripts inline, our policy requires this (which weakens CSP significantly).\n builder.add('script-src', 'unsafe-inline', quote=True)\n\n # Some of the pages that read responses from json handlers require this.\n builder.add('script-src', 'unsafe-eval', quote=True)\n\n # Our Polymer Bundler usage also requires inline style.\n builder.add('style-src', 'unsafe-inline', quote=True)\n\n # Some fonts are loaded from data URIs.\n builder.add('font-src', 'data:')\n\n return builder\n\n\ndef get_default():\n \"\"\"Get the default Content Security Policy as a string.\"\"\"\n return str(get_default_builder())\n", "path": "src/appengine/libs/csp.py"}, {"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Authentication helpers.\"\"\"\n\nimport collections\n\nfrom firebase_admin import auth\nfrom google.cloud import ndb\nfrom googleapiclient.discovery import build\nimport jwt\nimport requests\n\nfrom clusterfuzz._internal.base import memoize\nfrom clusterfuzz._internal.base import utils\nfrom clusterfuzz._internal.config import local_config\nfrom clusterfuzz._internal.datastore import data_types\nfrom clusterfuzz._internal.metrics import logs\nfrom clusterfuzz._internal.system 
import environment\nfrom libs import request_cache\n\nUser = collections.namedtuple('User', ['email'])\n\n\nclass AuthError(Exception):\n \"\"\"Auth error.\"\"\"\n\n\ndef auth_domain():\n \"\"\"Get the auth domain.\"\"\"\n domain = local_config.ProjectConfig().get('firebase.auth_domain')\n if domain:\n return domain\n\n return utils.get_application_id() + '.firebaseapp.com'\n\n\ndef is_current_user_admin():\n \"\"\"Returns whether or not the current logged in user is an admin.\"\"\"\n if environment.is_local_development():\n return True\n\n user = get_current_user()\n if not user:\n return False\n\n key = ndb.Key(data_types.Admin, user.email)\n return bool(key.get())\n\n\[email protected](memoize.FifoInMemory(1))\ndef _project_number_from_id(project_id):\n \"\"\"Get the project number from project ID.\"\"\"\n resource_manager = build('cloudresourcemanager', 'v1')\n result = resource_manager.projects().get(projectId=project_id).execute()\n if 'projectNumber' not in result:\n raise AuthError('Failed to get project number.')\n\n return result['projectNumber']\n\n\[email protected](memoize.FifoInMemory(1))\ndef _get_iap_key(key_id):\n \"\"\"Retrieves a public key from the list published by Identity-Aware Proxy,\n re-fetching the key file if necessary.\n \"\"\"\n resp = requests.get('https://www.gstatic.com/iap/verify/public_key')\n if resp.status_code != 200:\n raise AuthError('Unable to fetch IAP keys: {} / {} / {}'.format(\n resp.status_code, resp.headers, resp.text))\n\n result = resp.json()\n key = result.get(key_id)\n if not key:\n raise AuthError('Key {!r} not found'.format(key_id))\n\n return key\n\n\ndef _validate_iap_jwt(iap_jwt):\n \"\"\"Validate JWT assertion.\"\"\"\n project_id = utils.get_application_id()\n expected_audience = '/projects/{}/apps/{}'.format(\n _project_number_from_id(project_id), project_id)\n\n try:\n key_id = jwt.get_unverified_header(iap_jwt).get('kid')\n if not key_id:\n raise AuthError('No key ID.')\n\n key = _get_iap_key(key_id)\n decoded_jwt = jwt.decode(\n iap_jwt,\n key,\n algorithms=['ES256'],\n issuer='https://cloud.google.com/iap',\n audience=expected_audience)\n return decoded_jwt['email']\n except (jwt.exceptions.InvalidTokenError,\n requests.exceptions.RequestException) as e:\n raise AuthError('JWT assertion decode error: ' + str(e))\n\n\ndef get_iap_email(current_request):\n \"\"\"Get Cloud IAP email.\"\"\"\n jwt_assertion = current_request.headers.get('X-Goog-IAP-JWT-Assertion')\n if not jwt_assertion:\n return None\n\n return _validate_iap_jwt(jwt_assertion)\n\n\ndef get_current_user():\n \"\"\"Get the current logged in user, or None.\"\"\"\n if environment.is_local_development():\n return User('user@localhost')\n\n current_request = request_cache.get_current_request()\n if local_config.AuthConfig().get('enable_loas'):\n loas_user = current_request.headers.get('X-AppEngine-LOAS-Peer-Username')\n if loas_user:\n return User(loas_user + '@google.com')\n\n iap_email = get_iap_email(current_request)\n if iap_email:\n return User(iap_email)\n\n cache_backing = request_cache.get_cache_backing()\n oauth_email = getattr(cache_backing, '_oauth_email', None)\n if oauth_email:\n return User(oauth_email)\n\n cached_email = getattr(cache_backing, '_cached_email', None)\n if cached_email:\n return User(cached_email)\n\n session_cookie = get_session_cookie()\n if not session_cookie:\n return None\n\n try:\n decoded_claims = decode_claims(get_session_cookie())\n except AuthError:\n logs.log_warn('Invalid session cookie.')\n return None\n\n 
allowed_firebase_providers = local_config.ProjectConfig().get(\n 'firebase.auth_providers', ['google.com'])\n firebase_info = decoded_claims.get('firebase', {})\n sign_in_provider = firebase_info.get('sign_in_provider')\n\n if sign_in_provider not in allowed_firebase_providers:\n logs.log_error(f'Firebase provider {sign_in_provider} is not enabled.')\n return None\n\n # Per https://docs.github.com/en/authentication/\n # keeping-your-account-and-data-secure/authorizing-oauth-apps\n # GitHub requires emails to be verified before an OAuth app can be\n # authorized, so we make an exception.\n if (not decoded_claims.get('email_verified') and\n sign_in_provider != 'github.com'):\n return None\n\n email = decoded_claims.get('email')\n if not email:\n return None\n\n # We cache the email for this request if we've validated the user to make\n # subsequent get_current_user() calls fast.\n setattr(cache_backing, '_cached_email', email)\n return User(email)\n\n\ndef create_session_cookie(id_token, expires_in):\n \"\"\"Create a new session cookie.\"\"\"\n try:\n return auth.create_session_cookie(id_token, expires_in=expires_in)\n except auth.AuthError:\n raise AuthError('Failed to create session cookie.')\n\n\ndef get_session_cookie():\n \"\"\"Get the current session cookie.\"\"\"\n return request_cache.get_current_request().cookies.get('session')\n\n\ndef revoke_session_cookie(session_cookie):\n \"\"\"Revoke a session cookie.\"\"\"\n decoded_claims = decode_claims(session_cookie)\n auth.revoke_refresh_tokens(decoded_claims['sub'])\n\n\ndef decode_claims(session_cookie):\n \"\"\"Decode the claims for the current session cookie.\"\"\"\n try:\n return auth.verify_session_cookie(session_cookie, check_revoked=True)\n except (ValueError, auth.AuthError):\n raise AuthError('Invalid session cookie.')\n", "path": "src/appengine/libs/auth.py"}]}
| 3,687 | 395 |
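A note on the ClusterFuzz patch in the row above: GitHub sign-ins were rejected because `get_current_user()` required the `email_verified` claim; the fix whitelists the Firebase providers configured under `firebase.auth_providers` and waives the verification check for `github.com`, since GitHub only authorizes OAuth apps for verified emails. Below, the decision logic from the golden diff is extracted into a standalone function for clarity; the function name and defaults are illustrative, not part of ClusterFuzz:

```python
# Illustrative refactor of the check added in the golden diff above.
def is_session_accepted(decoded_claims, allowed_providers=("google.com",)):
    provider = decoded_claims.get("firebase", {}).get("sign_in_provider")

    # Only providers explicitly enabled via firebase.auth_providers are accepted.
    if provider not in allowed_providers:
        return False

    # GitHub requires verified emails before an OAuth app can be authorized,
    # so the email_verified claim is not required for github.com sign-ins.
    if not decoded_claims.get("email_verified") and provider != "github.com":
        return False

    return True

# GitHub login accepted once the provider is enabled, even without the claim:
print(is_session_accepted(
    {"firebase": {"sign_in_provider": "github.com"}},
    allowed_providers=("google.com", "github.com")))  # True
# A provider that is not enabled is rejected:
print(is_session_accepted({"firebase": {"sign_in_provider": "github.com"}}))  # False
```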
gh_patches_debug_13091
|
rasdani/github-patches
|
git_diff
|
PrefectHQ__prefect-11999
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
no import statement for wait_for_flow_run
### First check
- [X] I added a descriptive title to this issue.
- [X] I used GitHub search to find a similar request and didn't find it 😇
### Describe the issue
There is no import statement for wait_for_flow_run so typing this code into pycharm shows wait_for_flow_run as an error. Searching the internets, the import statement used to be
_from prefect.tasks.prefect import wait_for_flow_run_
yeah, that doesn't work anymore.
### Describe the proposed change
put the correct import statement in the docs which is
_from prefect.flow_runs import wait_for_flow_run_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/prefect/flow_runs.py`
Content:
```
1 from typing import Optional
2 from uuid import UUID
3
4 import anyio
5
6 from prefect.client.orchestration import PrefectClient
7 from prefect.client.schemas import FlowRun
8 from prefect.client.utilities import inject_client
9 from prefect.exceptions import FlowRunWaitTimeout
10 from prefect.logging import get_logger
11
12
13 @inject_client
14 async def wait_for_flow_run(
15 flow_run_id: UUID,
16 timeout: Optional[int] = 10800,
17 poll_interval: int = 5,
18 client: Optional[PrefectClient] = None,
19 log_states: bool = False,
20 ) -> FlowRun:
21 """
22 Waits for the prefect flow run to finish and returns the FlowRun
23
24 Args:
25 flow_run_id: The flow run ID for the flow run to wait for.
26 timeout: The wait timeout in seconds. Defaults to 10800 (3 hours).
27 poll_interval: The poll interval in seconds. Defaults to 5.
28
29 Returns:
30 FlowRun: The finished flow run.
31
32 Raises:
33 prefect.exceptions.FlowWaitTimeout: If flow run goes over the timeout.
34
35 Examples:
36 Create a flow run for a deployment and wait for it to finish:
37 ```python
38 import asyncio
39
40 from prefect import get_client
41
42 async def main():
43 async with get_client() as client:
44 flow_run = await client.create_flow_run_from_deployment(deployment_id="my-deployment-id")
45 flow_run = await wait_for_flow_run(flow_run_id=flow_run.id)
46 print(flow_run.state)
47
48 if __name__ == "__main__":
49 asyncio.run(main())
50
51 ```
52
53 Trigger multiple flow runs and wait for them to finish:
54 ```python
55 import asyncio
56
57 from prefect import get_client
58
59 async def main(num_runs: int):
60 async with get_client() as client:
61 flow_runs = [
62 await client.create_flow_run_from_deployment(deployment_id="my-deployment-id")
63 for _
64 in range(num_runs)
65 ]
66 coros = [wait_for_flow_run(flow_run_id=flow_run.id) for flow_run in flow_runs]
67 finished_flow_runs = await asyncio.gather(*coros)
68 print([flow_run.state for flow_run in finished_flow_runs])
69
70 if __name__ == "__main__":
71 asyncio.run(main(num_runs=10))
72
73 ```
74 """
75 assert client is not None, "Client injection failed"
76 logger = get_logger()
77 with anyio.move_on_after(timeout):
78 while True:
79 flow_run = await client.read_flow_run(flow_run_id)
80 flow_state = flow_run.state
81 if log_states:
82 logger.info(f"Flow run is in state {flow_run.state.name!r}")
83 if flow_state and flow_state.is_final():
84 return flow_run
85 await anyio.sleep(poll_interval)
86 raise FlowRunWaitTimeout(
87 f"Flow run with ID {flow_run_id} exceeded watch timeout of {timeout} seconds"
88 )
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/prefect/flow_runs.py b/src/prefect/flow_runs.py
--- a/src/prefect/flow_runs.py
+++ b/src/prefect/flow_runs.py
@@ -38,6 +38,7 @@
import asyncio
from prefect import get_client
+ from prefect.flow_runs import wait_for_flow_run
async def main():
async with get_client() as client:
@@ -55,6 +56,7 @@
import asyncio
from prefect import get_client
+ from prefect.flow_runs import wait_for_flow_run
async def main(num_runs: int):
async with get_client() as client:
|
{"golden_diff": "diff --git a/src/prefect/flow_runs.py b/src/prefect/flow_runs.py\n--- a/src/prefect/flow_runs.py\n+++ b/src/prefect/flow_runs.py\n@@ -38,6 +38,7 @@\n import asyncio\n \n from prefect import get_client\n+ from prefect.flow_runs import wait_for_flow_run\n \n async def main():\n async with get_client() as client:\n@@ -55,6 +56,7 @@\n import asyncio\n \n from prefect import get_client\n+ from prefect.flow_runs import wait_for_flow_run\n \n async def main(num_runs: int):\n async with get_client() as client:\n", "issue": "no import statement for wait_for_flow_run\n### First check\r\n\r\n- [X] I added a descriptive title to this issue.\r\n- [X] I used GitHub search to find a similar request and didn't find it \ud83d\ude07\r\n\r\n### Describe the issue\r\n\r\nThere is no import statement for wait_for_flow_run so typing this code into pycharm shows wait_for_flow_run as an error. Searching the internets, the import statement used to be\r\n\r\n_from prefect.tasks.prefect import wait_for_flow_run_\r\n\r\nyeah, that doesn't work anymore.\r\n\r\n### Describe the proposed change\r\n\r\nput the correct import statement in the docs which is \r\n\r\n_from prefect.flow_runs import wait_for_flow_run_\r\n\n", "before_files": [{"content": "from typing import Optional\nfrom uuid import UUID\n\nimport anyio\n\nfrom prefect.client.orchestration import PrefectClient\nfrom prefect.client.schemas import FlowRun\nfrom prefect.client.utilities import inject_client\nfrom prefect.exceptions import FlowRunWaitTimeout\nfrom prefect.logging import get_logger\n\n\n@inject_client\nasync def wait_for_flow_run(\n flow_run_id: UUID,\n timeout: Optional[int] = 10800,\n poll_interval: int = 5,\n client: Optional[PrefectClient] = None,\n log_states: bool = False,\n) -> FlowRun:\n \"\"\"\n Waits for the prefect flow run to finish and returns the FlowRun\n\n Args:\n flow_run_id: The flow run ID for the flow run to wait for.\n timeout: The wait timeout in seconds. Defaults to 10800 (3 hours).\n poll_interval: The poll interval in seconds. 
Defaults to 5.\n\n Returns:\n FlowRun: The finished flow run.\n\n Raises:\n prefect.exceptions.FlowWaitTimeout: If flow run goes over the timeout.\n\n Examples:\n Create a flow run for a deployment and wait for it to finish:\n ```python\n import asyncio\n\n from prefect import get_client\n\n async def main():\n async with get_client() as client:\n flow_run = await client.create_flow_run_from_deployment(deployment_id=\"my-deployment-id\")\n flow_run = await wait_for_flow_run(flow_run_id=flow_run.id)\n print(flow_run.state)\n\n if __name__ == \"__main__\":\n asyncio.run(main())\n\n ```\n\n Trigger multiple flow runs and wait for them to finish:\n ```python\n import asyncio\n\n from prefect import get_client\n\n async def main(num_runs: int):\n async with get_client() as client:\n flow_runs = [\n await client.create_flow_run_from_deployment(deployment_id=\"my-deployment-id\")\n for _\n in range(num_runs)\n ]\n coros = [wait_for_flow_run(flow_run_id=flow_run.id) for flow_run in flow_runs]\n finished_flow_runs = await asyncio.gather(*coros)\n print([flow_run.state for flow_run in finished_flow_runs])\n\n if __name__ == \"__main__\":\n asyncio.run(main(num_runs=10))\n\n ```\n \"\"\"\n assert client is not None, \"Client injection failed\"\n logger = get_logger()\n with anyio.move_on_after(timeout):\n while True:\n flow_run = await client.read_flow_run(flow_run_id)\n flow_state = flow_run.state\n if log_states:\n logger.info(f\"Flow run is in state {flow_run.state.name!r}\")\n if flow_state and flow_state.is_final():\n return flow_run\n await anyio.sleep(poll_interval)\n raise FlowRunWaitTimeout(\n f\"Flow run with ID {flow_run_id} exceeded watch timeout of {timeout} seconds\"\n )\n", "path": "src/prefect/flow_runs.py"}], "after_files": [{"content": "from typing import Optional\nfrom uuid import UUID\n\nimport anyio\n\nfrom prefect.client.orchestration import PrefectClient\nfrom prefect.client.schemas import FlowRun\nfrom prefect.client.utilities import inject_client\nfrom prefect.exceptions import FlowRunWaitTimeout\nfrom prefect.logging import get_logger\n\n\n@inject_client\nasync def wait_for_flow_run(\n flow_run_id: UUID,\n timeout: Optional[int] = 10800,\n poll_interval: int = 5,\n client: Optional[PrefectClient] = None,\n log_states: bool = False,\n) -> FlowRun:\n \"\"\"\n Waits for the prefect flow run to finish and returns the FlowRun\n\n Args:\n flow_run_id: The flow run ID for the flow run to wait for.\n timeout: The wait timeout in seconds. Defaults to 10800 (3 hours).\n poll_interval: The poll interval in seconds. 
Defaults to 5.\n\n Returns:\n FlowRun: The finished flow run.\n\n Raises:\n prefect.exceptions.FlowWaitTimeout: If flow run goes over the timeout.\n\n Examples:\n Create a flow run for a deployment and wait for it to finish:\n ```python\n import asyncio\n\n from prefect import get_client\n from prefect.flow_runs import wait_for_flow_run\n\n async def main():\n async with get_client() as client:\n flow_run = await client.create_flow_run_from_deployment(deployment_id=\"my-deployment-id\")\n flow_run = await wait_for_flow_run(flow_run_id=flow_run.id)\n print(flow_run.state)\n\n if __name__ == \"__main__\":\n asyncio.run(main())\n\n ```\n\n Trigger multiple flow runs and wait for them to finish:\n ```python\n import asyncio\n\n from prefect import get_client\n from prefect.flow_runs import wait_for_flow_run\n\n async def main(num_runs: int):\n async with get_client() as client:\n flow_runs = [\n await client.create_flow_run_from_deployment(deployment_id=\"my-deployment-id\")\n for _\n in range(num_runs)\n ]\n coros = [wait_for_flow_run(flow_run_id=flow_run.id) for flow_run in flow_runs]\n finished_flow_runs = await asyncio.gather(*coros)\n print([flow_run.state for flow_run in finished_flow_runs])\n\n if __name__ == \"__main__\":\n asyncio.run(main(num_runs=10))\n\n ```\n \"\"\"\n assert client is not None, \"Client injection failed\"\n logger = get_logger()\n with anyio.move_on_after(timeout):\n while True:\n flow_run = await client.read_flow_run(flow_run_id)\n flow_state = flow_run.state\n if log_states:\n logger.info(f\"Flow run is in state {flow_run.state.name!r}\")\n if flow_state and flow_state.is_final():\n return flow_run\n await anyio.sleep(poll_interval)\n raise FlowRunWaitTimeout(\n f\"Flow run with ID {flow_run_id} exceeded watch timeout of {timeout} seconds\"\n )\n", "path": "src/prefect/flow_runs.py"}]}
| 1,205 | 146 |
gh_patches_debug_3915
|
rasdani/github-patches
|
git_diff
|
fossasia__open-event-server-890
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Show return model of sponsor types list in Swagger spec
Currently no return model (or schema) is shown for the GET API to get sponsor types used in an Event

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `open_event/api/sponsors.py`
Content:
```
1 from flask.ext.restplus import Resource, Namespace
2
3 from open_event.models.sponsor import Sponsor as SponsorModel
4
5 from .helpers.helpers import get_paginated_list, requires_auth, get_object_in_event
6 from .helpers.utils import PAGINATED_MODEL, PaginatedResourceBase, ServiceDAO, \
7 PAGE_PARAMS, POST_RESPONSES, PUT_RESPONSES
8 from .helpers import custom_fields as fields
9
10 api = Namespace('sponsors', description='Sponsors', path='/')
11
12 SPONSOR = api.model('Sponsor', {
13 'id': fields.Integer(required=True),
14 'name': fields.String(),
15 'url': fields.Uri(),
16 'logo': fields.ImageUri(),
17 'description': fields.String(),
18 'level': fields.String(),
19 'sponsor_type': fields.String(),
20 })
21
22 SPONSOR_PAGINATED = api.clone('SponsorPaginated', PAGINATED_MODEL, {
23 'results': fields.List(fields.Nested(SPONSOR))
24 })
25
26 SPONSOR_POST = api.clone('SponsorPost', SPONSOR)
27 del SPONSOR_POST['id']
28
29
30 # Create DAO
31 class SponsorDAO(ServiceDAO):
32 def list_types(self, event_id):
33 sponsors = self.list(event_id)
34 return list(set(
35 sponsor.sponsor_type for sponsor in sponsors
36 if sponsor.sponsor_type))
37
38
39 DAO = SponsorDAO(SponsorModel, SPONSOR_POST)
40
41
42 @api.route('/events/<int:event_id>/sponsors/<int:sponsor_id>')
43 @api.response(404, 'Sponsor not found')
44 @api.response(400, 'Sponsor does not belong to event')
45 class Sponsor(Resource):
46 @api.doc('get_sponsor')
47 @api.marshal_with(SPONSOR)
48 def get(self, event_id, sponsor_id):
49 """Fetch a sponsor given its id"""
50 return DAO.get(event_id, sponsor_id)
51
52 @requires_auth
53 @api.doc('delete_sponsor')
54 @api.marshal_with(SPONSOR)
55 def delete(self, event_id, sponsor_id):
56 """Delete a sponsor given its id"""
57 return DAO.delete(event_id, sponsor_id)
58
59 @requires_auth
60 @api.doc('update_sponsor', responses=PUT_RESPONSES)
61 @api.marshal_with(SPONSOR)
62 @api.expect(SPONSOR_POST)
63 def put(self, event_id, sponsor_id):
64 """Update a sponsor given its id"""
65 return DAO.update(event_id, sponsor_id, self.api.payload)
66
67
68 @api.route('/events/<int:event_id>/sponsors')
69 class SponsorList(Resource):
70 @api.doc('list_sponsors')
71 @api.marshal_list_with(SPONSOR)
72 def get(self, event_id):
73 """List all sponsors"""
74 return DAO.list(event_id)
75
76 @requires_auth
77 @api.doc('create_sponsor', responses=POST_RESPONSES)
78 @api.marshal_with(SPONSOR)
79 @api.expect(SPONSOR_POST)
80 def post(self, event_id):
81 """Create a sponsor"""
82 return DAO.create(
83 event_id,
84 self.api.payload,
85 self.api.url_for(self, event_id=event_id)
86 )
87
88
89 @api.route('/events/<int:event_id>/sponsors/types')
90 class SponsorTypesList(Resource):
91 @api.doc('list_sponsor_types')
92 def get(self, event_id):
93 """List all sponsor types"""
94 return DAO.list_types(event_id)
95
96
97 @api.route('/events/<int:event_id>/sponsors/page')
98 class SponsorListPaginated(Resource, PaginatedResourceBase):
99 @api.doc('list_sponsors_paginated', params=PAGE_PARAMS)
100 @api.marshal_with(SPONSOR_PAGINATED)
101 def get(self, event_id):
102 """List sponsors in a paginated manner"""
103 return get_paginated_list(
104 SponsorModel,
105 self.api.url_for(self, event_id=event_id),
106 args=self.parser.parse_args(),
107 event_id=event_id
108 )
109
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/open_event/api/sponsors.py b/open_event/api/sponsors.py
--- a/open_event/api/sponsors.py
+++ b/open_event/api/sponsors.py
@@ -88,7 +88,7 @@
@api.route('/events/<int:event_id>/sponsors/types')
class SponsorTypesList(Resource):
- @api.doc('list_sponsor_types')
+ @api.doc('list_sponsor_types', model=[fields.String()])
def get(self, event_id):
"""List all sponsor types"""
return DAO.list_types(event_id)
|
{"golden_diff": "diff --git a/open_event/api/sponsors.py b/open_event/api/sponsors.py\n--- a/open_event/api/sponsors.py\n+++ b/open_event/api/sponsors.py\n@@ -88,7 +88,7 @@\n \n @api.route('/events/<int:event_id>/sponsors/types')\n class SponsorTypesList(Resource):\n- @api.doc('list_sponsor_types')\n+ @api.doc('list_sponsor_types', model=[fields.String()])\n def get(self, event_id):\n \"\"\"List all sponsor types\"\"\"\n return DAO.list_types(event_id)\n", "issue": "Show return model of sponsor types list in Swagger spec\nCurrently no return model (or schema) is shown for the GET API to get sponsor types used in a Event\n\n\n\n", "before_files": [{"content": "from flask.ext.restplus import Resource, Namespace\n\nfrom open_event.models.sponsor import Sponsor as SponsorModel\n\nfrom .helpers.helpers import get_paginated_list, requires_auth, get_object_in_event\nfrom .helpers.utils import PAGINATED_MODEL, PaginatedResourceBase, ServiceDAO, \\\n PAGE_PARAMS, POST_RESPONSES, PUT_RESPONSES\nfrom .helpers import custom_fields as fields\n\napi = Namespace('sponsors', description='Sponsors', path='/')\n\nSPONSOR = api.model('Sponsor', {\n 'id': fields.Integer(required=True),\n 'name': fields.String(),\n 'url': fields.Uri(),\n 'logo': fields.ImageUri(),\n 'description': fields.String(),\n 'level': fields.String(),\n 'sponsor_type': fields.String(),\n})\n\nSPONSOR_PAGINATED = api.clone('SponsorPaginated', PAGINATED_MODEL, {\n 'results': fields.List(fields.Nested(SPONSOR))\n})\n\nSPONSOR_POST = api.clone('SponsorPost', SPONSOR)\ndel SPONSOR_POST['id']\n\n\n# Create DAO\nclass SponsorDAO(ServiceDAO):\n def list_types(self, event_id):\n sponsors = self.list(event_id)\n return list(set(\n sponsor.sponsor_type for sponsor in sponsors\n if sponsor.sponsor_type))\n\n\nDAO = SponsorDAO(SponsorModel, SPONSOR_POST)\n\n\[email protected]('/events/<int:event_id>/sponsors/<int:sponsor_id>')\[email protected](404, 'Sponsor not found')\[email protected](400, 'Sponsor does not belong to event')\nclass Sponsor(Resource):\n @api.doc('get_sponsor')\n @api.marshal_with(SPONSOR)\n def get(self, event_id, sponsor_id):\n \"\"\"Fetch a sponsor given its id\"\"\"\n return DAO.get(event_id, sponsor_id)\n\n @requires_auth\n @api.doc('delete_sponsor')\n @api.marshal_with(SPONSOR)\n def delete(self, event_id, sponsor_id):\n \"\"\"Delete a sponsor given its id\"\"\"\n return DAO.delete(event_id, sponsor_id)\n\n @requires_auth\n @api.doc('update_sponsor', responses=PUT_RESPONSES)\n @api.marshal_with(SPONSOR)\n @api.expect(SPONSOR_POST)\n def put(self, event_id, sponsor_id):\n \"\"\"Update a sponsor given its id\"\"\"\n return DAO.update(event_id, sponsor_id, self.api.payload)\n\n\[email protected]('/events/<int:event_id>/sponsors')\nclass SponsorList(Resource):\n @api.doc('list_sponsors')\n @api.marshal_list_with(SPONSOR)\n def get(self, event_id):\n \"\"\"List all sponsors\"\"\"\n return DAO.list(event_id)\n\n @requires_auth\n @api.doc('create_sponsor', responses=POST_RESPONSES)\n @api.marshal_with(SPONSOR)\n @api.expect(SPONSOR_POST)\n def post(self, event_id):\n \"\"\"Create a sponsor\"\"\"\n return DAO.create(\n event_id,\n self.api.payload,\n self.api.url_for(self, event_id=event_id)\n )\n\n\[email protected]('/events/<int:event_id>/sponsors/types')\nclass SponsorTypesList(Resource):\n @api.doc('list_sponsor_types')\n def get(self, event_id):\n \"\"\"List all sponsor types\"\"\"\n return DAO.list_types(event_id)\n\n\[email protected]('/events/<int:event_id>/sponsors/page')\nclass SponsorListPaginated(Resource, 
PaginatedResourceBase):\n @api.doc('list_sponsors_paginated', params=PAGE_PARAMS)\n @api.marshal_with(SPONSOR_PAGINATED)\n def get(self, event_id):\n \"\"\"List sponsors in a paginated manner\"\"\"\n return get_paginated_list(\n SponsorModel,\n self.api.url_for(self, event_id=event_id),\n args=self.parser.parse_args(),\n event_id=event_id\n )\n", "path": "open_event/api/sponsors.py"}], "after_files": [{"content": "from flask.ext.restplus import Resource, Namespace\n\nfrom open_event.models.sponsor import Sponsor as SponsorModel\n\nfrom .helpers.helpers import get_paginated_list, requires_auth, get_object_in_event\nfrom .helpers.utils import PAGINATED_MODEL, PaginatedResourceBase, ServiceDAO, \\\n PAGE_PARAMS, POST_RESPONSES, PUT_RESPONSES\nfrom .helpers import custom_fields as fields\n\napi = Namespace('sponsors', description='Sponsors', path='/')\n\nSPONSOR = api.model('Sponsor', {\n 'id': fields.Integer(required=True),\n 'name': fields.String(),\n 'url': fields.Uri(),\n 'logo': fields.ImageUri(),\n 'description': fields.String(),\n 'level': fields.String(),\n 'sponsor_type': fields.String(),\n})\n\nSPONSOR_PAGINATED = api.clone('SponsorPaginated', PAGINATED_MODEL, {\n 'results': fields.List(fields.Nested(SPONSOR))\n})\n\nSPONSOR_POST = api.clone('SponsorPost', SPONSOR)\ndel SPONSOR_POST['id']\n\n\n# Create DAO\nclass SponsorDAO(ServiceDAO):\n def list_types(self, event_id):\n sponsors = self.list(event_id)\n return list(set(\n sponsor.sponsor_type for sponsor in sponsors\n if sponsor.sponsor_type))\n\n\nDAO = SponsorDAO(SponsorModel, SPONSOR_POST)\n\n\[email protected]('/events/<int:event_id>/sponsors/<int:sponsor_id>')\[email protected](404, 'Sponsor not found')\[email protected](400, 'Sponsor does not belong to event')\nclass Sponsor(Resource):\n @api.doc('get_sponsor')\n @api.marshal_with(SPONSOR)\n def get(self, event_id, sponsor_id):\n \"\"\"Fetch a sponsor given its id\"\"\"\n return DAO.get(event_id, sponsor_id)\n\n @requires_auth\n @api.doc('delete_sponsor')\n @api.marshal_with(SPONSOR)\n def delete(self, event_id, sponsor_id):\n \"\"\"Delete a sponsor given its id\"\"\"\n return DAO.delete(event_id, sponsor_id)\n\n @requires_auth\n @api.doc('update_sponsor', responses=PUT_RESPONSES)\n @api.marshal_with(SPONSOR)\n @api.expect(SPONSOR_POST)\n def put(self, event_id, sponsor_id):\n \"\"\"Update a sponsor given its id\"\"\"\n return DAO.update(event_id, sponsor_id, self.api.payload)\n\n\[email protected]('/events/<int:event_id>/sponsors')\nclass SponsorList(Resource):\n @api.doc('list_sponsors')\n @api.marshal_list_with(SPONSOR)\n def get(self, event_id):\n \"\"\"List all sponsors\"\"\"\n return DAO.list(event_id)\n\n @requires_auth\n @api.doc('create_sponsor', responses=POST_RESPONSES)\n @api.marshal_with(SPONSOR)\n @api.expect(SPONSOR_POST)\n def post(self, event_id):\n \"\"\"Create a sponsor\"\"\"\n return DAO.create(\n event_id,\n self.api.payload,\n self.api.url_for(self, event_id=event_id)\n )\n\n\[email protected]('/events/<int:event_id>/sponsors/types')\nclass SponsorTypesList(Resource):\n @api.doc('list_sponsor_types', model=[fields.String()])\n def get(self, event_id):\n \"\"\"List all sponsor types\"\"\"\n return DAO.list_types(event_id)\n\n\[email protected]('/events/<int:event_id>/sponsors/page')\nclass SponsorListPaginated(Resource, PaginatedResourceBase):\n @api.doc('list_sponsors_paginated', params=PAGE_PARAMS)\n @api.marshal_with(SPONSOR_PAGINATED)\n def get(self, event_id):\n \"\"\"List sponsors in a paginated manner\"\"\"\n return get_paginated_list(\n 
SponsorModel,\n self.api.url_for(self, event_id=event_id),\n args=self.parser.parse_args(),\n event_id=event_id\n )\n", "path": "open_event/api/sponsors.py"}]}
| 1,415 | 119 |
gh_patches_debug_14502
|
rasdani/github-patches
|
git_diff
|
conan-io__conan-3839
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Conan doesn't keep the username to log to server anymore
From conan 1.8,
When an authentication is required by the conan server, the username is now always asked even though it was specified by conan user. In older versions, only the password was required.
To reproduce:
```
$ conan user -c
$ conan user username
Changed user of remote 'server' from 'None' (anonymous) to 'username'
$ conan search -r server *
Please log in to "server" to perform this action. Execute "conan user" command.
Remote 'server' username:
```
To help us debug your issue please explain:
- [X] I've read the [CONTRIBUTING guide](https://raw.githubusercontent.com/conan-io/conan/develop/.github/CONTRIBUTING.md).
- [X] I've specified the Conan version, operating system version and any tool that can be relevant.
- [X] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conans/client/userio.py`
Content:
```
1 import os
2 import sys
3 from conans.client.output import ConanOutput
4 from conans.errors import InvalidNameException, ConanException
5 import getpass
6 from six.moves import input as raw_input
7
8
9 class UserIO(object):
10 """Class to interact with the user, used to show messages and ask for information"""
11
12 def __init__(self, ins=sys.stdin, out=None):
13 """
14 Params:
15 ins: input stream
16 out: ConanOutput, should have "write" method
17 """
18 self._ins = ins
19 if not out:
20 out = ConanOutput(sys.stdout)
21 self.out = out
22 self._interactive = True
23
24 def disable_input(self):
25 self._interactive = False
26
27 def _raise_if_non_interactive(self):
28 if not self._interactive:
29 raise ConanException("Conan interactive mode disabled")
30
31 def raw_input(self):
32 self._raise_if_non_interactive()
33 return raw_input()
34
35 def get_pass(self):
36 self._raise_if_non_interactive()
37 return getpass.getpass("")
38
39 def request_login(self, remote_name, username=None):
40 """Request user to input their name and password
41 :param username If username is specified it only request password"""
42 if self._interactive:
43 self.out.write("Remote '%s' username: " % remote_name)
44 username = self.get_username(remote_name)
45
46 if self._interactive:
47 self.out.write('Please enter a password for "%s" account: ' % username)
48 try:
49 pwd = self.get_password(remote_name)
50 except ConanException:
51 raise
52 except Exception as e:
53 raise ConanException('Cancelled pass %s' % e)
54 return username, pwd
55
56 def get_username(self, remote_name):
57 """Overridable for testing purpose"""
58 return self._get_env_username(remote_name) or self.raw_input()
59
60 def get_password(self, remote_name):
61 """Overridable for testing purpose"""
62 return self._get_env_password(remote_name) or self.get_pass()
63
64 def request_string(self, msg, default_value=None):
65 """Request user to input a msg
66 :param msg Name of the msg
67 """
68 self._raise_if_non_interactive()
69
70 if default_value:
71 self.out.input_text('%s (%s): ' % (msg, default_value))
72 else:
73 self.out.input_text('%s: ' % msg)
74 s = self._ins.readline().replace("\n", "")
75 if default_value is not None and s == '':
76 return default_value
77 return s
78
79 def request_boolean(self, msg, default_option=None):
80 """Request user to input a boolean"""
81 ret = None
82 while ret is None:
83 if default_option is True:
84 s = self.request_string("%s (YES/no)" % msg)
85 elif default_option is False:
86 s = self.request_string("%s (NO/yes)" % msg)
87 else:
88 s = self.request_string("%s (yes/no)" % msg)
89 if default_option is not None and s == '':
90 return default_option
91 if s.lower() in ['yes', 'y']:
92 ret = True
93 elif s.lower() in ['no', 'n']:
94 ret = False
95 else:
96 self.out.error("%s is not a valid answer" % s)
97 return ret
98
99 def _get_env_password(self, remote_name):
100 """
101 Try CONAN_PASSWORD_REMOTE_NAME or CONAN_PASSWORD or return None
102 """
103 remote_name = remote_name.replace("-", "_").upper()
104 var_name = "CONAN_PASSWORD_%s" % remote_name
105 ret = os.getenv(var_name, None) or os.getenv("CONAN_PASSWORD", None)
106 if ret:
107 self.out.info("Got password '******' from environment")
108 return ret
109
110 def _get_env_username(self, remote_name):
111 """
112 Try CONAN_LOGIN_USERNAME_REMOTE_NAME or CONAN_LOGIN_USERNAME or return None
113 """
114 remote_name = remote_name.replace("-", "_").upper()
115 var_name = "CONAN_LOGIN_USERNAME_%s" % remote_name
116 ret = os.getenv(var_name, None) or os.getenv("CONAN_LOGIN_USERNAME", None)
117
118 if ret:
119 self.out.info("Got username '%s' from environment" % ret)
120 return ret
121
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/conans/client/userio.py b/conans/client/userio.py
--- a/conans/client/userio.py
+++ b/conans/client/userio.py
@@ -39,9 +39,11 @@
def request_login(self, remote_name, username=None):
"""Request user to input their name and password
:param username If username is specified it only request password"""
- if self._interactive:
- self.out.write("Remote '%s' username: " % remote_name)
- username = self.get_username(remote_name)
+
+ if not username:
+ if self._interactive:
+ self.out.write("Remote '%s' username: " % remote_name)
+ username = self.get_username(remote_name)
if self._interactive:
self.out.write('Please enter a password for "%s" account: ' % username)
|
{"golden_diff": "diff --git a/conans/client/userio.py b/conans/client/userio.py\n--- a/conans/client/userio.py\n+++ b/conans/client/userio.py\n@@ -39,9 +39,11 @@\n def request_login(self, remote_name, username=None):\n \"\"\"Request user to input their name and password\n :param username If username is specified it only request password\"\"\"\n- if self._interactive:\n- self.out.write(\"Remote '%s' username: \" % remote_name)\n- username = self.get_username(remote_name)\n+\n+ if not username:\n+ if self._interactive:\n+ self.out.write(\"Remote '%s' username: \" % remote_name)\n+ username = self.get_username(remote_name)\n \n if self._interactive:\n self.out.write('Please enter a password for \"%s\" account: ' % username)\n", "issue": "Conan doesn't keep the username to log to server anymore\nFrom conan 1.8,\r\n\r\nWhen an authentication is required by the conan server, the username is now always asked event though it was specified by conan user. In older version, only the password was required.\r\n\r\nTo reproduce:\r\n```\r\n$ conan user -c\r\n$ conan user username\r\nChanged user of remote 'server' from 'None' (anonymous) to 'username'\r\n$ conan search -r server *\r\nPlease log in to \"server\" to perform this action. Execute \"conan user\" command.\r\nRemote 'server' username:\r\n```\r\n\r\nTo help us debug your issue please explain:\r\n\r\n- [X] I've read the [CONTRIBUTING guide](https://raw.githubusercontent.com/conan-io/conan/develop/.github/CONTRIBUTING.md).\r\n- [X] I've specified the Conan version, operating system version and any tool that can be relevant.\r\n- [X] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.\r\n\r\n\n", "before_files": [{"content": "import os\nimport sys\nfrom conans.client.output import ConanOutput\nfrom conans.errors import InvalidNameException, ConanException\nimport getpass\nfrom six.moves import input as raw_input\n\n\nclass UserIO(object):\n \"\"\"Class to interact with the user, used to show messages and ask for information\"\"\"\n\n def __init__(self, ins=sys.stdin, out=None):\n \"\"\"\n Params:\n ins: input stream\n out: ConanOutput, should have \"write\" method\n \"\"\"\n self._ins = ins\n if not out:\n out = ConanOutput(sys.stdout)\n self.out = out\n self._interactive = True\n\n def disable_input(self):\n self._interactive = False\n\n def _raise_if_non_interactive(self):\n if not self._interactive:\n raise ConanException(\"Conan interactive mode disabled\")\n\n def raw_input(self):\n self._raise_if_non_interactive()\n return raw_input()\n\n def get_pass(self):\n self._raise_if_non_interactive()\n return getpass.getpass(\"\")\n\n def request_login(self, remote_name, username=None):\n \"\"\"Request user to input their name and password\n :param username If username is specified it only request password\"\"\"\n if self._interactive:\n self.out.write(\"Remote '%s' username: \" % remote_name)\n username = self.get_username(remote_name)\n\n if self._interactive:\n self.out.write('Please enter a password for \"%s\" account: ' % username)\n try:\n pwd = self.get_password(remote_name)\n except ConanException:\n raise\n except Exception as e:\n raise ConanException('Cancelled pass %s' % e)\n return username, pwd\n\n def get_username(self, remote_name):\n \"\"\"Overridable for testing purpose\"\"\"\n return self._get_env_username(remote_name) or self.raw_input()\n\n def get_password(self, remote_name):\n \"\"\"Overridable for testing purpose\"\"\"\n return self._get_env_password(remote_name) or 
self.get_pass()\n\n def request_string(self, msg, default_value=None):\n \"\"\"Request user to input a msg\n :param msg Name of the msg\n \"\"\"\n self._raise_if_non_interactive()\n\n if default_value:\n self.out.input_text('%s (%s): ' % (msg, default_value))\n else:\n self.out.input_text('%s: ' % msg)\n s = self._ins.readline().replace(\"\\n\", \"\")\n if default_value is not None and s == '':\n return default_value\n return s\n\n def request_boolean(self, msg, default_option=None):\n \"\"\"Request user to input a boolean\"\"\"\n ret = None\n while ret is None:\n if default_option is True:\n s = self.request_string(\"%s (YES/no)\" % msg)\n elif default_option is False:\n s = self.request_string(\"%s (NO/yes)\" % msg)\n else:\n s = self.request_string(\"%s (yes/no)\" % msg)\n if default_option is not None and s == '':\n return default_option\n if s.lower() in ['yes', 'y']:\n ret = True\n elif s.lower() in ['no', 'n']:\n ret = False\n else:\n self.out.error(\"%s is not a valid answer\" % s)\n return ret\n\n def _get_env_password(self, remote_name):\n \"\"\"\n Try CONAN_PASSWORD_REMOTE_NAME or CONAN_PASSWORD or return None\n \"\"\"\n remote_name = remote_name.replace(\"-\", \"_\").upper()\n var_name = \"CONAN_PASSWORD_%s\" % remote_name\n ret = os.getenv(var_name, None) or os.getenv(\"CONAN_PASSWORD\", None)\n if ret:\n self.out.info(\"Got password '******' from environment\")\n return ret\n\n def _get_env_username(self, remote_name):\n \"\"\"\n Try CONAN_LOGIN_USERNAME_REMOTE_NAME or CONAN_LOGIN_USERNAME or return None\n \"\"\"\n remote_name = remote_name.replace(\"-\", \"_\").upper()\n var_name = \"CONAN_LOGIN_USERNAME_%s\" % remote_name\n ret = os.getenv(var_name, None) or os.getenv(\"CONAN_LOGIN_USERNAME\", None)\n\n if ret:\n self.out.info(\"Got username '%s' from environment\" % ret)\n return ret\n", "path": "conans/client/userio.py"}], "after_files": [{"content": "import os\nimport sys\nfrom conans.client.output import ConanOutput\nfrom conans.errors import InvalidNameException, ConanException\nimport getpass\nfrom six.moves import input as raw_input\n\n\nclass UserIO(object):\n \"\"\"Class to interact with the user, used to show messages and ask for information\"\"\"\n\n def __init__(self, ins=sys.stdin, out=None):\n \"\"\"\n Params:\n ins: input stream\n out: ConanOutput, should have \"write\" method\n \"\"\"\n self._ins = ins\n if not out:\n out = ConanOutput(sys.stdout)\n self.out = out\n self._interactive = True\n\n def disable_input(self):\n self._interactive = False\n\n def _raise_if_non_interactive(self):\n if not self._interactive:\n raise ConanException(\"Conan interactive mode disabled\")\n\n def raw_input(self):\n self._raise_if_non_interactive()\n return raw_input()\n\n def get_pass(self):\n self._raise_if_non_interactive()\n return getpass.getpass(\"\")\n\n def request_login(self, remote_name, username=None):\n \"\"\"Request user to input their name and password\n :param username If username is specified it only request password\"\"\"\n\n if not username:\n if self._interactive:\n self.out.write(\"Remote '%s' username: \" % remote_name)\n username = self.get_username(remote_name)\n\n if self._interactive:\n self.out.write('Please enter a password for \"%s\" account: ' % username)\n try:\n pwd = self.get_password(remote_name)\n except ConanException:\n raise\n except Exception as e:\n raise ConanException('Cancelled pass %s' % e)\n return username, pwd\n\n def get_username(self, remote_name):\n \"\"\"Overridable for testing purpose\"\"\"\n return 
self._get_env_username(remote_name) or self.raw_input()\n\n def get_password(self, remote_name):\n \"\"\"Overridable for testing purpose\"\"\"\n return self._get_env_password(remote_name) or self.get_pass()\n\n def request_string(self, msg, default_value=None):\n \"\"\"Request user to input a msg\n :param msg Name of the msg\n \"\"\"\n self._raise_if_non_interactive()\n\n if default_value:\n self.out.input_text('%s (%s): ' % (msg, default_value))\n else:\n self.out.input_text('%s: ' % msg)\n s = self._ins.readline().replace(\"\\n\", \"\")\n if default_value is not None and s == '':\n return default_value\n return s\n\n def request_boolean(self, msg, default_option=None):\n \"\"\"Request user to input a boolean\"\"\"\n ret = None\n while ret is None:\n if default_option is True:\n s = self.request_string(\"%s (YES/no)\" % msg)\n elif default_option is False:\n s = self.request_string(\"%s (NO/yes)\" % msg)\n else:\n s = self.request_string(\"%s (yes/no)\" % msg)\n if default_option is not None and s == '':\n return default_option\n if s.lower() in ['yes', 'y']:\n ret = True\n elif s.lower() in ['no', 'n']:\n ret = False\n else:\n self.out.error(\"%s is not a valid answer\" % s)\n return ret\n\n def _get_env_password(self, remote_name):\n \"\"\"\n Try CONAN_PASSWORD_REMOTE_NAME or CONAN_PASSWORD or return None\n \"\"\"\n remote_name = remote_name.replace(\"-\", \"_\").upper()\n var_name = \"CONAN_PASSWORD_%s\" % remote_name\n ret = os.getenv(var_name, None) or os.getenv(\"CONAN_PASSWORD\", None)\n if ret:\n self.out.info(\"Got password '******' from environment\")\n return ret\n\n def _get_env_username(self, remote_name):\n \"\"\"\n Try CONAN_LOGIN_USERNAME_REMOTE_NAME or CONAN_LOGIN_USERNAME or return None\n \"\"\"\n remote_name = remote_name.replace(\"-\", \"_\").upper()\n var_name = \"CONAN_LOGIN_USERNAME_%s\" % remote_name\n ret = os.getenv(var_name, None) or os.getenv(\"CONAN_LOGIN_USERNAME\", None)\n\n if ret:\n self.out.info(\"Got username '%s' from environment\" % ret)\n return ret\n", "path": "conans/client/userio.py"}]}
| 1,659 | 186 |
gh_patches_debug_3843
|
rasdani/github-patches
|
git_diff
|
jazzband__pip-tools-1105
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
--upgrade-package downgrades unrelated pre-release package when --pre not given
<!-- Describe the issue briefly here. -->
#### Environment Versions
1. OS Type: macOS 10.15.4
1. Python version: 3.7.7
1. pip version: 20.0.2
1. pip-tools version: 4.5.1
#### Steps to replicate
(Note: this example will stop working when `gevent` releases 1.5 final but it can be replicated with any other package that currently has a pre-release version.)
1. Example `req.in` file:
```
click<7
gevent
```
2. `pip-compile req.in`
Output:
```
#
# This file is autogenerated by pip-compile
# To update, run:
#
# pip-compile req.in
#
click==6.7 # via -r req.in
gevent==1.4.0 # via -r req.in
greenlet==0.4.15 # via gevent
```
3. Upgrade gevent to pre-relese
`pip-compile --pre --upgrade-package gevent req.in`
Output:
```
#
# This file is autogenerated by pip-compile
# To update, run:
#
# pip-compile --pre req.in
#
click==6.7 # via -r req.in
gevent==1.5a4 # via -r req.in
greenlet==0.4.15 # via gevent
```
4. Remove version pin of `click` in `.in` file:
```
click
gevent
```
5. Upgrade click:
`pip-compile --upgrade-package click req.in`
Output:
```
#
# This file is autogenerated by pip-compile
# To update, run:
#
# pip-compile req.in
#
click==6.7 # via -r req.in
gevent==1.4.0 # via -r req.in
greenlet==0.4.15 # via gevent
```
#### Expected result
Once a package has been resolved to a pre-release version, it should never "magically" be downgraded, especially if only unrelated other packages are concerned.
I could see that there may be an argument for a plain `pip-compile` run to revert to the non-prerelease version, but I would disagree even there. But for `--upgrade-package` I see no way where this is correct behaviour.
#### Actual result
Not giving `--pre` at any time after it has been used once and a package is resolved to a pre-release version will downgrade it back to the last released version.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `piptools/repositories/local.py`
Content:
```
1 # coding: utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 from contextlib import contextmanager
5
6 from pip._internal.utils.hashes import FAVORITE_HASH
7
8 from .._compat import PIP_VERSION
9 from .base import BaseRepository
10
11 from piptools.utils import as_tuple, key_from_ireq, make_install_requirement
12
13
14 def ireq_satisfied_by_existing_pin(ireq, existing_pin):
15 """
16 Return True if the given InstallationRequirement is satisfied by the
17 previously encountered version pin.
18 """
19 version = next(iter(existing_pin.req.specifier)).version
20 return version in ireq.req.specifier
21
22
23 class LocalRequirementsRepository(BaseRepository):
24 """
25 The LocalRequirementsRepository proxied the _real_ repository by first
26 checking if a requirement can be satisfied by existing pins (i.e. the
27 result of a previous compile step).
28
29 In effect, if a requirement can be satisfied with a version pinned in the
30 requirements file, we prefer that version over the best match found in
31 PyPI. This keeps updates to the requirements.txt down to a minimum.
32 """
33
34 def __init__(self, existing_pins, proxied_repository):
35 self.repository = proxied_repository
36 self.existing_pins = existing_pins
37
38 @property
39 def options(self):
40 return self.repository.options
41
42 @property
43 def finder(self):
44 return self.repository.finder
45
46 @property
47 def session(self):
48 return self.repository.session
49
50 @property
51 def DEFAULT_INDEX_URL(self):
52 return self.repository.DEFAULT_INDEX_URL
53
54 def clear_caches(self):
55 self.repository.clear_caches()
56
57 def freshen_build_caches(self):
58 self.repository.freshen_build_caches()
59
60 def find_best_match(self, ireq, prereleases=None):
61 key = key_from_ireq(ireq)
62 existing_pin = self.existing_pins.get(key)
63 if existing_pin and ireq_satisfied_by_existing_pin(ireq, existing_pin):
64 project, version, _ = as_tuple(existing_pin)
65 return make_install_requirement(
66 project, version, ireq.extras, constraint=ireq.constraint
67 )
68 else:
69 return self.repository.find_best_match(ireq, prereleases)
70
71 def get_dependencies(self, ireq):
72 return self.repository.get_dependencies(ireq)
73
74 def get_hashes(self, ireq):
75 key = key_from_ireq(ireq)
76 existing_pin = self.existing_pins.get(key)
77 if existing_pin and ireq_satisfied_by_existing_pin(ireq, existing_pin):
78 if PIP_VERSION[:2] <= (20, 0):
79 hashes = existing_pin.options.get("hashes", {})
80 else:
81 hashes = existing_pin.hash_options
82 hexdigests = hashes.get(FAVORITE_HASH)
83 if hexdigests:
84 return {
85 ":".join([FAVORITE_HASH, hexdigest]) for hexdigest in hexdigests
86 }
87 return self.repository.get_hashes(ireq)
88
89 @contextmanager
90 def allow_all_wheels(self):
91 with self.repository.allow_all_wheels():
92 yield
93
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/piptools/repositories/local.py b/piptools/repositories/local.py
--- a/piptools/repositories/local.py
+++ b/piptools/repositories/local.py
@@ -17,7 +17,9 @@
previously encountered version pin.
"""
version = next(iter(existing_pin.req.specifier)).version
- return version in ireq.req.specifier
+ return ireq.req.specifier.contains(
+ version, prereleases=existing_pin.req.specifier.prereleases
+ )
class LocalRequirementsRepository(BaseRepository):
|
{"golden_diff": "diff --git a/piptools/repositories/local.py b/piptools/repositories/local.py\n--- a/piptools/repositories/local.py\n+++ b/piptools/repositories/local.py\n@@ -17,7 +17,9 @@\n previously encountered version pin.\n \"\"\"\n version = next(iter(existing_pin.req.specifier)).version\n- return version in ireq.req.specifier\n+ return ireq.req.specifier.contains(\n+ version, prereleases=existing_pin.req.specifier.prereleases\n+ )\n \n \n class LocalRequirementsRepository(BaseRepository):\n", "issue": "--upgrade-package downgrades unrelated pre-release package when --pre not given\n<!-- Describe the issue briefly here. -->\r\n\r\n#### Environment Versions\r\n\r\n1. OS Type: macOS 10.15.4\r\n1. Python version: 3.7.7\r\n1. pip version: 20.0.2\r\n1. pip-tools version: 4.5.1\r\n\r\n#### Steps to replicate\r\n\r\n(Note: this example will stop working when `gevent` releases 1.5 final but it can be replicated with any other package that currently has a pre-release version.)\r\n\r\n1. Example `req.in` file:\r\n ```\r\n click<7\r\n gevent\r\n ```\r\n2. `pip-compile req.in`\r\n Output:\r\n ```\r\n #\r\n # This file is autogenerated by pip-compile\r\n # To update, run:\r\n #\r\n # pip-compile req.in\r\n #\r\n click==6.7 # via -r req.in\r\n gevent==1.4.0 # via -r req.in\r\n greenlet==0.4.15 # via gevent\r\n ```\r\n3. Upgrade gevent to pre-relese\r\n `pip-compile --pre --upgrade-package gevent req.in`\r\n Output:\r\n ```\r\n #\r\n # This file is autogenerated by pip-compile\r\n # To update, run:\r\n #\r\n # pip-compile --pre req.in\r\n #\r\n click==6.7 # via -r req.in\r\n gevent==1.5a4 # via -r req.in\r\n greenlet==0.4.15 # via gevent\r\n ```\r\n4. Remove version pin of `click` in `.in` file:\r\n ```\r\n click\r\n gevent\r\n ```\r\n5. Upgrade click:\r\n `pip-compile --upgrade-package click req.in`\r\n Output:\r\n ```\r\n #\r\n # This file is autogenerated by pip-compile\r\n # To update, run:\r\n #\r\n # pip-compile req.in\r\n #\r\n click==6.7 # via -r req.in\r\n gevent==1.4.0 # via -r req.in\r\n greenlet==0.4.15 # via gevent\r\n ```\r\n\r\n#### Expected result\r\n\r\nOnce a package has been resolved to a pre-release version it should never \"magically\" be downgraded. Especially if only unrelated other packages are concerned.\r\n\r\nI could see that there may be an argument for a plain `pip-compile` run to revert to the non-prerelease version, but I would disagree even there. 
But for `--upgrade-package` I see no way where this is correct behaviour.\r\n\r\n#### Actual result\r\n\r\nNot giving `--pre` at any time after it has been used once and a package is resolved to a pre-release version will downgrade it back to the last released version.\n", "before_files": [{"content": "# coding: utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom contextlib import contextmanager\n\nfrom pip._internal.utils.hashes import FAVORITE_HASH\n\nfrom .._compat import PIP_VERSION\nfrom .base import BaseRepository\n\nfrom piptools.utils import as_tuple, key_from_ireq, make_install_requirement\n\n\ndef ireq_satisfied_by_existing_pin(ireq, existing_pin):\n \"\"\"\n Return True if the given InstallationRequirement is satisfied by the\n previously encountered version pin.\n \"\"\"\n version = next(iter(existing_pin.req.specifier)).version\n return version in ireq.req.specifier\n\n\nclass LocalRequirementsRepository(BaseRepository):\n \"\"\"\n The LocalRequirementsRepository proxied the _real_ repository by first\n checking if a requirement can be satisfied by existing pins (i.e. the\n result of a previous compile step).\n\n In effect, if a requirement can be satisfied with a version pinned in the\n requirements file, we prefer that version over the best match found in\n PyPI. This keeps updates to the requirements.txt down to a minimum.\n \"\"\"\n\n def __init__(self, existing_pins, proxied_repository):\n self.repository = proxied_repository\n self.existing_pins = existing_pins\n\n @property\n def options(self):\n return self.repository.options\n\n @property\n def finder(self):\n return self.repository.finder\n\n @property\n def session(self):\n return self.repository.session\n\n @property\n def DEFAULT_INDEX_URL(self):\n return self.repository.DEFAULT_INDEX_URL\n\n def clear_caches(self):\n self.repository.clear_caches()\n\n def freshen_build_caches(self):\n self.repository.freshen_build_caches()\n\n def find_best_match(self, ireq, prereleases=None):\n key = key_from_ireq(ireq)\n existing_pin = self.existing_pins.get(key)\n if existing_pin and ireq_satisfied_by_existing_pin(ireq, existing_pin):\n project, version, _ = as_tuple(existing_pin)\n return make_install_requirement(\n project, version, ireq.extras, constraint=ireq.constraint\n )\n else:\n return self.repository.find_best_match(ireq, prereleases)\n\n def get_dependencies(self, ireq):\n return self.repository.get_dependencies(ireq)\n\n def get_hashes(self, ireq):\n key = key_from_ireq(ireq)\n existing_pin = self.existing_pins.get(key)\n if existing_pin and ireq_satisfied_by_existing_pin(ireq, existing_pin):\n if PIP_VERSION[:2] <= (20, 0):\n hashes = existing_pin.options.get(\"hashes\", {})\n else:\n hashes = existing_pin.hash_options\n hexdigests = hashes.get(FAVORITE_HASH)\n if hexdigests:\n return {\n \":\".join([FAVORITE_HASH, hexdigest]) for hexdigest in hexdigests\n }\n return self.repository.get_hashes(ireq)\n\n @contextmanager\n def allow_all_wheels(self):\n with self.repository.allow_all_wheels():\n yield\n", "path": "piptools/repositories/local.py"}], "after_files": [{"content": "# coding: utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom contextlib import contextmanager\n\nfrom pip._internal.utils.hashes import FAVORITE_HASH\n\nfrom .._compat import PIP_VERSION\nfrom .base import BaseRepository\n\nfrom piptools.utils import as_tuple, key_from_ireq, make_install_requirement\n\n\ndef ireq_satisfied_by_existing_pin(ireq, 
existing_pin):\n \"\"\"\n Return True if the given InstallationRequirement is satisfied by the\n previously encountered version pin.\n \"\"\"\n version = next(iter(existing_pin.req.specifier)).version\n return ireq.req.specifier.contains(\n version, prereleases=existing_pin.req.specifier.prereleases\n )\n\n\nclass LocalRequirementsRepository(BaseRepository):\n \"\"\"\n The LocalRequirementsRepository proxied the _real_ repository by first\n checking if a requirement can be satisfied by existing pins (i.e. the\n result of a previous compile step).\n\n In effect, if a requirement can be satisfied with a version pinned in the\n requirements file, we prefer that version over the best match found in\n PyPI. This keeps updates to the requirements.txt down to a minimum.\n \"\"\"\n\n def __init__(self, existing_pins, proxied_repository):\n self.repository = proxied_repository\n self.existing_pins = existing_pins\n\n @property\n def options(self):\n return self.repository.options\n\n @property\n def finder(self):\n return self.repository.finder\n\n @property\n def session(self):\n return self.repository.session\n\n @property\n def DEFAULT_INDEX_URL(self):\n return self.repository.DEFAULT_INDEX_URL\n\n def clear_caches(self):\n self.repository.clear_caches()\n\n def freshen_build_caches(self):\n self.repository.freshen_build_caches()\n\n def find_best_match(self, ireq, prereleases=None):\n key = key_from_ireq(ireq)\n existing_pin = self.existing_pins.get(key)\n if existing_pin and ireq_satisfied_by_existing_pin(ireq, existing_pin):\n project, version, _ = as_tuple(existing_pin)\n return make_install_requirement(\n project, version, ireq.extras, constraint=ireq.constraint\n )\n else:\n return self.repository.find_best_match(ireq, prereleases)\n\n def get_dependencies(self, ireq):\n return self.repository.get_dependencies(ireq)\n\n def get_hashes(self, ireq):\n key = key_from_ireq(ireq)\n existing_pin = self.existing_pins.get(key)\n if existing_pin and ireq_satisfied_by_existing_pin(ireq, existing_pin):\n if PIP_VERSION[:2] <= (20, 0):\n hashes = existing_pin.options.get(\"hashes\", {})\n else:\n hashes = existing_pin.hash_options\n hexdigests = hashes.get(FAVORITE_HASH)\n if hexdigests:\n return {\n \":\".join([FAVORITE_HASH, hexdigest]) for hexdigest in hexdigests\n }\n return self.repository.get_hashes(ireq)\n\n @contextmanager\n def allow_all_wheels(self):\n with self.repository.allow_all_wheels():\n yield\n", "path": "piptools/repositories/local.py"}]}
| 1,749 | 121 |
gh_patches_debug_34763
|
rasdani/github-patches
|
git_diff
|
DataBiosphere__toil-3581
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add type hints to resources.py
Add type hints to src/toil/lib/encryption/resources.py so it can be checked under mypy during linting.
┆Issue is synchronized with this [Jira Task](https://ucsc-cgl.atlassian.net/browse/TOIL-885)
┆Issue Number: TOIL-885
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/toil/lib/resources.py`
Content:
```
1 # Copyright (C) 2015-2021 Regents of the University of California
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import fnmatch
15 import os
16 import resource
17
18 from typing import List
19
20
21 def get_total_cpu_time_and_memory_usage():
22 """
23 Gives the total cpu time of itself and all its children, and the maximum RSS memory usage of
24 itself and its single largest child.
25 """
26 me = resource.getrusage(resource.RUSAGE_SELF)
27 children = resource.getrusage(resource.RUSAGE_CHILDREN)
28 total_cpu_time = me.ru_utime + me.ru_stime + children.ru_utime + children.ru_stime
29 total_memory_usage = me.ru_maxrss + children.ru_maxrss
30 return total_cpu_time, total_memory_usage
31
32
33 def get_total_cpu_time():
34 """Gives the total cpu time, including the children."""
35 me = resource.getrusage(resource.RUSAGE_SELF)
36 childs = resource.getrusage(resource.RUSAGE_CHILDREN)
37 return me.ru_utime + me.ru_stime + childs.ru_utime + childs.ru_stime
38
39
40 def glob(glob_pattern: str, directoryname: str) -> List[str]:
41 """
42 Walks through a directory and its subdirectories looking for files matching
43 the glob_pattern and returns a list=[].
44
45 :param directoryname: Any accessible folder name on the filesystem.
46 :param glob_pattern: A string like "*.txt", which would find all text files.
47 :return: A list=[] of absolute filepaths matching the glob pattern.
48 """
49 matches = []
50 for root, dirnames, filenames in os.walk(directoryname):
51 for filename in fnmatch.filter(filenames, glob_pattern):
52 absolute_filepath = os.path.join(root, filename)
53 matches.append(absolute_filepath)
54 return matches
55
```
Path: `contrib/admin/mypy-with-ignore.py`
Content:
```
1 #!/usr/bin/env python3
2 """
3 Runs mypy and ignores files that do not yet have passing type hints.
4
5 Does not type check test files (any path including "src/toil/test").
6 """
7 import os
8 import subprocess
9 import sys
10
11 pkg_root = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..')) # noqa
12 sys.path.insert(0, pkg_root) # noqa
13
14 from src.toil.lib.resources import glob # type: ignore
15
16
17 def main():
18 all_files_to_check = []
19 for d in ['dashboard', 'docker', 'docs', 'src']:
20 all_files_to_check += glob(glob_pattern='*.py', directoryname=os.path.join(pkg_root, d))
21
22 # TODO: Remove these paths as typing is added and mypy conflicts are addressed
23 ignore_paths = [os.path.abspath(f) for f in [
24 'docker/Dockerfile.py',
25 'docs/conf.py',
26 'docs/vendor/sphinxcontrib/fulltoc.py',
27 'docs/vendor/sphinxcontrib/__init__.py',
28 'src/toil/job.py',
29 'src/toil/leader.py',
30 'src/toil/statsAndLogging.py',
31 'src/toil/common.py',
32 'src/toil/realtimeLogger.py',
33 'src/toil/worker.py',
34 'src/toil/serviceManager.py',
35 'src/toil/toilState.py',
36 'src/toil/__init__.py',
37 'src/toil/resource.py',
38 'src/toil/deferred.py',
39 'src/toil/version.py',
40 'src/toil/wdl/utils.py',
41 'src/toil/wdl/wdl_types.py',
42 'src/toil/wdl/wdl_synthesis.py',
43 # 'src/toil/wdl/__init__.py',
44 'src/toil/wdl/wdl_analysis.py',
45 'src/toil/wdl/wdl_functions.py',
46 'src/toil/wdl/toilwdl.py',
47 'src/toil/wdl/versions/draft2.py',
48 'src/toil/wdl/versions/v1.py',
49 # 'src/toil/wdl/versions/__init__.py',
50 'src/toil/wdl/versions/dev.py',
51 'src/toil/provisioners/clusterScaler.py',
52 'src/toil/provisioners/abstractProvisioner.py',
53 'src/toil/provisioners/gceProvisioner.py',
54 'src/toil/provisioners/__init__.py',
55 'src/toil/provisioners/node.py',
56 'src/toil/provisioners/aws/boto2Context.py',
57 'src/toil/provisioners/aws/awsProvisioner.py',
58 'src/toil/provisioners/aws/__init__.py',
59 'src/toil/batchSystems/slurm.py',
60 'src/toil/batchSystems/gridengine.py',
61 'src/toil/batchSystems/singleMachine.py',
62 'src/toil/batchSystems/abstractBatchSystem.py',
63 'src/toil/batchSystems/parasol.py',
64 'src/toil/batchSystems/kubernetes.py',
65 'src/toil/batchSystems/torque.py',
66 'src/toil/batchSystems/options.py',
67 'src/toil/batchSystems/registry.py',
68 'src/toil/batchSystems/lsf.py',
69 'src/toil/batchSystems/__init__.py',
70 'src/toil/batchSystems/abstractGridEngineBatchSystem.py',
71 'src/toil/batchSystems/lsfHelper.py',
72 'src/toil/batchSystems/htcondor.py',
73 'src/toil/batchSystems/mesos/batchSystem.py',
74 'src/toil/batchSystems/mesos/executor.py',
75 'src/toil/batchSystems/mesos/conftest.py',
76 'src/toil/batchSystems/mesos/__init__.py',
77 'src/toil/batchSystems/mesos/test/__init__.py',
78 'src/toil/cwl/conftest.py',
79 'src/toil/cwl/__init__.py',
80 'src/toil/cwl/cwltoil.py',
81 'src/toil/fileStores/cachingFileStore.py',
82 'src/toil/fileStores/abstractFileStore.py',
83 'src/toil/fileStores/nonCachingFileStore.py',
84 'src/toil/fileStores/__init__.py',
85 'src/toil/jobStores/utils.py',
86 'src/toil/jobStores/abstractJobStore.py',
87 'src/toil/jobStores/conftest.py',
88 'src/toil/jobStores/fileJobStore.py',
89 'src/toil/jobStores/__init__.py',
90 'src/toil/jobStores/googleJobStore.py',
91 'src/toil/jobStores/aws/utils.py',
92 'src/toil/jobStores/aws/jobStore.py',
93 'src/toil/jobStores/aws/__init__.py',
94 'src/toil/utils/toilDebugFile.py',
95 'src/toil/utils/toilUpdateEC2Instances.py',
96 'src/toil/utils/toilStatus.py',
97 'src/toil/utils/toilStats.py',
98 'src/toil/utils/toilSshCluster.py',
99 'src/toil/utils/toilMain.py',
100 'src/toil/utils/toilKill.py',
101 'src/toil/utils/__init__.py',
102 'src/toil/utils/toilDestroyCluster.py',
103 'src/toil/utils/toilDebugJob.py',
104 'src/toil/utils/toilRsyncCluster.py',
105 'src/toil/utils/toilClean.py',
106 'src/toil/utils/toilLaunchCluster.py',
107 'src/toil/lib/memoize.py',
108 'src/toil/lib/resources.py',
109 'src/toil/lib/throttle.py',
110 'src/toil/lib/humanize.py',
111 'src/toil/lib/compatibility.py',
112 'src/toil/lib/iterables.py',
113 'src/toil/lib/bioio.py',
114 'src/toil/lib/ec2.py',
115 'src/toil/lib/conversions.py',
116 'src/toil/lib/ec2nodes.py',
117 'src/toil/lib/misc.py',
118 'src/toil/lib/expando.py',
119 'src/toil/lib/threading.py',
120 'src/toil/lib/exceptions.py',
121 'src/toil/lib/__init__.py',
122 'src/toil/lib/generatedEC2Lists.py',
123 'src/toil/lib/retry.py',
124 'src/toil/lib/objects.py',
125 'src/toil/lib/io.py',
126 'src/toil/lib/docker.py',
127 'src/toil/lib/encryption/_nacl.py',
128 'src/toil/lib/encryption/_dummy.py',
129 'src/toil/lib/encryption/conftest.py',
130 'src/toil/lib/encryption/__init__.py',
131 'src/toil/lib/aws/utils.py',
132 'src/toil/lib/aws/__init__.py'
133 ]]
134
135 filtered_files_to_check = []
136 for file_path in all_files_to_check:
137 if file_path not in ignore_paths and 'src/toil/test' not in file_path:
138 filtered_files_to_check.append(file_path)
139 # follow-imports type checks pypi projects we don't control, so we skip it; why is this their default?
140 args = ['mypy', '--follow-imports=skip'] + filtered_files_to_check
141 p = subprocess.run(args=args, stdout=subprocess.PIPE)
142 result = p.stdout.decode()
143 print(result)
144 if 'Success: no issues found' not in result:
145 exit(1)
146
147
148 if __name__ == '__main__':
149 main()
150
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/contrib/admin/mypy-with-ignore.py b/contrib/admin/mypy-with-ignore.py
--- a/contrib/admin/mypy-with-ignore.py
+++ b/contrib/admin/mypy-with-ignore.py
@@ -40,13 +40,11 @@
'src/toil/wdl/utils.py',
'src/toil/wdl/wdl_types.py',
'src/toil/wdl/wdl_synthesis.py',
- # 'src/toil/wdl/__init__.py',
'src/toil/wdl/wdl_analysis.py',
'src/toil/wdl/wdl_functions.py',
'src/toil/wdl/toilwdl.py',
'src/toil/wdl/versions/draft2.py',
'src/toil/wdl/versions/v1.py',
- # 'src/toil/wdl/versions/__init__.py',
'src/toil/wdl/versions/dev.py',
'src/toil/provisioners/clusterScaler.py',
'src/toil/provisioners/abstractProvisioner.py',
@@ -105,7 +103,6 @@
'src/toil/utils/toilClean.py',
'src/toil/utils/toilLaunchCluster.py',
'src/toil/lib/memoize.py',
- 'src/toil/lib/resources.py',
'src/toil/lib/throttle.py',
'src/toil/lib/humanize.py',
'src/toil/lib/compatibility.py',
diff --git a/src/toil/lib/resources.py b/src/toil/lib/resources.py
--- a/src/toil/lib/resources.py
+++ b/src/toil/lib/resources.py
@@ -15,10 +15,10 @@
import os
import resource
-from typing import List
+from typing import List, Tuple
-def get_total_cpu_time_and_memory_usage():
+def get_total_cpu_time_and_memory_usage() -> Tuple[float, int]:
"""
Gives the total cpu time of itself and all its children, and the maximum RSS memory usage of
itself and its single largest child.
@@ -30,7 +30,7 @@
return total_cpu_time, total_memory_usage
-def get_total_cpu_time():
+def get_total_cpu_time() -> float:
"""Gives the total cpu time, including the children."""
me = resource.getrusage(resource.RUSAGE_SELF)
childs = resource.getrusage(resource.RUSAGE_CHILDREN)
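For reference, a minimal sketch of how a caller could lean on the new return annotations once this patch is applied; the reporting helper below is illustrative only and not part of Toil itself (it assumes Toil is importable in the environment).

```python
from toil.lib.resources import get_total_cpu_time, get_total_cpu_time_and_memory_usage


def report_usage() -> str:
    # With the patched signatures, mypy can verify this unpacking:
    # Tuple[float, int] -> (total cpu seconds, max RSS as reported by getrusage).
    cpu_seconds, max_rss = get_total_cpu_time_and_memory_usage()
    total_cpu: float = get_total_cpu_time()
    return "cpu=%.2fs/%.2fs max_rss=%d" % (cpu_seconds, total_cpu, max_rss)
```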
|
{"golden_diff": "diff --git a/contrib/admin/mypy-with-ignore.py b/contrib/admin/mypy-with-ignore.py\n--- a/contrib/admin/mypy-with-ignore.py\n+++ b/contrib/admin/mypy-with-ignore.py\n@@ -40,13 +40,11 @@\n 'src/toil/wdl/utils.py',\n 'src/toil/wdl/wdl_types.py',\n 'src/toil/wdl/wdl_synthesis.py',\n- # 'src/toil/wdl/__init__.py',\n 'src/toil/wdl/wdl_analysis.py',\n 'src/toil/wdl/wdl_functions.py',\n 'src/toil/wdl/toilwdl.py',\n 'src/toil/wdl/versions/draft2.py',\n 'src/toil/wdl/versions/v1.py',\n- # 'src/toil/wdl/versions/__init__.py',\n 'src/toil/wdl/versions/dev.py',\n 'src/toil/provisioners/clusterScaler.py',\n 'src/toil/provisioners/abstractProvisioner.py',\n@@ -105,7 +103,6 @@\n 'src/toil/utils/toilClean.py',\n 'src/toil/utils/toilLaunchCluster.py',\n 'src/toil/lib/memoize.py',\n- 'src/toil/lib/resources.py',\n 'src/toil/lib/throttle.py',\n 'src/toil/lib/humanize.py',\n 'src/toil/lib/compatibility.py',\ndiff --git a/src/toil/lib/resources.py b/src/toil/lib/resources.py\n--- a/src/toil/lib/resources.py\n+++ b/src/toil/lib/resources.py\n@@ -15,10 +15,10 @@\n import os\n import resource\n \n-from typing import List\n+from typing import List, Tuple\n \n \n-def get_total_cpu_time_and_memory_usage():\n+def get_total_cpu_time_and_memory_usage() -> Tuple[float, int]:\n \"\"\"\n Gives the total cpu time of itself and all its children, and the maximum RSS memory usage of\n itself and its single largest child.\n@@ -30,7 +30,7 @@\n return total_cpu_time, total_memory_usage\n \n \n-def get_total_cpu_time():\n+def get_total_cpu_time() -> float:\n \"\"\"Gives the total cpu time, including the children.\"\"\"\n me = resource.getrusage(resource.RUSAGE_SELF)\n childs = resource.getrusage(resource.RUSAGE_CHILDREN)\n", "issue": "Add type hints to resources.py\nAdd type hints to src/toil/lib/encryption/resources.py so it can be checked under mypy during linting.\n\n\u2506Issue is synchronized with this [Jira Task](https://ucsc-cgl.atlassian.net/browse/TOIL-885)\n\u2506Issue Number: TOIL-885\n\n", "before_files": [{"content": "# Copyright (C) 2015-2021 Regents of the University of California\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport fnmatch\nimport os\nimport resource\n\nfrom typing import List\n\n\ndef get_total_cpu_time_and_memory_usage():\n \"\"\"\n Gives the total cpu time of itself and all its children, and the maximum RSS memory usage of\n itself and its single largest child.\n \"\"\"\n me = resource.getrusage(resource.RUSAGE_SELF)\n children = resource.getrusage(resource.RUSAGE_CHILDREN)\n total_cpu_time = me.ru_utime + me.ru_stime + children.ru_utime + children.ru_stime\n total_memory_usage = me.ru_maxrss + children.ru_maxrss\n return total_cpu_time, total_memory_usage\n\n\ndef get_total_cpu_time():\n \"\"\"Gives the total cpu time, including the children.\"\"\"\n me = resource.getrusage(resource.RUSAGE_SELF)\n childs = resource.getrusage(resource.RUSAGE_CHILDREN)\n return me.ru_utime + me.ru_stime + childs.ru_utime + childs.ru_stime\n\n\ndef glob(glob_pattern: str, directoryname: str) 
-> List[str]:\n \"\"\"\n Walks through a directory and its subdirectories looking for files matching\n the glob_pattern and returns a list=[].\n\n :param directoryname: Any accessible folder name on the filesystem.\n :param glob_pattern: A string like \"*.txt\", which would find all text files.\n :return: A list=[] of absolute filepaths matching the glob pattern.\n \"\"\"\n matches = []\n for root, dirnames, filenames in os.walk(directoryname):\n for filename in fnmatch.filter(filenames, glob_pattern):\n absolute_filepath = os.path.join(root, filename)\n matches.append(absolute_filepath)\n return matches\n", "path": "src/toil/lib/resources.py"}, {"content": "#!/usr/bin/env python3\n\"\"\"\nRuns mypy and ignores files that do not yet have passing type hints.\n\nDoes not type check test files (any path including \"src/toil/test\").\n\"\"\"\nimport os\nimport subprocess\nimport sys\n\npkg_root = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..')) # noqa\nsys.path.insert(0, pkg_root) # noqa\n\nfrom src.toil.lib.resources import glob # type: ignore\n\n\ndef main():\n all_files_to_check = []\n for d in ['dashboard', 'docker', 'docs', 'src']:\n all_files_to_check += glob(glob_pattern='*.py', directoryname=os.path.join(pkg_root, d))\n\n # TODO: Remove these paths as typing is added and mypy conflicts are addressed\n ignore_paths = [os.path.abspath(f) for f in [\n 'docker/Dockerfile.py',\n 'docs/conf.py',\n 'docs/vendor/sphinxcontrib/fulltoc.py',\n 'docs/vendor/sphinxcontrib/__init__.py',\n 'src/toil/job.py',\n 'src/toil/leader.py',\n 'src/toil/statsAndLogging.py',\n 'src/toil/common.py',\n 'src/toil/realtimeLogger.py',\n 'src/toil/worker.py',\n 'src/toil/serviceManager.py',\n 'src/toil/toilState.py',\n 'src/toil/__init__.py',\n 'src/toil/resource.py',\n 'src/toil/deferred.py',\n 'src/toil/version.py',\n 'src/toil/wdl/utils.py',\n 'src/toil/wdl/wdl_types.py',\n 'src/toil/wdl/wdl_synthesis.py',\n # 'src/toil/wdl/__init__.py',\n 'src/toil/wdl/wdl_analysis.py',\n 'src/toil/wdl/wdl_functions.py',\n 'src/toil/wdl/toilwdl.py',\n 'src/toil/wdl/versions/draft2.py',\n 'src/toil/wdl/versions/v1.py',\n # 'src/toil/wdl/versions/__init__.py',\n 'src/toil/wdl/versions/dev.py',\n 'src/toil/provisioners/clusterScaler.py',\n 'src/toil/provisioners/abstractProvisioner.py',\n 'src/toil/provisioners/gceProvisioner.py',\n 'src/toil/provisioners/__init__.py',\n 'src/toil/provisioners/node.py',\n 'src/toil/provisioners/aws/boto2Context.py',\n 'src/toil/provisioners/aws/awsProvisioner.py',\n 'src/toil/provisioners/aws/__init__.py',\n 'src/toil/batchSystems/slurm.py',\n 'src/toil/batchSystems/gridengine.py',\n 'src/toil/batchSystems/singleMachine.py',\n 'src/toil/batchSystems/abstractBatchSystem.py',\n 'src/toil/batchSystems/parasol.py',\n 'src/toil/batchSystems/kubernetes.py',\n 'src/toil/batchSystems/torque.py',\n 'src/toil/batchSystems/options.py',\n 'src/toil/batchSystems/registry.py',\n 'src/toil/batchSystems/lsf.py',\n 'src/toil/batchSystems/__init__.py',\n 'src/toil/batchSystems/abstractGridEngineBatchSystem.py',\n 'src/toil/batchSystems/lsfHelper.py',\n 'src/toil/batchSystems/htcondor.py',\n 'src/toil/batchSystems/mesos/batchSystem.py',\n 'src/toil/batchSystems/mesos/executor.py',\n 'src/toil/batchSystems/mesos/conftest.py',\n 'src/toil/batchSystems/mesos/__init__.py',\n 'src/toil/batchSystems/mesos/test/__init__.py',\n 'src/toil/cwl/conftest.py',\n 'src/toil/cwl/__init__.py',\n 'src/toil/cwl/cwltoil.py',\n 'src/toil/fileStores/cachingFileStore.py',\n 'src/toil/fileStores/abstractFileStore.py',\n 
'src/toil/fileStores/nonCachingFileStore.py',\n 'src/toil/fileStores/__init__.py',\n 'src/toil/jobStores/utils.py',\n 'src/toil/jobStores/abstractJobStore.py',\n 'src/toil/jobStores/conftest.py',\n 'src/toil/jobStores/fileJobStore.py',\n 'src/toil/jobStores/__init__.py',\n 'src/toil/jobStores/googleJobStore.py',\n 'src/toil/jobStores/aws/utils.py',\n 'src/toil/jobStores/aws/jobStore.py',\n 'src/toil/jobStores/aws/__init__.py',\n 'src/toil/utils/toilDebugFile.py',\n 'src/toil/utils/toilUpdateEC2Instances.py',\n 'src/toil/utils/toilStatus.py',\n 'src/toil/utils/toilStats.py',\n 'src/toil/utils/toilSshCluster.py',\n 'src/toil/utils/toilMain.py',\n 'src/toil/utils/toilKill.py',\n 'src/toil/utils/__init__.py',\n 'src/toil/utils/toilDestroyCluster.py',\n 'src/toil/utils/toilDebugJob.py',\n 'src/toil/utils/toilRsyncCluster.py',\n 'src/toil/utils/toilClean.py',\n 'src/toil/utils/toilLaunchCluster.py',\n 'src/toil/lib/memoize.py',\n 'src/toil/lib/resources.py',\n 'src/toil/lib/throttle.py',\n 'src/toil/lib/humanize.py',\n 'src/toil/lib/compatibility.py',\n 'src/toil/lib/iterables.py',\n 'src/toil/lib/bioio.py',\n 'src/toil/lib/ec2.py',\n 'src/toil/lib/conversions.py',\n 'src/toil/lib/ec2nodes.py',\n 'src/toil/lib/misc.py',\n 'src/toil/lib/expando.py',\n 'src/toil/lib/threading.py',\n 'src/toil/lib/exceptions.py',\n 'src/toil/lib/__init__.py',\n 'src/toil/lib/generatedEC2Lists.py',\n 'src/toil/lib/retry.py',\n 'src/toil/lib/objects.py',\n 'src/toil/lib/io.py',\n 'src/toil/lib/docker.py',\n 'src/toil/lib/encryption/_nacl.py',\n 'src/toil/lib/encryption/_dummy.py',\n 'src/toil/lib/encryption/conftest.py',\n 'src/toil/lib/encryption/__init__.py',\n 'src/toil/lib/aws/utils.py',\n 'src/toil/lib/aws/__init__.py'\n ]]\n\n filtered_files_to_check = []\n for file_path in all_files_to_check:\n if file_path not in ignore_paths and 'src/toil/test' not in file_path:\n filtered_files_to_check.append(file_path)\n # follow-imports type checks pypi projects we don't control, so we skip it; why is this their default?\n args = ['mypy', '--follow-imports=skip'] + filtered_files_to_check\n p = subprocess.run(args=args, stdout=subprocess.PIPE)\n result = p.stdout.decode()\n print(result)\n if 'Success: no issues found' not in result:\n exit(1)\n\n\nif __name__ == '__main__':\n main()\n", "path": "contrib/admin/mypy-with-ignore.py"}], "after_files": [{"content": "# Copyright (C) 2015-2021 Regents of the University of California\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport fnmatch\nimport os\nimport resource\n\nfrom typing import List, Tuple\n\n\ndef get_total_cpu_time_and_memory_usage() -> Tuple[float, int]:\n \"\"\"\n Gives the total cpu time of itself and all its children, and the maximum RSS memory usage of\n itself and its single largest child.\n \"\"\"\n me = resource.getrusage(resource.RUSAGE_SELF)\n children = resource.getrusage(resource.RUSAGE_CHILDREN)\n total_cpu_time = me.ru_utime + me.ru_stime + children.ru_utime + children.ru_stime\n total_memory_usage = me.ru_maxrss + children.ru_maxrss\n return 
total_cpu_time, total_memory_usage\n\n\ndef get_total_cpu_time() -> float:\n \"\"\"Gives the total cpu time, including the children.\"\"\"\n me = resource.getrusage(resource.RUSAGE_SELF)\n childs = resource.getrusage(resource.RUSAGE_CHILDREN)\n return me.ru_utime + me.ru_stime + childs.ru_utime + childs.ru_stime\n\n\ndef glob(glob_pattern: str, directoryname: str) -> List[str]:\n \"\"\"\n Walks through a directory and its subdirectories looking for files matching\n the glob_pattern and returns a list=[].\n\n :param directoryname: Any accessible folder name on the filesystem.\n :param glob_pattern: A string like \"*.txt\", which would find all text files.\n :return: A list=[] of absolute filepaths matching the glob pattern.\n \"\"\"\n matches = []\n for root, dirnames, filenames in os.walk(directoryname):\n for filename in fnmatch.filter(filenames, glob_pattern):\n absolute_filepath = os.path.join(root, filename)\n matches.append(absolute_filepath)\n return matches\n", "path": "src/toil/lib/resources.py"}, {"content": "#!/usr/bin/env python3\n\"\"\"\nRuns mypy and ignores files that do not yet have passing type hints.\n\nDoes not type check test files (any path including \"src/toil/test\").\n\"\"\"\nimport os\nimport subprocess\nimport sys\n\npkg_root = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..')) # noqa\nsys.path.insert(0, pkg_root) # noqa\n\nfrom src.toil.lib.resources import glob # type: ignore\n\n\ndef main():\n all_files_to_check = []\n for d in ['dashboard', 'docker', 'docs', 'src']:\n all_files_to_check += glob(glob_pattern='*.py', directoryname=os.path.join(pkg_root, d))\n\n # TODO: Remove these paths as typing is added and mypy conflicts are addressed\n ignore_paths = [os.path.abspath(f) for f in [\n 'docker/Dockerfile.py',\n 'docs/conf.py',\n 'docs/vendor/sphinxcontrib/fulltoc.py',\n 'docs/vendor/sphinxcontrib/__init__.py',\n 'src/toil/job.py',\n 'src/toil/leader.py',\n 'src/toil/statsAndLogging.py',\n 'src/toil/common.py',\n 'src/toil/realtimeLogger.py',\n 'src/toil/worker.py',\n 'src/toil/serviceManager.py',\n 'src/toil/toilState.py',\n 'src/toil/__init__.py',\n 'src/toil/resource.py',\n 'src/toil/deferred.py',\n 'src/toil/version.py',\n 'src/toil/wdl/utils.py',\n 'src/toil/wdl/wdl_types.py',\n 'src/toil/wdl/wdl_synthesis.py',\n 'src/toil/wdl/wdl_analysis.py',\n 'src/toil/wdl/wdl_functions.py',\n 'src/toil/wdl/toilwdl.py',\n 'src/toil/wdl/versions/draft2.py',\n 'src/toil/wdl/versions/v1.py',\n 'src/toil/wdl/versions/dev.py',\n 'src/toil/provisioners/clusterScaler.py',\n 'src/toil/provisioners/abstractProvisioner.py',\n 'src/toil/provisioners/gceProvisioner.py',\n 'src/toil/provisioners/__init__.py',\n 'src/toil/provisioners/node.py',\n 'src/toil/provisioners/aws/boto2Context.py',\n 'src/toil/provisioners/aws/awsProvisioner.py',\n 'src/toil/provisioners/aws/__init__.py',\n 'src/toil/batchSystems/slurm.py',\n 'src/toil/batchSystems/gridengine.py',\n 'src/toil/batchSystems/singleMachine.py',\n 'src/toil/batchSystems/abstractBatchSystem.py',\n 'src/toil/batchSystems/parasol.py',\n 'src/toil/batchSystems/kubernetes.py',\n 'src/toil/batchSystems/torque.py',\n 'src/toil/batchSystems/options.py',\n 'src/toil/batchSystems/registry.py',\n 'src/toil/batchSystems/lsf.py',\n 'src/toil/batchSystems/__init__.py',\n 'src/toil/batchSystems/abstractGridEngineBatchSystem.py',\n 'src/toil/batchSystems/lsfHelper.py',\n 'src/toil/batchSystems/htcondor.py',\n 'src/toil/batchSystems/mesos/batchSystem.py',\n 'src/toil/batchSystems/mesos/executor.py',\n 
'src/toil/batchSystems/mesos/conftest.py',\n 'src/toil/batchSystems/mesos/__init__.py',\n 'src/toil/batchSystems/mesos/test/__init__.py',\n 'src/toil/cwl/conftest.py',\n 'src/toil/cwl/__init__.py',\n 'src/toil/cwl/cwltoil.py',\n 'src/toil/fileStores/cachingFileStore.py',\n 'src/toil/fileStores/abstractFileStore.py',\n 'src/toil/fileStores/nonCachingFileStore.py',\n 'src/toil/fileStores/__init__.py',\n 'src/toil/jobStores/utils.py',\n 'src/toil/jobStores/abstractJobStore.py',\n 'src/toil/jobStores/conftest.py',\n 'src/toil/jobStores/fileJobStore.py',\n 'src/toil/jobStores/__init__.py',\n 'src/toil/jobStores/googleJobStore.py',\n 'src/toil/jobStores/aws/utils.py',\n 'src/toil/jobStores/aws/jobStore.py',\n 'src/toil/jobStores/aws/__init__.py',\n 'src/toil/utils/toilDebugFile.py',\n 'src/toil/utils/toilUpdateEC2Instances.py',\n 'src/toil/utils/toilStatus.py',\n 'src/toil/utils/toilStats.py',\n 'src/toil/utils/toilSshCluster.py',\n 'src/toil/utils/toilMain.py',\n 'src/toil/utils/toilKill.py',\n 'src/toil/utils/__init__.py',\n 'src/toil/utils/toilDestroyCluster.py',\n 'src/toil/utils/toilDebugJob.py',\n 'src/toil/utils/toilRsyncCluster.py',\n 'src/toil/utils/toilClean.py',\n 'src/toil/utils/toilLaunchCluster.py',\n 'src/toil/lib/memoize.py',\n 'src/toil/lib/throttle.py',\n 'src/toil/lib/humanize.py',\n 'src/toil/lib/compatibility.py',\n 'src/toil/lib/iterables.py',\n 'src/toil/lib/bioio.py',\n 'src/toil/lib/ec2.py',\n 'src/toil/lib/conversions.py',\n 'src/toil/lib/ec2nodes.py',\n 'src/toil/lib/misc.py',\n 'src/toil/lib/expando.py',\n 'src/toil/lib/threading.py',\n 'src/toil/lib/exceptions.py',\n 'src/toil/lib/__init__.py',\n 'src/toil/lib/generatedEC2Lists.py',\n 'src/toil/lib/retry.py',\n 'src/toil/lib/objects.py',\n 'src/toil/lib/io.py',\n 'src/toil/lib/docker.py',\n 'src/toil/lib/encryption/_nacl.py',\n 'src/toil/lib/encryption/_dummy.py',\n 'src/toil/lib/encryption/conftest.py',\n 'src/toil/lib/encryption/__init__.py',\n 'src/toil/lib/aws/utils.py',\n 'src/toil/lib/aws/__init__.py'\n ]]\n\n filtered_files_to_check = []\n for file_path in all_files_to_check:\n if file_path not in ignore_paths and 'src/toil/test' not in file_path:\n filtered_files_to_check.append(file_path)\n # follow-imports type checks pypi projects we don't control, so we skip it; why is this their default?\n args = ['mypy', '--follow-imports=skip'] + filtered_files_to_check\n p = subprocess.run(args=args, stdout=subprocess.PIPE)\n result = p.stdout.decode()\n print(result)\n if 'Success: no issues found' not in result:\n exit(1)\n\n\nif __name__ == '__main__':\n main()\n", "path": "contrib/admin/mypy-with-ignore.py"}]}
| 2,911 | 520 |
gh_patches_debug_60613
|
rasdani/github-patches
|
git_diff
|
cloudtools__troposphere-552
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support AutoScalingCreationPolicy
From the docs, this is a top-level property of a [CreationPolicy](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-creationpolicy.html#cfn-attributes-creationpolicy-properties). It is used with the [AutoScalingReplacingUpdate](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatepolicy.html#cfn-attributes-updatepolicy-replacingupdate) update policy to specify the MinSuccessfulInstancesPercent property.
The docs have a good example of this:
``` json
"UpdatePolicy" : {
"AutoScalingReplacingUpdate" : {
"WillReplace" : "true"
}
},
"CreationPolicy" : {
"ResourceSignal" : {
"Count" : { "Ref" : "ResourceSignalsOnCreate"},
"Timeout" : "PT10M"
},
"AutoScalingCreationPolicy" : {
"MinSuccessfulInstancesPercent" : { "Ref" : "MinSuccessfulPercentParameter" }
}
}
```
I might take a crack at this but I figured I'd file an issue first if only so that I can reference it.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `troposphere/policies.py`
Content:
```
1 from . import AWSProperty, AWSAttribute, validate_pausetime
2 from .validators import positive_integer, integer, boolean
3
4
5 class AutoScalingRollingUpdate(AWSProperty):
6 props = {
7 'MaxBatchSize': (positive_integer, False),
8 'MinInstancesInService': (integer, False),
9 'MinSuccessfulInstancesPercent': (integer, False),
10 'PauseTime': (validate_pausetime, False),
11 'SuspendProcesses': ([basestring], False),
12 'WaitOnResourceSignals': (boolean, False),
13 }
14
15
16 class AutoScalingScheduledAction(AWSProperty):
17 props = {
18 'IgnoreUnmodifiedGroupSizeProperties': (boolean, False),
19 }
20
21
22 class AutoScalingReplacingUpdate(AWSProperty):
23 props = {
24 'WillReplace': (boolean, False),
25 }
26
27
28 class UpdatePolicy(AWSAttribute):
29 props = {
30 'AutoScalingRollingUpdate': (AutoScalingRollingUpdate, False),
31 'AutoScalingScheduledAction': (AutoScalingScheduledAction, False),
32 'AutoScalingReplacingUpdate': (AutoScalingReplacingUpdate, False),
33 }
34
35
36 class ResourceSignal(AWSProperty):
37 props = {
38 'Count': (positive_integer, False),
39 'Timeout': (validate_pausetime, False),
40 }
41
42
43 class CreationPolicy(AWSAttribute):
44 props = {
45 'ResourceSignal': (ResourceSignal, True),
46 }
47
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/troposphere/policies.py b/troposphere/policies.py
--- a/troposphere/policies.py
+++ b/troposphere/policies.py
@@ -40,7 +40,14 @@
}
+class AutoScalingCreationPolicy(AWSProperty):
+ props = {
+ 'MinSuccessfulInstancesPercent': (integer, False),
+ }
+
+
class CreationPolicy(AWSAttribute):
props = {
+ 'AutoScalingCreationPolicy': (AutoScalingCreationPolicy, False),
'ResourceSignal': (ResourceSignal, True),
}
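As a usage illustration only — the resource title, parameter, and availability-zone values below are made up — the patched `CreationPolicy` could then be attached to an auto-scaling group roughly like this:

```python
from troposphere import Parameter, Ref, Template
from troposphere.autoscaling import AutoScalingGroup
from troposphere.policies import (
    AutoScalingCreationPolicy,
    CreationPolicy,
    ResourceSignal,
)

t = Template()
min_pct = Parameter("MinSuccessfulPercentParameter", Type="Number", Default="80")
t.add_parameter(min_pct)

t.add_resource(AutoScalingGroup(
    "WebServerGroup",
    MinSize="1",
    MaxSize="4",
    AvailabilityZones=["us-east-1a"],
    CreationPolicy=CreationPolicy(
        ResourceSignal=ResourceSignal(Count=2, Timeout="PT10M"),
        AutoScalingCreationPolicy=AutoScalingCreationPolicy(
            MinSuccessfulInstancesPercent=Ref(min_pct),
        ),
    ),
))
# t.to_json() would now emit the CreationPolicy block shown in the issue.
```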
|
{"golden_diff": "diff --git a/troposphere/policies.py b/troposphere/policies.py\n--- a/troposphere/policies.py\n+++ b/troposphere/policies.py\n@@ -40,7 +40,14 @@\n }\n \n \n+class AutoScalingCreationPolicy(AWSProperty):\n+ props = {\n+ 'MinSuccessfulInstancesPercent': (integer, False),\n+ }\n+\n+\n class CreationPolicy(AWSAttribute):\n props = {\n+ 'AutoScalingCreationPolicy': (AutoScalingCreationPolicy, False),\n 'ResourceSignal': (ResourceSignal, True),\n }\n", "issue": "Support AutoScalingCreationPolicy\nFrom the docs, this is a top-level property of a [CreationPolicy](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-creationpolicy.html#cfn-attributes-creationpolicy-properties). It is used for the [AutoScalingReplacingPolicy](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatepolicy.html#cfn-attributes-updatepolicy-replacingupdate) to specify the MinSuccessfulInstancesPercent property.\n\nThe docs have a good example of this:\n\n``` json\n\"UpdatePolicy\" : {\n \"AutoScalingReplacingUpdate\" : {\n \"WillReplace\" : \"true\"\n },\n\"CreationPolicy\" : {\n \"ResourceSignal\" : {\n \"Count\" : { \"Ref\" : \"ResourceSignalsOnCreate\"},\n \"Timeout\" : \"PT10M\"\n },\n \"AutoScalingCreationPolicy\" : {\n \"MinSuccessfulInstancesPercent\" : { \"Ref\" : \"MinSuccessfulPercentParameter\" }\n }\n}\n```\n\nI might take a crack at this but I figured I'd file an issue first if only so that I can reference it.\n\n", "before_files": [{"content": "from . import AWSProperty, AWSAttribute, validate_pausetime\nfrom .validators import positive_integer, integer, boolean\n\n\nclass AutoScalingRollingUpdate(AWSProperty):\n props = {\n 'MaxBatchSize': (positive_integer, False),\n 'MinInstancesInService': (integer, False),\n 'MinSuccessfulInstancesPercent': (integer, False),\n 'PauseTime': (validate_pausetime, False),\n 'SuspendProcesses': ([basestring], False),\n 'WaitOnResourceSignals': (boolean, False),\n }\n\n\nclass AutoScalingScheduledAction(AWSProperty):\n props = {\n 'IgnoreUnmodifiedGroupSizeProperties': (boolean, False),\n }\n\n\nclass AutoScalingReplacingUpdate(AWSProperty):\n props = {\n 'WillReplace': (boolean, False),\n }\n\n\nclass UpdatePolicy(AWSAttribute):\n props = {\n 'AutoScalingRollingUpdate': (AutoScalingRollingUpdate, False),\n 'AutoScalingScheduledAction': (AutoScalingScheduledAction, False),\n 'AutoScalingReplacingUpdate': (AutoScalingReplacingUpdate, False),\n }\n\n\nclass ResourceSignal(AWSProperty):\n props = {\n 'Count': (positive_integer, False),\n 'Timeout': (validate_pausetime, False),\n }\n\n\nclass CreationPolicy(AWSAttribute):\n props = {\n 'ResourceSignal': (ResourceSignal, True),\n }\n", "path": "troposphere/policies.py"}], "after_files": [{"content": "from . 
import AWSProperty, AWSAttribute, validate_pausetime\nfrom .validators import positive_integer, integer, boolean\n\n\nclass AutoScalingRollingUpdate(AWSProperty):\n props = {\n 'MaxBatchSize': (positive_integer, False),\n 'MinInstancesInService': (integer, False),\n 'MinSuccessfulInstancesPercent': (integer, False),\n 'PauseTime': (validate_pausetime, False),\n 'SuspendProcesses': ([basestring], False),\n 'WaitOnResourceSignals': (boolean, False),\n }\n\n\nclass AutoScalingScheduledAction(AWSProperty):\n props = {\n 'IgnoreUnmodifiedGroupSizeProperties': (boolean, False),\n }\n\n\nclass AutoScalingReplacingUpdate(AWSProperty):\n props = {\n 'WillReplace': (boolean, False),\n }\n\n\nclass UpdatePolicy(AWSAttribute):\n props = {\n 'AutoScalingRollingUpdate': (AutoScalingRollingUpdate, False),\n 'AutoScalingScheduledAction': (AutoScalingScheduledAction, False),\n 'AutoScalingReplacingUpdate': (AutoScalingReplacingUpdate, False),\n }\n\n\nclass ResourceSignal(AWSProperty):\n props = {\n 'Count': (positive_integer, False),\n 'Timeout': (validate_pausetime, False),\n }\n\n\nclass AutoScalingCreationPolicy(AWSProperty):\n props = {\n 'MinSuccessfulInstancesPercent': (integer, False),\n }\n\n\nclass CreationPolicy(AWSAttribute):\n props = {\n 'AutoScalingCreationPolicy': (AutoScalingCreationPolicy, False),\n 'ResourceSignal': (ResourceSignal, True),\n }\n", "path": "troposphere/policies.py"}]}
| 883 | 125 |
gh_patches_debug_43665
|
rasdani/github-patches
|
git_diff
|
gratipay__gratipay.com-4127
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
fill out the test suite for the mixins, prune old tests
Reticketed from #3994.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gratipay/models/team/mixins/membership.py`
Content:
```
1 from __future__ import absolute_import, division, print_function, unicode_literals
2
3 from .takes import ZERO, PENNY
4
5
6 class MembershipMixin(object):
7 """Teams may have zero or more members, who are participants that take money from the team.
8 """
9
10 def add_member(self, participant, recorder):
11 """Add a participant to this team.
12
13 :param Participant participant: the participant to add
14 :param Participant recorder: the participant making the change
15
16 """
17 self.set_take_for(participant, PENNY, recorder)
18
19
20 def remove_member(self, participant, recorder):
21 """Remove a participant from this team.
22
23 :param Participant participant: the participant to remove
24 :param Participant recorder: the participant making the change
25
26 """
27 self.set_take_for(participant, ZERO, recorder)
28
29
30 def remove_all_members(self, cursor=None):
31 (cursor or self.db).run("""
32 INSERT INTO takes (ctime, member, team, amount, recorder) (
33 SELECT ctime, member, %(username)s, 0.00, %(username)s
34 FROM current_takes
35 WHERE team=%(username)s
36 AND amount > 0
37 );
38 """, dict(username=self.username))
39
40
41 @property
42 def nmembers(self):
43 """The number of members. Read-only and computed (not in the db); equal to
44 :py:attr:`~gratipay.models.team.mixins.takes.ndistributing_to`.
45 """
46 return self.ndistributing_to
47
48
49 def get_memberships(self, current_participant=None):
50 """Return a list of member dicts.
51 """
52 takes = self.compute_actual_takes()
53 members = []
54 for take in takes.values():
55 member = {}
56 member['participant_id'] = take['participant'].id
57 member['username'] = take['participant'].username
58 member['take'] = take['nominal_amount']
59 member['balance'] = take['balance']
60 member['percentage'] = take['percentage']
61
62 member['editing_allowed'] = False
63 member['is_current_user'] = False
64 if current_participant:
65 member['removal_allowed'] = current_participant.username == self.owner
66 if member['username'] == current_participant.username:
67 member['is_current_user'] = True
68 if take['ctime'] is not None:
69 # current user, but not the team itself
70 member['editing_allowed']= True
71
72 member['last_week'] = self.get_take_last_week_for(member['participant_id'])
73 members.append(member)
74 return members
75
```
Path: `gratipay/models/team/mixins/takes.py`
Content:
```
1 from __future__ import absolute_import, division, print_function, unicode_literals
2
3 from collections import OrderedDict
4 from decimal import Decimal as D
5
6 ZERO = D('0.00')
7 PENNY = D('0.01')
8
9
10 class TakesMixin(object):
11 """:py:class:`~gratipay.models.participant.Participant` s who are members
12 of a :py:class:`~gratipay.models.team.Team` may take money from the team
13 during :py:class:`~gratipay.billing.payday.Payday`. Only the team owner may
14 add a new member, by setting their take to a penny, but team owners may
15 *only* set their take to a penny---no more. Team owners may also remove
16 members, by setting their take to zero, as may the members themselves, who
17 may also set their take to whatever they wish.
18 """
19
20 #: The total amount of money the team distributes to participants
21 #: (including the owner) during payday. Read-only; equal to
22 #: :py:attr:`~gratipay.models.team.Team.receiving`.
23
24 distributing = 0
25
26
27 #: The number of participants (including the owner) that the team
28 #: distributes money to during payday. Read-only; modified by
29 #: :py:meth:`set_take_for`.
30
31 ndistributing_to = 0
32
33
34 def get_take_last_week_for(self, participant_id):
35 """Get the participant's nominal take last week.
36 """
37 return self.db.one("""
38
39 SELECT amount
40 FROM takes
41 WHERE team_id=%s AND participant_id=%s
42 AND mtime < (
43 SELECT ts_start
44 FROM paydays
45 WHERE ts_end > ts_start
46 ORDER BY ts_start DESC LIMIT 1
47 )
48 ORDER BY mtime DESC LIMIT 1
49
50 """, (self.id, participant_id), default=ZERO)
51
52
53 def set_take_for(self, participant, take, recorder, cursor=None):
54 """Set the amount a participant wants to take from this team during payday.
55
56 :param Participant participant: the participant to set the take for
57 :param Decimal take: the amount the participant wants to take
58 :param Participant recorder: the participant making the change
59
60 :return: the new take as a py:class:`~decimal.Decimal`
61 :raises: :py:exc:`NotAllowed`
62
63 It is a bug to pass in a ``participant`` or ``recorder`` that is
64 suspicious, unclaimed, or without a verified email and identity.
65 Furthermore, :py:exc:`NotAllowed` is raised in the following circumstances:
66
67 - ``recorder`` is neither ``participant`` nor the team owner
68 - ``recorder`` is the team owner and ``take`` is neither zero nor $0.01
69 - ``recorder`` is ``participant``, but ``participant`` isn't already on the team
70
71 """
72 def vet(p):
73 if p.is_suspicious:
74 raise NotAllowed("user must not be flagged as suspicious")
75 elif not p.has_verified_identity:
76 raise NotAllowed("user must have a verified identity")
77 elif not p.email_address:
78 raise NotAllowed("user must have added at least one email address")
79 elif not p.is_claimed:
80 raise NotAllowed("user must have claimed the account")
81
82 vet(participant)
83 vet(recorder)
84
85 owner_recording = recorder.username == self.owner
86 owner_taking = participant.username == self.owner
87 taker_recording = recorder == participant
88 adding_or_removing = take in (ZERO, PENNY)
89
90 if owner_recording:
91 if not adding_or_removing and not owner_taking:
92 raise NotAllowed("owner can only add and remove members, not otherwise set takes")
93 elif not taker_recording:
94 raise NotAllowed("can only set own take")
95
96 with self.db.get_cursor(cursor) as cursor:
97 cursor.run("LOCK TABLE takes IN EXCLUSIVE MODE") # avoid race conditions
98
99 # Compute the current takes
100 old_takes = self.compute_actual_takes(cursor)
101
102 if recorder.username != self.owner:
103 if recorder == participant and participant.id not in old_takes:
104 raise NotAllowed("can only set take if already a member of the team")
105
106 new_take = cursor.one( """
107
108 INSERT INTO takes
109 (ctime, participant_id, team_id, amount, recorder_id)
110 VALUES ( COALESCE (( SELECT ctime
111 FROM takes
112 WHERE (participant_id=%(participant_id)s
113 AND team_id=%(team_id)s)
114 LIMIT 1
115 ), CURRENT_TIMESTAMP)
116 , %(participant_id)s, %(team_id)s, %(amount)s, %(recorder_id)s
117 )
118 RETURNING amount
119
120 """, { 'participant_id': participant.id
121 , 'team_id': self.id
122 , 'amount': take
123 , 'recorder_id': recorder.id
124 })
125
126 # Compute the new takes
127 all_new_takes = self.compute_actual_takes(cursor)
128
129 # Update computed values
130 self.update_taking(old_takes, all_new_takes, cursor, participant)
131 self.update_distributing(all_new_takes, cursor)
132
133 return new_take
134
135
136 def get_take_for(self, participant, cursor=None):
137 """
138 :param Participant participant: the participant to get the take for
139 :param GratipayDB cursor: a database cursor; if ``None``, a new cursor will be used
140 :return: a :py:class:`~decimal.Decimal`: the ``participant``'s take from this team, or 0.
141 """
142 return (cursor or self.db).one("""
143
144 SELECT amount
145 FROM current_takes
146 WHERE team_id=%s AND participant_id=%s
147
148 """, (self.id, participant.id), default=ZERO)
149
150
151 def update_taking(self, old_takes, new_takes, cursor=None, member=None):
152 """Update `taking` amounts based on the difference between `old_takes`
153 and `new_takes`.
154 """
155
156 # XXX Deal with owner as well as members
157
158 for participant_id in set(old_takes.keys()).union(new_takes.keys()):
159 old = old_takes.get(participant_id, {}).get('actual_amount', ZERO)
160 new = new_takes.get(participant_id, {}).get('actual_amount', ZERO)
161 delta = new - old
162 if delta != 0:
163 taking = (cursor or self.db).one("""
164 UPDATE participants
165 SET taking = (taking + %(delta)s)
166 WHERE id=%(participant_id)s
167 RETURNING taking
168 """, dict(participant_id=participant_id, delta=delta))
169 if member and participant_id == member.id:
170 member.set_attributes(taking=taking)
171
172
173 def update_distributing(self, new_takes, cursor=None):
174 """Update the computed values on the team.
175 """
176 distributing = sum(t['actual_amount'] for t in new_takes.values())
177 ndistributing_to = len(tuple(t for t in new_takes.values() if t['actual_amount'] > 0))
178
179 r = (cursor or self.db).one("""
180 UPDATE teams
181 SET distributing=%s, ndistributing_to=%s WHERE id=%s
182 RETURNING distributing, ndistributing_to
183 """, (distributing, ndistributing_to, self.id))
184
185 self.set_attributes(**r._asdict())
186
187
188 def get_current_takes(self, cursor=None):
189 """Return a list of member takes for a team.
190 """
191 TAKES = """
192 SELECT p.*::participants AS participant
193 , ct.amount, ct.ctime, ct.mtime
194 FROM current_takes ct
195 JOIN participants p
196 ON ct.participant_id = p.id
197 WHERE team_id=%(team_id)s
198 ORDER BY amount ASC, ctime ASC
199 """
200 records = (cursor or self.db).all(TAKES, dict(team_id=self.id))
201 return [r._asdict() for r in records]
202
203
204 def compute_actual_takes(self, cursor=None):
205 """Get the takes, compute the actual amounts, and return an OrderedDict.
206 """
207 actual_takes = OrderedDict()
208 nominal_takes = self.get_current_takes(cursor=cursor)
209 available = balance = self.available
210 for take in nominal_takes:
211 nominal_amount = take['nominal_amount'] = take.pop('amount')
212 actual_amount = take['actual_amount'] = min(nominal_amount, balance)
213 take['balance'] = balance = balance - actual_amount
214 take['percentage'] = actual_amount / available
215 actual_takes[take['participant'].id] = take
216 return actual_takes
217
218
219 class NotAllowed(Exception):
220 """Raised by :py:meth:`set_take_for` if ``recorder`` is not allowed to set
221 the take for ``participant``.
222 """
223
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/gratipay/models/team/mixins/membership.py b/gratipay/models/team/mixins/membership.py
--- a/gratipay/models/team/mixins/membership.py
+++ b/gratipay/models/team/mixins/membership.py
@@ -27,17 +27,6 @@
self.set_take_for(participant, ZERO, recorder)
- def remove_all_members(self, cursor=None):
- (cursor or self.db).run("""
- INSERT INTO takes (ctime, member, team, amount, recorder) (
- SELECT ctime, member, %(username)s, 0.00, %(username)s
- FROM current_takes
- WHERE team=%(username)s
- AND amount > 0
- );
- """, dict(username=self.username))
-
-
@property
def nmembers(self):
"""The number of members. Read-only and computed (not in the db); equal to
@@ -64,11 +53,8 @@
if current_participant:
member['removal_allowed'] = current_participant.username == self.owner
if member['username'] == current_participant.username:
- member['is_current_user'] = True
- if take['ctime'] is not None:
- # current user, but not the team itself
- member['editing_allowed']= True
+ member['editing_allowed']= True
- member['last_week'] = self.get_take_last_week_for(member['participant_id'])
+ member['last_week'] = self.get_take_last_week_for(take['participant'])
members.append(member)
return members
diff --git a/gratipay/models/team/mixins/takes.py b/gratipay/models/team/mixins/takes.py
--- a/gratipay/models/team/mixins/takes.py
+++ b/gratipay/models/team/mixins/takes.py
@@ -31,25 +31,6 @@
ndistributing_to = 0
- def get_take_last_week_for(self, participant_id):
- """Get the participant's nominal take last week.
- """
- return self.db.one("""
-
- SELECT amount
- FROM takes
- WHERE team_id=%s AND participant_id=%s
- AND mtime < (
- SELECT ts_start
- FROM paydays
- WHERE ts_end > ts_start
- ORDER BY ts_start DESC LIMIT 1
- )
- ORDER BY mtime DESC LIMIT 1
-
- """, (self.id, participant_id), default=ZERO)
-
-
def set_take_for(self, participant, take, recorder, cursor=None):
"""Set the amount a participant wants to take from this team during payday.
@@ -72,10 +53,10 @@
def vet(p):
if p.is_suspicious:
raise NotAllowed("user must not be flagged as suspicious")
- elif not p.has_verified_identity:
- raise NotAllowed("user must have a verified identity")
elif not p.email_address:
raise NotAllowed("user must have added at least one email address")
+ elif not p.has_verified_identity:
+ raise NotAllowed("user must have a verified identity")
elif not p.is_claimed:
raise NotAllowed("user must have claimed the account")
@@ -148,6 +129,30 @@
""", (self.id, participant.id), default=ZERO)
+ def get_take_last_week_for(self, participant, cursor=None):
+ """
+ :param Participant participant: the participant to get the take for
+ :param GratipayDB cursor: a database cursor; if ``None``, a new cursor
+ will be used
+ :return: a :py:class:`~decimal.Decimal`: the ``participant``'s take
+ from this team at the beginning of the last completed payday, or 0.
+ """
+ return (cursor or self.db).one("""
+
+ SELECT amount
+ FROM takes
+ WHERE team_id=%s AND participant_id=%s
+ AND mtime < (
+ SELECT ts_start
+ FROM paydays
+ WHERE ts_end > ts_start
+ ORDER BY ts_start DESC LIMIT 1
+ )
+ ORDER BY mtime DESC LIMIT 1
+
+ """, (self.id, participant.id), default=ZERO)
+
+
def update_taking(self, old_takes, new_takes, cursor=None, member=None):
"""Update `taking` amounts based on the difference between `old_takes`
and `new_takes`.
|
{"golden_diff": "diff --git a/gratipay/models/team/mixins/membership.py b/gratipay/models/team/mixins/membership.py\n--- a/gratipay/models/team/mixins/membership.py\n+++ b/gratipay/models/team/mixins/membership.py\n@@ -27,17 +27,6 @@\n self.set_take_for(participant, ZERO, recorder)\n \n \n- def remove_all_members(self, cursor=None):\n- (cursor or self.db).run(\"\"\"\n- INSERT INTO takes (ctime, member, team, amount, recorder) (\n- SELECT ctime, member, %(username)s, 0.00, %(username)s\n- FROM current_takes\n- WHERE team=%(username)s\n- AND amount > 0\n- );\n- \"\"\", dict(username=self.username))\n-\n-\n @property\n def nmembers(self):\n \"\"\"The number of members. Read-only and computed (not in the db); equal to\n@@ -64,11 +53,8 @@\n if current_participant:\n member['removal_allowed'] = current_participant.username == self.owner\n if member['username'] == current_participant.username:\n- member['is_current_user'] = True\n- if take['ctime'] is not None:\n- # current user, but not the team itself\n- member['editing_allowed']= True\n+ member['editing_allowed']= True\n \n- member['last_week'] = self.get_take_last_week_for(member['participant_id'])\n+ member['last_week'] = self.get_take_last_week_for(take['participant'])\n members.append(member)\n return members\ndiff --git a/gratipay/models/team/mixins/takes.py b/gratipay/models/team/mixins/takes.py\n--- a/gratipay/models/team/mixins/takes.py\n+++ b/gratipay/models/team/mixins/takes.py\n@@ -31,25 +31,6 @@\n ndistributing_to = 0\n \n \n- def get_take_last_week_for(self, participant_id):\n- \"\"\"Get the participant's nominal take last week.\n- \"\"\"\n- return self.db.one(\"\"\"\n-\n- SELECT amount\n- FROM takes\n- WHERE team_id=%s AND participant_id=%s\n- AND mtime < (\n- SELECT ts_start\n- FROM paydays\n- WHERE ts_end > ts_start\n- ORDER BY ts_start DESC LIMIT 1\n- )\n- ORDER BY mtime DESC LIMIT 1\n-\n- \"\"\", (self.id, participant_id), default=ZERO)\n-\n-\n def set_take_for(self, participant, take, recorder, cursor=None):\n \"\"\"Set the amount a participant wants to take from this team during payday.\n \n@@ -72,10 +53,10 @@\n def vet(p):\n if p.is_suspicious:\n raise NotAllowed(\"user must not be flagged as suspicious\")\n- elif not p.has_verified_identity:\n- raise NotAllowed(\"user must have a verified identity\")\n elif not p.email_address:\n raise NotAllowed(\"user must have added at least one email address\")\n+ elif not p.has_verified_identity:\n+ raise NotAllowed(\"user must have a verified identity\")\n elif not p.is_claimed:\n raise NotAllowed(\"user must have claimed the account\")\n \n@@ -148,6 +129,30 @@\n \"\"\", (self.id, participant.id), default=ZERO)\n \n \n+ def get_take_last_week_for(self, participant, cursor=None):\n+ \"\"\"\n+ :param Participant participant: the participant to get the take for\n+ :param GratipayDB cursor: a database cursor; if ``None``, a new cursor\n+ will be used\n+ :return: a :py:class:`~decimal.Decimal`: the ``participant``'s take\n+ from this team at the beginning of the last completed payday, or 0.\n+ \"\"\"\n+ return (cursor or self.db).one(\"\"\"\n+\n+ SELECT amount\n+ FROM takes\n+ WHERE team_id=%s AND participant_id=%s\n+ AND mtime < (\n+ SELECT ts_start\n+ FROM paydays\n+ WHERE ts_end > ts_start\n+ ORDER BY ts_start DESC LIMIT 1\n+ )\n+ ORDER BY mtime DESC LIMIT 1\n+\n+ \"\"\", (self.id, participant.id), default=ZERO)\n+\n+\n def update_taking(self, old_takes, new_takes, cursor=None, member=None):\n \"\"\"Update `taking` amounts based on the difference between `old_takes`\n and 
`new_takes`.\n", "issue": "fill out the test suite for the mixins, prune old tests\nReticketed from #3994.\n\n", "before_files": [{"content": "from __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom .takes import ZERO, PENNY\n\n\nclass MembershipMixin(object):\n \"\"\"Teams may have zero or more members, who are participants that take money from the team.\n \"\"\"\n\n def add_member(self, participant, recorder):\n \"\"\"Add a participant to this team.\n\n :param Participant participant: the participant to add\n :param Participant recorder: the participant making the change\n\n \"\"\"\n self.set_take_for(participant, PENNY, recorder)\n\n\n def remove_member(self, participant, recorder):\n \"\"\"Remove a participant from this team.\n\n :param Participant participant: the participant to remove\n :param Participant recorder: the participant making the change\n\n \"\"\"\n self.set_take_for(participant, ZERO, recorder)\n\n\n def remove_all_members(self, cursor=None):\n (cursor or self.db).run(\"\"\"\n INSERT INTO takes (ctime, member, team, amount, recorder) (\n SELECT ctime, member, %(username)s, 0.00, %(username)s\n FROM current_takes\n WHERE team=%(username)s\n AND amount > 0\n );\n \"\"\", dict(username=self.username))\n\n\n @property\n def nmembers(self):\n \"\"\"The number of members. Read-only and computed (not in the db); equal to\n :py:attr:`~gratipay.models.team.mixins.takes.ndistributing_to`.\n \"\"\"\n return self.ndistributing_to\n\n\n def get_memberships(self, current_participant=None):\n \"\"\"Return a list of member dicts.\n \"\"\"\n takes = self.compute_actual_takes()\n members = []\n for take in takes.values():\n member = {}\n member['participant_id'] = take['participant'].id\n member['username'] = take['participant'].username\n member['take'] = take['nominal_amount']\n member['balance'] = take['balance']\n member['percentage'] = take['percentage']\n\n member['editing_allowed'] = False\n member['is_current_user'] = False\n if current_participant:\n member['removal_allowed'] = current_participant.username == self.owner\n if member['username'] == current_participant.username:\n member['is_current_user'] = True\n if take['ctime'] is not None:\n # current user, but not the team itself\n member['editing_allowed']= True\n\n member['last_week'] = self.get_take_last_week_for(member['participant_id'])\n members.append(member)\n return members\n", "path": "gratipay/models/team/mixins/membership.py"}, {"content": "from __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom collections import OrderedDict\nfrom decimal import Decimal as D\n\nZERO = D('0.00')\nPENNY = D('0.01')\n\n\nclass TakesMixin(object):\n \"\"\":py:class:`~gratipay.models.participant.Participant` s who are members\n of a :py:class:`~gratipay.models.team.Team` may take money from the team\n during :py:class:`~gratipay.billing.payday.Payday`. Only the team owner may\n add a new member, by setting their take to a penny, but team owners may\n *only* set their take to a penny---no more. Team owners may also remove\n members, by setting their take to zero, as may the members themselves, who\n may also set their take to whatever they wish.\n \"\"\"\n\n #: The total amount of money the team distributes to participants\n #: (including the owner) during payday. Read-only; equal to\n #: :py:attr:`~gratipay.models.team.Team.receiving`.\n\n distributing = 0\n\n\n #: The number of participants (including the owner) that the team\n #: distributes money to during payday. 
Read-only; modified by\n #: :py:meth:`set_take_for`.\n\n ndistributing_to = 0\n\n\n def get_take_last_week_for(self, participant_id):\n \"\"\"Get the participant's nominal take last week.\n \"\"\"\n return self.db.one(\"\"\"\n\n SELECT amount\n FROM takes\n WHERE team_id=%s AND participant_id=%s\n AND mtime < (\n SELECT ts_start\n FROM paydays\n WHERE ts_end > ts_start\n ORDER BY ts_start DESC LIMIT 1\n )\n ORDER BY mtime DESC LIMIT 1\n\n \"\"\", (self.id, participant_id), default=ZERO)\n\n\n def set_take_for(self, participant, take, recorder, cursor=None):\n \"\"\"Set the amount a participant wants to take from this team during payday.\n\n :param Participant participant: the participant to set the take for\n :param Decimal take: the amount the participant wants to take\n :param Participant recorder: the participant making the change\n\n :return: the new take as a py:class:`~decimal.Decimal`\n :raises: :py:exc:`NotAllowed`\n\n It is a bug to pass in a ``participant`` or ``recorder`` that is\n suspicious, unclaimed, or without a verified email and identity.\n Furthermore, :py:exc:`NotAllowed` is raised in the following circumstances:\n\n - ``recorder`` is neither ``participant`` nor the team owner\n - ``recorder`` is the team owner and ``take`` is neither zero nor $0.01\n - ``recorder`` is ``participant``, but ``participant`` isn't already on the team\n\n \"\"\"\n def vet(p):\n if p.is_suspicious:\n raise NotAllowed(\"user must not be flagged as suspicious\")\n elif not p.has_verified_identity:\n raise NotAllowed(\"user must have a verified identity\")\n elif not p.email_address:\n raise NotAllowed(\"user must have added at least one email address\")\n elif not p.is_claimed:\n raise NotAllowed(\"user must have claimed the account\")\n\n vet(participant)\n vet(recorder)\n\n owner_recording = recorder.username == self.owner\n owner_taking = participant.username == self.owner\n taker_recording = recorder == participant\n adding_or_removing = take in (ZERO, PENNY)\n\n if owner_recording:\n if not adding_or_removing and not owner_taking:\n raise NotAllowed(\"owner can only add and remove members, not otherwise set takes\")\n elif not taker_recording:\n raise NotAllowed(\"can only set own take\")\n\n with self.db.get_cursor(cursor) as cursor:\n cursor.run(\"LOCK TABLE takes IN EXCLUSIVE MODE\") # avoid race conditions\n\n # Compute the current takes\n old_takes = self.compute_actual_takes(cursor)\n\n if recorder.username != self.owner:\n if recorder == participant and participant.id not in old_takes:\n raise NotAllowed(\"can only set take if already a member of the team\")\n\n new_take = cursor.one( \"\"\"\n\n INSERT INTO takes\n (ctime, participant_id, team_id, amount, recorder_id)\n VALUES ( COALESCE (( SELECT ctime\n FROM takes\n WHERE (participant_id=%(participant_id)s\n AND team_id=%(team_id)s)\n LIMIT 1\n ), CURRENT_TIMESTAMP)\n , %(participant_id)s, %(team_id)s, %(amount)s, %(recorder_id)s\n )\n RETURNING amount\n\n \"\"\", { 'participant_id': participant.id\n , 'team_id': self.id\n , 'amount': take\n , 'recorder_id': recorder.id\n })\n\n # Compute the new takes\n all_new_takes = self.compute_actual_takes(cursor)\n\n # Update computed values\n self.update_taking(old_takes, all_new_takes, cursor, participant)\n self.update_distributing(all_new_takes, cursor)\n\n return new_take\n\n\n def get_take_for(self, participant, cursor=None):\n \"\"\"\n :param Participant participant: the participant to get the take for\n :param GratipayDB cursor: a database cursor; if ``None``, a new cursor will be 
used\n :return: a :py:class:`~decimal.Decimal`: the ``participant``'s take from this team, or 0.\n \"\"\"\n return (cursor or self.db).one(\"\"\"\n\n SELECT amount\n FROM current_takes\n WHERE team_id=%s AND participant_id=%s\n\n \"\"\", (self.id, participant.id), default=ZERO)\n\n\n def update_taking(self, old_takes, new_takes, cursor=None, member=None):\n \"\"\"Update `taking` amounts based on the difference between `old_takes`\n and `new_takes`.\n \"\"\"\n\n # XXX Deal with owner as well as members\n\n for participant_id in set(old_takes.keys()).union(new_takes.keys()):\n old = old_takes.get(participant_id, {}).get('actual_amount', ZERO)\n new = new_takes.get(participant_id, {}).get('actual_amount', ZERO)\n delta = new - old\n if delta != 0:\n taking = (cursor or self.db).one(\"\"\"\n UPDATE participants\n SET taking = (taking + %(delta)s)\n WHERE id=%(participant_id)s\n RETURNING taking\n \"\"\", dict(participant_id=participant_id, delta=delta))\n if member and participant_id == member.id:\n member.set_attributes(taking=taking)\n\n\n def update_distributing(self, new_takes, cursor=None):\n \"\"\"Update the computed values on the team.\n \"\"\"\n distributing = sum(t['actual_amount'] for t in new_takes.values())\n ndistributing_to = len(tuple(t for t in new_takes.values() if t['actual_amount'] > 0))\n\n r = (cursor or self.db).one(\"\"\"\n UPDATE teams\n SET distributing=%s, ndistributing_to=%s WHERE id=%s\n RETURNING distributing, ndistributing_to\n \"\"\", (distributing, ndistributing_to, self.id))\n\n self.set_attributes(**r._asdict())\n\n\n def get_current_takes(self, cursor=None):\n \"\"\"Return a list of member takes for a team.\n \"\"\"\n TAKES = \"\"\"\n SELECT p.*::participants AS participant\n , ct.amount, ct.ctime, ct.mtime\n FROM current_takes ct\n JOIN participants p\n ON ct.participant_id = p.id\n WHERE team_id=%(team_id)s\n ORDER BY amount ASC, ctime ASC\n \"\"\"\n records = (cursor or self.db).all(TAKES, dict(team_id=self.id))\n return [r._asdict() for r in records]\n\n\n def compute_actual_takes(self, cursor=None):\n \"\"\"Get the takes, compute the actual amounts, and return an OrderedDict.\n \"\"\"\n actual_takes = OrderedDict()\n nominal_takes = self.get_current_takes(cursor=cursor)\n available = balance = self.available\n for take in nominal_takes:\n nominal_amount = take['nominal_amount'] = take.pop('amount')\n actual_amount = take['actual_amount'] = min(nominal_amount, balance)\n take['balance'] = balance = balance - actual_amount\n take['percentage'] = actual_amount / available\n actual_takes[take['participant'].id] = take\n return actual_takes\n\n\nclass NotAllowed(Exception):\n \"\"\"Raised by :py:meth:`set_take_for` if ``recorder`` is not allowed to set\n the take for ``participant``.\n \"\"\"\n", "path": "gratipay/models/team/mixins/takes.py"}], "after_files": [{"content": "from __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom .takes import ZERO, PENNY\n\n\nclass MembershipMixin(object):\n \"\"\"Teams may have zero or more members, who are participants that take money from the team.\n \"\"\"\n\n def add_member(self, participant, recorder):\n \"\"\"Add a participant to this team.\n\n :param Participant participant: the participant to add\n :param Participant recorder: the participant making the change\n\n \"\"\"\n self.set_take_for(participant, PENNY, recorder)\n\n\n def remove_member(self, participant, recorder):\n \"\"\"Remove a participant from this team.\n\n :param Participant participant: the participant to remove\n 
:param Participant recorder: the participant making the change\n\n \"\"\"\n self.set_take_for(participant, ZERO, recorder)\n\n\n @property\n def nmembers(self):\n \"\"\"The number of members. Read-only and computed (not in the db); equal to\n :py:attr:`~gratipay.models.team.mixins.takes.ndistributing_to`.\n \"\"\"\n return self.ndistributing_to\n\n\n def get_memberships(self, current_participant=None):\n \"\"\"Return a list of member dicts.\n \"\"\"\n takes = self.compute_actual_takes()\n members = []\n for take in takes.values():\n member = {}\n member['participant_id'] = take['participant'].id\n member['username'] = take['participant'].username\n member['take'] = take['nominal_amount']\n member['balance'] = take['balance']\n member['percentage'] = take['percentage']\n\n member['editing_allowed'] = False\n member['is_current_user'] = False\n if current_participant:\n member['removal_allowed'] = current_participant.username == self.owner\n if member['username'] == current_participant.username:\n member['editing_allowed']= True\n\n member['last_week'] = self.get_take_last_week_for(take['participant'])\n members.append(member)\n return members\n", "path": "gratipay/models/team/mixins/membership.py"}, {"content": "from __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom collections import OrderedDict\nfrom decimal import Decimal as D\n\nZERO = D('0.00')\nPENNY = D('0.01')\n\n\nclass TakesMixin(object):\n \"\"\":py:class:`~gratipay.models.participant.Participant` s who are members\n of a :py:class:`~gratipay.models.team.Team` may take money from the team\n during :py:class:`~gratipay.billing.payday.Payday`. Only the team owner may\n add a new member, by setting their take to a penny, but team owners may\n *only* set their take to a penny---no more. Team owners may also remove\n members, by setting their take to zero, as may the members themselves, who\n may also set their take to whatever they wish.\n \"\"\"\n\n #: The total amount of money the team distributes to participants\n #: (including the owner) during payday. Read-only; equal to\n #: :py:attr:`~gratipay.models.team.Team.receiving`.\n\n distributing = 0\n\n\n #: The number of participants (including the owner) that the team\n #: distributes money to during payday. 
Read-only; modified by\n #: :py:meth:`set_take_for`.\n\n ndistributing_to = 0\n\n\n def set_take_for(self, participant, take, recorder, cursor=None):\n \"\"\"Set the amount a participant wants to take from this team during payday.\n\n :param Participant participant: the participant to set the take for\n :param Decimal take: the amount the participant wants to take\n :param Participant recorder: the participant making the change\n\n :return: the new take as a py:class:`~decimal.Decimal`\n :raises: :py:exc:`NotAllowed`\n\n It is a bug to pass in a ``participant`` or ``recorder`` that is\n suspicious, unclaimed, or without a verified email and identity.\n Furthermore, :py:exc:`NotAllowed` is raised in the following circumstances:\n\n - ``recorder`` is neither ``participant`` nor the team owner\n - ``recorder`` is the team owner and ``take`` is neither zero nor $0.01\n - ``recorder`` is ``participant``, but ``participant`` isn't already on the team\n\n \"\"\"\n def vet(p):\n if p.is_suspicious:\n raise NotAllowed(\"user must not be flagged as suspicious\")\n elif not p.email_address:\n raise NotAllowed(\"user must have added at least one email address\")\n elif not p.has_verified_identity:\n raise NotAllowed(\"user must have a verified identity\")\n elif not p.is_claimed:\n raise NotAllowed(\"user must have claimed the account\")\n\n vet(participant)\n vet(recorder)\n\n owner_recording = recorder.username == self.owner\n owner_taking = participant.username == self.owner\n taker_recording = recorder == participant\n adding_or_removing = take in (ZERO, PENNY)\n\n if owner_recording:\n if not adding_or_removing and not owner_taking:\n raise NotAllowed(\"owner can only add and remove members, not otherwise set takes\")\n elif not taker_recording:\n raise NotAllowed(\"can only set own take\")\n\n with self.db.get_cursor(cursor) as cursor:\n cursor.run(\"LOCK TABLE takes IN EXCLUSIVE MODE\") # avoid race conditions\n\n # Compute the current takes\n old_takes = self.compute_actual_takes(cursor)\n\n if recorder.username != self.owner:\n if recorder == participant and participant.id not in old_takes:\n raise NotAllowed(\"can only set take if already a member of the team\")\n\n new_take = cursor.one( \"\"\"\n\n INSERT INTO takes\n (ctime, participant_id, team_id, amount, recorder_id)\n VALUES ( COALESCE (( SELECT ctime\n FROM takes\n WHERE (participant_id=%(participant_id)s\n AND team_id=%(team_id)s)\n LIMIT 1\n ), CURRENT_TIMESTAMP)\n , %(participant_id)s, %(team_id)s, %(amount)s, %(recorder_id)s\n )\n RETURNING amount\n\n \"\"\", { 'participant_id': participant.id\n , 'team_id': self.id\n , 'amount': take\n , 'recorder_id': recorder.id\n })\n\n # Compute the new takes\n all_new_takes = self.compute_actual_takes(cursor)\n\n # Update computed values\n self.update_taking(old_takes, all_new_takes, cursor, participant)\n self.update_distributing(all_new_takes, cursor)\n\n return new_take\n\n\n def get_take_for(self, participant, cursor=None):\n \"\"\"\n :param Participant participant: the participant to get the take for\n :param GratipayDB cursor: a database cursor; if ``None``, a new cursor will be used\n :return: a :py:class:`~decimal.Decimal`: the ``participant``'s take from this team, or 0.\n \"\"\"\n return (cursor or self.db).one(\"\"\"\n\n SELECT amount\n FROM current_takes\n WHERE team_id=%s AND participant_id=%s\n\n \"\"\", (self.id, participant.id), default=ZERO)\n\n\n def get_take_last_week_for(self, participant, cursor=None):\n \"\"\"\n :param Participant participant: the participant to get the 
take for\n :param GratipayDB cursor: a database cursor; if ``None``, a new cursor\n will be used\n :return: a :py:class:`~decimal.Decimal`: the ``participant``'s take\n from this team at the beginning of the last completed payday, or 0.\n \"\"\"\n return (cursor or self.db).one(\"\"\"\n\n SELECT amount\n FROM takes\n WHERE team_id=%s AND participant_id=%s\n AND mtime < (\n SELECT ts_start\n FROM paydays\n WHERE ts_end > ts_start\n ORDER BY ts_start DESC LIMIT 1\n )\n ORDER BY mtime DESC LIMIT 1\n\n \"\"\", (self.id, participant.id), default=ZERO)\n\n\n def update_taking(self, old_takes, new_takes, cursor=None, member=None):\n \"\"\"Update `taking` amounts based on the difference between `old_takes`\n and `new_takes`.\n \"\"\"\n\n # XXX Deal with owner as well as members\n\n for participant_id in set(old_takes.keys()).union(new_takes.keys()):\n old = old_takes.get(participant_id, {}).get('actual_amount', ZERO)\n new = new_takes.get(participant_id, {}).get('actual_amount', ZERO)\n delta = new - old\n if delta != 0:\n taking = (cursor or self.db).one(\"\"\"\n UPDATE participants\n SET taking = (taking + %(delta)s)\n WHERE id=%(participant_id)s\n RETURNING taking\n \"\"\", dict(participant_id=participant_id, delta=delta))\n if member and participant_id == member.id:\n member.set_attributes(taking=taking)\n\n\n def update_distributing(self, new_takes, cursor=None):\n \"\"\"Update the computed values on the team.\n \"\"\"\n distributing = sum(t['actual_amount'] for t in new_takes.values())\n ndistributing_to = len(tuple(t for t in new_takes.values() if t['actual_amount'] > 0))\n\n r = (cursor or self.db).one(\"\"\"\n UPDATE teams\n SET distributing=%s, ndistributing_to=%s WHERE id=%s\n RETURNING distributing, ndistributing_to\n \"\"\", (distributing, ndistributing_to, self.id))\n\n self.set_attributes(**r._asdict())\n\n\n def get_current_takes(self, cursor=None):\n \"\"\"Return a list of member takes for a team.\n \"\"\"\n TAKES = \"\"\"\n SELECT p.*::participants AS participant\n , ct.amount, ct.ctime, ct.mtime\n FROM current_takes ct\n JOIN participants p\n ON ct.participant_id = p.id\n WHERE team_id=%(team_id)s\n ORDER BY amount ASC, ctime ASC\n \"\"\"\n records = (cursor or self.db).all(TAKES, dict(team_id=self.id))\n return [r._asdict() for r in records]\n\n\n def compute_actual_takes(self, cursor=None):\n \"\"\"Get the takes, compute the actual amounts, and return an OrderedDict.\n \"\"\"\n actual_takes = OrderedDict()\n nominal_takes = self.get_current_takes(cursor=cursor)\n available = balance = self.available\n for take in nominal_takes:\n nominal_amount = take['nominal_amount'] = take.pop('amount')\n actual_amount = take['actual_amount'] = min(nominal_amount, balance)\n take['balance'] = balance = balance - actual_amount\n take['percentage'] = actual_amount / available\n actual_takes[take['participant'].id] = take\n return actual_takes\n\n\nclass NotAllowed(Exception):\n \"\"\"Raised by :py:meth:`set_take_for` if ``recorder`` is not allowed to set\n the take for ``participant``.\n \"\"\"\n", "path": "gratipay/models/team/mixins/takes.py"}]}
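The `set_take_for` docstring quoted in the blob above encodes three permission rules. A minimal, self-contained restatement of those rules as a plain predicate (the names and call shape are illustrative, not Gratipay's actual API):

```python
from decimal import Decimal

ZERO, PENNY = Decimal("0.00"), Decimal("0.01")

def take_is_allowed(recorder, participant, owner, take, already_member):
    """Paraphrase of the NotAllowed rules in TakesMixin.set_take_for."""
    if recorder == owner:
        # The owner may only add (one penny) or remove (zero) members,
        # unless the owner is setting their own take.
        return take in (ZERO, PENNY) or participant == owner
    # Everyone else may only set their own take, and only if already on the team.
    return recorder == participant and already_member

assert take_is_allowed("owner", "alice", "owner", PENNY, already_member=False)
assert not take_is_allowed("owner", "alice", "owner", Decimal("25.00"), already_member=False)
assert take_is_allowed("alice", "alice", "owner", Decimal("25.00"), already_member=True)
```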
num_tokens: 3,494 | num_tokens_diff: 1,012
problem_id: gh_patches_debug_23137 | source: rasdani/github-patches | task_type: git_diff | in_source_id: scrapy__scrapy-1563
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug] Incorrectly picked URL in `scrapy.http.FormRequest.from_response` when there is a `<base>` tag
## Issue Description
Incorrectly picked URL in `scrapy.http.FormRequest.from_response` when there is a `<base>` tag.
## How to Reproduce the Issue & Version Used
```
[pengyu@GLaDOS tmp]$ python2
Python 2.7.10 (default, Sep 7 2015, 13:51:49)
[GCC 5.2.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import scrapy
>>> scrapy.__version__
u'1.0.3'
>>> html_body = '''
... <html>
... <head>
... <base href="http://b.com/">
... </head>
... <body>
... <form action="test_form">
... </form>
... </body>
... </html>
... '''
>>> response = scrapy.http.TextResponse(url='http://a.com/', body=html_body)
>>> request = scrapy.http.FormRequest.from_response(response)
>>> request.url
'http://a.com/test_form'
```
## Expected Result
`request.url` shall be `'http://b.com/test_form'`
## Suggested Fix
The issue can be fixed by fixing a few lines in `scrapy/http/request/form.py`
--- END ISSUE ---
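To see why the reproduction above resolves to the wrong host, compare resolution against the response URL with resolution against the `<base href>` declared in the page. A minimal sketch using `w3lib` (one of Scrapy's dependencies) and Python 3's `urllib.parse`; the expected outputs are shown as comments:

```python
from urllib.parse import urljoin
from w3lib.html import get_base_url

html = """
<html>
  <head><base href="http://b.com/"></head>
  <body><form action="test_form"></form></body>
</html>
"""

# The <base> tag, not the response URL, should be the starting point for relative links.
base = get_base_url(html, baseurl="http://a.com/")
print(base)                                   # http://b.com/
print(urljoin(base, "test_form"))             # http://b.com/test_form  (expected)
print(urljoin("http://a.com/", "test_form"))  # http://a.com/test_form  (what 1.0.3 produces)
```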
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/http/request/form.py`
Content:
```
1 """
2 This module implements the FormRequest class which is a more convenient class
3 (than Request) to generate Requests based on form data.
4
5 See documentation in docs/topics/request-response.rst
6 """
7
8 from six.moves.urllib.parse import urljoin, urlencode
9 import lxml.html
10 from parsel.selector import create_root_node
11 import six
12 from scrapy.http.request import Request
13 from scrapy.utils.python import to_bytes, is_listlike
14
15
16 class FormRequest(Request):
17
18 def __init__(self, *args, **kwargs):
19 formdata = kwargs.pop('formdata', None)
20 if formdata and kwargs.get('method') is None:
21 kwargs['method'] = 'POST'
22
23 super(FormRequest, self).__init__(*args, **kwargs)
24
25 if formdata:
26 items = formdata.items() if isinstance(formdata, dict) else formdata
27 querystr = _urlencode(items, self.encoding)
28 if self.method == 'POST':
29 self.headers.setdefault(b'Content-Type', b'application/x-www-form-urlencoded')
30 self._set_body(querystr)
31 else:
32 self._set_url(self.url + ('&' if '?' in self.url else '?') + querystr)
33
34 @classmethod
35 def from_response(cls, response, formname=None, formid=None, formnumber=0, formdata=None,
36 clickdata=None, dont_click=False, formxpath=None, **kwargs):
37 kwargs.setdefault('encoding', response.encoding)
38 form = _get_form(response, formname, formid, formnumber, formxpath)
39 formdata = _get_inputs(form, formdata, dont_click, clickdata, response)
40 url = _get_form_url(form, kwargs.pop('url', None))
41 method = kwargs.pop('method', form.method)
42 return cls(url=url, method=method, formdata=formdata, **kwargs)
43
44
45 def _get_form_url(form, url):
46 if url is None:
47 return form.action or form.base_url
48 return urljoin(form.base_url, url)
49
50
51 def _urlencode(seq, enc):
52 values = [(to_bytes(k, enc), to_bytes(v, enc))
53 for k, vs in seq
54 for v in (vs if is_listlike(vs) else [vs])]
55 return urlencode(values, doseq=1)
56
57
58 def _get_form(response, formname, formid, formnumber, formxpath):
59 """Find the form element """
60 text = response.body_as_unicode()
61 root = create_root_node(text, lxml.html.HTMLParser, base_url=response.url)
62 forms = root.xpath('//form')
63 if not forms:
64 raise ValueError("No <form> element found in %s" % response)
65
66 if formname is not None:
67 f = root.xpath('//form[@name="%s"]' % formname)
68 if f:
69 return f[0]
70
71 if formid is not None:
72 f = root.xpath('//form[@id="%s"]' % formid)
73 if f:
74 return f[0]
75
76 # Get form element from xpath, if not found, go up
77 if formxpath is not None:
78 nodes = root.xpath(formxpath)
79 if nodes:
80 el = nodes[0]
81 while True:
82 if el.tag == 'form':
83 return el
84 el = el.getparent()
85 if el is None:
86 break
87 raise ValueError('No <form> element found with %s' % formxpath)
88
89 # If we get here, it means that either formname was None
90 # or invalid
91 if formnumber is not None:
92 try:
93 form = forms[formnumber]
94 except IndexError:
95 raise IndexError("Form number %d not found in %s" %
96 (formnumber, response))
97 else:
98 return form
99
100
101 def _get_inputs(form, formdata, dont_click, clickdata, response):
102 try:
103 formdata = dict(formdata or ())
104 except (ValueError, TypeError):
105 raise ValueError('formdata should be a dict or iterable of tuples')
106
107 inputs = form.xpath('descendant::textarea'
108 '|descendant::select'
109 '|descendant::input[@type!="submit" and @type!="image" and @type!="reset"'
110 'and ((@type!="checkbox" and @type!="radio") or @checked)]')
111 values = [(k, u'' if v is None else v)
112 for k, v in (_value(e) for e in inputs)
113 if k and k not in formdata]
114
115 if not dont_click:
116 clickable = _get_clickable(clickdata, form)
117 if clickable and clickable[0] not in formdata and not clickable[0] is None:
118 values.append(clickable)
119
120 values.extend(formdata.items())
121 return values
122
123
124 def _value(ele):
125 n = ele.name
126 v = ele.value
127 if ele.tag == 'select':
128 return _select_value(ele, n, v)
129 return n, v
130
131
132 def _select_value(ele, n, v):
133 multiple = ele.multiple
134 if v is None and not multiple:
135 # Match browser behaviour on simple select tag without options selected
136 # And for select tags wihout options
137 o = ele.value_options
138 return (n, o[0]) if o else (None, None)
139 elif v is not None and multiple:
140 # This is a workround to bug in lxml fixed 2.3.1
141 # fix https://github.com/lxml/lxml/commit/57f49eed82068a20da3db8f1b18ae00c1bab8b12#L1L1139
142 selected_options = ele.xpath('.//option[@selected]')
143 v = [(o.get('value') or o.text or u'').strip() for o in selected_options]
144 return n, v
145
146
147 def _get_clickable(clickdata, form):
148 """
149 Returns the clickable element specified in clickdata,
150 if the latter is given. If not, it returns the first
151 clickable element found
152 """
153 clickables = [el for el in form.xpath('descendant::input[@type="submit"]'
154 '|descendant::button[@type="submit"]'
155 '|descendant::button[not(@type)]')]
156 if not clickables:
157 return
158
159 # If we don't have clickdata, we just use the first clickable element
160 if clickdata is None:
161 el = clickables[0]
162 return (el.get('name'), el.get('value') or '')
163
164 # If clickdata is given, we compare it to the clickable elements to find a
165 # match. We first look to see if the number is specified in clickdata,
166 # because that uniquely identifies the element
167 nr = clickdata.get('nr', None)
168 if nr is not None:
169 try:
170 el = list(form.inputs)[nr]
171 except IndexError:
172 pass
173 else:
174 return (el.get('name'), el.get('value') or '')
175
176 # We didn't find it, so now we build an XPath expression out of the other
177 # arguments, because they can be used as such
178 xpath = u'.//*' + \
179 u''.join(u'[@%s="%s"]' % c for c in six.iteritems(clickdata))
180 el = form.xpath(xpath)
181 if len(el) == 1:
182 return (el[0].get('name'), el[0].get('value') or '')
183 elif len(el) > 1:
184 raise ValueError("Multiple elements found (%r) matching the criteria "
185 "in clickdata: %r" % (el, clickdata))
186 else:
187 raise ValueError('No clickable element matching clickdata: %r' % (clickdata,))
188
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
golden_diff:
diff --git a/scrapy/http/request/form.py b/scrapy/http/request/form.py
--- a/scrapy/http/request/form.py
+++ b/scrapy/http/request/form.py
@@ -11,6 +11,7 @@
import six
from scrapy.http.request import Request
from scrapy.utils.python import to_bytes, is_listlike
+from scrapy.utils.response import get_base_url
class FormRequest(Request):
@@ -44,7 +45,7 @@
def _get_form_url(form, url):
if url is None:
- return form.action or form.base_url
+ return urljoin(form.base_url, form.action)
return urljoin(form.base_url, url)
@@ -58,7 +59,7 @@
def _get_form(response, formname, formid, formnumber, formxpath):
"""Find the form element """
text = response.body_as_unicode()
- root = create_root_node(text, lxml.html.HTMLParser, base_url=response.url)
+ root = create_root_node(text, lxml.html.HTMLParser, base_url=get_base_url(response))
forms = root.xpath('//form')
if not forms:
raise ValueError("No <form> element found in %s" % response)
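With the patch above applied, the original reproduction should resolve the form action against the `<base>` href; a sketch of a regression check (Python 3 syntax, hence the explicit `encoding`):

```python
import scrapy

html_body = """
<html>
  <head><base href="http://b.com/"></head>
  <body><form action="test_form"></form></body>
</html>
"""

response = scrapy.http.TextResponse(url="http://a.com/", body=html_body, encoding="utf-8")
request = scrapy.http.FormRequest.from_response(response)
assert request.url == "http://b.com/test_form"  # fails on 1.0.3, passes with the patch
```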
verification_info:
{"golden_diff": "diff --git a/scrapy/http/request/form.py b/scrapy/http/request/form.py\n--- a/scrapy/http/request/form.py\n+++ b/scrapy/http/request/form.py\n@@ -11,6 +11,7 @@\n import six\n from scrapy.http.request import Request\n from scrapy.utils.python import to_bytes, is_listlike\n+from scrapy.utils.response import get_base_url\n \n \n class FormRequest(Request):\n@@ -44,7 +45,7 @@\n \n def _get_form_url(form, url):\n if url is None:\n- return form.action or form.base_url\n+ return urljoin(form.base_url, form.action)\n return urljoin(form.base_url, url)\n \n \n@@ -58,7 +59,7 @@\n def _get_form(response, formname, formid, formnumber, formxpath):\n \"\"\"Find the form element \"\"\"\n text = response.body_as_unicode()\n- root = create_root_node(text, lxml.html.HTMLParser, base_url=response.url)\n+ root = create_root_node(text, lxml.html.HTMLParser, base_url=get_base_url(response))\n forms = root.xpath('//form')\n if not forms:\n raise ValueError(\"No <form> element found in %s\" % response)\n", "issue": "[Bug] Incorrectly picked URL in `scrapy.http.FormRequest.from_response` when there is a `<base>` tag\n## Issue Description\n\nIncorrectly picked URL in `scrapy.http.FormRequest.from_response` when there is a `<base>` tag.\n## How to Reproduce the Issue & Version Used\n\n```\n[pengyu@GLaDOS tmp]$ python2\nPython 2.7.10 (default, Sep 7 2015, 13:51:49) \n[GCC 5.2.0] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import scrapy\n>>> scrapy.__version__\nu'1.0.3'\n>>> html_body = '''\n... <html>\n... <head>\n... <base href=\"http://b.com/\">\n... </head>\n... <body>\n... <form action=\"test_form\">\n... </form>\n... </body>\n... </html>\n... '''\n>>> response = scrapy.http.TextResponse(url='http://a.com/', body=html_body)\n>>> request = scrapy.http.FormRequest.from_response(response)\n>>> request.url\n'http://a.com/test_form'\n```\n## Expected Result\n\n`request.url` shall be `'http://b.com/test_form'`\n## Suggested Fix\n\nThe issue can be fixed by fixing a few lines in `scrapy/http/request/form.py`\n\n", "before_files": [{"content": "\"\"\"\nThis module implements the FormRequest class which is a more convenient class\n(than Request) to generate Requests based on form data.\n\nSee documentation in docs/topics/request-response.rst\n\"\"\"\n\nfrom six.moves.urllib.parse import urljoin, urlencode\nimport lxml.html\nfrom parsel.selector import create_root_node\nimport six\nfrom scrapy.http.request import Request\nfrom scrapy.utils.python import to_bytes, is_listlike\n\n\nclass FormRequest(Request):\n\n def __init__(self, *args, **kwargs):\n formdata = kwargs.pop('formdata', None)\n if formdata and kwargs.get('method') is None:\n kwargs['method'] = 'POST'\n\n super(FormRequest, self).__init__(*args, **kwargs)\n\n if formdata:\n items = formdata.items() if isinstance(formdata, dict) else formdata\n querystr = _urlencode(items, self.encoding)\n if self.method == 'POST':\n self.headers.setdefault(b'Content-Type', b'application/x-www-form-urlencoded')\n self._set_body(querystr)\n else:\n self._set_url(self.url + ('&' if '?' 
in self.url else '?') + querystr)\n\n @classmethod\n def from_response(cls, response, formname=None, formid=None, formnumber=0, formdata=None,\n clickdata=None, dont_click=False, formxpath=None, **kwargs):\n kwargs.setdefault('encoding', response.encoding)\n form = _get_form(response, formname, formid, formnumber, formxpath)\n formdata = _get_inputs(form, formdata, dont_click, clickdata, response)\n url = _get_form_url(form, kwargs.pop('url', None))\n method = kwargs.pop('method', form.method)\n return cls(url=url, method=method, formdata=formdata, **kwargs)\n\n\ndef _get_form_url(form, url):\n if url is None:\n return form.action or form.base_url\n return urljoin(form.base_url, url)\n\n\ndef _urlencode(seq, enc):\n values = [(to_bytes(k, enc), to_bytes(v, enc))\n for k, vs in seq\n for v in (vs if is_listlike(vs) else [vs])]\n return urlencode(values, doseq=1)\n\n\ndef _get_form(response, formname, formid, formnumber, formxpath):\n \"\"\"Find the form element \"\"\"\n text = response.body_as_unicode()\n root = create_root_node(text, lxml.html.HTMLParser, base_url=response.url)\n forms = root.xpath('//form')\n if not forms:\n raise ValueError(\"No <form> element found in %s\" % response)\n\n if formname is not None:\n f = root.xpath('//form[@name=\"%s\"]' % formname)\n if f:\n return f[0]\n\n if formid is not None:\n f = root.xpath('//form[@id=\"%s\"]' % formid)\n if f:\n return f[0]\n \n # Get form element from xpath, if not found, go up\n if formxpath is not None:\n nodes = root.xpath(formxpath)\n if nodes:\n el = nodes[0]\n while True:\n if el.tag == 'form':\n return el\n el = el.getparent()\n if el is None:\n break\n raise ValueError('No <form> element found with %s' % formxpath)\n\n # If we get here, it means that either formname was None\n # or invalid\n if formnumber is not None:\n try:\n form = forms[formnumber]\n except IndexError:\n raise IndexError(\"Form number %d not found in %s\" %\n (formnumber, response))\n else:\n return form\n\n\ndef _get_inputs(form, formdata, dont_click, clickdata, response):\n try:\n formdata = dict(formdata or ())\n except (ValueError, TypeError):\n raise ValueError('formdata should be a dict or iterable of tuples')\n\n inputs = form.xpath('descendant::textarea'\n '|descendant::select'\n '|descendant::input[@type!=\"submit\" and @type!=\"image\" and @type!=\"reset\"'\n 'and ((@type!=\"checkbox\" and @type!=\"radio\") or @checked)]')\n values = [(k, u'' if v is None else v)\n for k, v in (_value(e) for e in inputs)\n if k and k not in formdata]\n\n if not dont_click:\n clickable = _get_clickable(clickdata, form)\n if clickable and clickable[0] not in formdata and not clickable[0] is None:\n values.append(clickable)\n\n values.extend(formdata.items())\n return values\n\n\ndef _value(ele):\n n = ele.name\n v = ele.value\n if ele.tag == 'select':\n return _select_value(ele, n, v)\n return n, v\n\n\ndef _select_value(ele, n, v):\n multiple = ele.multiple\n if v is None and not multiple:\n # Match browser behaviour on simple select tag without options selected\n # And for select tags wihout options\n o = ele.value_options\n return (n, o[0]) if o else (None, None)\n elif v is not None and multiple:\n # This is a workround to bug in lxml fixed 2.3.1\n # fix https://github.com/lxml/lxml/commit/57f49eed82068a20da3db8f1b18ae00c1bab8b12#L1L1139\n selected_options = ele.xpath('.//option[@selected]')\n v = [(o.get('value') or o.text or u'').strip() for o in selected_options]\n return n, v\n\n\ndef _get_clickable(clickdata, form):\n \"\"\"\n Returns the clickable 
element specified in clickdata,\n if the latter is given. If not, it returns the first\n clickable element found\n \"\"\"\n clickables = [el for el in form.xpath('descendant::input[@type=\"submit\"]'\n '|descendant::button[@type=\"submit\"]'\n '|descendant::button[not(@type)]')]\n if not clickables:\n return\n\n # If we don't have clickdata, we just use the first clickable element\n if clickdata is None:\n el = clickables[0]\n return (el.get('name'), el.get('value') or '')\n\n # If clickdata is given, we compare it to the clickable elements to find a\n # match. We first look to see if the number is specified in clickdata,\n # because that uniquely identifies the element\n nr = clickdata.get('nr', None)\n if nr is not None:\n try:\n el = list(form.inputs)[nr]\n except IndexError:\n pass\n else:\n return (el.get('name'), el.get('value') or '')\n\n # We didn't find it, so now we build an XPath expression out of the other\n # arguments, because they can be used as such\n xpath = u'.//*' + \\\n u''.join(u'[@%s=\"%s\"]' % c for c in six.iteritems(clickdata))\n el = form.xpath(xpath)\n if len(el) == 1:\n return (el[0].get('name'), el[0].get('value') or '')\n elif len(el) > 1:\n raise ValueError(\"Multiple elements found (%r) matching the criteria \"\n \"in clickdata: %r\" % (el, clickdata))\n else:\n raise ValueError('No clickable element matching clickdata: %r' % (clickdata,))\n", "path": "scrapy/http/request/form.py"}], "after_files": [{"content": "\"\"\"\nThis module implements the FormRequest class which is a more convenient class\n(than Request) to generate Requests based on form data.\n\nSee documentation in docs/topics/request-response.rst\n\"\"\"\n\nfrom six.moves.urllib.parse import urljoin, urlencode\nimport lxml.html\nfrom parsel.selector import create_root_node\nimport six\nfrom scrapy.http.request import Request\nfrom scrapy.utils.python import to_bytes, is_listlike\nfrom scrapy.utils.response import get_base_url\n\n\nclass FormRequest(Request):\n\n def __init__(self, *args, **kwargs):\n formdata = kwargs.pop('formdata', None)\n if formdata and kwargs.get('method') is None:\n kwargs['method'] = 'POST'\n\n super(FormRequest, self).__init__(*args, **kwargs)\n\n if formdata:\n items = formdata.items() if isinstance(formdata, dict) else formdata\n querystr = _urlencode(items, self.encoding)\n if self.method == 'POST':\n self.headers.setdefault(b'Content-Type', b'application/x-www-form-urlencoded')\n self._set_body(querystr)\n else:\n self._set_url(self.url + ('&' if '?' 
in self.url else '?') + querystr)\n\n @classmethod\n def from_response(cls, response, formname=None, formid=None, formnumber=0, formdata=None,\n clickdata=None, dont_click=False, formxpath=None, **kwargs):\n kwargs.setdefault('encoding', response.encoding)\n form = _get_form(response, formname, formid, formnumber, formxpath)\n formdata = _get_inputs(form, formdata, dont_click, clickdata, response)\n url = _get_form_url(form, kwargs.pop('url', None))\n method = kwargs.pop('method', form.method)\n return cls(url=url, method=method, formdata=formdata, **kwargs)\n\n\ndef _get_form_url(form, url):\n if url is None:\n return urljoin(form.base_url, form.action)\n return urljoin(form.base_url, url)\n\n\ndef _urlencode(seq, enc):\n values = [(to_bytes(k, enc), to_bytes(v, enc))\n for k, vs in seq\n for v in (vs if is_listlike(vs) else [vs])]\n return urlencode(values, doseq=1)\n\n\ndef _get_form(response, formname, formid, formnumber, formxpath):\n \"\"\"Find the form element \"\"\"\n text = response.body_as_unicode()\n root = create_root_node(text, lxml.html.HTMLParser, base_url=get_base_url(response))\n forms = root.xpath('//form')\n if not forms:\n raise ValueError(\"No <form> element found in %s\" % response)\n\n if formname is not None:\n f = root.xpath('//form[@name=\"%s\"]' % formname)\n if f:\n return f[0]\n\n if formid is not None:\n f = root.xpath('//form[@id=\"%s\"]' % formid)\n if f:\n return f[0]\n \n # Get form element from xpath, if not found, go up\n if formxpath is not None:\n nodes = root.xpath(formxpath)\n if nodes:\n el = nodes[0]\n while True:\n if el.tag == 'form':\n return el\n el = el.getparent()\n if el is None:\n break\n raise ValueError('No <form> element found with %s' % formxpath)\n\n # If we get here, it means that either formname was None\n # or invalid\n if formnumber is not None:\n try:\n form = forms[formnumber]\n except IndexError:\n raise IndexError(\"Form number %d not found in %s\" %\n (formnumber, response))\n else:\n return form\n\n\ndef _get_inputs(form, formdata, dont_click, clickdata, response):\n try:\n formdata = dict(formdata or ())\n except (ValueError, TypeError):\n raise ValueError('formdata should be a dict or iterable of tuples')\n\n inputs = form.xpath('descendant::textarea'\n '|descendant::select'\n '|descendant::input[@type!=\"submit\" and @type!=\"image\" and @type!=\"reset\"'\n 'and ((@type!=\"checkbox\" and @type!=\"radio\") or @checked)]')\n values = [(k, u'' if v is None else v)\n for k, v in (_value(e) for e in inputs)\n if k and k not in formdata]\n\n if not dont_click:\n clickable = _get_clickable(clickdata, form)\n if clickable and clickable[0] not in formdata and not clickable[0] is None:\n values.append(clickable)\n\n values.extend(formdata.items())\n return values\n\n\ndef _value(ele):\n n = ele.name\n v = ele.value\n if ele.tag == 'select':\n return _select_value(ele, n, v)\n return n, v\n\n\ndef _select_value(ele, n, v):\n multiple = ele.multiple\n if v is None and not multiple:\n # Match browser behaviour on simple select tag without options selected\n # And for select tags wihout options\n o = ele.value_options\n return (n, o[0]) if o else (None, None)\n elif v is not None and multiple:\n # This is a workround to bug in lxml fixed 2.3.1\n # fix https://github.com/lxml/lxml/commit/57f49eed82068a20da3db8f1b18ae00c1bab8b12#L1L1139\n selected_options = ele.xpath('.//option[@selected]')\n v = [(o.get('value') or o.text or u'').strip() for o in selected_options]\n return n, v\n\n\ndef _get_clickable(clickdata, form):\n \"\"\"\n Returns 
the clickable element specified in clickdata,\n if the latter is given. If not, it returns the first\n clickable element found\n \"\"\"\n clickables = [el for el in form.xpath('descendant::input[@type=\"submit\"]'\n '|descendant::button[@type=\"submit\"]'\n '|descendant::button[not(@type)]')]\n if not clickables:\n return\n\n # If we don't have clickdata, we just use the first clickable element\n if clickdata is None:\n el = clickables[0]\n return (el.get('name'), el.get('value') or '')\n\n # If clickdata is given, we compare it to the clickable elements to find a\n # match. We first look to see if the number is specified in clickdata,\n # because that uniquely identifies the element\n nr = clickdata.get('nr', None)\n if nr is not None:\n try:\n el = list(form.inputs)[nr]\n except IndexError:\n pass\n else:\n return (el.get('name'), el.get('value') or '')\n\n # We didn't find it, so now we build an XPath expression out of the other\n # arguments, because they can be used as such\n xpath = u'.//*' + \\\n u''.join(u'[@%s=\"%s\"]' % c for c in six.iteritems(clickdata))\n el = form.xpath(xpath)\n if len(el) == 1:\n return (el[0].get('name'), el[0].get('value') or '')\n elif len(el) > 1:\n raise ValueError(\"Multiple elements found (%r) matching the criteria \"\n \"in clickdata: %r\" % (el, clickdata))\n else:\n raise ValueError('No clickable element matching clickdata: %r' % (clickdata,))\n", "path": "scrapy/http/request/form.py"}]}
| 2,706 | 262 |
gh_patches_debug_30995
|
rasdani/github-patches
|
git_diff
|
pypa__pip-4224
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pip search picks older version if returned list of versions are not ordered
* Pip version: 9.0.1
* Python version: 2.7
* Operating System: Ubuntu/CentOS
### Description:
For a list of versions returned by a local PyPI server that is ill-ordered, such as
```[{...'versions': ['1.0.249', '1.0.251', '1.0.250'], 'name':...}...]```
`pip search` picks the last element of the version list exactly as returned, rather than the highest version.
```version = hit.get('versions', ['-'])[-1]```
at https://github.com/pypa/pip/blob/9.0.1/pip/commands/search.py#L107 and https://github.com/pypa/pip/blob/9.0.1/pip/commands/search.py#L99
Rather it should do something like
```version = highest_version(hit.get('versions', ['-']))```
--- END ISSUE ---
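The suggested fix works because `highest_version` compares parsed versions rather than list positions. A small sketch using the standalone `packaging` library (pip vendors the same parser):

```python
from packaging.version import parse as parse_version

versions = ["1.0.249", "1.0.251", "1.0.250"]
print(versions[-1])                      # 1.0.250  (what the current code shows)
print(max(versions, key=parse_version))  # 1.0.251  (the actual highest version)

# Plain string ordering is not a safe substitute either:
print(max(["1.0.9", "1.0.10"]))                     # 1.0.9  (lexicographic)
print(max(["1.0.9", "1.0.10"], key=parse_version))  # 1.0.10
```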
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pip/commands/search.py`
Content:
```
1 from __future__ import absolute_import
2
3 import logging
4 import sys
5 import textwrap
6
7 from pip.basecommand import Command, SUCCESS
8 from pip.compat import OrderedDict
9 from pip.download import PipXmlrpcTransport
10 from pip.models import PyPI
11 from pip.utils import get_terminal_size
12 from pip.utils.logging import indent_log
13 from pip.exceptions import CommandError
14 from pip.status_codes import NO_MATCHES_FOUND
15 from pip._vendor.packaging.version import parse as parse_version
16 from pip._vendor import pkg_resources
17 from pip._vendor.six.moves import xmlrpc_client
18
19
20 logger = logging.getLogger(__name__)
21
22
23 class SearchCommand(Command):
24 """Search for PyPI packages whose name or summary contains <query>."""
25 name = 'search'
26 usage = """
27 %prog [options] <query>"""
28 summary = 'Search PyPI for packages.'
29
30 def __init__(self, *args, **kw):
31 super(SearchCommand, self).__init__(*args, **kw)
32 self.cmd_opts.add_option(
33 '-i', '--index',
34 dest='index',
35 metavar='URL',
36 default=PyPI.pypi_url,
37 help='Base URL of Python Package Index (default %default)')
38
39 self.parser.insert_option_group(0, self.cmd_opts)
40
41 def run(self, options, args):
42 if not args:
43 raise CommandError('Missing required argument (search query).')
44 query = args
45 pypi_hits = self.search(query, options)
46 hits = transform_hits(pypi_hits)
47
48 terminal_width = None
49 if sys.stdout.isatty():
50 terminal_width = get_terminal_size()[0]
51
52 print_results(hits, terminal_width=terminal_width)
53 if pypi_hits:
54 return SUCCESS
55 return NO_MATCHES_FOUND
56
57 def search(self, query, options):
58 index_url = options.index
59 with self._build_session(options) as session:
60 transport = PipXmlrpcTransport(index_url, session)
61 pypi = xmlrpc_client.ServerProxy(index_url, transport)
62 hits = pypi.search({'name': query, 'summary': query}, 'or')
63 return hits
64
65
66 def transform_hits(hits):
67 """
68 The list from pypi is really a list of versions. We want a list of
69 packages with the list of versions stored inline. This converts the
70 list from pypi into one we can use.
71 """
72 packages = OrderedDict()
73 for hit in hits:
74 name = hit['name']
75 summary = hit['summary']
76 version = hit['version']
77
78 if name not in packages.keys():
79 packages[name] = {
80 'name': name,
81 'summary': summary,
82 'versions': [version],
83 }
84 else:
85 packages[name]['versions'].append(version)
86
87 # if this is the highest version, replace summary and score
88 if version == highest_version(packages[name]['versions']):
89 packages[name]['summary'] = summary
90
91 return list(packages.values())
92
93
94 def print_results(hits, name_column_width=None, terminal_width=None):
95 if not hits:
96 return
97 if name_column_width is None:
98 name_column_width = max([
99 len(hit['name']) + len(hit.get('versions', ['-'])[-1])
100 for hit in hits
101 ]) + 4
102
103 installed_packages = [p.project_name for p in pkg_resources.working_set]
104 for hit in hits:
105 name = hit['name']
106 summary = hit['summary'] or ''
107 version = hit.get('versions', ['-'])[-1]
108 if terminal_width is not None:
109 target_width = terminal_width - name_column_width - 5
110 if target_width > 10:
111 # wrap and indent summary to fit terminal
112 summary = textwrap.wrap(summary, target_width)
113 summary = ('\n' + ' ' * (name_column_width + 3)).join(summary)
114
115 line = '%-*s - %s' % (name_column_width,
116 '%s (%s)' % (name, version), summary)
117 try:
118 logger.info(line)
119 if name in installed_packages:
120 dist = pkg_resources.get_distribution(name)
121 with indent_log():
122 latest = highest_version(hit['versions'])
123 if dist.version == latest:
124 logger.info('INSTALLED: %s (latest)', dist.version)
125 else:
126 logger.info('INSTALLED: %s', dist.version)
127 logger.info('LATEST: %s', latest)
128 except UnicodeEncodeError:
129 pass
130
131
132 def highest_version(versions):
133 return max(versions, key=parse_version)
134
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
golden_diff:
diff --git a/pip/commands/search.py b/pip/commands/search.py
--- a/pip/commands/search.py
+++ b/pip/commands/search.py
@@ -96,7 +96,7 @@
return
if name_column_width is None:
name_column_width = max([
- len(hit['name']) + len(hit.get('versions', ['-'])[-1])
+ len(hit['name']) + len(highest_version(hit.get('versions', ['-'])))
for hit in hits
]) + 4
@@ -104,7 +104,7 @@
for hit in hits:
name = hit['name']
summary = hit['summary'] or ''
- version = hit.get('versions', ['-'])[-1]
+ latest = highest_version(hit.get('versions', ['-']))
if terminal_width is not None:
target_width = terminal_width - name_column_width - 5
if target_width > 10:
@@ -113,13 +113,12 @@
summary = ('\n' + ' ' * (name_column_width + 3)).join(summary)
line = '%-*s - %s' % (name_column_width,
- '%s (%s)' % (name, version), summary)
+ '%s (%s)' % (name, latest), summary)
try:
logger.info(line)
if name in installed_packages:
dist = pkg_resources.get_distribution(name)
with indent_log():
- latest = highest_version(hit['versions'])
if dist.version == latest:
logger.info('INSTALLED: %s (latest)', dist.version)
else:
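Beyond picking the right version, the patch also computes `latest` once per hit, so the version printed in the listing line and the one used for the INSTALLED/LATEST comparison can no longer disagree. A short illustration of the mismatch the old loop allowed:

```python
from packaging.version import parse as parse_version

def highest_version(versions):
    return max(versions, key=parse_version)

hit = {"name": "pkg", "versions": ["1.0.249", "1.0.251", "1.0.250"]}

shown_before = hit["versions"][-1]         # "1.0.250"  printed as "pkg (1.0.250)"
latest = highest_version(hit["versions"])  # "1.0.251"  printed as "LATEST: 1.0.251"
print(shown_before, latest)
```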
verification_info:
{"golden_diff": "diff --git a/pip/commands/search.py b/pip/commands/search.py\n--- a/pip/commands/search.py\n+++ b/pip/commands/search.py\n@@ -96,7 +96,7 @@\n return\n if name_column_width is None:\n name_column_width = max([\n- len(hit['name']) + len(hit.get('versions', ['-'])[-1])\n+ len(hit['name']) + len(highest_version(hit.get('versions', ['-'])))\n for hit in hits\n ]) + 4\n \n@@ -104,7 +104,7 @@\n for hit in hits:\n name = hit['name']\n summary = hit['summary'] or ''\n- version = hit.get('versions', ['-'])[-1]\n+ latest = highest_version(hit.get('versions', ['-']))\n if terminal_width is not None:\n target_width = terminal_width - name_column_width - 5\n if target_width > 10:\n@@ -113,13 +113,12 @@\n summary = ('\\n' + ' ' * (name_column_width + 3)).join(summary)\n \n line = '%-*s - %s' % (name_column_width,\n- '%s (%s)' % (name, version), summary)\n+ '%s (%s)' % (name, latest), summary)\n try:\n logger.info(line)\n if name in installed_packages:\n dist = pkg_resources.get_distribution(name)\n with indent_log():\n- latest = highest_version(hit['versions'])\n if dist.version == latest:\n logger.info('INSTALLED: %s (latest)', dist.version)\n else:\n", "issue": "pip search picks older version if returned list of versions are not ordered\n* Pip version: 9.0.1\r\n* Python version: 2.7\r\n* Operating System: Ubuntu/CentOS\r\n\r\n### Description:\r\n\r\nFor a list of versions returned by local pypi server that was ill-ordered like\r\n```[{...'versions': ['1.0.249', '1.0.251', '1.0.250'], 'name':...}...]```\r\n\r\nsearch picks the top element among all the versions returned to it.\r\n```version = hit.get('versions', ['-'])[-1]```\r\n at https://github.com/pypa/pip/blob/9.0.1/pip/commands/search.py#L107 and https://github.com/pypa/pip/blob/9.0.1/pip/commands/search.py#L99\r\n\r\nRather it should do something like\r\n```version = highest_version(hit.get('versions', ['-']))```\r\n\r\n\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport logging\nimport sys\nimport textwrap\n\nfrom pip.basecommand import Command, SUCCESS\nfrom pip.compat import OrderedDict\nfrom pip.download import PipXmlrpcTransport\nfrom pip.models import PyPI\nfrom pip.utils import get_terminal_size\nfrom pip.utils.logging import indent_log\nfrom pip.exceptions import CommandError\nfrom pip.status_codes import NO_MATCHES_FOUND\nfrom pip._vendor.packaging.version import parse as parse_version\nfrom pip._vendor import pkg_resources\nfrom pip._vendor.six.moves import xmlrpc_client\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass SearchCommand(Command):\n \"\"\"Search for PyPI packages whose name or summary contains <query>.\"\"\"\n name = 'search'\n usage = \"\"\"\n %prog [options] <query>\"\"\"\n summary = 'Search PyPI for packages.'\n\n def __init__(self, *args, **kw):\n super(SearchCommand, self).__init__(*args, **kw)\n self.cmd_opts.add_option(\n '-i', '--index',\n dest='index',\n metavar='URL',\n default=PyPI.pypi_url,\n help='Base URL of Python Package Index (default %default)')\n\n self.parser.insert_option_group(0, self.cmd_opts)\n\n def run(self, options, args):\n if not args:\n raise CommandError('Missing required argument (search query).')\n query = args\n pypi_hits = self.search(query, options)\n hits = transform_hits(pypi_hits)\n\n terminal_width = None\n if sys.stdout.isatty():\n terminal_width = get_terminal_size()[0]\n\n print_results(hits, terminal_width=terminal_width)\n if pypi_hits:\n return SUCCESS\n return NO_MATCHES_FOUND\n\n def search(self, query, options):\n index_url 
= options.index\n with self._build_session(options) as session:\n transport = PipXmlrpcTransport(index_url, session)\n pypi = xmlrpc_client.ServerProxy(index_url, transport)\n hits = pypi.search({'name': query, 'summary': query}, 'or')\n return hits\n\n\ndef transform_hits(hits):\n \"\"\"\n The list from pypi is really a list of versions. We want a list of\n packages with the list of versions stored inline. This converts the\n list from pypi into one we can use.\n \"\"\"\n packages = OrderedDict()\n for hit in hits:\n name = hit['name']\n summary = hit['summary']\n version = hit['version']\n\n if name not in packages.keys():\n packages[name] = {\n 'name': name,\n 'summary': summary,\n 'versions': [version],\n }\n else:\n packages[name]['versions'].append(version)\n\n # if this is the highest version, replace summary and score\n if version == highest_version(packages[name]['versions']):\n packages[name]['summary'] = summary\n\n return list(packages.values())\n\n\ndef print_results(hits, name_column_width=None, terminal_width=None):\n if not hits:\n return\n if name_column_width is None:\n name_column_width = max([\n len(hit['name']) + len(hit.get('versions', ['-'])[-1])\n for hit in hits\n ]) + 4\n\n installed_packages = [p.project_name for p in pkg_resources.working_set]\n for hit in hits:\n name = hit['name']\n summary = hit['summary'] or ''\n version = hit.get('versions', ['-'])[-1]\n if terminal_width is not None:\n target_width = terminal_width - name_column_width - 5\n if target_width > 10:\n # wrap and indent summary to fit terminal\n summary = textwrap.wrap(summary, target_width)\n summary = ('\\n' + ' ' * (name_column_width + 3)).join(summary)\n\n line = '%-*s - %s' % (name_column_width,\n '%s (%s)' % (name, version), summary)\n try:\n logger.info(line)\n if name in installed_packages:\n dist = pkg_resources.get_distribution(name)\n with indent_log():\n latest = highest_version(hit['versions'])\n if dist.version == latest:\n logger.info('INSTALLED: %s (latest)', dist.version)\n else:\n logger.info('INSTALLED: %s', dist.version)\n logger.info('LATEST: %s', latest)\n except UnicodeEncodeError:\n pass\n\n\ndef highest_version(versions):\n return max(versions, key=parse_version)\n", "path": "pip/commands/search.py"}], "after_files": [{"content": "from __future__ import absolute_import\n\nimport logging\nimport sys\nimport textwrap\n\nfrom pip.basecommand import Command, SUCCESS\nfrom pip.compat import OrderedDict\nfrom pip.download import PipXmlrpcTransport\nfrom pip.models import PyPI\nfrom pip.utils import get_terminal_size\nfrom pip.utils.logging import indent_log\nfrom pip.exceptions import CommandError\nfrom pip.status_codes import NO_MATCHES_FOUND\nfrom pip._vendor.packaging.version import parse as parse_version\nfrom pip._vendor import pkg_resources\nfrom pip._vendor.six.moves import xmlrpc_client\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass SearchCommand(Command):\n \"\"\"Search for PyPI packages whose name or summary contains <query>.\"\"\"\n name = 'search'\n usage = \"\"\"\n %prog [options] <query>\"\"\"\n summary = 'Search PyPI for packages.'\n\n def __init__(self, *args, **kw):\n super(SearchCommand, self).__init__(*args, **kw)\n self.cmd_opts.add_option(\n '-i', '--index',\n dest='index',\n metavar='URL',\n default=PyPI.pypi_url,\n help='Base URL of Python Package Index (default %default)')\n\n self.parser.insert_option_group(0, self.cmd_opts)\n\n def run(self, options, args):\n if not args:\n raise CommandError('Missing required argument (search query).')\n query 
= args\n pypi_hits = self.search(query, options)\n hits = transform_hits(pypi_hits)\n\n terminal_width = None\n if sys.stdout.isatty():\n terminal_width = get_terminal_size()[0]\n\n print_results(hits, terminal_width=terminal_width)\n if pypi_hits:\n return SUCCESS\n return NO_MATCHES_FOUND\n\n def search(self, query, options):\n index_url = options.index\n with self._build_session(options) as session:\n transport = PipXmlrpcTransport(index_url, session)\n pypi = xmlrpc_client.ServerProxy(index_url, transport)\n hits = pypi.search({'name': query, 'summary': query}, 'or')\n return hits\n\n\ndef transform_hits(hits):\n \"\"\"\n The list from pypi is really a list of versions. We want a list of\n packages with the list of versions stored inline. This converts the\n list from pypi into one we can use.\n \"\"\"\n packages = OrderedDict()\n for hit in hits:\n name = hit['name']\n summary = hit['summary']\n version = hit['version']\n\n if name not in packages.keys():\n packages[name] = {\n 'name': name,\n 'summary': summary,\n 'versions': [version],\n }\n else:\n packages[name]['versions'].append(version)\n\n # if this is the highest version, replace summary and score\n if version == highest_version(packages[name]['versions']):\n packages[name]['summary'] = summary\n\n return list(packages.values())\n\n\ndef print_results(hits, name_column_width=None, terminal_width=None):\n if not hits:\n return\n if name_column_width is None:\n name_column_width = max([\n len(hit['name']) + len(highest_version(hit.get('versions', ['-'])))\n for hit in hits\n ]) + 4\n\n installed_packages = [p.project_name for p in pkg_resources.working_set]\n for hit in hits:\n name = hit['name']\n summary = hit['summary'] or ''\n latest = highest_version(hit.get('versions', ['-']))\n if terminal_width is not None:\n target_width = terminal_width - name_column_width - 5\n if target_width > 10:\n # wrap and indent summary to fit terminal\n summary = textwrap.wrap(summary, target_width)\n summary = ('\\n' + ' ' * (name_column_width + 3)).join(summary)\n\n line = '%-*s - %s' % (name_column_width,\n '%s (%s)' % (name, latest), summary)\n try:\n logger.info(line)\n if name in installed_packages:\n dist = pkg_resources.get_distribution(name)\n with indent_log():\n if dist.version == latest:\n logger.info('INSTALLED: %s (latest)', dist.version)\n else:\n logger.info('INSTALLED: %s', dist.version)\n logger.info('LATEST: %s', latest)\n except UnicodeEncodeError:\n pass\n\n\ndef highest_version(versions):\n return max(versions, key=parse_version)\n", "path": "pip/commands/search.py"}]}
| 1,740 | 359 |
gh_patches_debug_28928
|
rasdani/github-patches
|
git_diff
|
pallets__werkzeug-1610
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Reloader picks wrong module when Flask is run with the pydev debugger
This is a weird situation where the pallets/werkzeug#1416 fix to make `python -m` reloading more correct actually exposed an issue with PyDev. It rewrites `python -m flask` to `python flask` (which is clearly not correct), while Python itself rewrites it to `python /path/to/flask_entry_point.py`. Werkzeug still correctly detects that we were run as a module, but since `sys.argv[0]` is no longer a path but the module name, it incorrectly decides that there is a module named `flask.flask` in the current directory.
_Originally posted by @davidism in https://github.com/pallets/flask/issues/3297#issuecomment-510120836_
--- END ISSUE ---
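Tracing the module branch of `_get_args_for_reloading` (quoted below) with the values PyDev produces shows where `flask.flask` comes from. The values here are assumptions based on the report, not captured output:

```python
import os

argv0 = "flask"      # PyDev rewrites "python -m flask" to "python flask"
package = "flask"    # __main__.__package__ still names the real package

py_script = os.path.abspath(argv0)                        # "<cwd>/flask", not an existing file
name = os.path.splitext(os.path.basename(py_script))[0]   # "flask"
py_module = package
if name != "__main__":
    py_module += "." + name                               # "flask.flask", the bogus module
print(py_module)
```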
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/werkzeug/_reloader.py`
Content:
```
1 import os
2 import subprocess
3 import sys
4 import threading
5 import time
6 from itertools import chain
7
8 from ._compat import iteritems
9 from ._compat import PY2
10 from ._compat import text_type
11 from ._internal import _log
12
13
14 def _iter_module_files():
15 """This iterates over all relevant Python files. It goes through all
16 loaded files from modules, all files in folders of already loaded modules
17 as well as all files reachable through a package.
18 """
19 # The list call is necessary on Python 3 in case the module
20 # dictionary modifies during iteration.
21 for module in list(sys.modules.values()):
22 if module is None:
23 continue
24 filename = getattr(module, "__file__", None)
25 if filename:
26 if os.path.isdir(filename) and os.path.exists(
27 os.path.join(filename, "__init__.py")
28 ):
29 filename = os.path.join(filename, "__init__.py")
30
31 old = None
32 while not os.path.isfile(filename):
33 old = filename
34 filename = os.path.dirname(filename)
35 if filename == old:
36 break
37 else:
38 if filename[-4:] in (".pyc", ".pyo"):
39 filename = filename[:-1]
40 yield filename
41
42
43 def _find_observable_paths(extra_files=None):
44 """Finds all paths that should be observed."""
45 rv = set(
46 os.path.dirname(os.path.abspath(x)) if os.path.isfile(x) else os.path.abspath(x)
47 for x in sys.path
48 )
49
50 for filename in extra_files or ():
51 rv.add(os.path.dirname(os.path.abspath(filename)))
52
53 for module in list(sys.modules.values()):
54 fn = getattr(module, "__file__", None)
55 if fn is None:
56 continue
57 fn = os.path.abspath(fn)
58 rv.add(os.path.dirname(fn))
59
60 return _find_common_roots(rv)
61
62
63 def _get_args_for_reloading():
64 """Returns the executable. This contains a workaround for windows
65 if the executable is incorrectly reported to not have the .exe
66 extension which can cause bugs on reloading. This also contains
67 a workaround for linux where the file is executable (possibly with
68 a program other than python)
69 """
70 rv = [sys.executable]
71 py_script = os.path.abspath(sys.argv[0])
72 args = sys.argv[1:]
73 # Need to look at main module to determine how it was executed.
74 __main__ = sys.modules["__main__"]
75
76 if __main__.__package__ is None:
77 # Executed a file, like "python app.py".
78 if os.name == "nt":
79 # Windows entry points have ".exe" extension and should be
80 # called directly.
81 if not os.path.exists(py_script) and os.path.exists(py_script + ".exe"):
82 py_script += ".exe"
83
84 if (
85 os.path.splitext(rv[0])[1] == ".exe"
86 and os.path.splitext(py_script)[1] == ".exe"
87 ):
88 rv.pop(0)
89
90 elif os.path.isfile(py_script) and os.access(py_script, os.X_OK):
91 # The file is marked as executable. Nix adds a wrapper that
92 # shouldn't be called with the Python executable.
93 rv.pop(0)
94
95 rv.append(py_script)
96 else:
97 # Executed a module, like "python -m werkzeug.serving".
98 if sys.argv[0] == "-m":
99 # Flask works around previous behavior by putting
100 # "-m flask" in sys.argv.
101 # TODO remove this once Flask no longer misbehaves
102 args = sys.argv
103 else:
104 py_module = __main__.__package__
105 name = os.path.splitext(os.path.basename(py_script))[0]
106
107 if name != "__main__":
108 py_module += "." + name
109
110 rv.extend(("-m", py_module.lstrip(".")))
111
112 rv.extend(args)
113 return rv
114
115
116 def _find_common_roots(paths):
117 """Out of some paths it finds the common roots that need monitoring."""
118 paths = [x.split(os.path.sep) for x in paths]
119 root = {}
120 for chunks in sorted(paths, key=len, reverse=True):
121 node = root
122 for chunk in chunks:
123 node = node.setdefault(chunk, {})
124 node.clear()
125
126 rv = set()
127
128 def _walk(node, path):
129 for prefix, child in iteritems(node):
130 _walk(child, path + (prefix,))
131 if not node:
132 rv.add("/".join(path))
133
134 _walk(root, ())
135 return rv
136
137
138 class ReloaderLoop(object):
139 name = None
140
141 # monkeypatched by testsuite. wrapping with `staticmethod` is required in
142 # case time.sleep has been replaced by a non-c function (e.g. by
143 # `eventlet.monkey_patch`) before we get here
144 _sleep = staticmethod(time.sleep)
145
146 def __init__(self, extra_files=None, interval=1):
147 self.extra_files = set(os.path.abspath(x) for x in extra_files or ())
148 self.interval = interval
149
150 def run(self):
151 pass
152
153 def restart_with_reloader(self):
154 """Spawn a new Python interpreter with the same arguments as this one,
155 but running the reloader thread.
156 """
157 while 1:
158 _log("info", " * Restarting with %s" % self.name)
159 args = _get_args_for_reloading()
160
161 # a weird bug on windows. sometimes unicode strings end up in the
162 # environment and subprocess.call does not like this, encode them
163 # to latin1 and continue.
164 if os.name == "nt" and PY2:
165 new_environ = {}
166 for key, value in iteritems(os.environ):
167 if isinstance(key, text_type):
168 key = key.encode("iso-8859-1")
169 if isinstance(value, text_type):
170 value = value.encode("iso-8859-1")
171 new_environ[key] = value
172 else:
173 new_environ = os.environ.copy()
174
175 new_environ["WERKZEUG_RUN_MAIN"] = "true"
176 exit_code = subprocess.call(args, env=new_environ, close_fds=False)
177 if exit_code != 3:
178 return exit_code
179
180 def trigger_reload(self, filename):
181 self.log_reload(filename)
182 sys.exit(3)
183
184 def log_reload(self, filename):
185 filename = os.path.abspath(filename)
186 _log("info", " * Detected change in %r, reloading" % filename)
187
188
189 class StatReloaderLoop(ReloaderLoop):
190 name = "stat"
191
192 def run(self):
193 mtimes = {}
194 while 1:
195 for filename in chain(_iter_module_files(), self.extra_files):
196 try:
197 mtime = os.stat(filename).st_mtime
198 except OSError:
199 continue
200
201 old_time = mtimes.get(filename)
202 if old_time is None:
203 mtimes[filename] = mtime
204 continue
205 elif mtime > old_time:
206 self.trigger_reload(filename)
207 self._sleep(self.interval)
208
209
210 class WatchdogReloaderLoop(ReloaderLoop):
211 def __init__(self, *args, **kwargs):
212 ReloaderLoop.__init__(self, *args, **kwargs)
213 from watchdog.observers import Observer
214 from watchdog.events import FileSystemEventHandler
215
216 self.observable_paths = set()
217
218 def _check_modification(filename):
219 if filename in self.extra_files:
220 self.trigger_reload(filename)
221 dirname = os.path.dirname(filename)
222 if dirname.startswith(tuple(self.observable_paths)):
223 if filename.endswith((".pyc", ".pyo", ".py")):
224 self.trigger_reload(filename)
225
226 class _CustomHandler(FileSystemEventHandler):
227 def on_created(self, event):
228 _check_modification(event.src_path)
229
230 def on_modified(self, event):
231 _check_modification(event.src_path)
232
233 def on_moved(self, event):
234 _check_modification(event.src_path)
235 _check_modification(event.dest_path)
236
237 def on_deleted(self, event):
238 _check_modification(event.src_path)
239
240 reloader_name = Observer.__name__.lower()
241 if reloader_name.endswith("observer"):
242 reloader_name = reloader_name[:-8]
243 reloader_name += " reloader"
244
245 self.name = reloader_name
246
247 self.observer_class = Observer
248 self.event_handler = _CustomHandler()
249 self.should_reload = False
250
251 def trigger_reload(self, filename):
252 # This is called inside an event handler, which means throwing
253 # SystemExit has no effect.
254 # https://github.com/gorakhargosh/watchdog/issues/294
255 self.should_reload = True
256 self.log_reload(filename)
257
258 def run(self):
259 watches = {}
260 observer = self.observer_class()
261 observer.start()
262
263 try:
264 while not self.should_reload:
265 to_delete = set(watches)
266 paths = _find_observable_paths(self.extra_files)
267 for path in paths:
268 if path not in watches:
269 try:
270 watches[path] = observer.schedule(
271 self.event_handler, path, recursive=True
272 )
273 except OSError:
274 # Clear this path from list of watches We don't want
275 # the same error message showing again in the next
276 # iteration.
277 watches[path] = None
278 to_delete.discard(path)
279 for path in to_delete:
280 watch = watches.pop(path, None)
281 if watch is not None:
282 observer.unschedule(watch)
283 self.observable_paths = paths
284 self._sleep(self.interval)
285 finally:
286 observer.stop()
287 observer.join()
288
289 sys.exit(3)
290
291
292 reloader_loops = {"stat": StatReloaderLoop, "watchdog": WatchdogReloaderLoop}
293
294 try:
295 __import__("watchdog.observers")
296 except ImportError:
297 reloader_loops["auto"] = reloader_loops["stat"]
298 else:
299 reloader_loops["auto"] = reloader_loops["watchdog"]
300
301
302 def ensure_echo_on():
303 """Ensure that echo mode is enabled. Some tools such as PDB disable
304 it which causes usability issues after reload."""
305 # tcgetattr will fail if stdin isn't a tty
306 if not sys.stdin.isatty():
307 return
308 try:
309 import termios
310 except ImportError:
311 return
312 attributes = termios.tcgetattr(sys.stdin)
313 if not attributes[3] & termios.ECHO:
314 attributes[3] |= termios.ECHO
315 termios.tcsetattr(sys.stdin, termios.TCSANOW, attributes)
316
317
318 def run_with_reloader(main_func, extra_files=None, interval=1, reloader_type="auto"):
319 """Run the given function in an independent python interpreter."""
320 import signal
321
322 reloader = reloader_loops[reloader_type](extra_files, interval)
323 signal.signal(signal.SIGTERM, lambda *args: sys.exit(0))
324 try:
325 if os.environ.get("WERKZEUG_RUN_MAIN") == "true":
326 ensure_echo_on()
327 t = threading.Thread(target=main_func, args=())
328 t.setDaemon(True)
329 t.start()
330 reloader.run()
331 else:
332 sys.exit(reloader.restart_with_reloader())
333 except KeyboardInterrupt:
334 pass
335
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/werkzeug/_reloader.py b/src/werkzeug/_reloader.py
--- a/src/werkzeug/_reloader.py
+++ b/src/werkzeug/_reloader.py
@@ -68,13 +68,15 @@
a program other than python)
"""
rv = [sys.executable]
- py_script = os.path.abspath(sys.argv[0])
+ py_script = sys.argv[0]
args = sys.argv[1:]
# Need to look at main module to determine how it was executed.
__main__ = sys.modules["__main__"]
if __main__.__package__ is None:
# Executed a file, like "python app.py".
+ py_script = os.path.abspath(py_script)
+
if os.name == "nt":
# Windows entry points have ".exe" extension and should be
# called directly.
@@ -101,11 +103,16 @@
# TODO remove this once Flask no longer misbehaves
args = sys.argv
else:
- py_module = __main__.__package__
- name = os.path.splitext(os.path.basename(py_script))[0]
+ if os.path.isfile(py_script):
+ # Rewritten by Python from "-m script" to "/path/to/script.py".
+ py_module = __main__.__package__
+ name = os.path.splitext(os.path.basename(py_script))[0]
- if name != "__main__":
- py_module += "." + name
+ if name != "__main__":
+ py_module += "." + name
+ else:
+ # Incorrectly rewritten by pydevd debugger from "-m script" to "script".
+ py_module = py_script
rv.extend(("-m", py_module.lstrip(".")))
|
{"golden_diff": "diff --git a/src/werkzeug/_reloader.py b/src/werkzeug/_reloader.py\n--- a/src/werkzeug/_reloader.py\n+++ b/src/werkzeug/_reloader.py\n@@ -68,13 +68,15 @@\n a program other than python)\n \"\"\"\n rv = [sys.executable]\n- py_script = os.path.abspath(sys.argv[0])\n+ py_script = sys.argv[0]\n args = sys.argv[1:]\n # Need to look at main module to determine how it was executed.\n __main__ = sys.modules[\"__main__\"]\n \n if __main__.__package__ is None:\n # Executed a file, like \"python app.py\".\n+ py_script = os.path.abspath(py_script)\n+\n if os.name == \"nt\":\n # Windows entry points have \".exe\" extension and should be\n # called directly.\n@@ -101,11 +103,16 @@\n # TODO remove this once Flask no longer misbehaves\n args = sys.argv\n else:\n- py_module = __main__.__package__\n- name = os.path.splitext(os.path.basename(py_script))[0]\n+ if os.path.isfile(py_script):\n+ # Rewritten by Python from \"-m script\" to \"/path/to/script.py\".\n+ py_module = __main__.__package__\n+ name = os.path.splitext(os.path.basename(py_script))[0]\n \n- if name != \"__main__\":\n- py_module += \".\" + name\n+ if name != \"__main__\":\n+ py_module += \".\" + name\n+ else:\n+ # Incorrectly rewritten by pydevd debugger from \"-m script\" to \"script\".\n+ py_module = py_script\n \n rv.extend((\"-m\", py_module.lstrip(\".\")))\n", "issue": "Reloader picks wrong module when Flask is run with the pydev debugger\nThis is a weird situation where the pallets/werkzeug#1416 fix to make `python -m` reloading more correct actually exposed an issue with PyDev. It rewrites `python -m flask` to `python flask` (which is clearly not correct), while Python itself rewrites it to `python /path/to/flask_entry_point.py`. Werkzeug still correctly detects that we were run as a module, but since `sys.argv[0]` is no longer a path but the module name, it incorrectly decides that there is a module named `flask.flask` in the current directory.\r\n\r\n_Originally posted by @davidism in https://github.com/pallets/flask/issues/3297#issuecomment-510120836_\n", "before_files": [{"content": "import os\nimport subprocess\nimport sys\nimport threading\nimport time\nfrom itertools import chain\n\nfrom ._compat import iteritems\nfrom ._compat import PY2\nfrom ._compat import text_type\nfrom ._internal import _log\n\n\ndef _iter_module_files():\n \"\"\"This iterates over all relevant Python files. 
It goes through all\n loaded files from modules, all files in folders of already loaded modules\n as well as all files reachable through a package.\n \"\"\"\n # The list call is necessary on Python 3 in case the module\n # dictionary modifies during iteration.\n for module in list(sys.modules.values()):\n if module is None:\n continue\n filename = getattr(module, \"__file__\", None)\n if filename:\n if os.path.isdir(filename) and os.path.exists(\n os.path.join(filename, \"__init__.py\")\n ):\n filename = os.path.join(filename, \"__init__.py\")\n\n old = None\n while not os.path.isfile(filename):\n old = filename\n filename = os.path.dirname(filename)\n if filename == old:\n break\n else:\n if filename[-4:] in (\".pyc\", \".pyo\"):\n filename = filename[:-1]\n yield filename\n\n\ndef _find_observable_paths(extra_files=None):\n \"\"\"Finds all paths that should be observed.\"\"\"\n rv = set(\n os.path.dirname(os.path.abspath(x)) if os.path.isfile(x) else os.path.abspath(x)\n for x in sys.path\n )\n\n for filename in extra_files or ():\n rv.add(os.path.dirname(os.path.abspath(filename)))\n\n for module in list(sys.modules.values()):\n fn = getattr(module, \"__file__\", None)\n if fn is None:\n continue\n fn = os.path.abspath(fn)\n rv.add(os.path.dirname(fn))\n\n return _find_common_roots(rv)\n\n\ndef _get_args_for_reloading():\n \"\"\"Returns the executable. This contains a workaround for windows\n if the executable is incorrectly reported to not have the .exe\n extension which can cause bugs on reloading. This also contains\n a workaround for linux where the file is executable (possibly with\n a program other than python)\n \"\"\"\n rv = [sys.executable]\n py_script = os.path.abspath(sys.argv[0])\n args = sys.argv[1:]\n # Need to look at main module to determine how it was executed.\n __main__ = sys.modules[\"__main__\"]\n\n if __main__.__package__ is None:\n # Executed a file, like \"python app.py\".\n if os.name == \"nt\":\n # Windows entry points have \".exe\" extension and should be\n # called directly.\n if not os.path.exists(py_script) and os.path.exists(py_script + \".exe\"):\n py_script += \".exe\"\n\n if (\n os.path.splitext(rv[0])[1] == \".exe\"\n and os.path.splitext(py_script)[1] == \".exe\"\n ):\n rv.pop(0)\n\n elif os.path.isfile(py_script) and os.access(py_script, os.X_OK):\n # The file is marked as executable. Nix adds a wrapper that\n # shouldn't be called with the Python executable.\n rv.pop(0)\n\n rv.append(py_script)\n else:\n # Executed a module, like \"python -m werkzeug.serving\".\n if sys.argv[0] == \"-m\":\n # Flask works around previous behavior by putting\n # \"-m flask\" in sys.argv.\n # TODO remove this once Flask no longer misbehaves\n args = sys.argv\n else:\n py_module = __main__.__package__\n name = os.path.splitext(os.path.basename(py_script))[0]\n\n if name != \"__main__\":\n py_module += \".\" + name\n\n rv.extend((\"-m\", py_module.lstrip(\".\")))\n\n rv.extend(args)\n return rv\n\n\ndef _find_common_roots(paths):\n \"\"\"Out of some paths it finds the common roots that need monitoring.\"\"\"\n paths = [x.split(os.path.sep) for x in paths]\n root = {}\n for chunks in sorted(paths, key=len, reverse=True):\n node = root\n for chunk in chunks:\n node = node.setdefault(chunk, {})\n node.clear()\n\n rv = set()\n\n def _walk(node, path):\n for prefix, child in iteritems(node):\n _walk(child, path + (prefix,))\n if not node:\n rv.add(\"/\".join(path))\n\n _walk(root, ())\n return rv\n\n\nclass ReloaderLoop(object):\n name = None\n\n # monkeypatched by testsuite. 
wrapping with `staticmethod` is required in\n # case time.sleep has been replaced by a non-c function (e.g. by\n # `eventlet.monkey_patch`) before we get here\n _sleep = staticmethod(time.sleep)\n\n def __init__(self, extra_files=None, interval=1):\n self.extra_files = set(os.path.abspath(x) for x in extra_files or ())\n self.interval = interval\n\n def run(self):\n pass\n\n def restart_with_reloader(self):\n \"\"\"Spawn a new Python interpreter with the same arguments as this one,\n but running the reloader thread.\n \"\"\"\n while 1:\n _log(\"info\", \" * Restarting with %s\" % self.name)\n args = _get_args_for_reloading()\n\n # a weird bug on windows. sometimes unicode strings end up in the\n # environment and subprocess.call does not like this, encode them\n # to latin1 and continue.\n if os.name == \"nt\" and PY2:\n new_environ = {}\n for key, value in iteritems(os.environ):\n if isinstance(key, text_type):\n key = key.encode(\"iso-8859-1\")\n if isinstance(value, text_type):\n value = value.encode(\"iso-8859-1\")\n new_environ[key] = value\n else:\n new_environ = os.environ.copy()\n\n new_environ[\"WERKZEUG_RUN_MAIN\"] = \"true\"\n exit_code = subprocess.call(args, env=new_environ, close_fds=False)\n if exit_code != 3:\n return exit_code\n\n def trigger_reload(self, filename):\n self.log_reload(filename)\n sys.exit(3)\n\n def log_reload(self, filename):\n filename = os.path.abspath(filename)\n _log(\"info\", \" * Detected change in %r, reloading\" % filename)\n\n\nclass StatReloaderLoop(ReloaderLoop):\n name = \"stat\"\n\n def run(self):\n mtimes = {}\n while 1:\n for filename in chain(_iter_module_files(), self.extra_files):\n try:\n mtime = os.stat(filename).st_mtime\n except OSError:\n continue\n\n old_time = mtimes.get(filename)\n if old_time is None:\n mtimes[filename] = mtime\n continue\n elif mtime > old_time:\n self.trigger_reload(filename)\n self._sleep(self.interval)\n\n\nclass WatchdogReloaderLoop(ReloaderLoop):\n def __init__(self, *args, **kwargs):\n ReloaderLoop.__init__(self, *args, **kwargs)\n from watchdog.observers import Observer\n from watchdog.events import FileSystemEventHandler\n\n self.observable_paths = set()\n\n def _check_modification(filename):\n if filename in self.extra_files:\n self.trigger_reload(filename)\n dirname = os.path.dirname(filename)\n if dirname.startswith(tuple(self.observable_paths)):\n if filename.endswith((\".pyc\", \".pyo\", \".py\")):\n self.trigger_reload(filename)\n\n class _CustomHandler(FileSystemEventHandler):\n def on_created(self, event):\n _check_modification(event.src_path)\n\n def on_modified(self, event):\n _check_modification(event.src_path)\n\n def on_moved(self, event):\n _check_modification(event.src_path)\n _check_modification(event.dest_path)\n\n def on_deleted(self, event):\n _check_modification(event.src_path)\n\n reloader_name = Observer.__name__.lower()\n if reloader_name.endswith(\"observer\"):\n reloader_name = reloader_name[:-8]\n reloader_name += \" reloader\"\n\n self.name = reloader_name\n\n self.observer_class = Observer\n self.event_handler = _CustomHandler()\n self.should_reload = False\n\n def trigger_reload(self, filename):\n # This is called inside an event handler, which means throwing\n # SystemExit has no effect.\n # https://github.com/gorakhargosh/watchdog/issues/294\n self.should_reload = True\n self.log_reload(filename)\n\n def run(self):\n watches = {}\n observer = self.observer_class()\n observer.start()\n\n try:\n while not self.should_reload:\n to_delete = set(watches)\n paths = 
_find_observable_paths(self.extra_files)\n for path in paths:\n if path not in watches:\n try:\n watches[path] = observer.schedule(\n self.event_handler, path, recursive=True\n )\n except OSError:\n # Clear this path from list of watches We don't want\n # the same error message showing again in the next\n # iteration.\n watches[path] = None\n to_delete.discard(path)\n for path in to_delete:\n watch = watches.pop(path, None)\n if watch is not None:\n observer.unschedule(watch)\n self.observable_paths = paths\n self._sleep(self.interval)\n finally:\n observer.stop()\n observer.join()\n\n sys.exit(3)\n\n\nreloader_loops = {\"stat\": StatReloaderLoop, \"watchdog\": WatchdogReloaderLoop}\n\ntry:\n __import__(\"watchdog.observers\")\nexcept ImportError:\n reloader_loops[\"auto\"] = reloader_loops[\"stat\"]\nelse:\n reloader_loops[\"auto\"] = reloader_loops[\"watchdog\"]\n\n\ndef ensure_echo_on():\n \"\"\"Ensure that echo mode is enabled. Some tools such as PDB disable\n it which causes usability issues after reload.\"\"\"\n # tcgetattr will fail if stdin isn't a tty\n if not sys.stdin.isatty():\n return\n try:\n import termios\n except ImportError:\n return\n attributes = termios.tcgetattr(sys.stdin)\n if not attributes[3] & termios.ECHO:\n attributes[3] |= termios.ECHO\n termios.tcsetattr(sys.stdin, termios.TCSANOW, attributes)\n\n\ndef run_with_reloader(main_func, extra_files=None, interval=1, reloader_type=\"auto\"):\n \"\"\"Run the given function in an independent python interpreter.\"\"\"\n import signal\n\n reloader = reloader_loops[reloader_type](extra_files, interval)\n signal.signal(signal.SIGTERM, lambda *args: sys.exit(0))\n try:\n if os.environ.get(\"WERKZEUG_RUN_MAIN\") == \"true\":\n ensure_echo_on()\n t = threading.Thread(target=main_func, args=())\n t.setDaemon(True)\n t.start()\n reloader.run()\n else:\n sys.exit(reloader.restart_with_reloader())\n except KeyboardInterrupt:\n pass\n", "path": "src/werkzeug/_reloader.py"}], "after_files": [{"content": "import os\nimport subprocess\nimport sys\nimport threading\nimport time\nfrom itertools import chain\n\nfrom ._compat import iteritems\nfrom ._compat import PY2\nfrom ._compat import text_type\nfrom ._internal import _log\n\n\ndef _iter_module_files():\n \"\"\"This iterates over all relevant Python files. 
It goes through all\n loaded files from modules, all files in folders of already loaded modules\n as well as all files reachable through a package.\n \"\"\"\n # The list call is necessary on Python 3 in case the module\n # dictionary modifies during iteration.\n for module in list(sys.modules.values()):\n if module is None:\n continue\n filename = getattr(module, \"__file__\", None)\n if filename:\n if os.path.isdir(filename) and os.path.exists(\n os.path.join(filename, \"__init__.py\")\n ):\n filename = os.path.join(filename, \"__init__.py\")\n\n old = None\n while not os.path.isfile(filename):\n old = filename\n filename = os.path.dirname(filename)\n if filename == old:\n break\n else:\n if filename[-4:] in (\".pyc\", \".pyo\"):\n filename = filename[:-1]\n yield filename\n\n\ndef _find_observable_paths(extra_files=None):\n \"\"\"Finds all paths that should be observed.\"\"\"\n rv = set(\n os.path.dirname(os.path.abspath(x)) if os.path.isfile(x) else os.path.abspath(x)\n for x in sys.path\n )\n\n for filename in extra_files or ():\n rv.add(os.path.dirname(os.path.abspath(filename)))\n\n for module in list(sys.modules.values()):\n fn = getattr(module, \"__file__\", None)\n if fn is None:\n continue\n fn = os.path.abspath(fn)\n rv.add(os.path.dirname(fn))\n\n return _find_common_roots(rv)\n\n\ndef _get_args_for_reloading():\n \"\"\"Returns the executable. This contains a workaround for windows\n if the executable is incorrectly reported to not have the .exe\n extension which can cause bugs on reloading. This also contains\n a workaround for linux where the file is executable (possibly with\n a program other than python)\n \"\"\"\n rv = [sys.executable]\n py_script = sys.argv[0]\n args = sys.argv[1:]\n # Need to look at main module to determine how it was executed.\n __main__ = sys.modules[\"__main__\"]\n\n if __main__.__package__ is None:\n # Executed a file, like \"python app.py\".\n py_script = os.path.abspath(py_script)\n\n if os.name == \"nt\":\n # Windows entry points have \".exe\" extension and should be\n # called directly.\n if not os.path.exists(py_script) and os.path.exists(py_script + \".exe\"):\n py_script += \".exe\"\n\n if (\n os.path.splitext(rv[0])[1] == \".exe\"\n and os.path.splitext(py_script)[1] == \".exe\"\n ):\n rv.pop(0)\n\n elif os.path.isfile(py_script) and os.access(py_script, os.X_OK):\n # The file is marked as executable. 
Nix adds a wrapper that\n # shouldn't be called with the Python executable.\n rv.pop(0)\n\n rv.append(py_script)\n else:\n # Executed a module, like \"python -m werkzeug.serving\".\n if sys.argv[0] == \"-m\":\n # Flask works around previous behavior by putting\n # \"-m flask\" in sys.argv.\n # TODO remove this once Flask no longer misbehaves\n args = sys.argv\n else:\n if os.path.isfile(py_script):\n # Rewritten by Python from \"-m script\" to \"/path/to/script.py\".\n py_module = __main__.__package__\n name = os.path.splitext(os.path.basename(py_script))[0]\n\n if name != \"__main__\":\n py_module += \".\" + name\n else:\n # Incorrectly rewritten by pydevd debugger from \"-m script\" to \"script\".\n py_module = py_script\n\n rv.extend((\"-m\", py_module.lstrip(\".\")))\n\n rv.extend(args)\n return rv\n\n\ndef _find_common_roots(paths):\n \"\"\"Out of some paths it finds the common roots that need monitoring.\"\"\"\n paths = [x.split(os.path.sep) for x in paths]\n root = {}\n for chunks in sorted(paths, key=len, reverse=True):\n node = root\n for chunk in chunks:\n node = node.setdefault(chunk, {})\n node.clear()\n\n rv = set()\n\n def _walk(node, path):\n for prefix, child in iteritems(node):\n _walk(child, path + (prefix,))\n if not node:\n rv.add(\"/\".join(path))\n\n _walk(root, ())\n return rv\n\n\nclass ReloaderLoop(object):\n name = None\n\n # monkeypatched by testsuite. wrapping with `staticmethod` is required in\n # case time.sleep has been replaced by a non-c function (e.g. by\n # `eventlet.monkey_patch`) before we get here\n _sleep = staticmethod(time.sleep)\n\n def __init__(self, extra_files=None, interval=1):\n self.extra_files = set(os.path.abspath(x) for x in extra_files or ())\n self.interval = interval\n\n def run(self):\n pass\n\n def restart_with_reloader(self):\n \"\"\"Spawn a new Python interpreter with the same arguments as this one,\n but running the reloader thread.\n \"\"\"\n while 1:\n _log(\"info\", \" * Restarting with %s\" % self.name)\n args = _get_args_for_reloading()\n\n # a weird bug on windows. 
sometimes unicode strings end up in the\n # environment and subprocess.call does not like this, encode them\n # to latin1 and continue.\n if os.name == \"nt\" and PY2:\n new_environ = {}\n for key, value in iteritems(os.environ):\n if isinstance(key, text_type):\n key = key.encode(\"iso-8859-1\")\n if isinstance(value, text_type):\n value = value.encode(\"iso-8859-1\")\n new_environ[key] = value\n else:\n new_environ = os.environ.copy()\n\n new_environ[\"WERKZEUG_RUN_MAIN\"] = \"true\"\n exit_code = subprocess.call(args, env=new_environ, close_fds=False)\n if exit_code != 3:\n return exit_code\n\n def trigger_reload(self, filename):\n self.log_reload(filename)\n sys.exit(3)\n\n def log_reload(self, filename):\n filename = os.path.abspath(filename)\n _log(\"info\", \" * Detected change in %r, reloading\" % filename)\n\n\nclass StatReloaderLoop(ReloaderLoop):\n name = \"stat\"\n\n def run(self):\n mtimes = {}\n while 1:\n for filename in chain(_iter_module_files(), self.extra_files):\n try:\n mtime = os.stat(filename).st_mtime\n except OSError:\n continue\n\n old_time = mtimes.get(filename)\n if old_time is None:\n mtimes[filename] = mtime\n continue\n elif mtime > old_time:\n self.trigger_reload(filename)\n self._sleep(self.interval)\n\n\nclass WatchdogReloaderLoop(ReloaderLoop):\n def __init__(self, *args, **kwargs):\n ReloaderLoop.__init__(self, *args, **kwargs)\n from watchdog.observers import Observer\n from watchdog.events import FileSystemEventHandler\n\n self.observable_paths = set()\n\n def _check_modification(filename):\n if filename in self.extra_files:\n self.trigger_reload(filename)\n dirname = os.path.dirname(filename)\n if dirname.startswith(tuple(self.observable_paths)):\n if filename.endswith((\".pyc\", \".pyo\", \".py\")):\n self.trigger_reload(filename)\n\n class _CustomHandler(FileSystemEventHandler):\n def on_created(self, event):\n _check_modification(event.src_path)\n\n def on_modified(self, event):\n _check_modification(event.src_path)\n\n def on_moved(self, event):\n _check_modification(event.src_path)\n _check_modification(event.dest_path)\n\n def on_deleted(self, event):\n _check_modification(event.src_path)\n\n reloader_name = Observer.__name__.lower()\n if reloader_name.endswith(\"observer\"):\n reloader_name = reloader_name[:-8]\n reloader_name += \" reloader\"\n\n self.name = reloader_name\n\n self.observer_class = Observer\n self.event_handler = _CustomHandler()\n self.should_reload = False\n\n def trigger_reload(self, filename):\n # This is called inside an event handler, which means throwing\n # SystemExit has no effect.\n # https://github.com/gorakhargosh/watchdog/issues/294\n self.should_reload = True\n self.log_reload(filename)\n\n def run(self):\n watches = {}\n observer = self.observer_class()\n observer.start()\n\n try:\n while not self.should_reload:\n to_delete = set(watches)\n paths = _find_observable_paths(self.extra_files)\n for path in paths:\n if path not in watches:\n try:\n watches[path] = observer.schedule(\n self.event_handler, path, recursive=True\n )\n except OSError:\n # Clear this path from list of watches We don't want\n # the same error message showing again in the next\n # iteration.\n watches[path] = None\n to_delete.discard(path)\n for path in to_delete:\n watch = watches.pop(path, None)\n if watch is not None:\n observer.unschedule(watch)\n self.observable_paths = paths\n self._sleep(self.interval)\n finally:\n observer.stop()\n observer.join()\n\n sys.exit(3)\n\n\nreloader_loops = {\"stat\": StatReloaderLoop, \"watchdog\": 
WatchdogReloaderLoop}\n\ntry:\n __import__(\"watchdog.observers\")\nexcept ImportError:\n reloader_loops[\"auto\"] = reloader_loops[\"stat\"]\nelse:\n reloader_loops[\"auto\"] = reloader_loops[\"watchdog\"]\n\n\ndef ensure_echo_on():\n \"\"\"Ensure that echo mode is enabled. Some tools such as PDB disable\n it which causes usability issues after reload.\"\"\"\n # tcgetattr will fail if stdin isn't a tty\n if not sys.stdin.isatty():\n return\n try:\n import termios\n except ImportError:\n return\n attributes = termios.tcgetattr(sys.stdin)\n if not attributes[3] & termios.ECHO:\n attributes[3] |= termios.ECHO\n termios.tcsetattr(sys.stdin, termios.TCSANOW, attributes)\n\n\ndef run_with_reloader(main_func, extra_files=None, interval=1, reloader_type=\"auto\"):\n \"\"\"Run the given function in an independent python interpreter.\"\"\"\n import signal\n\n reloader = reloader_loops[reloader_type](extra_files, interval)\n signal.signal(signal.SIGTERM, lambda *args: sys.exit(0))\n try:\n if os.environ.get(\"WERKZEUG_RUN_MAIN\") == \"true\":\n ensure_echo_on()\n t = threading.Thread(target=main_func, args=())\n t.setDaemon(True)\n t.start()\n reloader.run()\n else:\n sys.exit(reloader.restart_with_reloader())\n except KeyboardInterrupt:\n pass\n", "path": "src/werkzeug/_reloader.py"}]}
| 3,769 | 392 |
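To make the fix in the record above easier to read on its own, here is a minimal standalone sketch of the branch the patch adds to `_get_args_for_reloading`: derive the module name from `__main__.__package__` plus the script filename only when `sys.argv[0]` is a real file, and otherwise treat it as the module name that pydevd already substituted. The helper name `_resolve_module_name` and the bare-function form are illustrative assumptions, not the actual werkzeug code.

```python
import os


def _resolve_module_name(py_script, package):
    # Sketch of the patched "-m" branch (illustrative, not werkzeug's code).
    if os.path.isfile(py_script):
        # Normal case: Python rewrote "-m script" to "/path/to/script.py".
        name = os.path.splitext(os.path.basename(py_script))[0]
        return package if name == "__main__" else package + "." + name
    # pydevd case: argv[0] was incorrectly rewritten to the bare module name.
    return py_script
```

Either way the caller then appends `("-m", module.lstrip("."))`, so the reloader re-executes the program the same way it was originally started.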
gh_patches_debug_16875
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-python-2105
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AttributeError: 'async_generator_athrow' object has no attribute '__qualname__'
### How do you use Sentry?
Sentry Saas (sentry.io)
### Version
1.19.1
### Steps to Reproduce
I'm trying to use the `asyncio` integration like this:
```python
sentry_sdk.init(dsn=os.environ.get("SENTRY_DSN"), traces_sample_rate=0.1, integrations=[AsyncioIntegration()])
```
I keep on getting a traceback that seems to be a Sentry-specific issue.
### Expected Result
No tracebacks repeatedly occur
### Actual Result
I see this traceback repeatedly printed in the logs:
```python
Task exception was never retrieved
future: <Task finished name='Task-1512' coro=<patch_asyncio.<locals>._sentry_task_factory.<locals>._coro_creating_hub_and_span() done, defined at /usr/local/lib/python3.9/site-packages/sentry_sdk/integrations/asyncio.py:34> exception=AttributeError("'async_generator_athrow' object has no attribute '__qualname__'")>
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/sentry_sdk/integrations/asyncio.py", line 40, in _coro_creating_hub_and_span
with hub.start_span(op=OP.FUNCTION, description=coro.__qualname__):
AttributeError: 'async_generator_athrow' object has no attribute '__qualname__'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sentry_sdk/integrations/asyncio.py`
Content:
```
1 from __future__ import absolute_import
2 import sys
3
4 from sentry_sdk._compat import reraise
5 from sentry_sdk.consts import OP
6 from sentry_sdk.hub import Hub
7 from sentry_sdk.integrations import Integration, DidNotEnable
8 from sentry_sdk._types import TYPE_CHECKING
9 from sentry_sdk.utils import event_from_exception
10
11 try:
12 import asyncio
13 from asyncio.tasks import Task
14 except ImportError:
15 raise DidNotEnable("asyncio not available")
16
17
18 if TYPE_CHECKING:
19 from typing import Any
20
21 from sentry_sdk._types import ExcInfo
22
23
24 def patch_asyncio():
25 # type: () -> None
26 orig_task_factory = None
27 try:
28 loop = asyncio.get_running_loop()
29 orig_task_factory = loop.get_task_factory()
30
31 def _sentry_task_factory(loop, coro):
32 # type: (Any, Any) -> Any
33
34 async def _coro_creating_hub_and_span():
35 # type: () -> Any
36 hub = Hub(Hub.current)
37 result = None
38
39 with hub:
40 with hub.start_span(op=OP.FUNCTION, description=coro.__qualname__):
41 try:
42 result = await coro
43 except Exception:
44 reraise(*_capture_exception(hub))
45
46 return result
47
48 # Trying to use user set task factory (if there is one)
49 if orig_task_factory:
50 return orig_task_factory(loop, _coro_creating_hub_and_span())
51
52 # The default task factory in `asyncio` does not have its own function
53 # but is just a couple of lines in `asyncio.base_events.create_task()`
54 # Those lines are copied here.
55
56 # WARNING:
57 # If the default behavior of the task creation in asyncio changes,
58 # this will break!
59 task = Task(_coro_creating_hub_and_span(), loop=loop)
60 if task._source_traceback: # type: ignore
61 del task._source_traceback[-1] # type: ignore
62
63 return task
64
65 loop.set_task_factory(_sentry_task_factory)
66 except RuntimeError:
67 # When there is no running loop, we have nothing to patch.
68 pass
69
70
71 def _capture_exception(hub):
72 # type: (Hub) -> ExcInfo
73 exc_info = sys.exc_info()
74
75 integration = hub.get_integration(AsyncioIntegration)
76 if integration is not None:
77 # If an integration is there, a client has to be there.
78 client = hub.client # type: Any
79
80 event, hint = event_from_exception(
81 exc_info,
82 client_options=client.options,
83 mechanism={"type": "asyncio", "handled": False},
84 )
85 hub.capture_event(event, hint=hint)
86
87 return exc_info
88
89
90 class AsyncioIntegration(Integration):
91 identifier = "asyncio"
92
93 @staticmethod
94 def setup_once():
95 # type: () -> None
96 patch_asyncio()
97
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sentry_sdk/integrations/asyncio.py b/sentry_sdk/integrations/asyncio.py
--- a/sentry_sdk/integrations/asyncio.py
+++ b/sentry_sdk/integrations/asyncio.py
@@ -21,6 +21,15 @@
from sentry_sdk._types import ExcInfo
+def get_name(coro):
+ # type: (Any) -> str
+ return (
+ getattr(coro, "__qualname__", None)
+ or getattr(coro, "__name__", None)
+ or "coroutine without __name__"
+ )
+
+
def patch_asyncio():
# type: () -> None
orig_task_factory = None
@@ -37,7 +46,7 @@
result = None
with hub:
- with hub.start_span(op=OP.FUNCTION, description=coro.__qualname__):
+ with hub.start_span(op=OP.FUNCTION, description=get_name(coro)):
try:
result = await coro
except Exception:
|
{"golden_diff": "diff --git a/sentry_sdk/integrations/asyncio.py b/sentry_sdk/integrations/asyncio.py\n--- a/sentry_sdk/integrations/asyncio.py\n+++ b/sentry_sdk/integrations/asyncio.py\n@@ -21,6 +21,15 @@\n from sentry_sdk._types import ExcInfo\n \n \n+def get_name(coro):\n+ # type: (Any) -> str\n+ return (\n+ getattr(coro, \"__qualname__\", None)\n+ or getattr(coro, \"__name__\", None)\n+ or \"coroutine without __name__\"\n+ )\n+\n+\n def patch_asyncio():\n # type: () -> None\n orig_task_factory = None\n@@ -37,7 +46,7 @@\n result = None\n \n with hub:\n- with hub.start_span(op=OP.FUNCTION, description=coro.__qualname__):\n+ with hub.start_span(op=OP.FUNCTION, description=get_name(coro)):\n try:\n result = await coro\n except Exception:\n", "issue": "AttributeError: 'async_generator_athrow' object has no attribute '__qualname__'\n### How do you use Sentry?\r\n\r\nSentry Saas (sentry.io)\r\n\r\n### Version\r\n\r\n1.19.1\r\n\r\n### Steps to Reproduce\r\n\r\nI'm trying to use the `asyncio` integration like this:\r\n\r\n```python\r\nsentry_sdk.init(dsn=os.environ.get(\"SENTRY_DSN\"), traces_sample_rate=0.1, integrations=[AsyncioIntegration()])\r\n```\r\n\r\nI keep on getting a traceback that seems to be a Sentry-specific issue.\r\n\r\n### Expected Result\r\n\r\nNo tracebacks repeatedly occur\r\n\r\n### Actual Result\r\n\r\nI see this traceback repeatedly printed in the logs:\r\n\r\n```python\r\nTask exception was never retrieved\r\nfuture: <Task finished name='Task-1512' coro=<patch_asyncio.<locals>._sentry_task_factory.<locals>._coro_creating_hub_and_span() done, defined at /usr/local/lib/python3.9/site-packages/sentry_sdk/integrations/asyncio.py:34> exception=AttributeError(\"'async_generator_athrow' object has no attribute '__qualname__'\")>\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.9/site-packages/sentry_sdk/integrations/asyncio.py\", line 40, in _coro_creating_hub_and_span\r\n with hub.start_span(op=OP.FUNCTION, description=coro.__qualname__):\r\nAttributeError: 'async_generator_athrow' object has no attribute '__qualname__'\r\n```\n", "before_files": [{"content": "from __future__ import absolute_import\nimport sys\n\nfrom sentry_sdk._compat import reraise\nfrom sentry_sdk.consts import OP\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.integrations import Integration, DidNotEnable\nfrom sentry_sdk._types import TYPE_CHECKING\nfrom sentry_sdk.utils import event_from_exception\n\ntry:\n import asyncio\n from asyncio.tasks import Task\nexcept ImportError:\n raise DidNotEnable(\"asyncio not available\")\n\n\nif TYPE_CHECKING:\n from typing import Any\n\n from sentry_sdk._types import ExcInfo\n\n\ndef patch_asyncio():\n # type: () -> None\n orig_task_factory = None\n try:\n loop = asyncio.get_running_loop()\n orig_task_factory = loop.get_task_factory()\n\n def _sentry_task_factory(loop, coro):\n # type: (Any, Any) -> Any\n\n async def _coro_creating_hub_and_span():\n # type: () -> Any\n hub = Hub(Hub.current)\n result = None\n\n with hub:\n with hub.start_span(op=OP.FUNCTION, description=coro.__qualname__):\n try:\n result = await coro\n except Exception:\n reraise(*_capture_exception(hub))\n\n return result\n\n # Trying to use user set task factory (if there is one)\n if orig_task_factory:\n return orig_task_factory(loop, _coro_creating_hub_and_span())\n\n # The default task factory in `asyncio` does not have its own function\n # but is just a couple of lines in `asyncio.base_events.create_task()`\n # Those lines are copied here.\n\n # WARNING:\n # If the 
default behavior of the task creation in asyncio changes,\n # this will break!\n task = Task(_coro_creating_hub_and_span(), loop=loop)\n if task._source_traceback: # type: ignore\n del task._source_traceback[-1] # type: ignore\n\n return task\n\n loop.set_task_factory(_sentry_task_factory)\n except RuntimeError:\n # When there is no running loop, we have nothing to patch.\n pass\n\n\ndef _capture_exception(hub):\n # type: (Hub) -> ExcInfo\n exc_info = sys.exc_info()\n\n integration = hub.get_integration(AsyncioIntegration)\n if integration is not None:\n # If an integration is there, a client has to be there.\n client = hub.client # type: Any\n\n event, hint = event_from_exception(\n exc_info,\n client_options=client.options,\n mechanism={\"type\": \"asyncio\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n\n return exc_info\n\n\nclass AsyncioIntegration(Integration):\n identifier = \"asyncio\"\n\n @staticmethod\n def setup_once():\n # type: () -> None\n patch_asyncio()\n", "path": "sentry_sdk/integrations/asyncio.py"}], "after_files": [{"content": "from __future__ import absolute_import\nimport sys\n\nfrom sentry_sdk._compat import reraise\nfrom sentry_sdk.consts import OP\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.integrations import Integration, DidNotEnable\nfrom sentry_sdk._types import TYPE_CHECKING\nfrom sentry_sdk.utils import event_from_exception\n\ntry:\n import asyncio\n from asyncio.tasks import Task\nexcept ImportError:\n raise DidNotEnable(\"asyncio not available\")\n\n\nif TYPE_CHECKING:\n from typing import Any\n\n from sentry_sdk._types import ExcInfo\n\n\ndef get_name(coro):\n # type: (Any) -> str\n return (\n getattr(coro, \"__qualname__\", None)\n or getattr(coro, \"__name__\", None)\n or \"coroutine without __name__\"\n )\n\n\ndef patch_asyncio():\n # type: () -> None\n orig_task_factory = None\n try:\n loop = asyncio.get_running_loop()\n orig_task_factory = loop.get_task_factory()\n\n def _sentry_task_factory(loop, coro):\n # type: (Any, Any) -> Any\n\n async def _coro_creating_hub_and_span():\n # type: () -> Any\n hub = Hub(Hub.current)\n result = None\n\n with hub:\n with hub.start_span(op=OP.FUNCTION, description=get_name(coro)):\n try:\n result = await coro\n except Exception:\n reraise(*_capture_exception(hub))\n\n return result\n\n # Trying to use user set task factory (if there is one)\n if orig_task_factory:\n return orig_task_factory(loop, _coro_creating_hub_and_span())\n\n # The default task factory in `asyncio` does not have its own function\n # but is just a couple of lines in `asyncio.base_events.create_task()`\n # Those lines are copied here.\n\n # WARNING:\n # If the default behavior of the task creation in asyncio changes,\n # this will break!\n task = Task(_coro_creating_hub_and_span(), loop=loop)\n if task._source_traceback: # type: ignore\n del task._source_traceback[-1] # type: ignore\n\n return task\n\n loop.set_task_factory(_sentry_task_factory)\n except RuntimeError:\n # When there is no running loop, we have nothing to patch.\n pass\n\n\ndef _capture_exception(hub):\n # type: (Hub) -> ExcInfo\n exc_info = sys.exc_info()\n\n integration = hub.get_integration(AsyncioIntegration)\n if integration is not None:\n # If an integration is there, a client has to be there.\n client = hub.client # type: Any\n\n event, hint = event_from_exception(\n exc_info,\n client_options=client.options,\n mechanism={\"type\": \"asyncio\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n\n return exc_info\n\n\nclass 
AsyncioIntegration(Integration):\n identifier = \"asyncio\"\n\n @staticmethod\n def setup_once():\n # type: () -> None\n patch_asyncio()\n", "path": "sentry_sdk/integrations/asyncio.py"}]}
| 1,422 | 235 |
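For readability, the fallback helper that the diff above introduces is repeated here as a self-contained snippet. Objects such as `async_generator_athrow` define neither `__qualname__` nor `__name__`, so the span description degrades to a fixed placeholder instead of raising an `AttributeError`.

```python
def get_name(coro):
    # Prefer the most specific name available, then fall back to a placeholder
    # for awaitables (e.g. async_generator_athrow) that expose neither attribute.
    return (
        getattr(coro, "__qualname__", None)
        or getattr(coro, "__name__", None)
        or "coroutine without __name__"
    )
```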
gh_patches_debug_33915
|
rasdani/github-patches
|
git_diff
|
iterative__dvc-8209
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`data status`: update cli hints
What I'd suggest here is to change the hint in "not in cache" to always use `fetch` (maybe some specialization for no cache with `dvc pull`), and then for uncommitted changes, we can show two hints like how git does:
```console
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
```
```console
(use "dvc commit <file>..." to track changes)
(use "dvc checkout <file>..." to restore changes)
```
There are some questionable behaviours in checkout, so it may not always work without `--force`, but that should be fixed separately, and in checkout itself.
_Originally posted by @skshetry in https://github.com/iterative/dvc/issues/8170#issuecomment-1227310120_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dvc/commands/data.py`
Content:
```
1 import argparse
2 import logging
3 from typing import TYPE_CHECKING
4
5 from funcy import chunks, compact, log_durations
6
7 from dvc.cli.command import CmdBase
8 from dvc.cli.utils import append_doc_link, fix_subparsers
9 from dvc.ui import ui
10 from dvc.utils import colorize
11
12 if TYPE_CHECKING:
13 from dvc.repo.data import Status as DataStatus
14
15
16 logger = logging.getLogger(__name__)
17
18
19 class CmdDataStatus(CmdBase):
20 COLORS = {
21 "not_in_cache": "red",
22 "committed": "green",
23 "uncommitted": "yellow",
24 "untracked": "cyan",
25 }
26 LABELS = {
27 "not_in_cache": "Not in cache",
28 "committed": "DVC committed changes",
29 "uncommitted": "DVC uncommitted changes",
30 "untracked": "Untracked files",
31 "unchanged": "DVC unchanged files",
32 }
33 HINTS = {
34 "not_in_cache": 'use "dvc pull <file>..." to download files',
35 "committed": "git commit the corresponding dvc files "
36 "to update the repo",
37 "uncommitted": 'use "dvc commit <file>..." to track changes',
38 "untracked": 'use "git add <file> ..." or '
39 'dvc add <file>..." to commit to git or to dvc',
40 "git_dirty": "there are {}changes not tracked by dvc, "
41 'use "git status" to see',
42 }
43
44 @staticmethod
45 def _process_status(status: "DataStatus"):
46 """Flatten stage status, and filter empty stage status contents."""
47 for stage, stage_status in status.items():
48 items = stage_status
49 if isinstance(stage_status, dict):
50 items = {
51 file: state
52 for state, files in stage_status.items()
53 for file in files
54 }
55 if not items:
56 continue
57 yield stage, items
58
59 @classmethod
60 def _show_status(cls, status: "DataStatus") -> int:
61 git_info = status.pop("git") # type: ignore[misc]
62 result = dict(cls._process_status(status))
63 if not result:
64 no_changes = "No changes"
65 if git_info.get("is_empty", False):
66 no_changes += " in an empty git repo"
67 ui.write(f"{no_changes}.")
68
69 for idx, (stage, stage_status) in enumerate(result.items()):
70 if idx:
71 ui.write()
72
73 label = cls.LABELS.get(stage, stage.capitalize() + " files")
74 header = f"{label}:"
75 color = cls.COLORS.get(stage, None)
76
77 ui.write(header)
78 if hint := cls.HINTS.get(stage):
79 ui.write(f" ({hint})")
80
81 if isinstance(stage_status, dict):
82 items = [
83 ": ".join([state, file])
84 for file, state in stage_status.items()
85 ]
86 else:
87 items = stage_status
88
89 tabs = "\t".expandtabs(8)
90 for chunk in chunks(1000, items):
91 out = "\n".join(tabs + item for item in chunk)
92 ui.write(colorize(out, color))
93
94 if (hint := cls.HINTS.get("git_dirty")) and git_info.get("is_dirty"):
95 message = hint.format("other " if result else "")
96 ui.write(f"[blue]({message})[/]", styled=True)
97 return 0
98
99 def run(self) -> int:
100 with log_durations(logger.trace, "in data_status"): # type: ignore
101 status = self.repo.data_status(
102 granular=self.args.granular,
103 untracked_files=self.args.untracked_files,
104 )
105
106 if not self.args.unchanged:
107 status.pop("unchanged") # type: ignore[misc]
108 if self.args.untracked_files == "no":
109 status.pop("untracked")
110 if self.args.json:
111 status.pop("git") # type: ignore[misc]
112 ui.write_json(compact(status))
113 return 0
114 return self._show_status(status)
115
116
117 def add_parser(subparsers, parent_parser):
118 data_parser = subparsers.add_parser(
119 "data",
120 parents=[parent_parser],
121 formatter_class=argparse.RawDescriptionHelpFormatter,
122 )
123 data_subparsers = data_parser.add_subparsers(
124 dest="cmd",
125 help="Use `dvc data CMD --help` to display command-specific help.",
126 )
127 fix_subparsers(data_subparsers)
128
129 DATA_STATUS_HELP = (
130 "Show changes between the last git commit, "
131 "the dvcfiles and the workspace."
132 )
133 data_status_parser = data_subparsers.add_parser(
134 "status",
135 parents=[parent_parser],
136 description=append_doc_link(DATA_STATUS_HELP, "data/status"),
137 formatter_class=argparse.RawDescriptionHelpFormatter,
138 help=DATA_STATUS_HELP,
139 )
140 data_status_parser.add_argument(
141 "--json",
142 action="store_true",
143 default=False,
144 help="Show output in JSON format.",
145 )
146 data_status_parser.add_argument(
147 "--show-json",
148 action="store_true",
149 default=False,
150 dest="json",
151 help=argparse.SUPPRESS,
152 )
153 data_status_parser.add_argument(
154 "--granular",
155 action="store_true",
156 default=False,
157 help="Show granular file-level info for DVC-tracked directories.",
158 )
159 data_status_parser.add_argument(
160 "--unchanged",
161 action="store_true",
162 default=False,
163 help="Show unmodified DVC-tracked files.",
164 )
165 data_status_parser.add_argument(
166 "--untracked-files",
167 choices=["no", "all"],
168 default="no",
169 const="all",
170 nargs="?",
171 help="Show untracked files.",
172 )
173 data_status_parser.set_defaults(func=CmdDataStatus)
174
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/dvc/commands/data.py b/dvc/commands/data.py
--- a/dvc/commands/data.py
+++ b/dvc/commands/data.py
@@ -31,14 +31,22 @@
"unchanged": "DVC unchanged files",
}
HINTS = {
- "not_in_cache": 'use "dvc pull <file>..." to download files',
- "committed": "git commit the corresponding dvc files "
- "to update the repo",
- "uncommitted": 'use "dvc commit <file>..." to track changes',
- "untracked": 'use "git add <file> ..." or '
- 'dvc add <file>..." to commit to git or to dvc',
- "git_dirty": "there are {}changes not tracked by dvc, "
- 'use "git status" to see',
+ "not_in_cache": ('use "dvc fetch <file>..." to download files',),
+ "committed": (
+ "git commit the corresponding dvc files to update the repo",
+ ),
+ "uncommitted": (
+ 'use "dvc commit <file>..." to track changes',
+ 'use "dvc checkout <file>..." to discard changes',
+ ),
+ "untracked": (
+ 'use "git add <file> ..." or '
+ 'dvc add <file>..." to commit to git or to dvc',
+ ),
+ "git_dirty": (
+ "there are {}changes not tracked by dvc, "
+ 'use "git status" to see',
+ ),
}
@staticmethod
@@ -75,8 +83,9 @@
color = cls.COLORS.get(stage, None)
ui.write(header)
- if hint := cls.HINTS.get(stage):
- ui.write(f" ({hint})")
+ if hints := cls.HINTS.get(stage):
+ for hint in hints:
+ ui.write(f" ({hint})")
if isinstance(stage_status, dict):
items = [
@@ -91,9 +100,10 @@
out = "\n".join(tabs + item for item in chunk)
ui.write(colorize(out, color))
- if (hint := cls.HINTS.get("git_dirty")) and git_info.get("is_dirty"):
- message = hint.format("other " if result else "")
- ui.write(f"[blue]({message})[/]", styled=True)
+ if (hints := cls.HINTS.get("git_dirty")) and git_info.get("is_dirty"):
+ for hint in hints:
+ message = hint.format("other " if result else "")
+ ui.write(f"[blue]({message})[/]", styled=True)
return 0
def run(self) -> int:
|
{"golden_diff": "diff --git a/dvc/commands/data.py b/dvc/commands/data.py\n--- a/dvc/commands/data.py\n+++ b/dvc/commands/data.py\n@@ -31,14 +31,22 @@\n \"unchanged\": \"DVC unchanged files\",\n }\n HINTS = {\n- \"not_in_cache\": 'use \"dvc pull <file>...\" to download files',\n- \"committed\": \"git commit the corresponding dvc files \"\n- \"to update the repo\",\n- \"uncommitted\": 'use \"dvc commit <file>...\" to track changes',\n- \"untracked\": 'use \"git add <file> ...\" or '\n- 'dvc add <file>...\" to commit to git or to dvc',\n- \"git_dirty\": \"there are {}changes not tracked by dvc, \"\n- 'use \"git status\" to see',\n+ \"not_in_cache\": ('use \"dvc fetch <file>...\" to download files',),\n+ \"committed\": (\n+ \"git commit the corresponding dvc files to update the repo\",\n+ ),\n+ \"uncommitted\": (\n+ 'use \"dvc commit <file>...\" to track changes',\n+ 'use \"dvc checkout <file>...\" to discard changes',\n+ ),\n+ \"untracked\": (\n+ 'use \"git add <file> ...\" or '\n+ 'dvc add <file>...\" to commit to git or to dvc',\n+ ),\n+ \"git_dirty\": (\n+ \"there are {}changes not tracked by dvc, \"\n+ 'use \"git status\" to see',\n+ ),\n }\n \n @staticmethod\n@@ -75,8 +83,9 @@\n color = cls.COLORS.get(stage, None)\n \n ui.write(header)\n- if hint := cls.HINTS.get(stage):\n- ui.write(f\" ({hint})\")\n+ if hints := cls.HINTS.get(stage):\n+ for hint in hints:\n+ ui.write(f\" ({hint})\")\n \n if isinstance(stage_status, dict):\n items = [\n@@ -91,9 +100,10 @@\n out = \"\\n\".join(tabs + item for item in chunk)\n ui.write(colorize(out, color))\n \n- if (hint := cls.HINTS.get(\"git_dirty\")) and git_info.get(\"is_dirty\"):\n- message = hint.format(\"other \" if result else \"\")\n- ui.write(f\"[blue]({message})[/]\", styled=True)\n+ if (hints := cls.HINTS.get(\"git_dirty\")) and git_info.get(\"is_dirty\"):\n+ for hint in hints:\n+ message = hint.format(\"other \" if result else \"\")\n+ ui.write(f\"[blue]({message})[/]\", styled=True)\n return 0\n \n def run(self) -> int:\n", "issue": "`data status`: update cli hints\nWhat I'd suggest here is to change the hint in \"not in cache\" to always use `fetch` (maybe some specialization for no cache with `dvc pull`), and then for uncommitted changes, we can show two hints like how git does:\r\n\r\n```console\r\n (use \"git add <file>...\" to update what will be committed)\r\n (use \"git restore <file>...\" to discard changes in working directory)\r\n```\r\n```console\r\n (use \"dvc commit <file>...\" to track changes)\r\n (use \"dvc checkout <file>...\" to restore changes)\r\n```\r\n\r\nThere are some questionable behaviours in checkout, so it may not always work without `--force`, but that should be fixed separately, and in checkout itself.\r\n\r\n_Originally posted by @skshetry in https://github.com/iterative/dvc/issues/8170#issuecomment-1227310120_\n", "before_files": [{"content": "import argparse\nimport logging\nfrom typing import TYPE_CHECKING\n\nfrom funcy import chunks, compact, log_durations\n\nfrom dvc.cli.command import CmdBase\nfrom dvc.cli.utils import append_doc_link, fix_subparsers\nfrom dvc.ui import ui\nfrom dvc.utils import colorize\n\nif TYPE_CHECKING:\n from dvc.repo.data import Status as DataStatus\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass CmdDataStatus(CmdBase):\n COLORS = {\n \"not_in_cache\": \"red\",\n \"committed\": \"green\",\n \"uncommitted\": \"yellow\",\n \"untracked\": \"cyan\",\n }\n LABELS = {\n \"not_in_cache\": \"Not in cache\",\n \"committed\": \"DVC committed changes\",\n \"uncommitted\": \"DVC 
uncommitted changes\",\n \"untracked\": \"Untracked files\",\n \"unchanged\": \"DVC unchanged files\",\n }\n HINTS = {\n \"not_in_cache\": 'use \"dvc pull <file>...\" to download files',\n \"committed\": \"git commit the corresponding dvc files \"\n \"to update the repo\",\n \"uncommitted\": 'use \"dvc commit <file>...\" to track changes',\n \"untracked\": 'use \"git add <file> ...\" or '\n 'dvc add <file>...\" to commit to git or to dvc',\n \"git_dirty\": \"there are {}changes not tracked by dvc, \"\n 'use \"git status\" to see',\n }\n\n @staticmethod\n def _process_status(status: \"DataStatus\"):\n \"\"\"Flatten stage status, and filter empty stage status contents.\"\"\"\n for stage, stage_status in status.items():\n items = stage_status\n if isinstance(stage_status, dict):\n items = {\n file: state\n for state, files in stage_status.items()\n for file in files\n }\n if not items:\n continue\n yield stage, items\n\n @classmethod\n def _show_status(cls, status: \"DataStatus\") -> int:\n git_info = status.pop(\"git\") # type: ignore[misc]\n result = dict(cls._process_status(status))\n if not result:\n no_changes = \"No changes\"\n if git_info.get(\"is_empty\", False):\n no_changes += \" in an empty git repo\"\n ui.write(f\"{no_changes}.\")\n\n for idx, (stage, stage_status) in enumerate(result.items()):\n if idx:\n ui.write()\n\n label = cls.LABELS.get(stage, stage.capitalize() + \" files\")\n header = f\"{label}:\"\n color = cls.COLORS.get(stage, None)\n\n ui.write(header)\n if hint := cls.HINTS.get(stage):\n ui.write(f\" ({hint})\")\n\n if isinstance(stage_status, dict):\n items = [\n \": \".join([state, file])\n for file, state in stage_status.items()\n ]\n else:\n items = stage_status\n\n tabs = \"\\t\".expandtabs(8)\n for chunk in chunks(1000, items):\n out = \"\\n\".join(tabs + item for item in chunk)\n ui.write(colorize(out, color))\n\n if (hint := cls.HINTS.get(\"git_dirty\")) and git_info.get(\"is_dirty\"):\n message = hint.format(\"other \" if result else \"\")\n ui.write(f\"[blue]({message})[/]\", styled=True)\n return 0\n\n def run(self) -> int:\n with log_durations(logger.trace, \"in data_status\"): # type: ignore\n status = self.repo.data_status(\n granular=self.args.granular,\n untracked_files=self.args.untracked_files,\n )\n\n if not self.args.unchanged:\n status.pop(\"unchanged\") # type: ignore[misc]\n if self.args.untracked_files == \"no\":\n status.pop(\"untracked\")\n if self.args.json:\n status.pop(\"git\") # type: ignore[misc]\n ui.write_json(compact(status))\n return 0\n return self._show_status(status)\n\n\ndef add_parser(subparsers, parent_parser):\n data_parser = subparsers.add_parser(\n \"data\",\n parents=[parent_parser],\n formatter_class=argparse.RawDescriptionHelpFormatter,\n )\n data_subparsers = data_parser.add_subparsers(\n dest=\"cmd\",\n help=\"Use `dvc data CMD --help` to display command-specific help.\",\n )\n fix_subparsers(data_subparsers)\n\n DATA_STATUS_HELP = (\n \"Show changes between the last git commit, \"\n \"the dvcfiles and the workspace.\"\n )\n data_status_parser = data_subparsers.add_parser(\n \"status\",\n parents=[parent_parser],\n description=append_doc_link(DATA_STATUS_HELP, \"data/status\"),\n formatter_class=argparse.RawDescriptionHelpFormatter,\n help=DATA_STATUS_HELP,\n )\n data_status_parser.add_argument(\n \"--json\",\n action=\"store_true\",\n default=False,\n help=\"Show output in JSON format.\",\n )\n data_status_parser.add_argument(\n \"--show-json\",\n action=\"store_true\",\n default=False,\n dest=\"json\",\n 
help=argparse.SUPPRESS,\n )\n data_status_parser.add_argument(\n \"--granular\",\n action=\"store_true\",\n default=False,\n help=\"Show granular file-level info for DVC-tracked directories.\",\n )\n data_status_parser.add_argument(\n \"--unchanged\",\n action=\"store_true\",\n default=False,\n help=\"Show unmodified DVC-tracked files.\",\n )\n data_status_parser.add_argument(\n \"--untracked-files\",\n choices=[\"no\", \"all\"],\n default=\"no\",\n const=\"all\",\n nargs=\"?\",\n help=\"Show untracked files.\",\n )\n data_status_parser.set_defaults(func=CmdDataStatus)\n", "path": "dvc/commands/data.py"}], "after_files": [{"content": "import argparse\nimport logging\nfrom typing import TYPE_CHECKING\n\nfrom funcy import chunks, compact, log_durations\n\nfrom dvc.cli.command import CmdBase\nfrom dvc.cli.utils import append_doc_link, fix_subparsers\nfrom dvc.ui import ui\nfrom dvc.utils import colorize\n\nif TYPE_CHECKING:\n from dvc.repo.data import Status as DataStatus\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass CmdDataStatus(CmdBase):\n COLORS = {\n \"not_in_cache\": \"red\",\n \"committed\": \"green\",\n \"uncommitted\": \"yellow\",\n \"untracked\": \"cyan\",\n }\n LABELS = {\n \"not_in_cache\": \"Not in cache\",\n \"committed\": \"DVC committed changes\",\n \"uncommitted\": \"DVC uncommitted changes\",\n \"untracked\": \"Untracked files\",\n \"unchanged\": \"DVC unchanged files\",\n }\n HINTS = {\n \"not_in_cache\": ('use \"dvc fetch <file>...\" to download files',),\n \"committed\": (\n \"git commit the corresponding dvc files to update the repo\",\n ),\n \"uncommitted\": (\n 'use \"dvc commit <file>...\" to track changes',\n 'use \"dvc checkout <file>...\" to discard changes',\n ),\n \"untracked\": (\n 'use \"git add <file> ...\" or '\n 'dvc add <file>...\" to commit to git or to dvc',\n ),\n \"git_dirty\": (\n \"there are {}changes not tracked by dvc, \"\n 'use \"git status\" to see',\n ),\n }\n\n @staticmethod\n def _process_status(status: \"DataStatus\"):\n \"\"\"Flatten stage status, and filter empty stage status contents.\"\"\"\n for stage, stage_status in status.items():\n items = stage_status\n if isinstance(stage_status, dict):\n items = {\n file: state\n for state, files in stage_status.items()\n for file in files\n }\n if not items:\n continue\n yield stage, items\n\n @classmethod\n def _show_status(cls, status: \"DataStatus\") -> int:\n git_info = status.pop(\"git\") # type: ignore[misc]\n result = dict(cls._process_status(status))\n if not result:\n no_changes = \"No changes\"\n if git_info.get(\"is_empty\", False):\n no_changes += \" in an empty git repo\"\n ui.write(f\"{no_changes}.\")\n\n for idx, (stage, stage_status) in enumerate(result.items()):\n if idx:\n ui.write()\n\n label = cls.LABELS.get(stage, stage.capitalize() + \" files\")\n header = f\"{label}:\"\n color = cls.COLORS.get(stage, None)\n\n ui.write(header)\n if hints := cls.HINTS.get(stage):\n for hint in hints:\n ui.write(f\" ({hint})\")\n\n if isinstance(stage_status, dict):\n items = [\n \": \".join([state, file])\n for file, state in stage_status.items()\n ]\n else:\n items = stage_status\n\n tabs = \"\\t\".expandtabs(8)\n for chunk in chunks(1000, items):\n out = \"\\n\".join(tabs + item for item in chunk)\n ui.write(colorize(out, color))\n\n if (hints := cls.HINTS.get(\"git_dirty\")) and git_info.get(\"is_dirty\"):\n for hint in hints:\n message = hint.format(\"other \" if result else \"\")\n ui.write(f\"[blue]({message})[/]\", styled=True)\n return 0\n\n def run(self) -> int:\n with 
log_durations(logger.trace, \"in data_status\"): # type: ignore\n status = self.repo.data_status(\n granular=self.args.granular,\n untracked_files=self.args.untracked_files,\n )\n\n if not self.args.unchanged:\n status.pop(\"unchanged\") # type: ignore[misc]\n if self.args.untracked_files == \"no\":\n status.pop(\"untracked\")\n if self.args.json:\n status.pop(\"git\") # type: ignore[misc]\n ui.write_json(compact(status))\n return 0\n return self._show_status(status)\n\n\ndef add_parser(subparsers, parent_parser):\n data_parser = subparsers.add_parser(\n \"data\",\n parents=[parent_parser],\n formatter_class=argparse.RawDescriptionHelpFormatter,\n )\n data_subparsers = data_parser.add_subparsers(\n dest=\"cmd\",\n help=\"Use `dvc data CMD --help` to display command-specific help.\",\n )\n fix_subparsers(data_subparsers)\n\n DATA_STATUS_HELP = (\n \"Show changes between the last git commit, \"\n \"the dvcfiles and the workspace.\"\n )\n data_status_parser = data_subparsers.add_parser(\n \"status\",\n parents=[parent_parser],\n description=append_doc_link(DATA_STATUS_HELP, \"data/status\"),\n formatter_class=argparse.RawDescriptionHelpFormatter,\n help=DATA_STATUS_HELP,\n )\n data_status_parser.add_argument(\n \"--json\",\n action=\"store_true\",\n default=False,\n help=\"Show output in JSON format.\",\n )\n data_status_parser.add_argument(\n \"--show-json\",\n action=\"store_true\",\n default=False,\n dest=\"json\",\n help=argparse.SUPPRESS,\n )\n data_status_parser.add_argument(\n \"--granular\",\n action=\"store_true\",\n default=False,\n help=\"Show granular file-level info for DVC-tracked directories.\",\n )\n data_status_parser.add_argument(\n \"--unchanged\",\n action=\"store_true\",\n default=False,\n help=\"Show unmodified DVC-tracked files.\",\n )\n data_status_parser.add_argument(\n \"--untracked-files\",\n choices=[\"no\", \"all\"],\n default=\"no\",\n const=\"all\",\n nargs=\"?\",\n help=\"Show untracked files.\",\n )\n data_status_parser.set_defaults(func=CmdDataStatus)\n", "path": "dvc/commands/data.py"}]}
| 2,144 | 628 |
gh_patches_debug_8350 | rasdani/github-patches | git_diff | getsentry__sentry-5984 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Auto assign should occur as actor
When using 'Fixes XXX' annotation in a commit, I noticed that while Sentry auto assigned to me (expected), it did so on behalf of itself instead of my user.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/receivers/releases.py`
Content:
```
1 from __future__ import absolute_import, print_function
2
3 from django.db import IntegrityError, transaction
4 from django.db.models.signals import post_save
5
6 from sentry.models import (
7 Activity, Commit, GroupAssignee, GroupCommitResolution, Release, TagValue
8 )
9 from sentry.tasks.clear_expired_resolutions import clear_expired_resolutions
10
11
12 def ensure_release_exists(instance, created, **kwargs):
13 if instance.key != 'sentry:release':
14 return
15
16 if instance.data and instance.data.get('release_id'):
17 return
18
19 try:
20 with transaction.atomic():
21 release = Release.objects.create(
22 organization_id=instance.project.organization_id,
23 version=instance.value,
24 date_added=instance.first_seen,
25 )
26 except IntegrityError:
27 release = Release.objects.get(
28 organization_id=instance.project.organization_id,
29 version=instance.value,
30 )
31 release.update(date_added=instance.first_seen)
32 else:
33 instance.update(data={'release_id': release.id})
34
35 release.add_project(instance.project)
36
37
38 def resolve_group_resolutions(instance, created, **kwargs):
39 if not created:
40 return
41
42 clear_expired_resolutions.delay(release_id=instance.id)
43
44
45 def resolved_in_commit(instance, created, **kwargs):
46 # TODO(dcramer): we probably should support an updated message
47 if not created:
48 return
49
50 groups = instance.find_referenced_groups()
51 for group in groups:
52 try:
53 with transaction.atomic():
54 GroupCommitResolution.objects.create(
55 group_id=group.id,
56 commit_id=instance.id,
57 )
58 if instance.author:
59 user_list = list(instance.author.find_users())
60 else:
61 user_list = ()
62 if user_list:
63 Activity.objects.create(
64 project_id=group.project_id,
65 group=group,
66 type=Activity.SET_RESOLVED_IN_COMMIT,
67 ident=instance.id,
68 user=user_list[0],
69 data={
70 'commit': instance.id,
71 }
72 )
73 GroupAssignee.objects.assign(group=group, assigned_to=user_list[0])
74 else:
75 Activity.objects.create(
76 project_id=group.project_id,
77 group=group,
78 type=Activity.SET_RESOLVED_IN_COMMIT,
79 ident=instance.id,
80 data={
81 'commit': instance.id,
82 }
83 )
84 except IntegrityError:
85 pass
86
87
88 post_save.connect(
89 resolve_group_resolutions, sender=Release, dispatch_uid="resolve_group_resolutions", weak=False
90 )
91
92 post_save.connect(
93 ensure_release_exists, sender=TagValue, dispatch_uid="ensure_release_exists", weak=False
94 )
95
96 post_save.connect(
97 resolved_in_commit,
98 sender=Commit,
99 dispatch_uid="resolved_in_commit",
100 weak=False,
101 )
102
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/sentry/receivers/releases.py b/src/sentry/receivers/releases.py
--- a/src/sentry/receivers/releases.py
+++ b/src/sentry/receivers/releases.py
@@ -70,7 +70,8 @@
'commit': instance.id,
}
)
- GroupAssignee.objects.assign(group=group, assigned_to=user_list[0])
+ GroupAssignee.objects.assign(
+ group=group, assigned_to=user_list[0], acting_user=user_list[0])
else:
Activity.objects.create(
project_id=group.project_id,
|
{"golden_diff": "diff --git a/src/sentry/receivers/releases.py b/src/sentry/receivers/releases.py\n--- a/src/sentry/receivers/releases.py\n+++ b/src/sentry/receivers/releases.py\n@@ -70,7 +70,8 @@\n 'commit': instance.id,\n }\n )\n- GroupAssignee.objects.assign(group=group, assigned_to=user_list[0])\n+ GroupAssignee.objects.assign(\n+ group=group, assigned_to=user_list[0], acting_user=user_list[0])\n else:\n Activity.objects.create(\n project_id=group.project_id,\n", "issue": "Auto assign should occur as actor\nWhen using 'Fixes XXX' annotation in a commit, I noticed that while Sentry auto assigned to me (expected), it did so on behalf of itself instead of my user.\r\n\r\n\r\n\n", "before_files": [{"content": "from __future__ import absolute_import, print_function\n\nfrom django.db import IntegrityError, transaction\nfrom django.db.models.signals import post_save\n\nfrom sentry.models import (\n Activity, Commit, GroupAssignee, GroupCommitResolution, Release, TagValue\n)\nfrom sentry.tasks.clear_expired_resolutions import clear_expired_resolutions\n\n\ndef ensure_release_exists(instance, created, **kwargs):\n if instance.key != 'sentry:release':\n return\n\n if instance.data and instance.data.get('release_id'):\n return\n\n try:\n with transaction.atomic():\n release = Release.objects.create(\n organization_id=instance.project.organization_id,\n version=instance.value,\n date_added=instance.first_seen,\n )\n except IntegrityError:\n release = Release.objects.get(\n organization_id=instance.project.organization_id,\n version=instance.value,\n )\n release.update(date_added=instance.first_seen)\n else:\n instance.update(data={'release_id': release.id})\n\n release.add_project(instance.project)\n\n\ndef resolve_group_resolutions(instance, created, **kwargs):\n if not created:\n return\n\n clear_expired_resolutions.delay(release_id=instance.id)\n\n\ndef resolved_in_commit(instance, created, **kwargs):\n # TODO(dcramer): we probably should support an updated message\n if not created:\n return\n\n groups = instance.find_referenced_groups()\n for group in groups:\n try:\n with transaction.atomic():\n GroupCommitResolution.objects.create(\n group_id=group.id,\n commit_id=instance.id,\n )\n if instance.author:\n user_list = list(instance.author.find_users())\n else:\n user_list = ()\n if user_list:\n Activity.objects.create(\n project_id=group.project_id,\n group=group,\n type=Activity.SET_RESOLVED_IN_COMMIT,\n ident=instance.id,\n user=user_list[0],\n data={\n 'commit': instance.id,\n }\n )\n GroupAssignee.objects.assign(group=group, assigned_to=user_list[0])\n else:\n Activity.objects.create(\n project_id=group.project_id,\n group=group,\n type=Activity.SET_RESOLVED_IN_COMMIT,\n ident=instance.id,\n data={\n 'commit': instance.id,\n }\n )\n except IntegrityError:\n pass\n\n\npost_save.connect(\n resolve_group_resolutions, sender=Release, dispatch_uid=\"resolve_group_resolutions\", weak=False\n)\n\npost_save.connect(\n ensure_release_exists, sender=TagValue, dispatch_uid=\"ensure_release_exists\", weak=False\n)\n\npost_save.connect(\n resolved_in_commit,\n sender=Commit,\n dispatch_uid=\"resolved_in_commit\",\n weak=False,\n)\n", "path": "src/sentry/receivers/releases.py"}], "after_files": [{"content": "from __future__ import absolute_import, print_function\n\nfrom django.db import IntegrityError, transaction\nfrom django.db.models.signals import post_save\n\nfrom sentry.models import (\n Activity, Commit, GroupAssignee, GroupCommitResolution, Release, TagValue\n)\nfrom 
sentry.tasks.clear_expired_resolutions import clear_expired_resolutions\n\n\ndef ensure_release_exists(instance, created, **kwargs):\n if instance.key != 'sentry:release':\n return\n\n if instance.data and instance.data.get('release_id'):\n return\n\n try:\n with transaction.atomic():\n release = Release.objects.create(\n organization_id=instance.project.organization_id,\n version=instance.value,\n date_added=instance.first_seen,\n )\n except IntegrityError:\n release = Release.objects.get(\n organization_id=instance.project.organization_id,\n version=instance.value,\n )\n release.update(date_added=instance.first_seen)\n else:\n instance.update(data={'release_id': release.id})\n\n release.add_project(instance.project)\n\n\ndef resolve_group_resolutions(instance, created, **kwargs):\n if not created:\n return\n\n clear_expired_resolutions.delay(release_id=instance.id)\n\n\ndef resolved_in_commit(instance, created, **kwargs):\n # TODO(dcramer): we probably should support an updated message\n if not created:\n return\n\n groups = instance.find_referenced_groups()\n for group in groups:\n try:\n with transaction.atomic():\n GroupCommitResolution.objects.create(\n group_id=group.id,\n commit_id=instance.id,\n )\n if instance.author:\n user_list = list(instance.author.find_users())\n else:\n user_list = ()\n if user_list:\n Activity.objects.create(\n project_id=group.project_id,\n group=group,\n type=Activity.SET_RESOLVED_IN_COMMIT,\n ident=instance.id,\n user=user_list[0],\n data={\n 'commit': instance.id,\n }\n )\n GroupAssignee.objects.assign(\n group=group, assigned_to=user_list[0], acting_user=user_list[0])\n else:\n Activity.objects.create(\n project_id=group.project_id,\n group=group,\n type=Activity.SET_RESOLVED_IN_COMMIT,\n ident=instance.id,\n data={\n 'commit': instance.id,\n }\n )\n except IntegrityError:\n pass\n\n\npost_save.connect(\n resolve_group_resolutions, sender=Release, dispatch_uid=\"resolve_group_resolutions\", weak=False\n)\n\npost_save.connect(\n ensure_release_exists, sender=TagValue, dispatch_uid=\"ensure_release_exists\", weak=False\n)\n\npost_save.connect(\n resolved_in_commit,\n sender=Commit,\n dispatch_uid=\"resolved_in_commit\",\n weak=False,\n)\n", "path": "src/sentry/receivers/releases.py"}]}
| 1,161 | 129 |
gh_patches_debug_2965
|
rasdani/github-patches
|
git_diff
|
weecology__retriever-1104
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Incorrectly lower casing table_name for csv
It looks like we're lower casing manually set table/directory names, at least for csv but probably for all flat file engines.
```
$ mkdir TESTER
$ retriever install csv mammal-masses --table_name TESTER/test.csv
=> Installing mammal-masses
[Errno 2] No such file or directory: 'tester/test.csv'
Done!
$ mkdir tester
$ retriever install csv mammal-masses --table_name TESTER/test.csv
=> Installing mammal-masses
Progress: 5731/5731 rows inserted into tester/test.csv totaling 5731:
Done!
```
This is causing issues for the R package, see https://github.com/ropensci/rdataretriever/issues/131, but is also a general problem since directory names are case sensitive for 2/3 OSs.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `retriever/__main__.py`
Content:
```
1 """Data Retriever Wizard
2
3 Running this module directly will launch the download wizard, allowing the user
4 to choose from all scripts.
5
6 The main() function can be used for bootstrapping.
7
8 """
9 from __future__ import absolute_import
10 from __future__ import print_function
11
12 import os
13 import sys
14 from builtins import input
15 from imp import reload
16
17 from retriever.engines import engine_list, choose_engine
18 from retriever.lib.datapackage import create_json, edit_json, delete_json, get_script_filename
19 from retriever.lib.datasets import datasets, dataset_names, license
20 from retriever.lib.defaults import sample_script, CITATION, ENCODING, SCRIPT_SEARCH_PATHS
21 from retriever.lib.get_opts import parser
22 from retriever.lib.repository import check_for_updates
23 from retriever.lib.scripts import SCRIPT_LIST, get_script
24 from retriever.lib.engine_tools import name_matches, reset_retriever
25
26 encoding = ENCODING.lower()
27 # sys removes the setdefaultencoding method at startup; reload to get it back
28 reload(sys)
29 if hasattr(sys, 'setdefaultencoding'):
30 sys.setdefaultencoding(encoding)
31
32
33 def main():
34 """This function launches the Data Retriever."""
35 sys.argv[1:] = [arg.lower() for arg in sys.argv[1:]]
36 if len(sys.argv) == 1:
37 # if no command line args are passed, show the help options
38 parser.parse_args(['-h'])
39
40 else:
41 # otherwise, parse them
42
43 if not os.path.isdir(SCRIPT_SEARCH_PATHS[1]) and not \
44 [f for f in os.listdir(SCRIPT_SEARCH_PATHS[-1])
45 if os.path.exists(SCRIPT_SEARCH_PATHS[-1])]:
46 check_for_updates()
47 script_list = SCRIPT_LIST()
48
49 args = parser.parse_args()
50
51 if args.command == "install" and not args.engine:
52 parser.parse_args(['install', '-h'])
53
54 if args.quiet:
55 sys.stdout = open(os.devnull, 'w')
56
57 if args.command == 'help':
58 parser.parse_args(['-h'])
59
60 if hasattr(args, 'compile') and args.compile:
61 script_list = SCRIPT_LIST(force_compile=True)
62
63 if args.command == 'defaults':
64 for engine_item in engine_list:
65 print("Default options for engine ", engine_item.name)
66 for default_opts in engine_item.required_opts:
67 print(default_opts[0], " ", default_opts[2])
68 print()
69 return
70
71 if args.command == 'update':
72 check_for_updates(False)
73 script_list = SCRIPT_LIST()
74 return
75
76 elif args.command == 'citation':
77 if args.dataset is None:
78 print("\nCitation for retriever:\n")
79 print(CITATION)
80 else:
81 scripts = name_matches(script_list, args.dataset)
82 for dataset in scripts:
83 print("\nDataset: {}".format(dataset.name))
84 print("Citation: {}".format(dataset.citation))
85 print("Description: {}\n".format(dataset.description))
86
87 return
88
89 elif args.command == 'license':
90 dataset_license = license(args.dataset)
91 if dataset_license:
92 print(dataset_license)
93 else:
94 print("There is no license information for {}".format(args.dataset))
95 return
96
97 elif args.command == 'new':
98 f = open(args.filename, 'w')
99 f.write(sample_script)
100 f.close()
101
102 return
103
104 elif args.command == 'reset':
105 reset_retriever(args.scope)
106 return
107
108 elif args.command == 'new_json':
109 # create new JSON script
110 create_json()
111 return
112
113 elif args.command == 'edit_json':
114 # edit existing JSON script
115 json_file = get_script_filename(args.dataset.lower())
116 edit_json(json_file)
117 return
118
119 elif args.command == 'delete_json':
120 # delete existing JSON script from home directory and or script directory if exists in current dir
121 confirm = input("Really remove " + args.dataset.lower() +
122 " and all its contents? (y/N): ")
123 if confirm.lower().strip() in ['y', 'yes']:
124 json_file = get_script_filename(args.dataset.lower())
125 delete_json(json_file)
126 return
127
128 if args.command == 'ls':
129 # If scripts have never been downloaded there is nothing to list
130 if not script_list:
131 print("No scripts are currently available. Updating scripts now...")
132 check_for_updates(False)
133 print("\n\nScripts downloaded.\n")
134 if not (args.l or args.k or (type(args.v) is list)):
135 all_scripts = dataset_names()
136 print("Available datasets : {}\n".format(len(all_scripts)))
137 from retriever import lscolumns
138 lscolumns.printls(all_scripts)
139
140 elif type(args.v) is list:
141 if args.v:
142 try:
143 all_scripts = [get_script(dataset) for dataset in args.v]
144 except KeyError:
145 all_scripts = []
146 print("Dataset(s) is not found.")
147 else:
148 all_scripts = datasets()
149 count = 1
150 for script in all_scripts:
151 print("{}. {}\n{}\n{}\n{}\n".format(
152 count, script.title,
153 script.name,
154 script.keywords,
155 script.description,
156 str(script.licenses[0]['name']),
157 script.citation
158 ))
159 count += 1
160
161 else:
162 param_licenses = args.l if args.l else None
163 keywords = args.k if args.k else None
164
165 # search
166 searched_scripts = datasets(keywords, param_licenses)
167 if not searched_scripts:
168 print("No available datasets found")
169 else:
170 print("Available datasets : {}\n".format(len(searched_scripts)))
171 count = 1
172 for script in searched_scripts:
173 print("{}. {}\n{}\n{}\n{}\n".format(
174 count, script.title,
175 script.name,
176 script.keywords,
177 str(script.licenses[0]['name'])
178 ))
179 count += 1
180 return
181
182 engine = choose_engine(args.__dict__)
183
184 if hasattr(args, 'debug') and args.debug:
185 debug = True
186 else:
187 debug = False
188 sys.tracebacklimit = 0
189
190 if hasattr(args, 'debug') and args.not_cached:
191 engine.use_cache = False
192 else:
193 engine.use_cache = True
194
195 if args.dataset is not None:
196 scripts = name_matches(script_list, args.dataset)
197 else:
198 raise Exception("no dataset specified.")
199 if scripts:
200 for dataset in scripts:
201 print("=> Installing", dataset.name)
202 try:
203 dataset.download(engine, debug=debug)
204 dataset.engine.final_cleanup()
205 except KeyboardInterrupt:
206 pass
207 except Exception as e:
208 print(e)
209 if debug:
210 raise
211 print("Done!")
212 else:
213 print("Run 'retriever ls' to see a list of currently available datasets.")
214
215
216 if __name__ == "__main__":
217 main()
218
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/retriever/__main__.py b/retriever/__main__.py
--- a/retriever/__main__.py
+++ b/retriever/__main__.py
@@ -32,7 +32,6 @@
def main():
"""This function launches the Data Retriever."""
- sys.argv[1:] = [arg.lower() for arg in sys.argv[1:]]
if len(sys.argv) == 1:
# if no command line args are passed, show the help options
parser.parse_args(['-h'])
|
{"golden_diff": "diff --git a/retriever/__main__.py b/retriever/__main__.py\n--- a/retriever/__main__.py\n+++ b/retriever/__main__.py\n@@ -32,7 +32,6 @@\n \n def main():\n \"\"\"This function launches the Data Retriever.\"\"\"\n- sys.argv[1:] = [arg.lower() for arg in sys.argv[1:]]\n if len(sys.argv) == 1:\n # if no command line args are passed, show the help options\n parser.parse_args(['-h'])\n", "issue": "Incorrectly lower casing table_name for csv\nIt looks like we're lower casing manually set table/directory names, at least for csv but probably for all flat file engines.\r\n\r\n```\r\n$ mkdir TESTER\r\n$ retriever install csv mammal-masses --table_name TESTER/test.csv\r\n=> Installing mammal-masses\r\n[Errno 2] No such file or directory: 'tester/test.csv'\r\nDone!\r\n\r\n$ mkdir tester\r\n$ retriever install csv mammal-masses --table_name TESTER/test.csv\r\n=> Installing mammal-masses\r\nProgress: 5731/5731 rows inserted into tester/test.csv totaling 5731:\r\n\r\nDone!\r\n```\r\n\r\nThis is causing issues for the R package, see https://github.com/ropensci/rdataretriever/issues/131, but is also a general problem since directory names are case sensitive for 2/3 OSs.\n", "before_files": [{"content": "\"\"\"Data Retriever Wizard\n\nRunning this module directly will launch the download wizard, allowing the user\nto choose from all scripts.\n\nThe main() function can be used for bootstrapping.\n\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import print_function\n\nimport os\nimport sys\nfrom builtins import input\nfrom imp import reload\n\nfrom retriever.engines import engine_list, choose_engine\nfrom retriever.lib.datapackage import create_json, edit_json, delete_json, get_script_filename\nfrom retriever.lib.datasets import datasets, dataset_names, license\nfrom retriever.lib.defaults import sample_script, CITATION, ENCODING, SCRIPT_SEARCH_PATHS\nfrom retriever.lib.get_opts import parser\nfrom retriever.lib.repository import check_for_updates\nfrom retriever.lib.scripts import SCRIPT_LIST, get_script\nfrom retriever.lib.engine_tools import name_matches, reset_retriever\n\nencoding = ENCODING.lower()\n# sys removes the setdefaultencoding method at startup; reload to get it back\nreload(sys)\nif hasattr(sys, 'setdefaultencoding'):\n sys.setdefaultencoding(encoding)\n\n\ndef main():\n \"\"\"This function launches the Data Retriever.\"\"\"\n sys.argv[1:] = [arg.lower() for arg in sys.argv[1:]]\n if len(sys.argv) == 1:\n # if no command line args are passed, show the help options\n parser.parse_args(['-h'])\n\n else:\n # otherwise, parse them\n\n if not os.path.isdir(SCRIPT_SEARCH_PATHS[1]) and not \\\n [f for f in os.listdir(SCRIPT_SEARCH_PATHS[-1])\n if os.path.exists(SCRIPT_SEARCH_PATHS[-1])]:\n check_for_updates()\n script_list = SCRIPT_LIST()\n\n args = parser.parse_args()\n\n if args.command == \"install\" and not args.engine:\n parser.parse_args(['install', '-h'])\n\n if args.quiet:\n sys.stdout = open(os.devnull, 'w')\n\n if args.command == 'help':\n parser.parse_args(['-h'])\n\n if hasattr(args, 'compile') and args.compile:\n script_list = SCRIPT_LIST(force_compile=True)\n\n if args.command == 'defaults':\n for engine_item in engine_list:\n print(\"Default options for engine \", engine_item.name)\n for default_opts in engine_item.required_opts:\n print(default_opts[0], \" \", default_opts[2])\n print()\n return\n\n if args.command == 'update':\n check_for_updates(False)\n script_list = SCRIPT_LIST()\n return\n\n elif args.command == 'citation':\n if args.dataset is 
None:\n print(\"\\nCitation for retriever:\\n\")\n print(CITATION)\n else:\n scripts = name_matches(script_list, args.dataset)\n for dataset in scripts:\n print(\"\\nDataset: {}\".format(dataset.name))\n print(\"Citation: {}\".format(dataset.citation))\n print(\"Description: {}\\n\".format(dataset.description))\n\n return\n\n elif args.command == 'license':\n dataset_license = license(args.dataset)\n if dataset_license:\n print(dataset_license)\n else:\n print(\"There is no license information for {}\".format(args.dataset))\n return\n\n elif args.command == 'new':\n f = open(args.filename, 'w')\n f.write(sample_script)\n f.close()\n\n return\n\n elif args.command == 'reset':\n reset_retriever(args.scope)\n return\n\n elif args.command == 'new_json':\n # create new JSON script\n create_json()\n return\n\n elif args.command == 'edit_json':\n # edit existing JSON script\n json_file = get_script_filename(args.dataset.lower())\n edit_json(json_file)\n return\n\n elif args.command == 'delete_json':\n # delete existing JSON script from home directory and or script directory if exists in current dir\n confirm = input(\"Really remove \" + args.dataset.lower() +\n \" and all its contents? (y/N): \")\n if confirm.lower().strip() in ['y', 'yes']:\n json_file = get_script_filename(args.dataset.lower())\n delete_json(json_file)\n return\n\n if args.command == 'ls':\n # If scripts have never been downloaded there is nothing to list\n if not script_list:\n print(\"No scripts are currently available. Updating scripts now...\")\n check_for_updates(False)\n print(\"\\n\\nScripts downloaded.\\n\")\n if not (args.l or args.k or (type(args.v) is list)):\n all_scripts = dataset_names()\n print(\"Available datasets : {}\\n\".format(len(all_scripts)))\n from retriever import lscolumns\n lscolumns.printls(all_scripts)\n \n elif type(args.v) is list:\n if args.v:\n try:\n all_scripts = [get_script(dataset) for dataset in args.v]\n except KeyError:\n all_scripts = []\n print(\"Dataset(s) is not found.\")\n else:\n all_scripts = datasets()\n count = 1\n for script in all_scripts:\n print(\"{}. {}\\n{}\\n{}\\n{}\\n\".format(\n count, script.title,\n script.name,\n script.keywords,\n script.description,\n str(script.licenses[0]['name']),\n script.citation\n ))\n count += 1\n \n else:\n param_licenses = args.l if args.l else None\n keywords = args.k if args.k else None\n\n # search\n searched_scripts = datasets(keywords, param_licenses)\n if not searched_scripts:\n print(\"No available datasets found\")\n else:\n print(\"Available datasets : {}\\n\".format(len(searched_scripts)))\n count = 1\n for script in searched_scripts:\n print(\"{}. 
{}\\n{}\\n{}\\n{}\\n\".format(\n count, script.title,\n script.name,\n script.keywords,\n str(script.licenses[0]['name'])\n ))\n count += 1\n return\n\n engine = choose_engine(args.__dict__)\n\n if hasattr(args, 'debug') and args.debug:\n debug = True\n else:\n debug = False\n sys.tracebacklimit = 0\n\n if hasattr(args, 'debug') and args.not_cached:\n engine.use_cache = False\n else:\n engine.use_cache = True\n\n if args.dataset is not None:\n scripts = name_matches(script_list, args.dataset)\n else:\n raise Exception(\"no dataset specified.\")\n if scripts:\n for dataset in scripts:\n print(\"=> Installing\", dataset.name)\n try:\n dataset.download(engine, debug=debug)\n dataset.engine.final_cleanup()\n except KeyboardInterrupt:\n pass\n except Exception as e:\n print(e)\n if debug:\n raise\n print(\"Done!\")\n else:\n print(\"Run 'retriever ls' to see a list of currently available datasets.\")\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "retriever/__main__.py"}], "after_files": [{"content": "\"\"\"Data Retriever Wizard\n\nRunning this module directly will launch the download wizard, allowing the user\nto choose from all scripts.\n\nThe main() function can be used for bootstrapping.\n\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import print_function\n\nimport os\nimport sys\nfrom builtins import input\nfrom imp import reload\n\nfrom retriever.engines import engine_list, choose_engine\nfrom retriever.lib.datapackage import create_json, edit_json, delete_json, get_script_filename\nfrom retriever.lib.datasets import datasets, dataset_names, license\nfrom retriever.lib.defaults import sample_script, CITATION, ENCODING, SCRIPT_SEARCH_PATHS\nfrom retriever.lib.get_opts import parser\nfrom retriever.lib.repository import check_for_updates\nfrom retriever.lib.scripts import SCRIPT_LIST, get_script\nfrom retriever.lib.engine_tools import name_matches, reset_retriever\n\nencoding = ENCODING.lower()\n# sys removes the setdefaultencoding method at startup; reload to get it back\nreload(sys)\nif hasattr(sys, 'setdefaultencoding'):\n sys.setdefaultencoding(encoding)\n\n\ndef main():\n \"\"\"This function launches the Data Retriever.\"\"\"\n if len(sys.argv) == 1:\n # if no command line args are passed, show the help options\n parser.parse_args(['-h'])\n\n else:\n # otherwise, parse them\n\n if not os.path.isdir(SCRIPT_SEARCH_PATHS[1]) and not \\\n [f for f in os.listdir(SCRIPT_SEARCH_PATHS[-1])\n if os.path.exists(SCRIPT_SEARCH_PATHS[-1])]:\n check_for_updates()\n script_list = SCRIPT_LIST()\n\n args = parser.parse_args()\n\n if args.command == \"install\" and not args.engine:\n parser.parse_args(['install', '-h'])\n\n if args.quiet:\n sys.stdout = open(os.devnull, 'w')\n\n if args.command == 'help':\n parser.parse_args(['-h'])\n\n if hasattr(args, 'compile') and args.compile:\n script_list = SCRIPT_LIST(force_compile=True)\n\n if args.command == 'defaults':\n for engine_item in engine_list:\n print(\"Default options for engine \", engine_item.name)\n for default_opts in engine_item.required_opts:\n print(default_opts[0], \" \", default_opts[2])\n print()\n return\n\n if args.command == 'update':\n check_for_updates(False)\n script_list = SCRIPT_LIST()\n return\n\n elif args.command == 'citation':\n if args.dataset is None:\n print(\"\\nCitation for retriever:\\n\")\n print(CITATION)\n else:\n scripts = name_matches(script_list, args.dataset)\n for dataset in scripts:\n print(\"\\nDataset: {}\".format(dataset.name))\n print(\"Citation: {}\".format(dataset.citation))\n 
print(\"Description: {}\\n\".format(dataset.description))\n\n return\n\n elif args.command == 'license':\n dataset_license = license(args.dataset)\n if dataset_license:\n print(dataset_license)\n else:\n print(\"There is no license information for {}\".format(args.dataset))\n return\n\n elif args.command == 'new':\n f = open(args.filename, 'w')\n f.write(sample_script)\n f.close()\n\n return\n\n elif args.command == 'reset':\n reset_retriever(args.scope)\n return\n\n elif args.command == 'new_json':\n # create new JSON script\n create_json()\n return\n\n elif args.command == 'edit_json':\n # edit existing JSON script\n json_file = get_script_filename(args.dataset.lower())\n edit_json(json_file)\n return\n\n elif args.command == 'delete_json':\n # delete existing JSON script from home directory and or script directory if exists in current dir\n confirm = input(\"Really remove \" + args.dataset.lower() +\n \" and all its contents? (y/N): \")\n if confirm.lower().strip() in ['y', 'yes']:\n json_file = get_script_filename(args.dataset.lower())\n delete_json(json_file)\n return\n\n if args.command == 'ls':\n # If scripts have never been downloaded there is nothing to list\n if not script_list:\n print(\"No scripts are currently available. Updating scripts now...\")\n check_for_updates(False)\n print(\"\\n\\nScripts downloaded.\\n\")\n if not (args.l or args.k or (type(args.v) is list)):\n all_scripts = dataset_names()\n print(\"Available datasets : {}\\n\".format(len(all_scripts)))\n from retriever import lscolumns\n lscolumns.printls(all_scripts)\n \n elif type(args.v) is list:\n if args.v:\n try:\n all_scripts = [get_script(dataset) for dataset in args.v]\n except KeyError:\n all_scripts = []\n print(\"Dataset(s) is not found.\")\n else:\n all_scripts = datasets()\n count = 1\n for script in all_scripts:\n print(\"{}. {}\\n{}\\n{}\\n{}\\n\".format(\n count, script.title,\n script.name,\n script.keywords,\n script.description,\n str(script.licenses[0]['name']),\n script.citation\n ))\n count += 1\n \n else:\n param_licenses = args.l if args.l else None\n keywords = args.k if args.k else None\n\n # search\n searched_scripts = datasets(keywords, param_licenses)\n if not searched_scripts:\n print(\"No available datasets found\")\n else:\n print(\"Available datasets : {}\\n\".format(len(searched_scripts)))\n count = 1\n for script in searched_scripts:\n print(\"{}. {}\\n{}\\n{}\\n{}\\n\".format(\n count, script.title,\n script.name,\n script.keywords,\n str(script.licenses[0]['name'])\n ))\n count += 1\n return\n\n engine = choose_engine(args.__dict__)\n\n if hasattr(args, 'debug') and args.debug:\n debug = True\n else:\n debug = False\n sys.tracebacklimit = 0\n\n if hasattr(args, 'debug') and args.not_cached:\n engine.use_cache = False\n else:\n engine.use_cache = True\n\n if args.dataset is not None:\n scripts = name_matches(script_list, args.dataset)\n else:\n raise Exception(\"no dataset specified.\")\n if scripts:\n for dataset in scripts:\n print(\"=> Installing\", dataset.name)\n try:\n dataset.download(engine, debug=debug)\n dataset.engine.final_cleanup()\n except KeyboardInterrupt:\n pass\n except Exception as e:\n print(e)\n if debug:\n raise\n print(\"Done!\")\n else:\n print(\"Run 'retriever ls' to see a list of currently available datasets.\")\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "retriever/__main__.py"}]}
| 2,490 | 121 |
gh_patches_debug_29668 | rasdani/github-patches | git_diff | HypothesisWorks__hypothesis-1542 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
RELEASE.rst is not checked for trailing whitespace
This means that when such a file is merged (e.g. from #1515), subsequent builds break. This is bad and should be fixed ASAP. See #1525 for a temporary workaround.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tooling/src/hypothesistooling/releasemanagement.py`
Content:
```
1 # coding=utf-8
2 #
3 # This file is part of Hypothesis, which may be found at
4 # https://github.com/HypothesisWorks/hypothesis-python
5 #
6 # Most of this work is copyright (C) 2013-2018 David R. MacIver
7 # ([email protected]), but it contains contributions by others. See
8 # CONTRIBUTING.rst for a full list of people who may hold copyright, and
9 # consult the git log if you need to determine who owns an individual
10 # contribution.
11 #
12 # This Source Code Form is subject to the terms of the Mozilla Public License,
13 # v. 2.0. If a copy of the MPL was not distributed with this file, You can
14 # obtain one at http://mozilla.org/MPL/2.0/.
15 #
16 # END HEADER
17
18 """Helpful common code for release management tasks that is shared across
19 multiple projects.
20
21 Note that most code in here is brittle and specific to our build and
22 probably makes all sorts of undocumented assumptions, even as it looks
23 like a nice tidy reusable set of functionality.
24 """
25
26
27 from __future__ import division, print_function, absolute_import
28
29 import re
30 from datetime import datetime, timedelta
31
32 import hypothesistooling as tools
33
34
35 def release_date_string():
36 """Returns a date string that represents what should be considered "today"
37 for the purposes of releasing. It is always measured in UTC, but if it's in
38 the last hour of the day it will actually be considered tomorrow.
39
40 The reason for counting it as the later day is that it ensures that
41 (unless our release process takes more than 23 hours) this value
42 remains consistent throughout the entire release.
43 """
44 now = datetime.utcnow()
45
46 return max([
47 d.strftime('%Y-%m-%d') for d in (now, now + timedelta(hours=1))
48 ])
49
50
51 def assignment_matcher(name):
52 """
53 Matches a single line of the form (some space)name = (some value). e.g.
54 " foo = 1".
55 The whole line up to the assigned value is the first matching group,
56 the rest of the line is the second matching group.
57 i.e. group 1 is the assignment, group 2 is the value. In the above
58 example group 1 would be " foo = " and group 2 would be "1"
59 """
60 return re.compile(r'\A(\s*%s\s*=\s*)(.+)\Z' % (re.escape(name),))
61
62
63 def extract_assignment_from_string(contents, name):
64 lines = contents.split('\n')
65
66 matcher = assignment_matcher(name)
67
68 for i, l in enumerate(lines):
69 match = matcher.match(l)
70 if match is not None:
71 return match[2].strip()
72
73 raise ValueError('Key %s not found in %s' % (
74 name, contents
75 ))
76
77
78 def extract_assignment(filename, name):
79 with open(filename) as i:
80 return extract_assignment_from_string(i.read(), name)
81
82
83 def replace_assignment_in_string(contents, name, value):
84 lines = contents.split('\n')
85
86 matcher = assignment_matcher(name)
87
88 count = 0
89
90 for i, l in enumerate(lines):
91 match = matcher.match(l)
92 if match is not None:
93 count += 1
94 lines[i] = match[1] + value
95
96 if count == 0:
97 raise ValueError('Key %s not found in %s' % (
98 name, contents
99 ))
100 if count > 1:
101 raise ValueError('Key %s found %d times in %s' % (
102 name, count, contents
103 ))
104
105 return '\n'.join(lines)
106
107
108 def replace_assignment(filename, name, value):
109 """Replaces a single assignment of the form key = value in a file with a
110 new value, attempting to preserve the existing format.
111
112 This is fairly fragile - in particular it knows nothing about
113 the file format. The existing value is simply the rest of the line after
114 the last space after the equals.
115 """
116 with open(filename) as i:
117 contents = i.read()
118 result = replace_assignment_in_string(contents, name, value)
119 with open(filename, 'w') as o:
120 o.write(result)
121
122
123 RELEASE_TYPE = re.compile(r"^RELEASE_TYPE: +(major|minor|patch)")
124
125
126 MAJOR = 'major'
127 MINOR = 'minor'
128 PATCH = 'patch'
129
130
131 VALID_RELEASE_TYPES = (MAJOR, MINOR, PATCH)
132
133
134 def parse_release_file(filename):
135 with open(filename) as i:
136 return parse_release_file_contents(i.read(), filename)
137
138
139 def parse_release_file_contents(release_contents, filename):
140 release_lines = release_contents.split('\n')
141
142 m = RELEASE_TYPE.match(release_lines[0])
143 if m is not None:
144 release_type = m.group(1)
145 if release_type not in VALID_RELEASE_TYPES:
146 raise ValueError('Unrecognised release type %r' % (release_type,))
147 del release_lines[0]
148 release_contents = '\n'.join(release_lines).strip()
149 else:
150 raise ValueError(
151 '%s does not start by specifying release type. The first '
152 'line of the file should be RELEASE_TYPE: followed by one of '
153 'major, minor, or patch, to specify the type of release that '
154 'this is (i.e. which version number to increment). Instead the '
155 'first line was %r' % (filename, release_lines[0],)
156 )
157
158 return release_type, release_contents
159
160
161 def bump_version_info(version_info, release_type):
162 new_version = list(version_info)
163 bump = VALID_RELEASE_TYPES.index(release_type)
164 new_version[bump] += 1
165 for i in range(bump + 1, len(new_version)):
166 new_version[i] = 0
167 new_version = tuple(new_version)
168 new_version_string = '.'.join(map(str, new_version))
169 return new_version_string, new_version
170
171
172 def update_markdown_changelog(changelog, name, version, entry):
173 with open(changelog) as i:
174 prev_contents = i.read()
175
176 title = '# %(name)s %(version)s (%(date)s)\n\n' % {
177 'name': name, 'version': version, 'date': release_date_string(),
178 }
179
180 with open(changelog, 'w') as o:
181 o.write(title)
182 o.write(entry.strip())
183 o.write('\n\n')
184 o.write(prev_contents)
185
186
187 def parse_version(version):
188 return tuple(map(int, version.split('.')))
189
190
191 def commit_pending_release(project):
192 """Create a commit with the new release."""
193 tools.git('rm', project.RELEASE_FILE)
194 tools.git('add', '-u', project.BASE_DIR)
195
196 tools.git(
197 'commit', '-m',
198 'Bump %s version to %s and update changelog'
199 '\n\n[skip ci]' % (project.PACKAGE_NAME, project.current_version(),)
200 )
201
```
Path: `tooling/src/hypothesistooling/projects/hypothesispython.py`
Content:
```
1 # coding=utf-8
2 #
3 # This file is part of Hypothesis, which may be found at
4 # https://github.com/HypothesisWorks/hypothesis-python
5 #
6 # Most of this work is copyright (C) 2013-2018 David R. MacIver
7 # ([email protected]), but it contains contributions by others. See
8 # CONTRIBUTING.rst for a full list of people who may hold copyright, and
9 # consult the git log if you need to determine who owns an individual
10 # contribution.
11 #
12 # This Source Code Form is subject to the terms of the Mozilla Public License,
13 # v. 2.0. If a copy of the MPL was not distributed with this file, You can
14 # obtain one at http://mozilla.org/MPL/2.0/.
15 #
16 # END HEADER
17
18 from __future__ import division, print_function, absolute_import
19
20 import os
21 import re
22 import sys
23 import shutil
24 import subprocess
25
26 import hypothesistooling as tools
27 import hypothesistooling.releasemanagement as rm
28 from hypothesistooling.releasemanagement import bump_version_info, \
29 replace_assignment, release_date_string
30
31 PACKAGE_NAME = 'hypothesis-python'
32
33 HYPOTHESIS_PYTHON = os.path.join(tools.ROOT, PACKAGE_NAME)
34 PYTHON_TAG_PREFIX = 'hypothesis-python-'
35
36
37 BASE_DIR = HYPOTHESIS_PYTHON
38
39 PYTHON_SRC = os.path.join(HYPOTHESIS_PYTHON, 'src')
40 PYTHON_TESTS = os.path.join(HYPOTHESIS_PYTHON, 'tests')
41
42 RELEASE_FILE = os.path.join(HYPOTHESIS_PYTHON, 'RELEASE.rst')
43
44 assert os.path.exists(PYTHON_SRC)
45
46
47 __version__ = None
48 __version_info__ = None
49
50 VERSION_FILE = os.path.join(PYTHON_SRC, 'hypothesis/version.py')
51
52 with open(VERSION_FILE) as o:
53 exec(o.read())
54
55 assert __version__ is not None
56 assert __version_info__ is not None
57
58
59 def has_release():
60 return os.path.exists(RELEASE_FILE)
61
62
63 def parse_release_file():
64 return rm.parse_release_file(RELEASE_FILE)
65
66
67 def has_source_changes():
68 return tools.has_changes([PYTHON_SRC])
69
70
71 CHANGELOG_ANCHOR = re.compile(r"^\.\. _v\d+\.\d+\.\d+:$")
72 CHANGELOG_BORDER = re.compile(r"^-+$")
73 CHANGELOG_HEADER = re.compile(r"^\d+\.\d+\.\d+ - \d\d\d\d-\d\d-\d\d$")
74
75
76 def update_changelog_and_version():
77 global __version_info__
78 global __version__
79
80 contents = changelog()
81 assert '\r' not in contents
82 lines = contents.split('\n')
83 for i, l in enumerate(lines):
84 if CHANGELOG_ANCHOR.match(l):
85 assert CHANGELOG_BORDER.match(lines[i + 2]), repr(lines[i + 2])
86 assert CHANGELOG_HEADER.match(lines[i + 3]), repr(lines[i + 3])
87 assert CHANGELOG_BORDER.match(lines[i + 4]), repr(lines[i + 4])
88 beginning = '\n'.join(lines[:i])
89 rest = '\n'.join(lines[i:])
90 assert '\n'.join((beginning, rest)) == contents
91 break
92
93 release_type, release_contents = parse_release_file()
94
95 new_version_string, new_version_info = bump_version_info(
96 __version_info__, release_type)
97
98 __version_info__ = new_version_info
99 __version__ = new_version_string
100
101 replace_assignment(
102 VERSION_FILE, '__version_info__', repr(new_version_info))
103
104 heading_for_new_version = ' - '.join((
105 new_version_string, release_date_string()))
106 border_for_new_version = '-' * len(heading_for_new_version)
107
108 new_changelog_parts = [
109 beginning.strip(),
110 '',
111 '.. _v%s:' % (new_version_string),
112 '',
113 border_for_new_version,
114 heading_for_new_version,
115 border_for_new_version,
116 '',
117 release_contents,
118 '',
119 rest
120 ]
121
122 with open(CHANGELOG_FILE, 'w') as o:
123 o.write('\n'.join(new_changelog_parts))
124
125
126 CHANGELOG_FILE = os.path.join(HYPOTHESIS_PYTHON, 'docs', 'changes.rst')
127 DIST = os.path.join(HYPOTHESIS_PYTHON, 'dist')
128
129
130 def changelog():
131 with open(CHANGELOG_FILE) as i:
132 return i.read()
133
134
135 def build_distribution():
136 if os.path.exists(DIST):
137 shutil.rmtree(DIST)
138
139 subprocess.check_output([
140 sys.executable, 'setup.py', 'sdist', '--dist-dir', DIST,
141 ])
142
143
144 def upload_distribution():
145 tools.assert_can_release()
146 subprocess.check_call([
147 sys.executable, '-m', 'twine', 'upload',
148 '--config-file', tools.PYPIRC,
149 os.path.join(DIST, '*'),
150 ])
151
152
153 def current_version():
154 return __version__
155
156
157 def latest_version():
158 versions = []
159
160 for t in tools.tags():
161 if t.startswith(PYTHON_TAG_PREFIX):
162 t = t[len(PYTHON_TAG_PREFIX):]
163 else:
164 continue
165 assert t == t.strip()
166 parts = t.split('.')
167 assert len(parts) == 3
168 v = tuple(map(int, parts))
169 versions.append((v, t))
170
171 _, latest = max(versions)
172
173 return latest
174
175
176 def tag_name():
177 return PYTHON_TAG_PREFIX + __version__
178
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/tooling/src/hypothesistooling/projects/hypothesispython.py b/tooling/src/hypothesistooling/projects/hypothesispython.py
--- a/tooling/src/hypothesistooling/projects/hypothesispython.py
+++ b/tooling/src/hypothesistooling/projects/hypothesispython.py
@@ -25,8 +25,6 @@
import hypothesistooling as tools
import hypothesistooling.releasemanagement as rm
-from hypothesistooling.releasemanagement import bump_version_info, \
- replace_assignment, release_date_string
PACKAGE_NAME = 'hypothesis-python'
@@ -92,17 +90,17 @@
release_type, release_contents = parse_release_file()
- new_version_string, new_version_info = bump_version_info(
+ new_version_string, new_version_info = rm.bump_version_info(
__version_info__, release_type)
__version_info__ = new_version_info
__version__ = new_version_string
- replace_assignment(
+ rm.replace_assignment(
VERSION_FILE, '__version_info__', repr(new_version_info))
heading_for_new_version = ' - '.join((
- new_version_string, release_date_string()))
+ new_version_string, rm.release_date_string()))
border_for_new_version = '-' * len(heading_for_new_version)
new_changelog_parts = [
diff --git a/tooling/src/hypothesistooling/releasemanagement.py b/tooling/src/hypothesistooling/releasemanagement.py
--- a/tooling/src/hypothesistooling/releasemanagement.py
+++ b/tooling/src/hypothesistooling/releasemanagement.py
@@ -137,7 +137,7 @@
def parse_release_file_contents(release_contents, filename):
- release_lines = release_contents.split('\n')
+ release_lines = [l.rstrip() for l in release_contents.split('\n')]
m = RELEASE_TYPE.match(release_lines[0])
if m is not None:
|
{"golden_diff": "diff --git a/tooling/src/hypothesistooling/projects/hypothesispython.py b/tooling/src/hypothesistooling/projects/hypothesispython.py\n--- a/tooling/src/hypothesistooling/projects/hypothesispython.py\n+++ b/tooling/src/hypothesistooling/projects/hypothesispython.py\n@@ -25,8 +25,6 @@\n \n import hypothesistooling as tools\n import hypothesistooling.releasemanagement as rm\n-from hypothesistooling.releasemanagement import bump_version_info, \\\n- replace_assignment, release_date_string\n \n PACKAGE_NAME = 'hypothesis-python'\n \n@@ -92,17 +90,17 @@\n \n release_type, release_contents = parse_release_file()\n \n- new_version_string, new_version_info = bump_version_info(\n+ new_version_string, new_version_info = rm.bump_version_info(\n __version_info__, release_type)\n \n __version_info__ = new_version_info\n __version__ = new_version_string\n \n- replace_assignment(\n+ rm.replace_assignment(\n VERSION_FILE, '__version_info__', repr(new_version_info))\n \n heading_for_new_version = ' - '.join((\n- new_version_string, release_date_string()))\n+ new_version_string, rm.release_date_string()))\n border_for_new_version = '-' * len(heading_for_new_version)\n \n new_changelog_parts = [\ndiff --git a/tooling/src/hypothesistooling/releasemanagement.py b/tooling/src/hypothesistooling/releasemanagement.py\n--- a/tooling/src/hypothesistooling/releasemanagement.py\n+++ b/tooling/src/hypothesistooling/releasemanagement.py\n@@ -137,7 +137,7 @@\n \n \n def parse_release_file_contents(release_contents, filename):\n- release_lines = release_contents.split('\\n')\n+ release_lines = [l.rstrip() for l in release_contents.split('\\n')]\n \n m = RELEASE_TYPE.match(release_lines[0])\n if m is not None:\n", "issue": "RELEASE.rst is not checked for trailing whitespace\nWhich means that when such a file is merged (e.g. from #1515), subsequent builds break. This is bad and should be fixed ASAP. See #1525 for temporary workaround.\n", "before_files": [{"content": "# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis-python\n#\n# Most of this work is copyright (C) 2013-2018 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\n\"\"\"Helpful common code for release management tasks that is shared across\nmultiple projects.\n\nNote that most code in here is brittle and specific to our build and\nprobably makes all sorts of undocumented assumptions, even as it looks\nlike a nice tidy reusable set of functionality.\n\"\"\"\n\n\nfrom __future__ import division, print_function, absolute_import\n\nimport re\nfrom datetime import datetime, timedelta\n\nimport hypothesistooling as tools\n\n\ndef release_date_string():\n \"\"\"Returns a date string that represents what should be considered \"today\"\n for the purposes of releasing. 
It is always measured in UTC, but if it's in\n the last hour of the day it will actually be considered tomorrow.\n\n The reason for counting it as the later day is that it ensures that\n (unless our release process takes more than 23 hours) this value\n remains consistent throughout the entire release.\n \"\"\"\n now = datetime.utcnow()\n\n return max([\n d.strftime('%Y-%m-%d') for d in (now, now + timedelta(hours=1))\n ])\n\n\ndef assignment_matcher(name):\n \"\"\"\n Matches a single line of the form (some space)name = (some value). e.g.\n \" foo = 1\".\n The whole line up to the assigned value is the first matching group,\n the rest of the line is the second matching group.\n i.e. group 1 is the assignment, group 2 is the value. In the above\n example group 1 would be \" foo = \" and group 2 would be \"1\"\n \"\"\"\n return re.compile(r'\\A(\\s*%s\\s*=\\s*)(.+)\\Z' % (re.escape(name),))\n\n\ndef extract_assignment_from_string(contents, name):\n lines = contents.split('\\n')\n\n matcher = assignment_matcher(name)\n\n for i, l in enumerate(lines):\n match = matcher.match(l)\n if match is not None:\n return match[2].strip()\n\n raise ValueError('Key %s not found in %s' % (\n name, contents\n ))\n\n\ndef extract_assignment(filename, name):\n with open(filename) as i:\n return extract_assignment_from_string(i.read(), name)\n\n\ndef replace_assignment_in_string(contents, name, value):\n lines = contents.split('\\n')\n\n matcher = assignment_matcher(name)\n\n count = 0\n\n for i, l in enumerate(lines):\n match = matcher.match(l)\n if match is not None:\n count += 1\n lines[i] = match[1] + value\n\n if count == 0:\n raise ValueError('Key %s not found in %s' % (\n name, contents\n ))\n if count > 1:\n raise ValueError('Key %s found %d times in %s' % (\n name, count, contents\n ))\n\n return '\\n'.join(lines)\n\n\ndef replace_assignment(filename, name, value):\n \"\"\"Replaces a single assignment of the form key = value in a file with a\n new value, attempting to preserve the existing format.\n\n This is fairly fragile - in particular it knows nothing about\n the file format. The existing value is simply the rest of the line after\n the last space after the equals.\n \"\"\"\n with open(filename) as i:\n contents = i.read()\n result = replace_assignment_in_string(contents, name, value)\n with open(filename, 'w') as o:\n o.write(result)\n\n\nRELEASE_TYPE = re.compile(r\"^RELEASE_TYPE: +(major|minor|patch)\")\n\n\nMAJOR = 'major'\nMINOR = 'minor'\nPATCH = 'patch'\n\n\nVALID_RELEASE_TYPES = (MAJOR, MINOR, PATCH)\n\n\ndef parse_release_file(filename):\n with open(filename) as i:\n return parse_release_file_contents(i.read(), filename)\n\n\ndef parse_release_file_contents(release_contents, filename):\n release_lines = release_contents.split('\\n')\n\n m = RELEASE_TYPE.match(release_lines[0])\n if m is not None:\n release_type = m.group(1)\n if release_type not in VALID_RELEASE_TYPES:\n raise ValueError('Unrecognised release type %r' % (release_type,))\n del release_lines[0]\n release_contents = '\\n'.join(release_lines).strip()\n else:\n raise ValueError(\n '%s does not start by specifying release type. The first '\n 'line of the file should be RELEASE_TYPE: followed by one of '\n 'major, minor, or patch, to specify the type of release that '\n 'this is (i.e. which version number to increment). 
Instead the '\n 'first line was %r' % (filename, release_lines[0],)\n )\n\n return release_type, release_contents\n\n\ndef bump_version_info(version_info, release_type):\n new_version = list(version_info)\n bump = VALID_RELEASE_TYPES.index(release_type)\n new_version[bump] += 1\n for i in range(bump + 1, len(new_version)):\n new_version[i] = 0\n new_version = tuple(new_version)\n new_version_string = '.'.join(map(str, new_version))\n return new_version_string, new_version\n\n\ndef update_markdown_changelog(changelog, name, version, entry):\n with open(changelog) as i:\n prev_contents = i.read()\n\n title = '# %(name)s %(version)s (%(date)s)\\n\\n' % {\n 'name': name, 'version': version, 'date': release_date_string(),\n }\n\n with open(changelog, 'w') as o:\n o.write(title)\n o.write(entry.strip())\n o.write('\\n\\n')\n o.write(prev_contents)\n\n\ndef parse_version(version):\n return tuple(map(int, version.split('.')))\n\n\ndef commit_pending_release(project):\n \"\"\"Create a commit with the new release.\"\"\"\n tools.git('rm', project.RELEASE_FILE)\n tools.git('add', '-u', project.BASE_DIR)\n\n tools.git(\n 'commit', '-m',\n 'Bump %s version to %s and update changelog'\n '\\n\\n[skip ci]' % (project.PACKAGE_NAME, project.current_version(),)\n )\n", "path": "tooling/src/hypothesistooling/releasemanagement.py"}, {"content": "# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis-python\n#\n# Most of this work is copyright (C) 2013-2018 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom __future__ import division, print_function, absolute_import\n\nimport os\nimport re\nimport sys\nimport shutil\nimport subprocess\n\nimport hypothesistooling as tools\nimport hypothesistooling.releasemanagement as rm\nfrom hypothesistooling.releasemanagement import bump_version_info, \\\n replace_assignment, release_date_string\n\nPACKAGE_NAME = 'hypothesis-python'\n\nHYPOTHESIS_PYTHON = os.path.join(tools.ROOT, PACKAGE_NAME)\nPYTHON_TAG_PREFIX = 'hypothesis-python-'\n\n\nBASE_DIR = HYPOTHESIS_PYTHON\n\nPYTHON_SRC = os.path.join(HYPOTHESIS_PYTHON, 'src')\nPYTHON_TESTS = os.path.join(HYPOTHESIS_PYTHON, 'tests')\n\nRELEASE_FILE = os.path.join(HYPOTHESIS_PYTHON, 'RELEASE.rst')\n\nassert os.path.exists(PYTHON_SRC)\n\n\n__version__ = None\n__version_info__ = None\n\nVERSION_FILE = os.path.join(PYTHON_SRC, 'hypothesis/version.py')\n\nwith open(VERSION_FILE) as o:\n exec(o.read())\n\nassert __version__ is not None\nassert __version_info__ is not None\n\n\ndef has_release():\n return os.path.exists(RELEASE_FILE)\n\n\ndef parse_release_file():\n return rm.parse_release_file(RELEASE_FILE)\n\n\ndef has_source_changes():\n return tools.has_changes([PYTHON_SRC])\n\n\nCHANGELOG_ANCHOR = re.compile(r\"^\\.\\. 
_v\\d+\\.\\d+\\.\\d+:$\")\nCHANGELOG_BORDER = re.compile(r\"^-+$\")\nCHANGELOG_HEADER = re.compile(r\"^\\d+\\.\\d+\\.\\d+ - \\d\\d\\d\\d-\\d\\d-\\d\\d$\")\n\n\ndef update_changelog_and_version():\n global __version_info__\n global __version__\n\n contents = changelog()\n assert '\\r' not in contents\n lines = contents.split('\\n')\n for i, l in enumerate(lines):\n if CHANGELOG_ANCHOR.match(l):\n assert CHANGELOG_BORDER.match(lines[i + 2]), repr(lines[i + 2])\n assert CHANGELOG_HEADER.match(lines[i + 3]), repr(lines[i + 3])\n assert CHANGELOG_BORDER.match(lines[i + 4]), repr(lines[i + 4])\n beginning = '\\n'.join(lines[:i])\n rest = '\\n'.join(lines[i:])\n assert '\\n'.join((beginning, rest)) == contents\n break\n\n release_type, release_contents = parse_release_file()\n\n new_version_string, new_version_info = bump_version_info(\n __version_info__, release_type)\n\n __version_info__ = new_version_info\n __version__ = new_version_string\n\n replace_assignment(\n VERSION_FILE, '__version_info__', repr(new_version_info))\n\n heading_for_new_version = ' - '.join((\n new_version_string, release_date_string()))\n border_for_new_version = '-' * len(heading_for_new_version)\n\n new_changelog_parts = [\n beginning.strip(),\n '',\n '.. _v%s:' % (new_version_string),\n '',\n border_for_new_version,\n heading_for_new_version,\n border_for_new_version,\n '',\n release_contents,\n '',\n rest\n ]\n\n with open(CHANGELOG_FILE, 'w') as o:\n o.write('\\n'.join(new_changelog_parts))\n\n\nCHANGELOG_FILE = os.path.join(HYPOTHESIS_PYTHON, 'docs', 'changes.rst')\nDIST = os.path.join(HYPOTHESIS_PYTHON, 'dist')\n\n\ndef changelog():\n with open(CHANGELOG_FILE) as i:\n return i.read()\n\n\ndef build_distribution():\n if os.path.exists(DIST):\n shutil.rmtree(DIST)\n\n subprocess.check_output([\n sys.executable, 'setup.py', 'sdist', '--dist-dir', DIST,\n ])\n\n\ndef upload_distribution():\n tools.assert_can_release()\n subprocess.check_call([\n sys.executable, '-m', 'twine', 'upload',\n '--config-file', tools.PYPIRC,\n os.path.join(DIST, '*'),\n ])\n\n\ndef current_version():\n return __version__\n\n\ndef latest_version():\n versions = []\n\n for t in tools.tags():\n if t.startswith(PYTHON_TAG_PREFIX):\n t = t[len(PYTHON_TAG_PREFIX):]\n else:\n continue\n assert t == t.strip()\n parts = t.split('.')\n assert len(parts) == 3\n v = tuple(map(int, parts))\n versions.append((v, t))\n\n _, latest = max(versions)\n\n return latest\n\n\ndef tag_name():\n return PYTHON_TAG_PREFIX + __version__\n", "path": "tooling/src/hypothesistooling/projects/hypothesispython.py"}], "after_files": [{"content": "# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis-python\n#\n# Most of this work is copyright (C) 2013-2018 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. 
If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\n\"\"\"Helpful common code for release management tasks that is shared across\nmultiple projects.\n\nNote that most code in here is brittle and specific to our build and\nprobably makes all sorts of undocumented assumptions, even as it looks\nlike a nice tidy reusable set of functionality.\n\"\"\"\n\n\nfrom __future__ import division, print_function, absolute_import\n\nimport re\nfrom datetime import datetime, timedelta\n\nimport hypothesistooling as tools\n\n\ndef release_date_string():\n \"\"\"Returns a date string that represents what should be considered \"today\"\n for the purposes of releasing. It is always measured in UTC, but if it's in\n the last hour of the day it will actually be considered tomorrow.\n\n The reason for counting it as the later day is that it ensures that\n (unless our release process takes more than 23 hours) this value\n remains consistent throughout the entire release.\n \"\"\"\n now = datetime.utcnow()\n\n return max([\n d.strftime('%Y-%m-%d') for d in (now, now + timedelta(hours=1))\n ])\n\n\ndef assignment_matcher(name):\n \"\"\"\n Matches a single line of the form (some space)name = (some value). e.g.\n \" foo = 1\".\n The whole line up to the assigned value is the first matching group,\n the rest of the line is the second matching group.\n i.e. group 1 is the assignment, group 2 is the value. In the above\n example group 1 would be \" foo = \" and group 2 would be \"1\"\n \"\"\"\n return re.compile(r'\\A(\\s*%s\\s*=\\s*)(.+)\\Z' % (re.escape(name),))\n\n\ndef extract_assignment_from_string(contents, name):\n lines = contents.split('\\n')\n\n matcher = assignment_matcher(name)\n\n for i, l in enumerate(lines):\n match = matcher.match(l)\n if match is not None:\n return match[2].strip()\n\n raise ValueError('Key %s not found in %s' % (\n name, contents\n ))\n\n\ndef extract_assignment(filename, name):\n with open(filename) as i:\n return extract_assignment_from_string(i.read(), name)\n\n\ndef replace_assignment_in_string(contents, name, value):\n lines = contents.split('\\n')\n\n matcher = assignment_matcher(name)\n\n count = 0\n\n for i, l in enumerate(lines):\n match = matcher.match(l)\n if match is not None:\n count += 1\n lines[i] = match[1] + value\n\n if count == 0:\n raise ValueError('Key %s not found in %s' % (\n name, contents\n ))\n if count > 1:\n raise ValueError('Key %s found %d times in %s' % (\n name, count, contents\n ))\n\n return '\\n'.join(lines)\n\n\ndef replace_assignment(filename, name, value):\n \"\"\"Replaces a single assignment of the form key = value in a file with a\n new value, attempting to preserve the existing format.\n\n This is fairly fragile - in particular it knows nothing about\n the file format. 
The existing value is simply the rest of the line after\n the last space after the equals.\n \"\"\"\n with open(filename) as i:\n contents = i.read()\n result = replace_assignment_in_string(contents, name, value)\n with open(filename, 'w') as o:\n o.write(result)\n\n\nRELEASE_TYPE = re.compile(r\"^RELEASE_TYPE: +(major|minor|patch)\")\n\n\nMAJOR = 'major'\nMINOR = 'minor'\nPATCH = 'patch'\n\n\nVALID_RELEASE_TYPES = (MAJOR, MINOR, PATCH)\n\n\ndef parse_release_file(filename):\n with open(filename) as i:\n return parse_release_file_contents(i.read(), filename)\n\n\ndef parse_release_file_contents(release_contents, filename):\n release_lines = [l.rstrip() for l in release_contents.split('\\n')]\n\n m = RELEASE_TYPE.match(release_lines[0])\n if m is not None:\n release_type = m.group(1)\n if release_type not in VALID_RELEASE_TYPES:\n raise ValueError('Unrecognised release type %r' % (release_type,))\n del release_lines[0]\n release_contents = '\\n'.join(release_lines).strip()\n else:\n raise ValueError(\n '%s does not start by specifying release type. The first '\n 'line of the file should be RELEASE_TYPE: followed by one of '\n 'major, minor, or patch, to specify the type of release that '\n 'this is (i.e. which version number to increment). Instead the '\n 'first line was %r' % (filename, release_lines[0],)\n )\n\n return release_type, release_contents\n\n\ndef bump_version_info(version_info, release_type):\n new_version = list(version_info)\n bump = VALID_RELEASE_TYPES.index(release_type)\n new_version[bump] += 1\n for i in range(bump + 1, len(new_version)):\n new_version[i] = 0\n new_version = tuple(new_version)\n new_version_string = '.'.join(map(str, new_version))\n return new_version_string, new_version\n\n\ndef update_markdown_changelog(changelog, name, version, entry):\n with open(changelog) as i:\n prev_contents = i.read()\n\n title = '# %(name)s %(version)s (%(date)s)\\n\\n' % {\n 'name': name, 'version': version, 'date': release_date_string(),\n }\n\n with open(changelog, 'w') as o:\n o.write(title)\n o.write(entry.strip())\n o.write('\\n\\n')\n o.write(prev_contents)\n\n\ndef parse_version(version):\n return tuple(map(int, version.split('.')))\n\n\ndef commit_pending_release(project):\n \"\"\"Create a commit with the new release.\"\"\"\n tools.git('rm', project.RELEASE_FILE)\n tools.git('add', '-u', project.BASE_DIR)\n\n tools.git(\n 'commit', '-m',\n 'Bump %s version to %s and update changelog'\n '\\n\\n[skip ci]' % (project.PACKAGE_NAME, project.current_version(),)\n )\n", "path": "tooling/src/hypothesistooling/releasemanagement.py"}, {"content": "# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis-python\n#\n# Most of this work is copyright (C) 2013-2018 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. 
If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom __future__ import division, print_function, absolute_import\n\nimport os\nimport re\nimport sys\nimport shutil\nimport subprocess\n\nimport hypothesistooling as tools\nimport hypothesistooling.releasemanagement as rm\n\nPACKAGE_NAME = 'hypothesis-python'\n\nHYPOTHESIS_PYTHON = os.path.join(tools.ROOT, PACKAGE_NAME)\nPYTHON_TAG_PREFIX = 'hypothesis-python-'\n\n\nBASE_DIR = HYPOTHESIS_PYTHON\n\nPYTHON_SRC = os.path.join(HYPOTHESIS_PYTHON, 'src')\nPYTHON_TESTS = os.path.join(HYPOTHESIS_PYTHON, 'tests')\n\nRELEASE_FILE = os.path.join(HYPOTHESIS_PYTHON, 'RELEASE.rst')\n\nassert os.path.exists(PYTHON_SRC)\n\n\n__version__ = None\n__version_info__ = None\n\nVERSION_FILE = os.path.join(PYTHON_SRC, 'hypothesis/version.py')\n\nwith open(VERSION_FILE) as o:\n exec(o.read())\n\nassert __version__ is not None\nassert __version_info__ is not None\n\n\ndef has_release():\n return os.path.exists(RELEASE_FILE)\n\n\ndef parse_release_file():\n return rm.parse_release_file(RELEASE_FILE)\n\n\ndef has_source_changes():\n return tools.has_changes([PYTHON_SRC])\n\n\nCHANGELOG_ANCHOR = re.compile(r\"^\\.\\. _v\\d+\\.\\d+\\.\\d+:$\")\nCHANGELOG_BORDER = re.compile(r\"^-+$\")\nCHANGELOG_HEADER = re.compile(r\"^\\d+\\.\\d+\\.\\d+ - \\d\\d\\d\\d-\\d\\d-\\d\\d$\")\n\n\ndef update_changelog_and_version():\n global __version_info__\n global __version__\n\n contents = changelog()\n assert '\\r' not in contents\n lines = contents.split('\\n')\n for i, l in enumerate(lines):\n if CHANGELOG_ANCHOR.match(l):\n assert CHANGELOG_BORDER.match(lines[i + 2]), repr(lines[i + 2])\n assert CHANGELOG_HEADER.match(lines[i + 3]), repr(lines[i + 3])\n assert CHANGELOG_BORDER.match(lines[i + 4]), repr(lines[i + 4])\n beginning = '\\n'.join(lines[:i])\n rest = '\\n'.join(lines[i:])\n assert '\\n'.join((beginning, rest)) == contents\n break\n\n release_type, release_contents = parse_release_file()\n\n new_version_string, new_version_info = rm.bump_version_info(\n __version_info__, release_type)\n\n __version_info__ = new_version_info\n __version__ = new_version_string\n\n rm.replace_assignment(\n VERSION_FILE, '__version_info__', repr(new_version_info))\n\n heading_for_new_version = ' - '.join((\n new_version_string, rm.release_date_string()))\n border_for_new_version = '-' * len(heading_for_new_version)\n\n new_changelog_parts = [\n beginning.strip(),\n '',\n '.. 
_v%s:' % (new_version_string),\n '',\n border_for_new_version,\n heading_for_new_version,\n border_for_new_version,\n '',\n release_contents,\n '',\n rest\n ]\n\n with open(CHANGELOG_FILE, 'w') as o:\n o.write('\\n'.join(new_changelog_parts))\n\n\nCHANGELOG_FILE = os.path.join(HYPOTHESIS_PYTHON, 'docs', 'changes.rst')\nDIST = os.path.join(HYPOTHESIS_PYTHON, 'dist')\n\n\ndef changelog():\n with open(CHANGELOG_FILE) as i:\n return i.read()\n\n\ndef build_distribution():\n if os.path.exists(DIST):\n shutil.rmtree(DIST)\n\n subprocess.check_output([\n sys.executable, 'setup.py', 'sdist', '--dist-dir', DIST,\n ])\n\n\ndef upload_distribution():\n tools.assert_can_release()\n subprocess.check_call([\n sys.executable, '-m', 'twine', 'upload',\n '--config-file', tools.PYPIRC,\n os.path.join(DIST, '*'),\n ])\n\n\ndef current_version():\n return __version__\n\n\ndef latest_version():\n versions = []\n\n for t in tools.tags():\n if t.startswith(PYTHON_TAG_PREFIX):\n t = t[len(PYTHON_TAG_PREFIX):]\n else:\n continue\n assert t == t.strip()\n parts = t.split('.')\n assert len(parts) == 3\n v = tuple(map(int, parts))\n versions.append((v, t))\n\n _, latest = max(versions)\n\n return latest\n\n\ndef tag_name():\n return PYTHON_TAG_PREFIX + __version__\n", "path": "tooling/src/hypothesistooling/projects/hypothesispython.py"}]}
| 3,997 | 456 |
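For orientation, the `bump_version_info` helper in the release-management module quoted above can be checked with a small worked example; the import path is assumed from the file path given in that row:

```python
# Worked example for bump_version_info (import path assumed from the row above).
from hypothesistooling.releasemanagement import bump_version_info

# A 'minor' release bumps the middle component and zeroes everything after it.
version_string, version_info = bump_version_info((3, 66, 4), 'minor')
assert version_string == '3.67.0'
assert version_info == (3, 67, 0)

# A 'patch' release only increments the last component.
assert bump_version_info((3, 66, 4), 'patch') == ('3.66.5', (3, 66, 5))
```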
gh_patches_debug_61141
|
rasdani/github-patches
|
git_diff
|
e2nIEE__pandapower-2263
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[bug] __format_version__ not increased.
### Issue Description
The `__format_version__` in `_version.py` has not been increased even though the format got changed!
This is an issue in the develop branch **not** in master!
In my fork I made an update to many test cases since I changed the format, so I saved many networks as files; they contain the current format_version (2.14.0). After merging the current version of develop I got some tests that suddenly failed even though my code should not mess with them. So I did a little digging and found that the expected and actual results differ in the `net.res_switch_est` DataFrame. This is because the expected result only contains the old columns while the actual result contains the updated columns.
This is because the expected results are loaded from file using the `pandapower.from_json` function, and since the stored format version is the same as the current format version in `_version.py`, the conversion to the newest format is not done. So the network is returned as loaded from file.
The actual results, however, are the product of a conversion from a different network type. So they are the output of a converter that creates a new pandapowerNet. These then contain all new columns.
If new columns are added, `__format_version__` should be incremented at least in the bugfix number. But I would expect that this constitutes at least a minor release, as a new format version most likely breaks backwards compatibility. On a bugfix version I would expect to be able to go backwards and forwards without issue, but this is not the case if the format version changes! A 2.13.1 network should successfully load on 2.13.0, but this will not work if new columns are added. So this change should be reflected by an increase of the format version to at least 2.15.0, in my opinion.
The breaking commit is 516f8af, as it changed the format without changing the format version.
--- END ISSUE ---
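To make the failure mode concrete, here is a minimal sketch of the version-gated conversion the report describes; the function and key names are illustrative stand-ins, not pandapower's actual API:

```python
# Minimal sketch of version-gated loading (illustrative names, not pandapower's API).
from packaging.version import Version

CURRENT_FORMAT_VERSION = "2.14.0"  # mirrors __format_version__ at load time


def load_net(stored_net: dict) -> dict:
    """Return the stored network, upgrading it only if its format is older."""
    stored = Version(stored_net.get("format_version", "1.0.0"))
    if stored >= Version(CURRENT_FORMAT_VERSION):
        # Equal versions mean conversion is skipped entirely, so columns that
        # were added without bumping __format_version__ never get backfilled.
        return stored_net
    return upgrade_to_current_format(stored_net)


def upgrade_to_current_format(net: dict) -> dict:
    # Placeholder for the real column-adding conversion steps.
    net["format_version"] = CURRENT_FORMAT_VERSION
    return net
```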
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pandapower/_version.py`
Content:
```
1 import importlib.metadata
2
3 __version__ = importlib.metadata.version("pandapower")
4 __format_version__ = "2.14.0"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pandapower/_version.py b/pandapower/_version.py
--- a/pandapower/_version.py
+++ b/pandapower/_version.py
@@ -1,4 +1,4 @@
import importlib.metadata
__version__ = importlib.metadata.version("pandapower")
-__format_version__ = "2.14.0"
+__format_version__ = "2.15.0"
|
{"golden_diff": "diff --git a/pandapower/_version.py b/pandapower/_version.py\n--- a/pandapower/_version.py\n+++ b/pandapower/_version.py\n@@ -1,4 +1,4 @@\n import importlib.metadata\n \n __version__ = importlib.metadata.version(\"pandapower\")\n-__format_version__ = \"2.14.0\"\n+__format_version__ = \"2.15.0\"\n", "issue": "[bug] __format_version__ not increased.\n### Issue Description\r\n\r\nThe `__format_version__` in `_version.py` has not been increased eventhough the format got changed!\r\n\r\nThis is an issue in the develop branch **not** in master!\r\n\r\nIn my fork I made an update to many test cases since I changed the format, so I saved many networks in as files, they contain the current format_version (2.14.0). After merging the current version of develop I got some tests that suddenly failed eventhough my code should not mess with them. So I did a little diging and found that the expected and actual results differ in `net.res_switch_est` DataFrame. This is because the expected result only contains the old columns while the actual result contains the updated columns.\r\n\r\nThis is because the expected results are loaded form file using the `pandapower.from_json` function and since then format version is the same as the current format verison in `_version.py` the conversion to the newest format is not done. So the network is returned as loaded from file.\r\nThe actual results however are a product of a conversion from a different network type. So they are the output of a converter that creates a new pandapowerNet. These then contain all new columns.\r\n\r\nIf new columns are added `__format_version__` should be incremented at least in the bugfix number. But I would expect that this constitutes at least a minor release as a new format version most likely breaks backwards compatibility. On a bugfix version I would expect I go backwards and forwards without issue. But this is not the case if the format version changes! A 2.13.1 Network should sucessfully load on 2.13.0 but this will not work if new columns are added. So this change should be reflected by an increase of the format verison to at least 2.15.0 in my opinion.\r\n\r\nThe breaking commit is 516f8af as it changed the format without changeing the format version.\r\n\n", "before_files": [{"content": "import importlib.metadata\n\n__version__ = importlib.metadata.version(\"pandapower\")\n__format_version__ = \"2.14.0\"\n", "path": "pandapower/_version.py"}], "after_files": [{"content": "import importlib.metadata\n\n__version__ = importlib.metadata.version(\"pandapower\")\n__format_version__ = \"2.15.0\"\n", "path": "pandapower/_version.py"}]}
| 721 | 97 |
gh_patches_debug_21148
|
rasdani/github-patches
|
git_diff
|
quantumlib__Cirq-3637
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
For QASM of decomposed circuits add comment
I noticed that if I export CCZ into qasm, I get the h, ccx, h operations with no leading comment saying that they correspond to a CCZ. I'm pretty sure this comment used to be emitted.
--- END ISSUE ---
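A minimal reproduction of the report; this assumes a cirq build contemporary with the file below, where `Circuit.to_qasm()` and the `CCZ` constant are available:

```python
# Sketch reproducing the missing "// Gate: CCZ" annotation described above.
import cirq

q0, q1, q2 = cirq.LineQubit.range(3)
circuit = cirq.Circuit(cirq.CCZ(q0, q1, q2))

# Before the fix the output contains only the bare h/ccx/h lines; with the
# patch further below, the multi-line block gains a leading "// Gate: CCZ" comment.
print(circuit.to_qasm())
```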
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cirq/circuits/qasm_output.py`
Content:
```
1 # Copyright 2018 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Utility classes for representing QASM."""
16
17 from typing import Callable, Dict, Optional, Sequence, Set, Tuple, Union, TYPE_CHECKING
18
19 import re
20 import numpy as np
21
22 from cirq import ops, linalg, protocols, value
23
24 if TYPE_CHECKING:
25 import cirq
26
27
28 @value.value_equality(approximate=True)
29 class QasmUGate(ops.SingleQubitGate):
30 def __init__(self, theta, phi, lmda) -> None:
31 """A QASM gate representing any single qubit unitary with a series of
32 three rotations, Z, Y, and Z.
33
34 The angles are normalized to the range [0, 2) half_turns.
35
36 Args:
37 theta: Half turns to rotate about Y (applied second).
38 phi: Half turns to rotate about Z (applied last).
39 lmda: Half turns to rotate about Z (applied first).
40 """
41 self.lmda = lmda % 2
42 self.theta = theta % 2
43 self.phi = phi % 2
44
45 @staticmethod
46 def from_matrix(mat: np.array) -> 'QasmUGate':
47 pre_phase, rotation, post_phase = linalg.deconstruct_single_qubit_matrix_into_angles(mat)
48 return QasmUGate(
49 rotation / np.pi,
50 post_phase / np.pi,
51 pre_phase / np.pi,
52 )
53
54 def _has_unitary_(self):
55 return True
56
57 def _qasm_(self, qubits: Tuple['cirq.Qid', ...], args: 'cirq.QasmArgs') -> str:
58 args.validate_version('2.0')
59 return args.format(
60 'u3({0:half_turns},{1:half_turns},{2:half_turns}) {3};\n',
61 self.theta,
62 self.phi,
63 self.lmda,
64 qubits[0],
65 )
66
67 def __repr__(self) -> str:
68 return (
69 f'cirq.circuits.qasm_output.QasmUGate('
70 f'theta={self.theta!r}, '
71 f'phi={self.phi!r}, '
72 f'lmda={self.lmda})'
73 )
74
75 def _decompose_(self, qubits):
76 q = qubits[0]
77 return [
78 ops.rz(self.lmda * np.pi).on(q),
79 ops.ry(self.theta * np.pi).on(q),
80 ops.rz(self.phi * np.pi).on(q),
81 ]
82
83 def _value_equality_values_(self):
84 return self.lmda, self.theta, self.phi
85
86
87 @value.value_equality
88 class QasmTwoQubitGate(ops.TwoQubitGate):
89 def __init__(self, kak: linalg.KakDecomposition) -> None:
90 """A two qubit gate represented in QASM by the KAK decomposition.
91
92 All angles are in half turns. Assumes a canonicalized KAK
93 decomposition.
94
95 Args:
96 kak: KAK decomposition of the two-qubit gate.
97 """
98 self.kak = kak
99
100 def _value_equality_values_(self):
101 return self.kak
102
103 @staticmethod
104 def from_matrix(mat: np.array, atol=1e-8) -> 'QasmTwoQubitGate':
105 """Creates a QasmTwoQubitGate from the given matrix.
106
107 Args:
108 mat: The unitary matrix of the two qubit gate.
109 atol: Absolute error tolerance when decomposing.
110
111 Returns:
112 A QasmTwoQubitGate implementing the matrix.
113 """
114 kak = linalg.kak_decomposition(mat, atol=atol)
115 return QasmTwoQubitGate(kak)
116
117 def _unitary_(self):
118 return protocols.unitary(self.kak)
119
120 def _decompose_(self, qubits: Sequence['cirq.Qid']) -> 'cirq.OP_TREE':
121 q0, q1 = qubits
122 x, y, z = self.kak.interaction_coefficients
123 a = x * -2 / np.pi + 0.5
124 b = y * -2 / np.pi + 0.5
125 c = z * -2 / np.pi + 0.5
126
127 b0, b1 = self.kak.single_qubit_operations_before
128 yield QasmUGate.from_matrix(b0).on(q0)
129 yield QasmUGate.from_matrix(b1).on(q1)
130
131 yield ops.X(q0) ** 0.5
132 yield ops.CNOT(q0, q1)
133 yield ops.X(q0) ** a
134 yield ops.Y(q1) ** b
135 yield ops.CNOT(q1, q0)
136 yield ops.X(q1) ** -0.5
137 yield ops.Z(q1) ** c
138 yield ops.CNOT(q0, q1)
139
140 a0, a1 = self.kak.single_qubit_operations_after
141 yield QasmUGate.from_matrix(a0).on(q0)
142 yield QasmUGate.from_matrix(a1).on(q1)
143
144 def __repr__(self) -> str:
145 return 'cirq.circuits.qasm_output.QasmTwoQubitGate({!r})'.format(self.kak)
146
147
148 class QasmOutput:
149 """Representation of a circuit in QASM (quantum assembly) format.
150
151 Please note that the QASM importer is in an experimental state and
152 currently only supports a subset of the full OpenQASM spec.
153 Amongst others, classical control, arbitrary gate definitions,
154 and even some of the gates that don't have a one-to-one representation
155 in Cirq, are not yet supported.
156
157 QASM output can be saved to a file using the save method.
158 """
159
160 valid_id_re = re.compile(r'[a-z][a-zA-Z0-9_]*\Z')
161
162 def __init__(
163 self,
164 operations: 'cirq.OP_TREE',
165 qubits: Tuple['cirq.Qid', ...],
166 header: str = '',
167 precision: int = 10,
168 version: str = '2.0',
169 ) -> None:
170 self.operations = tuple(ops.flatten_to_ops(operations))
171 self.qubits = qubits
172 self.header = header
173 self.measurements = tuple(
174 op for op in self.operations if isinstance(op.gate, ops.MeasurementGate)
175 )
176 meas_key_id_map, meas_comments = self._generate_measurement_ids()
177 self.meas_comments = meas_comments
178 qubit_id_map = self._generate_qubit_ids()
179 self.args = protocols.QasmArgs(
180 precision=precision,
181 version=version,
182 qubit_id_map=qubit_id_map,
183 meas_key_id_map=meas_key_id_map,
184 )
185
186 def _generate_measurement_ids(self) -> Tuple[Dict[str, str], Dict[str, Optional[str]]]:
187 # Pick an id for the creg that will store each measurement
188 meas_key_id_map = {} # type: Dict[str, str]
189 meas_comments = {} # type: Dict[str, Optional[str]]
190 meas_i = 0
191 for meas in self.measurements:
192 key = protocols.measurement_key(meas)
193 if key in meas_key_id_map:
194 continue
195 meas_id = 'm_{}'.format(key)
196 if self.is_valid_qasm_id(meas_id):
197 meas_comments[key] = None
198 else:
199 meas_id = 'm{}'.format(meas_i)
200 meas_i += 1
201 meas_comments[key] = ' '.join(key.split('\n'))
202 meas_key_id_map[key] = meas_id
203 return meas_key_id_map, meas_comments
204
205 def _generate_qubit_ids(self) -> Dict['cirq.Qid', str]:
206 return {qubit: 'q[{}]'.format(i) for i, qubit in enumerate(self.qubits)}
207
208 def is_valid_qasm_id(self, id_str: str) -> bool:
209 """Test if id_str is a valid id in QASM grammar."""
210 return self.valid_id_re.match(id_str) != None
211
212 def save(self, path: Union[str, bytes, int]) -> None:
213 """Write QASM output to a file specified by path."""
214 with open(path, 'w') as f:
215
216 def write(s: str) -> None:
217 f.write(s)
218
219 self._write_qasm(write)
220
221 def __str__(self) -> str:
222 """Return QASM output as a string."""
223 output = []
224 self._write_qasm(lambda s: output.append(s))
225 return ''.join(output)
226
227 def _write_qasm(self, output_func: Callable[[str], None]) -> None:
228 self.args.validate_version('2.0')
229
230 # Generate nice line spacing
231 line_gap = [0]
232
233 def output_line_gap(n):
234 line_gap[0] = max(line_gap[0], n)
235
236 def output(text):
237 if line_gap[0] > 0:
238 output_func('\n' * line_gap[0])
239 line_gap[0] = 0
240 output_func(text)
241
242 # Comment header
243 if self.header:
244 for line in self.header.split('\n'):
245 output(('// ' + line).rstrip() + '\n')
246 output('\n')
247
248 # Version
249 output('OPENQASM 2.0;\n')
250 output('include "qelib1.inc";\n')
251 output_line_gap(2)
252
253 # Function definitions
254 # None yet
255
256 # Register definitions
257 # Qubit registers
258 output('// Qubits: [{}]\n'.format(', '.join(map(str, self.qubits))))
259 if len(self.qubits) > 0:
260 output('qreg q[{}];\n'.format(len(self.qubits)))
261 # Classical registers
262 # Pick an id for the creg that will store each measurement
263 already_output_keys: Set[str] = set()
264 for meas in self.measurements:
265 key = protocols.measurement_key(meas)
266 if key in already_output_keys:
267 continue
268 already_output_keys.add(key)
269 meas_id = self.args.meas_key_id_map[key]
270 comment = self.meas_comments[key]
271 if comment is None:
272 output('creg {}[{}];\n'.format(meas_id, len(meas.qubits)))
273 else:
274 output(
275 'creg {}[{}]; // Measurement: {}\n'.format(meas_id, len(meas.qubits), comment)
276 )
277 output_line_gap(2)
278
279 # Operations
280 self._write_operations(self.operations, output, output_line_gap)
281
282 def _write_operations(
283 self,
284 op_tree: 'cirq.OP_TREE',
285 output: Callable[[str], None],
286 output_line_gap: Callable[[int], None],
287 ) -> None:
288 def keep(op: 'cirq.Operation') -> bool:
289 return protocols.qasm(op, args=self.args, default=None) is not None
290
291 def fallback(op):
292 if len(op.qubits) not in [1, 2]:
293 return NotImplemented
294
295 mat = protocols.unitary(op, None)
296 if mat is None:
297 return NotImplemented
298
299 if len(op.qubits) == 1:
300 return QasmUGate.from_matrix(mat).on(*op.qubits)
301 return QasmTwoQubitGate.from_matrix(mat).on(*op.qubits)
302
303 def on_stuck(bad_op):
304 return ValueError('Cannot output operation as QASM: {!r}'.format(bad_op))
305
306 for main_op in ops.flatten_op_tree(op_tree):
307 decomposed = protocols.decompose(
308 main_op, keep=keep, fallback_decomposer=fallback, on_stuck_raise=on_stuck
309 )
310
311 should_annotate = decomposed != [main_op]
312 if should_annotate:
313 output_line_gap(1)
314 if isinstance(main_op, ops.GateOperation):
315 x = str(main_op.gate).replace('\n', '\n //')
316 output('// Gate: {!s}\n'.format(x))
317 else:
318 x = str(main_op).replace('\n', '\n //')
319 output('// Operation: {!s}\n'.format(x))
320
321 for decomposed_op in decomposed:
322 output(protocols.qasm(decomposed_op, args=self.args))
323
324 if should_annotate:
325 output_line_gap(1)
326
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/cirq/circuits/qasm_output.py b/cirq/circuits/qasm_output.py
--- a/cirq/circuits/qasm_output.py
+++ b/cirq/circuits/qasm_output.py
@@ -308,7 +308,9 @@
main_op, keep=keep, fallback_decomposer=fallback, on_stuck_raise=on_stuck
)
- should_annotate = decomposed != [main_op]
+ qasms = [protocols.qasm(op, args=self.args) for op in decomposed]
+
+ should_annotate = decomposed != [main_op] or qasms[0].count('\n') > 1
if should_annotate:
output_line_gap(1)
if isinstance(main_op, ops.GateOperation):
@@ -318,8 +320,8 @@
x = str(main_op).replace('\n', '\n //')
output('// Operation: {!s}\n'.format(x))
- for decomposed_op in decomposed:
- output(protocols.qasm(decomposed_op, args=self.args))
+ for qasm in qasms:
+ output(qasm)
if should_annotate:
output_line_gap(1)
|
{"golden_diff": "diff --git a/cirq/circuits/qasm_output.py b/cirq/circuits/qasm_output.py\n--- a/cirq/circuits/qasm_output.py\n+++ b/cirq/circuits/qasm_output.py\n@@ -308,7 +308,9 @@\n main_op, keep=keep, fallback_decomposer=fallback, on_stuck_raise=on_stuck\n )\n \n- should_annotate = decomposed != [main_op]\n+ qasms = [protocols.qasm(op, args=self.args) for op in decomposed]\n+\n+ should_annotate = decomposed != [main_op] or qasms[0].count('\\n') > 1\n if should_annotate:\n output_line_gap(1)\n if isinstance(main_op, ops.GateOperation):\n@@ -318,8 +320,8 @@\n x = str(main_op).replace('\\n', '\\n //')\n output('// Operation: {!s}\\n'.format(x))\n \n- for decomposed_op in decomposed:\n- output(protocols.qasm(decomposed_op, args=self.args))\n+ for qasm in qasms:\n+ output(qasm)\n \n if should_annotate:\n output_line_gap(1)\n", "issue": "For QASM of decomposed circuits add comment\nI noticed that if I export CCZ into qasm, I get the h, ccx, h operations with no leading comment saying that they correspond to a CCZ. I'm pretty sure this used to happen.\n", "before_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Utility classes for representing QASM.\"\"\"\n\nfrom typing import Callable, Dict, Optional, Sequence, Set, Tuple, Union, TYPE_CHECKING\n\nimport re\nimport numpy as np\n\nfrom cirq import ops, linalg, protocols, value\n\nif TYPE_CHECKING:\n import cirq\n\n\[email protected]_equality(approximate=True)\nclass QasmUGate(ops.SingleQubitGate):\n def __init__(self, theta, phi, lmda) -> None:\n \"\"\"A QASM gate representing any single qubit unitary with a series of\n three rotations, Z, Y, and Z.\n\n The angles are normalized to the range [0, 2) half_turns.\n\n Args:\n theta: Half turns to rotate about Y (applied second).\n phi: Half turns to rotate about Z (applied last).\n lmda: Half turns to rotate about Z (applied first).\n \"\"\"\n self.lmda = lmda % 2\n self.theta = theta % 2\n self.phi = phi % 2\n\n @staticmethod\n def from_matrix(mat: np.array) -> 'QasmUGate':\n pre_phase, rotation, post_phase = linalg.deconstruct_single_qubit_matrix_into_angles(mat)\n return QasmUGate(\n rotation / np.pi,\n post_phase / np.pi,\n pre_phase / np.pi,\n )\n\n def _has_unitary_(self):\n return True\n\n def _qasm_(self, qubits: Tuple['cirq.Qid', ...], args: 'cirq.QasmArgs') -> str:\n args.validate_version('2.0')\n return args.format(\n 'u3({0:half_turns},{1:half_turns},{2:half_turns}) {3};\\n',\n self.theta,\n self.phi,\n self.lmda,\n qubits[0],\n )\n\n def __repr__(self) -> str:\n return (\n f'cirq.circuits.qasm_output.QasmUGate('\n f'theta={self.theta!r}, '\n f'phi={self.phi!r}, '\n f'lmda={self.lmda})'\n )\n\n def _decompose_(self, qubits):\n q = qubits[0]\n return [\n ops.rz(self.lmda * np.pi).on(q),\n ops.ry(self.theta * np.pi).on(q),\n ops.rz(self.phi * np.pi).on(q),\n ]\n\n def _value_equality_values_(self):\n return self.lmda, self.theta, self.phi\n\n\[email protected]_equality\nclass QasmTwoQubitGate(ops.TwoQubitGate):\n def 
__init__(self, kak: linalg.KakDecomposition) -> None:\n \"\"\"A two qubit gate represented in QASM by the KAK decomposition.\n\n All angles are in half turns. Assumes a canonicalized KAK\n decomposition.\n\n Args:\n kak: KAK decomposition of the two-qubit gate.\n \"\"\"\n self.kak = kak\n\n def _value_equality_values_(self):\n return self.kak\n\n @staticmethod\n def from_matrix(mat: np.array, atol=1e-8) -> 'QasmTwoQubitGate':\n \"\"\"Creates a QasmTwoQubitGate from the given matrix.\n\n Args:\n mat: The unitary matrix of the two qubit gate.\n atol: Absolute error tolerance when decomposing.\n\n Returns:\n A QasmTwoQubitGate implementing the matrix.\n \"\"\"\n kak = linalg.kak_decomposition(mat, atol=atol)\n return QasmTwoQubitGate(kak)\n\n def _unitary_(self):\n return protocols.unitary(self.kak)\n\n def _decompose_(self, qubits: Sequence['cirq.Qid']) -> 'cirq.OP_TREE':\n q0, q1 = qubits\n x, y, z = self.kak.interaction_coefficients\n a = x * -2 / np.pi + 0.5\n b = y * -2 / np.pi + 0.5\n c = z * -2 / np.pi + 0.5\n\n b0, b1 = self.kak.single_qubit_operations_before\n yield QasmUGate.from_matrix(b0).on(q0)\n yield QasmUGate.from_matrix(b1).on(q1)\n\n yield ops.X(q0) ** 0.5\n yield ops.CNOT(q0, q1)\n yield ops.X(q0) ** a\n yield ops.Y(q1) ** b\n yield ops.CNOT(q1, q0)\n yield ops.X(q1) ** -0.5\n yield ops.Z(q1) ** c\n yield ops.CNOT(q0, q1)\n\n a0, a1 = self.kak.single_qubit_operations_after\n yield QasmUGate.from_matrix(a0).on(q0)\n yield QasmUGate.from_matrix(a1).on(q1)\n\n def __repr__(self) -> str:\n return 'cirq.circuits.qasm_output.QasmTwoQubitGate({!r})'.format(self.kak)\n\n\nclass QasmOutput:\n \"\"\"Representation of a circuit in QASM (quantum assembly) format.\n\n Please note that the QASM importer is in an experimental state and\n currently only supports a subset of the full OpenQASM spec.\n Amongst others, classical control, arbitrary gate definitions,\n and even some of the gates that don't have a one-to-one representation\n in Cirq, are not yet supported.\n\n QASM output can be saved to a file using the save method.\n \"\"\"\n\n valid_id_re = re.compile(r'[a-z][a-zA-Z0-9_]*\\Z')\n\n def __init__(\n self,\n operations: 'cirq.OP_TREE',\n qubits: Tuple['cirq.Qid', ...],\n header: str = '',\n precision: int = 10,\n version: str = '2.0',\n ) -> None:\n self.operations = tuple(ops.flatten_to_ops(operations))\n self.qubits = qubits\n self.header = header\n self.measurements = tuple(\n op for op in self.operations if isinstance(op.gate, ops.MeasurementGate)\n )\n meas_key_id_map, meas_comments = self._generate_measurement_ids()\n self.meas_comments = meas_comments\n qubit_id_map = self._generate_qubit_ids()\n self.args = protocols.QasmArgs(\n precision=precision,\n version=version,\n qubit_id_map=qubit_id_map,\n meas_key_id_map=meas_key_id_map,\n )\n\n def _generate_measurement_ids(self) -> Tuple[Dict[str, str], Dict[str, Optional[str]]]:\n # Pick an id for the creg that will store each measurement\n meas_key_id_map = {} # type: Dict[str, str]\n meas_comments = {} # type: Dict[str, Optional[str]]\n meas_i = 0\n for meas in self.measurements:\n key = protocols.measurement_key(meas)\n if key in meas_key_id_map:\n continue\n meas_id = 'm_{}'.format(key)\n if self.is_valid_qasm_id(meas_id):\n meas_comments[key] = None\n else:\n meas_id = 'm{}'.format(meas_i)\n meas_i += 1\n meas_comments[key] = ' '.join(key.split('\\n'))\n meas_key_id_map[key] = meas_id\n return meas_key_id_map, meas_comments\n\n def _generate_qubit_ids(self) -> Dict['cirq.Qid', str]:\n return {qubit: 'q[{}]'.format(i) for 
i, qubit in enumerate(self.qubits)}\n\n def is_valid_qasm_id(self, id_str: str) -> bool:\n \"\"\"Test if id_str is a valid id in QASM grammar.\"\"\"\n return self.valid_id_re.match(id_str) != None\n\n def save(self, path: Union[str, bytes, int]) -> None:\n \"\"\"Write QASM output to a file specified by path.\"\"\"\n with open(path, 'w') as f:\n\n def write(s: str) -> None:\n f.write(s)\n\n self._write_qasm(write)\n\n def __str__(self) -> str:\n \"\"\"Return QASM output as a string.\"\"\"\n output = []\n self._write_qasm(lambda s: output.append(s))\n return ''.join(output)\n\n def _write_qasm(self, output_func: Callable[[str], None]) -> None:\n self.args.validate_version('2.0')\n\n # Generate nice line spacing\n line_gap = [0]\n\n def output_line_gap(n):\n line_gap[0] = max(line_gap[0], n)\n\n def output(text):\n if line_gap[0] > 0:\n output_func('\\n' * line_gap[0])\n line_gap[0] = 0\n output_func(text)\n\n # Comment header\n if self.header:\n for line in self.header.split('\\n'):\n output(('// ' + line).rstrip() + '\\n')\n output('\\n')\n\n # Version\n output('OPENQASM 2.0;\\n')\n output('include \"qelib1.inc\";\\n')\n output_line_gap(2)\n\n # Function definitions\n # None yet\n\n # Register definitions\n # Qubit registers\n output('// Qubits: [{}]\\n'.format(', '.join(map(str, self.qubits))))\n if len(self.qubits) > 0:\n output('qreg q[{}];\\n'.format(len(self.qubits)))\n # Classical registers\n # Pick an id for the creg that will store each measurement\n already_output_keys: Set[str] = set()\n for meas in self.measurements:\n key = protocols.measurement_key(meas)\n if key in already_output_keys:\n continue\n already_output_keys.add(key)\n meas_id = self.args.meas_key_id_map[key]\n comment = self.meas_comments[key]\n if comment is None:\n output('creg {}[{}];\\n'.format(meas_id, len(meas.qubits)))\n else:\n output(\n 'creg {}[{}]; // Measurement: {}\\n'.format(meas_id, len(meas.qubits), comment)\n )\n output_line_gap(2)\n\n # Operations\n self._write_operations(self.operations, output, output_line_gap)\n\n def _write_operations(\n self,\n op_tree: 'cirq.OP_TREE',\n output: Callable[[str], None],\n output_line_gap: Callable[[int], None],\n ) -> None:\n def keep(op: 'cirq.Operation') -> bool:\n return protocols.qasm(op, args=self.args, default=None) is not None\n\n def fallback(op):\n if len(op.qubits) not in [1, 2]:\n return NotImplemented\n\n mat = protocols.unitary(op, None)\n if mat is None:\n return NotImplemented\n\n if len(op.qubits) == 1:\n return QasmUGate.from_matrix(mat).on(*op.qubits)\n return QasmTwoQubitGate.from_matrix(mat).on(*op.qubits)\n\n def on_stuck(bad_op):\n return ValueError('Cannot output operation as QASM: {!r}'.format(bad_op))\n\n for main_op in ops.flatten_op_tree(op_tree):\n decomposed = protocols.decompose(\n main_op, keep=keep, fallback_decomposer=fallback, on_stuck_raise=on_stuck\n )\n\n should_annotate = decomposed != [main_op]\n if should_annotate:\n output_line_gap(1)\n if isinstance(main_op, ops.GateOperation):\n x = str(main_op.gate).replace('\\n', '\\n //')\n output('// Gate: {!s}\\n'.format(x))\n else:\n x = str(main_op).replace('\\n', '\\n //')\n output('// Operation: {!s}\\n'.format(x))\n\n for decomposed_op in decomposed:\n output(protocols.qasm(decomposed_op, args=self.args))\n\n if should_annotate:\n output_line_gap(1)\n", "path": "cirq/circuits/qasm_output.py"}], "after_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in 
compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Utility classes for representing QASM.\"\"\"\n\nfrom typing import Callable, Dict, Optional, Sequence, Set, Tuple, Union, TYPE_CHECKING\n\nimport re\nimport numpy as np\n\nfrom cirq import ops, linalg, protocols, value\n\nif TYPE_CHECKING:\n import cirq\n\n\[email protected]_equality(approximate=True)\nclass QasmUGate(ops.SingleQubitGate):\n def __init__(self, theta, phi, lmda) -> None:\n \"\"\"A QASM gate representing any single qubit unitary with a series of\n three rotations, Z, Y, and Z.\n\n The angles are normalized to the range [0, 2) half_turns.\n\n Args:\n theta: Half turns to rotate about Y (applied second).\n phi: Half turns to rotate about Z (applied last).\n lmda: Half turns to rotate about Z (applied first).\n \"\"\"\n self.lmda = lmda % 2\n self.theta = theta % 2\n self.phi = phi % 2\n\n @staticmethod\n def from_matrix(mat: np.array) -> 'QasmUGate':\n pre_phase, rotation, post_phase = linalg.deconstruct_single_qubit_matrix_into_angles(mat)\n return QasmUGate(\n rotation / np.pi,\n post_phase / np.pi,\n pre_phase / np.pi,\n )\n\n def _has_unitary_(self):\n return True\n\n def _qasm_(self, qubits: Tuple['cirq.Qid', ...], args: 'cirq.QasmArgs') -> str:\n args.validate_version('2.0')\n return args.format(\n 'u3({0:half_turns},{1:half_turns},{2:half_turns}) {3};\\n',\n self.theta,\n self.phi,\n self.lmda,\n qubits[0],\n )\n\n def __repr__(self) -> str:\n return (\n f'cirq.circuits.qasm_output.QasmUGate('\n f'theta={self.theta!r}, '\n f'phi={self.phi!r}, '\n f'lmda={self.lmda})'\n )\n\n def _decompose_(self, qubits):\n q = qubits[0]\n return [\n ops.rz(self.lmda * np.pi).on(q),\n ops.ry(self.theta * np.pi).on(q),\n ops.rz(self.phi * np.pi).on(q),\n ]\n\n def _value_equality_values_(self):\n return self.lmda, self.theta, self.phi\n\n\[email protected]_equality\nclass QasmTwoQubitGate(ops.TwoQubitGate):\n def __init__(self, kak: linalg.KakDecomposition) -> None:\n \"\"\"A two qubit gate represented in QASM by the KAK decomposition.\n\n All angles are in half turns. 
Assumes a canonicalized KAK\n decomposition.\n\n Args:\n kak: KAK decomposition of the two-qubit gate.\n \"\"\"\n self.kak = kak\n\n def _value_equality_values_(self):\n return self.kak\n\n @staticmethod\n def from_matrix(mat: np.array, atol=1e-8) -> 'QasmTwoQubitGate':\n \"\"\"Creates a QasmTwoQubitGate from the given matrix.\n\n Args:\n mat: The unitary matrix of the two qubit gate.\n atol: Absolute error tolerance when decomposing.\n\n Returns:\n A QasmTwoQubitGate implementing the matrix.\n \"\"\"\n kak = linalg.kak_decomposition(mat, atol=atol)\n return QasmTwoQubitGate(kak)\n\n def _unitary_(self):\n return protocols.unitary(self.kak)\n\n def _decompose_(self, qubits: Sequence['cirq.Qid']) -> 'cirq.OP_TREE':\n q0, q1 = qubits\n x, y, z = self.kak.interaction_coefficients\n a = x * -2 / np.pi + 0.5\n b = y * -2 / np.pi + 0.5\n c = z * -2 / np.pi + 0.5\n\n b0, b1 = self.kak.single_qubit_operations_before\n yield QasmUGate.from_matrix(b0).on(q0)\n yield QasmUGate.from_matrix(b1).on(q1)\n\n yield ops.X(q0) ** 0.5\n yield ops.CNOT(q0, q1)\n yield ops.X(q0) ** a\n yield ops.Y(q1) ** b\n yield ops.CNOT(q1, q0)\n yield ops.X(q1) ** -0.5\n yield ops.Z(q1) ** c\n yield ops.CNOT(q0, q1)\n\n a0, a1 = self.kak.single_qubit_operations_after\n yield QasmUGate.from_matrix(a0).on(q0)\n yield QasmUGate.from_matrix(a1).on(q1)\n\n def __repr__(self) -> str:\n return 'cirq.circuits.qasm_output.QasmTwoQubitGate({!r})'.format(self.kak)\n\n\nclass QasmOutput:\n \"\"\"Representation of a circuit in QASM (quantum assembly) format.\n\n Please note that the QASM importer is in an experimental state and\n currently only supports a subset of the full OpenQASM spec.\n Amongst others, classical control, arbitrary gate definitions,\n and even some of the gates that don't have a one-to-one representation\n in Cirq, are not yet supported.\n\n QASM output can be saved to a file using the save method.\n \"\"\"\n\n valid_id_re = re.compile(r'[a-z][a-zA-Z0-9_]*\\Z')\n\n def __init__(\n self,\n operations: 'cirq.OP_TREE',\n qubits: Tuple['cirq.Qid', ...],\n header: str = '',\n precision: int = 10,\n version: str = '2.0',\n ) -> None:\n self.operations = tuple(ops.flatten_to_ops(operations))\n self.qubits = qubits\n self.header = header\n self.measurements = tuple(\n op for op in self.operations if isinstance(op.gate, ops.MeasurementGate)\n )\n meas_key_id_map, meas_comments = self._generate_measurement_ids()\n self.meas_comments = meas_comments\n qubit_id_map = self._generate_qubit_ids()\n self.args = protocols.QasmArgs(\n precision=precision,\n version=version,\n qubit_id_map=qubit_id_map,\n meas_key_id_map=meas_key_id_map,\n )\n\n def _generate_measurement_ids(self) -> Tuple[Dict[str, str], Dict[str, Optional[str]]]:\n # Pick an id for the creg that will store each measurement\n meas_key_id_map = {} # type: Dict[str, str]\n meas_comments = {} # type: Dict[str, Optional[str]]\n meas_i = 0\n for meas in self.measurements:\n key = protocols.measurement_key(meas)\n if key in meas_key_id_map:\n continue\n meas_id = 'm_{}'.format(key)\n if self.is_valid_qasm_id(meas_id):\n meas_comments[key] = None\n else:\n meas_id = 'm{}'.format(meas_i)\n meas_i += 1\n meas_comments[key] = ' '.join(key.split('\\n'))\n meas_key_id_map[key] = meas_id\n return meas_key_id_map, meas_comments\n\n def _generate_qubit_ids(self) -> Dict['cirq.Qid', str]:\n return {qubit: 'q[{}]'.format(i) for i, qubit in enumerate(self.qubits)}\n\n def is_valid_qasm_id(self, id_str: str) -> bool:\n \"\"\"Test if id_str is a valid id in QASM grammar.\"\"\"\n return 
self.valid_id_re.match(id_str) != None\n\n def save(self, path: Union[str, bytes, int]) -> None:\n \"\"\"Write QASM output to a file specified by path.\"\"\"\n with open(path, 'w') as f:\n\n def write(s: str) -> None:\n f.write(s)\n\n self._write_qasm(write)\n\n def __str__(self) -> str:\n \"\"\"Return QASM output as a string.\"\"\"\n output = []\n self._write_qasm(lambda s: output.append(s))\n return ''.join(output)\n\n def _write_qasm(self, output_func: Callable[[str], None]) -> None:\n self.args.validate_version('2.0')\n\n # Generate nice line spacing\n line_gap = [0]\n\n def output_line_gap(n):\n line_gap[0] = max(line_gap[0], n)\n\n def output(text):\n if line_gap[0] > 0:\n output_func('\\n' * line_gap[0])\n line_gap[0] = 0\n output_func(text)\n\n # Comment header\n if self.header:\n for line in self.header.split('\\n'):\n output(('// ' + line).rstrip() + '\\n')\n output('\\n')\n\n # Version\n output('OPENQASM 2.0;\\n')\n output('include \"qelib1.inc\";\\n')\n output_line_gap(2)\n\n # Function definitions\n # None yet\n\n # Register definitions\n # Qubit registers\n output('// Qubits: [{}]\\n'.format(', '.join(map(str, self.qubits))))\n if len(self.qubits) > 0:\n output('qreg q[{}];\\n'.format(len(self.qubits)))\n # Classical registers\n # Pick an id for the creg that will store each measurement\n already_output_keys: Set[str] = set()\n for meas in self.measurements:\n key = protocols.measurement_key(meas)\n if key in already_output_keys:\n continue\n already_output_keys.add(key)\n meas_id = self.args.meas_key_id_map[key]\n comment = self.meas_comments[key]\n if comment is None:\n output('creg {}[{}];\\n'.format(meas_id, len(meas.qubits)))\n else:\n output(\n 'creg {}[{}]; // Measurement: {}\\n'.format(meas_id, len(meas.qubits), comment)\n )\n output_line_gap(2)\n\n # Operations\n self._write_operations(self.operations, output, output_line_gap)\n\n def _write_operations(\n self,\n op_tree: 'cirq.OP_TREE',\n output: Callable[[str], None],\n output_line_gap: Callable[[int], None],\n ) -> None:\n def keep(op: 'cirq.Operation') -> bool:\n return protocols.qasm(op, args=self.args, default=None) is not None\n\n def fallback(op):\n if len(op.qubits) not in [1, 2]:\n return NotImplemented\n\n mat = protocols.unitary(op, None)\n if mat is None:\n return NotImplemented\n\n if len(op.qubits) == 1:\n return QasmUGate.from_matrix(mat).on(*op.qubits)\n return QasmTwoQubitGate.from_matrix(mat).on(*op.qubits)\n\n def on_stuck(bad_op):\n return ValueError('Cannot output operation as QASM: {!r}'.format(bad_op))\n\n for main_op in ops.flatten_op_tree(op_tree):\n decomposed = protocols.decompose(\n main_op, keep=keep, fallback_decomposer=fallback, on_stuck_raise=on_stuck\n )\n\n qasms = [protocols.qasm(op, args=self.args) for op in decomposed]\n\n should_annotate = decomposed != [main_op] or qasms[0].count('\\n') > 1\n if should_annotate:\n output_line_gap(1)\n if isinstance(main_op, ops.GateOperation):\n x = str(main_op.gate).replace('\\n', '\\n //')\n output('// Gate: {!s}\\n'.format(x))\n else:\n x = str(main_op).replace('\\n', '\\n //')\n output('// Operation: {!s}\\n'.format(x))\n\n for qasm in qasms:\n output(qasm)\n\n if should_annotate:\n output_line_gap(1)\n", "path": "cirq/circuits/qasm_output.py"}]}
| 4,026 | 274 |
gh_patches_debug_14138
|
rasdani/github-patches
|
git_diff
|
digitalfabrik__integreat-cms-923
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Autosave changes unversioned `Page` model
### Describe the Bug
<!-- A clear and concise description of what the bug is. -->
When an autosave is triggered, the complete `Page` and `PageTranslation` model instances are saved.
Since just the `PageTranslation` is versioned, this might lead to unexpected changes on public pages.
### Steps to Reproduce
1. Go to a page form
2. Edit something in the content textarea to trigger the tinymce autosave
3. Change some field in the page form, e.g. embed live content
4. Wait for the autosave to get triggered
5. Return without saving
### Expected Behavior
<!-- A clear and concise description of what you expected to happen. -->
All page fields should be unchanged, since they are not versioned and there is no backup of the "public" state. If the page form is exited without saving, the page model instance should be exactly the same as before.
### Actual Behavior
<!-- A clear and concise description of what actually happened. -->
All fields are automatically saved and the page instance is saved as well.
### Additional Information
<!-- Add any other context (e.g. logs, screenshots, etc.) about the problem here. -->
Probably, this can be solved by limiting the `FormData` in [src/frontend/js/forms/autosave.ts](https://github.com/Integreat/integreat-cms/blob/8219586acbb98c681635210d0c45817285a0c13e/src/frontend/js/forms/autosave.ts#L7) so only the fields `slug`, `title` and `text`/`description` are saved.
_Edit:_ Probably not, since this would cause the validation to fail or remove specific values... so maybe we need a completely new view for the autosave?
--- END ISSUE ---
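One possible shape for the dedicated autosave view floated in the issue; this is a hedged sketch that reuses the lookup pattern and form signature from `page_view.py` below, writes only the versioned `PageTranslation`, and never saves the unversioned `Page` (whether `PageTranslationForm.save()` creates a new version is assumed, not verified):

```python
# Hypothetical autosave endpoint: bind and save only the versioned translation.
from django.http import JsonResponse
from django.shortcuts import get_object_or_404

from ...forms import PageTranslationForm
from ...models import PageTranslation, Region


def autosave_page_translation(request, region_slug, language_slug, page_id):
    region = Region.get_current_region(request)
    language = get_object_or_404(region.languages, slug=language_slug)
    page = get_object_or_404(region.pages, id=page_id)

    translation_form = PageTranslationForm(
        data=request.POST,
        instance=PageTranslation.objects.filter(page=page, language=language).first(),
        additional_instance_attributes={"creator": request.user, "language": language},
    )
    if not translation_form.is_valid():
        return JsonResponse({"valid": False}, status=400)
    translation_form.save()  # only the versioned PageTranslation is written
    # The Page model itself is never touched, so leaving the page form without
    # a regular save keeps the unversioned fields exactly as they were.
    return JsonResponse({"valid": True})
```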
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cms/views/pages/page_view.py`
Content:
```
1 import logging
2
3 from django.contrib import messages
4 from django.contrib.auth.decorators import login_required
5 from django.core.exceptions import PermissionDenied
6 from django.shortcuts import render, redirect, get_object_or_404
7 from django.urls import reverse
8 from django.utils.decorators import method_decorator
9 from django.utils.translation import ugettext as _
10 from django.views.generic import TemplateView
11
12 from ...constants import status, text_directions
13 from ...decorators import region_permission_required, permission_required
14 from ...forms import PageForm, PageTranslationForm
15 from ...models import PageTranslation, Region
16 from .page_context_mixin import PageContextMixin
17 from ..media.media_context_mixin import MediaContextMixin
18
19 logger = logging.getLogger(__name__)
20
21
22 @method_decorator(login_required, name="dispatch")
23 @method_decorator(region_permission_required, name="dispatch")
24 @method_decorator(permission_required("cms.view_page"), name="dispatch")
25 class PageView(TemplateView, PageContextMixin, MediaContextMixin):
26 """
27 View for the page form and page translation form
28 """
29
30 #: The template to render (see :class:`~django.views.generic.base.TemplateResponseMixin`)
31 template_name = "pages/page_form.html"
32 #: The context dict passed to the template (see :class:`~django.views.generic.base.ContextMixin`)
33 base_context = {
34 "current_menu_item": "new_page",
35 "PUBLIC": status.PUBLIC,
36 }
37
38 # pylint: disable=too-many-locals
39 def get(self, request, *args, **kwargs):
40 """
41 Render :class:`~cms.forms.pages.page_form.PageForm` and :class:`~cms.forms.pages.page_translation_form.PageTranslationForm`
42
43 :param request: The current request
44 :type request: ~django.http.HttpResponse
45
46 :param args: The supplied arguments
47 :type args: list
48
49 :param kwargs: The supplied keyword arguments
50 :type kwargs: dict
51
52 :raises ~django.core.exceptions.PermissionDenied: If user does not have the permission to edit the specific page
53
54 :return: The rendered template response
55 :rtype: ~django.template.response.TemplateResponse
56 """
57
58 region = Region.get_current_region(request)
59 language = get_object_or_404(region.languages, slug=kwargs.get("language_slug"))
60
61 # get page and translation objects if they exist
62 page = region.pages.filter(id=kwargs.get("page_id")).first()
63 page_translation = PageTranslation.objects.filter(
64 page=page,
65 language=language,
66 ).first()
67
68 disabled = False
69 if page:
70 # Make form disabled if page is archived
71 if page.explicitly_archived:
72 disabled = True
73 messages.warning(
74 request, _("You cannot edit this page because it is archived.")
75 )
76 elif page.implicitly_archived:
77 disabled = True
78 messages.warning(
79 request,
80 _(
81 "You cannot edit this page, because one of its parent pages is archived and therefore, this page is archived as well."
82 ),
83 )
84 # Show information if latest changes are only saved as draft
85 public_translation = page.get_public_translation(language.slug)
86 if public_translation and page_translation != public_translation:
87 messages.info(
88 request,
89 _(
90 "The latest changes have only been saved as a draft. Currently, <a href='%(revision_url)s' class='underline hover:no-underline'>version %(revision)s</a> of this page is displayed in the app."
91 )
92 % {
93 "revision_url": reverse(
94 "page_revisions",
95 kwargs={
96 "region_slug": region.slug,
97 "language_slug": language.slug,
98 "page_id": page.id,
99 "selected_revision": public_translation.version,
100 },
101 ),
102 "revision": public_translation.version,
103 },
104 )
105
106 # Make form disabled if user has no permission to edit the page
107 if not request.user.has_perm("cms.change_page_object", page):
108 disabled = True
109 messages.warning(
110 request,
111 _("You don't have the permission to edit this page."),
112 )
113 # Show warning if user has no permission to publish the page
114 if not request.user.has_perm("cms.publish_page_object", page):
115 messages.warning(
116 request,
117 _(
118 "You don't have the permission to publish this page, but you can propose changes and submit them for review instead."
119 ),
120 )
121
122 page_form = PageForm(
123 instance=page,
124 disabled=disabled,
125 additional_instance_attributes={
126 "region": region,
127 },
128 )
129 page_translation_form = PageTranslationForm(
130 instance=page_translation, disabled=disabled
131 )
132
133 # Pass side by side language options
134 side_by_side_language_options = self.get_side_by_side_language_options(
135 region, language, page
136 )
137
138 # Pass siblings to template to enable rendering of page order table
139 if not page or not page.parent:
140 siblings = region.pages.filter(level=0)
141 else:
142 siblings = page.parent.children.all()
143 context = self.get_context_data(**kwargs)
144 return render(
145 request,
146 self.template_name,
147 {
148 **self.base_context,
149 **context,
150 "page_form": page_form,
151 "page_translation_form": page_translation_form,
152 "page": page,
153 "siblings": siblings,
154 "language": language,
155 # Languages for tab view
156 "languages": region.languages if page else [language],
157 "side_by_side_language_options": side_by_side_language_options,
158 "right_to_left": (
159 language.text_direction == text_directions.RIGHT_TO_LEFT
160 ),
161 },
162 )
163
164 # pylint: disable=too-many-branches,unused-argument
165 def post(self, request, *args, **kwargs):
166 """
167 Submit :class:`~cms.forms.pages.page_form.PageForm` and
168 :class:`~cms.forms.pages.page_translation_form.PageTranslationForm` and save :class:`~cms.models.pages.page.Page`
169 and :class:`~cms.models.pages.page_translation.PageTranslation` objects.
170 Forms containing images/files need to be additionally instantiated with the FILES attribute of request objects,
171 see :doc:`django:topics/http/file-uploads`
172
173 :param request: The current request
174 :type request: ~django.http.HttpResponse
175
176 :param args: The supplied arguments
177 :type args: list
178
179 :param kwargs: The supplied keyword arguments
180 :type kwargs: dict
181
182 :raises ~django.core.exceptions.PermissionDenied: If user does not have the permission to edit the specific page
183
184 :return: The rendered template response
185 :rtype: ~django.template.response.TemplateResponse
186 """
187
188 region = Region.get_current_region(request)
189 language = get_object_or_404(region.languages, slug=kwargs.get("language_slug"))
190
191 page_instance = region.pages.filter(id=kwargs.get("page_id")).first()
192
193 if not request.user.has_perm("cms.change_page_object", page_instance):
194 raise PermissionDenied(
195 f"{request.user.profile!r} does not have the permission to edit {page_instance!r}"
196 )
197
198 page_translation_instance = PageTranslation.objects.filter(
199 page=page_instance,
200 language=language,
201 ).first()
202
203 # Pass siblings to template to enable rendering of page order table
204 if not page_instance or not page_instance.parent:
205 siblings = region.pages.filter(level=0)
206 else:
207 siblings = page_instance.parent.children.all()
208
209 page_form = PageForm(
210 data=request.POST,
211 files=request.FILES,
212 instance=page_instance,
213 additional_instance_attributes={
214 "region": region,
215 },
216 )
217 page_translation_form = PageTranslationForm(
218 data=request.POST,
219 instance=page_translation_instance,
220 additional_instance_attributes={
221 "creator": request.user,
222 "language": language,
223 "page": page_form.instance,
224 },
225 )
226
227 if not page_form.is_valid() or not page_translation_form.is_valid():
228 # Add error messages
229 page_form.add_error_messages(request)
230 page_translation_form.add_error_messages(request)
231 elif (
232 not request.user.has_perm("cms.publish_page_object", page_form.instance)
233 and page_translation_form.cleaned_data.get("status") == status.PUBLIC
234 ):
235 # Raise PermissionDenied if user wants to publish page but doesn't have the permission
236 raise PermissionDenied(
237 f"{request.user.profile!r} does not have the permission to publish {page_form.instance!r}"
238 )
239 elif (
240 page_translation_form.instance.status == status.AUTO_SAVE
241 and not page_form.has_changed()
242 and not page_translation_form.has_changed()
243 ):
244 messages.info(request, _("No changes detected, autosave skipped"))
245
246 else:
247 # Save forms
248 page_translation_form.instance.page = page_form.save()
249 page_translation_form.save()
250 # Add the success message and redirect to the edit page
251 if not page_instance:
252 messages.success(
253 request,
254 _('Page "{}" was successfully created').format(
255 page_translation_form.instance.title
256 ),
257 )
258 return redirect(
259 "edit_page",
260 **{
261 "page_id": page_form.instance.id,
262 "region_slug": region.slug,
263 "language_slug": language.slug,
264 },
265 )
266
267 if not page_form.has_changed() and not page_translation_form.has_changed():
268 messages.info(request, _("No changes detected, but date refreshed"))
269 else:
270 # Add the success message
271 page_translation_form.add_success_message(request)
272
273 return render(
274 request,
275 self.template_name,
276 {
277 **self.base_context,
278 "page_form": page_form,
279 "page_translation_form": page_translation_form,
280 "page": page_instance,
281 "siblings": siblings,
282 "language": language,
283 # Languages for tab view
284 "languages": region.languages if page_instance else [language],
285 "side_by_side_language_options": self.get_side_by_side_language_options(
286 region, language, page_instance
287 ),
288 "right_to_left": (
289 language.text_direction == text_directions.RIGHT_TO_LEFT
290 ),
291 },
292 )
293
294 @staticmethod
295 def get_side_by_side_language_options(region, language, page):
296 """
297 This is a helper function to generate the side-by-side language options for both the get and post requests.
298
299 :param region: The current region
300 :type region: ~cms.models.regions.region.Region
301
302 :param language: The current language
303 :type language: ~cms.models.languages.language.Language
304
305 :param page: The current page
306 :type page: ~cms.models.pages.page.Page
307
308 :return: The list of language options, each represented by a dict
309 :rtype: list
310 """
311
312 side_by_side_language_options = []
313 for language_node in region.language_tree_nodes.all():
314 if language_node.parent:
315 source_translation = PageTranslation.objects.filter(
316 page=page,
317 language=language_node.parent.language,
318 )
319 side_by_side_language_options.append(
320 {
321 "value": language_node.language.slug,
322 "label": _("{source_language} to {target_language}").format(
323 source_language=language_node.parent.language.translated_name,
324 target_language=language_node.language.translated_name,
325 ),
326 "selected": language_node.language == language,
327 "disabled": not source_translation.exists(),
328 }
329 )
330 return side_by_side_language_options
331
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/cms/views/pages/page_view.py b/src/cms/views/pages/page_view.py
--- a/src/cms/views/pages/page_view.py
+++ b/src/cms/views/pages/page_view.py
@@ -244,8 +244,13 @@
messages.info(request, _("No changes detected, autosave skipped"))
else:
- # Save forms
- page_translation_form.instance.page = page_form.save()
+ # Only save page form if page does not yet exist or if translation is no auto save
+ if (
+ not page_instance
+ or page_translation_form.instance.status != status.AUTO_SAVE
+ ):
+ page_translation_form.instance.page = page_form.save()
+ # Save page translation form
page_translation_form.save()
# Add the success message and redirect to the edit page
if not page_instance:
|
{"golden_diff": "diff --git a/src/cms/views/pages/page_view.py b/src/cms/views/pages/page_view.py\n--- a/src/cms/views/pages/page_view.py\n+++ b/src/cms/views/pages/page_view.py\n@@ -244,8 +244,13 @@\n messages.info(request, _(\"No changes detected, autosave skipped\"))\n \n else:\n- # Save forms\n- page_translation_form.instance.page = page_form.save()\n+ # Only save page form if page does not yet exist or if translation is no auto save\n+ if (\n+ not page_instance\n+ or page_translation_form.instance.status != status.AUTO_SAVE\n+ ):\n+ page_translation_form.instance.page = page_form.save()\n+ # Save page translation form\n page_translation_form.save()\n # Add the success message and redirect to the edit page\n if not page_instance:\n", "issue": "Autosave changes unversioned `Page` model\n### Describe the Bug\r\n<!-- A clear and concise description of what the bug is. -->\r\nWhen an autosave is triggered, the complete `Page` and `PageTranslation` model instances are saved.\r\nSince just the `PageTranslation` is versioned, this might lead to unexpected changes on public pages.\r\n\r\n### Steps to Reproduce\r\n\r\n1. Go to a page form\r\n2. Edit something in the content textarea to trigger the tinymce autosave\r\n3. Change some field in the page form, e.g. embed live content\r\n4. Wait for the autosave to get triggered\r\n5. Return without saving\r\n\r\n### Expected Behavior\r\n<!-- A clear and concise description of what you expected to happen. -->\r\nAll page fields should be unchanged, since they are not versioned and there is no backup of the \"public\" state. If the page field is exited without saving, the page model instance should be exactly the same as before.\r\n\r\n### Actual Behavior\r\n<!-- A clear and concise description of what actually happened. -->\r\nAll fields are automatically saved and the page instance is saved as well.\r\n\r\n### Additional Information\r\n<!-- Add any other context (e.g. logs, screenshots, etc.) about the problem here. -->\r\nProbably, this can be solved by limiting the `FormData` in [src/frontend/js/forms/autosave.ts](https://github.com/Integreat/integreat-cms/blob/8219586acbb98c681635210d0c45817285a0c13e/src/frontend/js/forms/autosave.ts#L7) so only the fields `slug`, `title` and `text`/`description` are saved.\r\n_Edit:_ Probably not, since this would cause the validation to fail or remove specific values... 
so maybe we need a completely new view for the autosave?\n", "before_files": [{"content": "import logging\n\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.exceptions import PermissionDenied\nfrom django.shortcuts import render, redirect, get_object_or_404\nfrom django.urls import reverse\nfrom django.utils.decorators import method_decorator\nfrom django.utils.translation import ugettext as _\nfrom django.views.generic import TemplateView\n\nfrom ...constants import status, text_directions\nfrom ...decorators import region_permission_required, permission_required\nfrom ...forms import PageForm, PageTranslationForm\nfrom ...models import PageTranslation, Region\nfrom .page_context_mixin import PageContextMixin\nfrom ..media.media_context_mixin import MediaContextMixin\n\nlogger = logging.getLogger(__name__)\n\n\n@method_decorator(login_required, name=\"dispatch\")\n@method_decorator(region_permission_required, name=\"dispatch\")\n@method_decorator(permission_required(\"cms.view_page\"), name=\"dispatch\")\nclass PageView(TemplateView, PageContextMixin, MediaContextMixin):\n \"\"\"\n View for the page form and page translation form\n \"\"\"\n\n #: The template to render (see :class:`~django.views.generic.base.TemplateResponseMixin`)\n template_name = \"pages/page_form.html\"\n #: The context dict passed to the template (see :class:`~django.views.generic.base.ContextMixin`)\n base_context = {\n \"current_menu_item\": \"new_page\",\n \"PUBLIC\": status.PUBLIC,\n }\n\n # pylint: disable=too-many-locals\n def get(self, request, *args, **kwargs):\n \"\"\"\n Render :class:`~cms.forms.pages.page_form.PageForm` and :class:`~cms.forms.pages.page_translation_form.PageTranslationForm`\n\n :param request: The current request\n :type request: ~django.http.HttpResponse\n\n :param args: The supplied arguments\n :type args: list\n\n :param kwargs: The supplied keyword arguments\n :type kwargs: dict\n\n :raises ~django.core.exceptions.PermissionDenied: If user does not have the permission to edit the specific page\n\n :return: The rendered template response\n :rtype: ~django.template.response.TemplateResponse\n \"\"\"\n\n region = Region.get_current_region(request)\n language = get_object_or_404(region.languages, slug=kwargs.get(\"language_slug\"))\n\n # get page and translation objects if they exist\n page = region.pages.filter(id=kwargs.get(\"page_id\")).first()\n page_translation = PageTranslation.objects.filter(\n page=page,\n language=language,\n ).first()\n\n disabled = False\n if page:\n # Make form disabled if page is archived\n if page.explicitly_archived:\n disabled = True\n messages.warning(\n request, _(\"You cannot edit this page because it is archived.\")\n )\n elif page.implicitly_archived:\n disabled = True\n messages.warning(\n request,\n _(\n \"You cannot edit this page, because one of its parent pages is archived and therefore, this page is archived as well.\"\n ),\n )\n # Show information if latest changes are only saved as draft\n public_translation = page.get_public_translation(language.slug)\n if public_translation and page_translation != public_translation:\n messages.info(\n request,\n _(\n \"The latest changes have only been saved as a draft. 
Currently, <a href='%(revision_url)s' class='underline hover:no-underline'>version %(revision)s</a> of this page is displayed in the app.\"\n )\n % {\n \"revision_url\": reverse(\n \"page_revisions\",\n kwargs={\n \"region_slug\": region.slug,\n \"language_slug\": language.slug,\n \"page_id\": page.id,\n \"selected_revision\": public_translation.version,\n },\n ),\n \"revision\": public_translation.version,\n },\n )\n\n # Make form disabled if user has no permission to edit the page\n if not request.user.has_perm(\"cms.change_page_object\", page):\n disabled = True\n messages.warning(\n request,\n _(\"You don't have the permission to edit this page.\"),\n )\n # Show warning if user has no permission to publish the page\n if not request.user.has_perm(\"cms.publish_page_object\", page):\n messages.warning(\n request,\n _(\n \"You don't have the permission to publish this page, but you can propose changes and submit them for review instead.\"\n ),\n )\n\n page_form = PageForm(\n instance=page,\n disabled=disabled,\n additional_instance_attributes={\n \"region\": region,\n },\n )\n page_translation_form = PageTranslationForm(\n instance=page_translation, disabled=disabled\n )\n\n # Pass side by side language options\n side_by_side_language_options = self.get_side_by_side_language_options(\n region, language, page\n )\n\n # Pass siblings to template to enable rendering of page order table\n if not page or not page.parent:\n siblings = region.pages.filter(level=0)\n else:\n siblings = page.parent.children.all()\n context = self.get_context_data(**kwargs)\n return render(\n request,\n self.template_name,\n {\n **self.base_context,\n **context,\n \"page_form\": page_form,\n \"page_translation_form\": page_translation_form,\n \"page\": page,\n \"siblings\": siblings,\n \"language\": language,\n # Languages for tab view\n \"languages\": region.languages if page else [language],\n \"side_by_side_language_options\": side_by_side_language_options,\n \"right_to_left\": (\n language.text_direction == text_directions.RIGHT_TO_LEFT\n ),\n },\n )\n\n # pylint: disable=too-many-branches,unused-argument\n def post(self, request, *args, **kwargs):\n \"\"\"\n Submit :class:`~cms.forms.pages.page_form.PageForm` and\n :class:`~cms.forms.pages.page_translation_form.PageTranslationForm` and save :class:`~cms.models.pages.page.Page`\n and :class:`~cms.models.pages.page_translation.PageTranslation` objects.\n Forms containing images/files need to be additionally instantiated with the FILES attribute of request objects,\n see :doc:`django:topics/http/file-uploads`\n\n :param request: The current request\n :type request: ~django.http.HttpResponse\n\n :param args: The supplied arguments\n :type args: list\n\n :param kwargs: The supplied keyword arguments\n :type kwargs: dict\n\n :raises ~django.core.exceptions.PermissionDenied: If user does not have the permission to edit the specific page\n\n :return: The rendered template response\n :rtype: ~django.template.response.TemplateResponse\n \"\"\"\n\n region = Region.get_current_region(request)\n language = get_object_or_404(region.languages, slug=kwargs.get(\"language_slug\"))\n\n page_instance = region.pages.filter(id=kwargs.get(\"page_id\")).first()\n\n if not request.user.has_perm(\"cms.change_page_object\", page_instance):\n raise PermissionDenied(\n f\"{request.user.profile!r} does not have the permission to edit {page_instance!r}\"\n )\n\n page_translation_instance = PageTranslation.objects.filter(\n page=page_instance,\n language=language,\n ).first()\n\n # Pass 
siblings to template to enable rendering of page order table\n if not page_instance or not page_instance.parent:\n siblings = region.pages.filter(level=0)\n else:\n siblings = page_instance.parent.children.all()\n\n page_form = PageForm(\n data=request.POST,\n files=request.FILES,\n instance=page_instance,\n additional_instance_attributes={\n \"region\": region,\n },\n )\n page_translation_form = PageTranslationForm(\n data=request.POST,\n instance=page_translation_instance,\n additional_instance_attributes={\n \"creator\": request.user,\n \"language\": language,\n \"page\": page_form.instance,\n },\n )\n\n if not page_form.is_valid() or not page_translation_form.is_valid():\n # Add error messages\n page_form.add_error_messages(request)\n page_translation_form.add_error_messages(request)\n elif (\n not request.user.has_perm(\"cms.publish_page_object\", page_form.instance)\n and page_translation_form.cleaned_data.get(\"status\") == status.PUBLIC\n ):\n # Raise PermissionDenied if user wants to publish page but doesn't have the permission\n raise PermissionDenied(\n f\"{request.user.profile!r} does not have the permission to publish {page_form.instance!r}\"\n )\n elif (\n page_translation_form.instance.status == status.AUTO_SAVE\n and not page_form.has_changed()\n and not page_translation_form.has_changed()\n ):\n messages.info(request, _(\"No changes detected, autosave skipped\"))\n\n else:\n # Save forms\n page_translation_form.instance.page = page_form.save()\n page_translation_form.save()\n # Add the success message and redirect to the edit page\n if not page_instance:\n messages.success(\n request,\n _('Page \"{}\" was successfully created').format(\n page_translation_form.instance.title\n ),\n )\n return redirect(\n \"edit_page\",\n **{\n \"page_id\": page_form.instance.id,\n \"region_slug\": region.slug,\n \"language_slug\": language.slug,\n },\n )\n\n if not page_form.has_changed() and not page_translation_form.has_changed():\n messages.info(request, _(\"No changes detected, but date refreshed\"))\n else:\n # Add the success message\n page_translation_form.add_success_message(request)\n\n return render(\n request,\n self.template_name,\n {\n **self.base_context,\n \"page_form\": page_form,\n \"page_translation_form\": page_translation_form,\n \"page\": page_instance,\n \"siblings\": siblings,\n \"language\": language,\n # Languages for tab view\n \"languages\": region.languages if page_instance else [language],\n \"side_by_side_language_options\": self.get_side_by_side_language_options(\n region, language, page_instance\n ),\n \"right_to_left\": (\n language.text_direction == text_directions.RIGHT_TO_LEFT\n ),\n },\n )\n\n @staticmethod\n def get_side_by_side_language_options(region, language, page):\n \"\"\"\n This is a helper function to generate the side-by-side language options for both the get and post requests.\n\n :param region: The current region\n :type region: ~cms.models.regions.region.Region\n\n :param language: The current language\n :type language: ~cms.models.languages.language.Language\n\n :param page: The current page\n :type page: ~cms.models.pages.page.Page\n\n :return: The list of language options, each represented by a dict\n :rtype: list\n \"\"\"\n\n side_by_side_language_options = []\n for language_node in region.language_tree_nodes.all():\n if language_node.parent:\n source_translation = PageTranslation.objects.filter(\n page=page,\n language=language_node.parent.language,\n )\n side_by_side_language_options.append(\n {\n \"value\": 
language_node.language.slug,\n \"label\": _(\"{source_language} to {target_language}\").format(\n source_language=language_node.parent.language.translated_name,\n target_language=language_node.language.translated_name,\n ),\n \"selected\": language_node.language == language,\n \"disabled\": not source_translation.exists(),\n }\n )\n return side_by_side_language_options\n", "path": "src/cms/views/pages/page_view.py"}], "after_files": [{"content": "import logging\n\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.exceptions import PermissionDenied\nfrom django.shortcuts import render, redirect, get_object_or_404\nfrom django.urls import reverse\nfrom django.utils.decorators import method_decorator\nfrom django.utils.translation import ugettext as _\nfrom django.views.generic import TemplateView\n\nfrom ...constants import status, text_directions\nfrom ...decorators import region_permission_required, permission_required\nfrom ...forms import PageForm, PageTranslationForm\nfrom ...models import PageTranslation, Region\nfrom .page_context_mixin import PageContextMixin\nfrom ..media.media_context_mixin import MediaContextMixin\n\nlogger = logging.getLogger(__name__)\n\n\n@method_decorator(login_required, name=\"dispatch\")\n@method_decorator(region_permission_required, name=\"dispatch\")\n@method_decorator(permission_required(\"cms.view_page\"), name=\"dispatch\")\nclass PageView(TemplateView, PageContextMixin, MediaContextMixin):\n \"\"\"\n View for the page form and page translation form\n \"\"\"\n\n #: The template to render (see :class:`~django.views.generic.base.TemplateResponseMixin`)\n template_name = \"pages/page_form.html\"\n #: The context dict passed to the template (see :class:`~django.views.generic.base.ContextMixin`)\n base_context = {\n \"current_menu_item\": \"new_page\",\n \"PUBLIC\": status.PUBLIC,\n }\n\n # pylint: disable=too-many-locals\n def get(self, request, *args, **kwargs):\n \"\"\"\n Render :class:`~cms.forms.pages.page_form.PageForm` and :class:`~cms.forms.pages.page_translation_form.PageTranslationForm`\n\n :param request: The current request\n :type request: ~django.http.HttpResponse\n\n :param args: The supplied arguments\n :type args: list\n\n :param kwargs: The supplied keyword arguments\n :type kwargs: dict\n\n :raises ~django.core.exceptions.PermissionDenied: If user does not have the permission to edit the specific page\n\n :return: The rendered template response\n :rtype: ~django.template.response.TemplateResponse\n \"\"\"\n\n region = Region.get_current_region(request)\n language = get_object_or_404(region.languages, slug=kwargs.get(\"language_slug\"))\n\n # get page and translation objects if they exist\n page = region.pages.filter(id=kwargs.get(\"page_id\")).first()\n page_translation = PageTranslation.objects.filter(\n page=page,\n language=language,\n ).first()\n\n disabled = False\n if page:\n # Make form disabled if page is archived\n if page.explicitly_archived:\n disabled = True\n messages.warning(\n request, _(\"You cannot edit this page because it is archived.\")\n )\n elif page.implicitly_archived:\n disabled = True\n messages.warning(\n request,\n _(\n \"You cannot edit this page, because one of its parent pages is archived and therefore, this page is archived as well.\"\n ),\n )\n # Show information if latest changes are only saved as draft\n public_translation = page.get_public_translation(language.slug)\n if public_translation and page_translation != public_translation:\n 
messages.info(\n request,\n _(\n \"The latest changes have only been saved as a draft. Currently, <a href='%(revision_url)s' class='underline hover:no-underline'>version %(revision)s</a> of this page is displayed in the app.\"\n )\n % {\n \"revision_url\": reverse(\n \"page_revisions\",\n kwargs={\n \"region_slug\": region.slug,\n \"language_slug\": language.slug,\n \"page_id\": page.id,\n \"selected_revision\": public_translation.version,\n },\n ),\n \"revision\": public_translation.version,\n },\n )\n\n # Make form disabled if user has no permission to edit the page\n if not request.user.has_perm(\"cms.change_page_object\", page):\n disabled = True\n messages.warning(\n request,\n _(\"You don't have the permission to edit this page.\"),\n )\n # Show warning if user has no permission to publish the page\n if not request.user.has_perm(\"cms.publish_page_object\", page):\n messages.warning(\n request,\n _(\n \"You don't have the permission to publish this page, but you can propose changes and submit them for review instead.\"\n ),\n )\n\n page_form = PageForm(\n instance=page,\n disabled=disabled,\n additional_instance_attributes={\n \"region\": region,\n },\n )\n page_translation_form = PageTranslationForm(\n instance=page_translation, disabled=disabled\n )\n\n # Pass side by side language options\n side_by_side_language_options = self.get_side_by_side_language_options(\n region, language, page\n )\n\n # Pass siblings to template to enable rendering of page order table\n if not page or not page.parent:\n siblings = region.pages.filter(level=0)\n else:\n siblings = page.parent.children.all()\n context = self.get_context_data(**kwargs)\n return render(\n request,\n self.template_name,\n {\n **self.base_context,\n **context,\n \"page_form\": page_form,\n \"page_translation_form\": page_translation_form,\n \"page\": page,\n \"siblings\": siblings,\n \"language\": language,\n # Languages for tab view\n \"languages\": region.languages if page else [language],\n \"side_by_side_language_options\": side_by_side_language_options,\n \"right_to_left\": (\n language.text_direction == text_directions.RIGHT_TO_LEFT\n ),\n },\n )\n\n # pylint: disable=too-many-branches,unused-argument\n def post(self, request, *args, **kwargs):\n \"\"\"\n Submit :class:`~cms.forms.pages.page_form.PageForm` and\n :class:`~cms.forms.pages.page_translation_form.PageTranslationForm` and save :class:`~cms.models.pages.page.Page`\n and :class:`~cms.models.pages.page_translation.PageTranslation` objects.\n Forms containing images/files need to be additionally instantiated with the FILES attribute of request objects,\n see :doc:`django:topics/http/file-uploads`\n\n :param request: The current request\n :type request: ~django.http.HttpResponse\n\n :param args: The supplied arguments\n :type args: list\n\n :param kwargs: The supplied keyword arguments\n :type kwargs: dict\n\n :raises ~django.core.exceptions.PermissionDenied: If user does not have the permission to edit the specific page\n\n :return: The rendered template response\n :rtype: ~django.template.response.TemplateResponse\n \"\"\"\n\n region = Region.get_current_region(request)\n language = get_object_or_404(region.languages, slug=kwargs.get(\"language_slug\"))\n\n page_instance = region.pages.filter(id=kwargs.get(\"page_id\")).first()\n\n if not request.user.has_perm(\"cms.change_page_object\", page_instance):\n raise PermissionDenied(\n f\"{request.user.profile!r} does not have the permission to edit {page_instance!r}\"\n )\n\n page_translation_instance = 
PageTranslation.objects.filter(\n page=page_instance,\n language=language,\n ).first()\n\n # Pass siblings to template to enable rendering of page order table\n if not page_instance or not page_instance.parent:\n siblings = region.pages.filter(level=0)\n else:\n siblings = page_instance.parent.children.all()\n\n page_form = PageForm(\n data=request.POST,\n files=request.FILES,\n instance=page_instance,\n additional_instance_attributes={\n \"region\": region,\n },\n )\n page_translation_form = PageTranslationForm(\n data=request.POST,\n instance=page_translation_instance,\n additional_instance_attributes={\n \"creator\": request.user,\n \"language\": language,\n \"page\": page_form.instance,\n },\n )\n\n if not page_form.is_valid() or not page_translation_form.is_valid():\n # Add error messages\n page_form.add_error_messages(request)\n page_translation_form.add_error_messages(request)\n elif (\n not request.user.has_perm(\"cms.publish_page_object\", page_form.instance)\n and page_translation_form.cleaned_data.get(\"status\") == status.PUBLIC\n ):\n # Raise PermissionDenied if user wants to publish page but doesn't have the permission\n raise PermissionDenied(\n f\"{request.user.profile!r} does not have the permission to publish {page_form.instance!r}\"\n )\n elif (\n page_translation_form.instance.status == status.AUTO_SAVE\n and not page_form.has_changed()\n and not page_translation_form.has_changed()\n ):\n messages.info(request, _(\"No changes detected, autosave skipped\"))\n\n else:\n # Only save page form if page does not yet exist or if translation is no auto save\n if (\n not page_instance\n or page_translation_form.instance.status != status.AUTO_SAVE\n ):\n page_translation_form.instance.page = page_form.save()\n # Save page translation form\n page_translation_form.save()\n # Add the success message and redirect to the edit page\n if not page_instance:\n messages.success(\n request,\n _('Page \"{}\" was successfully created').format(\n page_translation_form.instance.title\n ),\n )\n return redirect(\n \"edit_page\",\n **{\n \"page_id\": page_form.instance.id,\n \"region_slug\": region.slug,\n \"language_slug\": language.slug,\n },\n )\n\n if not page_form.has_changed() and not page_translation_form.has_changed():\n messages.info(request, _(\"No changes detected, but date refreshed\"))\n else:\n # Add the success message\n page_translation_form.add_success_message(request)\n\n return render(\n request,\n self.template_name,\n {\n **self.base_context,\n \"page_form\": page_form,\n \"page_translation_form\": page_translation_form,\n \"page\": page_instance,\n \"siblings\": siblings,\n \"language\": language,\n # Languages for tab view\n \"languages\": region.languages if page_instance else [language],\n \"side_by_side_language_options\": self.get_side_by_side_language_options(\n region, language, page_instance\n ),\n \"right_to_left\": (\n language.text_direction == text_directions.RIGHT_TO_LEFT\n ),\n },\n )\n\n @staticmethod\n def get_side_by_side_language_options(region, language, page):\n \"\"\"\n This is a helper function to generate the side-by-side language options for both the get and post requests.\n\n :param region: The current region\n :type region: ~cms.models.regions.region.Region\n\n :param language: The current language\n :type language: ~cms.models.languages.language.Language\n\n :param page: The current page\n :type page: ~cms.models.pages.page.Page\n\n :return: The list of language options, each represented by a dict\n :rtype: list\n \"\"\"\n\n 
side_by_side_language_options = []\n for language_node in region.language_tree_nodes.all():\n if language_node.parent:\n source_translation = PageTranslation.objects.filter(\n page=page,\n language=language_node.parent.language,\n )\n side_by_side_language_options.append(\n {\n \"value\": language_node.language.slug,\n \"label\": _(\"{source_language} to {target_language}\").format(\n source_language=language_node.parent.language.translated_name,\n target_language=language_node.language.translated_name,\n ),\n \"selected\": language_node.language == language,\n \"disabled\": not source_translation.exists(),\n }\n )\n return side_by_side_language_options\n", "path": "src/cms/views/pages/page_view.py"}]}
| 3,951 | 183 |
gh_patches_debug_19223
|
rasdani/github-patches
|
git_diff
|
coala__coala-4989
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Create Indentation aspects under Formatting.Spacing
Create an aspect named `Indentation` in the file `Formatting.py`. The new aspect should have the fullname `root.Formatting.Spacing.Indentation`. It should have at least the following tastes:

- `indent_type` - the type of indentation used; can be `tab` or `space`
- `indent_size` - the number of spaces per indentation level
--- END ISSUE ---
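For orientation, an aspect in this framework is a plain class decorated with `@<Parent>.subaspect`, carrying a nested `docs` class and one `Taste` descriptor per configurable value, as the existing classes in the file below show. A minimal sketch of the requested `Indentation` aspect might look as follows; the docstring wording, the `docs` contents, the option tuples and the defaults are illustrative assumptions, and the snippet assumes it is placed inside `coalib/bearlib/aspects/Formatting.py`, where `Taste` is already imported and the `Spacing` aspect is already defined.
```python
# Hypothetical sketch -- docs text, option tuples and defaults are assumptions.
# Assumes the module context of coalib/bearlib/aspects/Formatting.py, where
# `Taste` is imported and the `Spacing` aspect is defined above this point.
@Spacing.subaspect
class Indentation:
    """
    Spaces or tabs placed before blocks of code to convey their structure.
    """
    class docs:
        example = """
        def func():
            pass
        """
        example_language = 'Python'
        importance_reason = """
        Inconsistent indentation makes code hard to read and, in some
        languages, changes its meaning or breaks compilation.
        """
        fix_suggestions = """
        Pick one indent type (tabs or spaces) and one indent size and use
        them consistently throughout the project.
        """
    indent_type = Taste[str](
        'The type of indentation used.',
        ('tab', 'space'), default='space')
    indent_size = Taste[int](
        'Number of spaces per indentation level.',
        (2, 3, 4, 8), default=4)
```
`Taste[str]` is used here for `indent_type` because its options are strings; the concrete options and defaults above are arbitrary illustrative picks, not requirements stated in the issue.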
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `coalib/bearlib/aspects/Formatting.py`
Content:
```
1 from coalib.bearlib.aspects import Root, Taste
2
3
4 @Root.subaspect
5 class Formatting:
6 """
7 The visual appearance of source code.
8 """
9 class docs:
10 example = """
11 # Here is an example of Python code with lots of
12 # formatting issues including: trailing spaces, missing spaces
13 # around operators, strange and inconsistent indentation etc.
14
15 z = 'hello'+'world'
16 def f ( a):
17 pass
18 """
19 example_language = 'Python'
20 importance_reason = """
21 A coding style (the of rules or guidelines used when writing the
22 source code) can drastically affect the readability, and
23 maintainability of a program and might as well introduce bugs.
24 """
25 fix_suggestions = """
26 Defining a clearly and thoughtful coding style (based on the available
27 ones given the programming language in use) and strictly respect it or
28 apply it through out the implementation of a project.
29 """
30
31
32 @Formatting.subaspect
33 class Length:
34 """
35 Hold sub-aspects for file and line length.
36 """
37 class docs:
38 example = """
39 # We assume that the maximum number of characters per line is 10
40 # and that the maximum number of lines per files is 3.
41
42 def run(bear, file, filename, aspectlist):
43 return bear.run(file, filename, aspectlist)
44 """
45 example_language = 'Python'
46 importance_reason = """
47 Too long lines of code and too large files result in code difficult to
48 read, understand and maintain.
49 """
50 fix_suggestions = """
51 Length issues can be fixed by writing shorter lines of code (splitting
52 long lines into multiple shorter lines); writing shorter files
53 (splitting files into modules, writing shorter methods and classes.).
54 """
55
56
57 @Length.subaspect
58 class LineLength:
59 """
60 Number of characters found in a line of code.
61 """
62 class docs:
63 example = """
64 print('The length of this line is 38')
65 """
66 example_langague = 'Python'
67 importance_reason = """
68 Too long lines make code very difficult to read and maintain.
69 """
70 fix_suggestions = """
71 Splitting long lines of code into multiple shorter lines whenever
72 possible. Avoiding the usage of in-line language specific constructs
73 whenever they result in too long lines.
74 """
75 max_line_length = Taste[int](
76 'Maximum number of character for a line.',
77 (79, 80, 100, 120, 160), default=80)
78
79
80 @Length.subaspect
81 class FileLength:
82 """
83 Number of lines found in a file.
84 """
85 class docs:
86 example = """
87 # This file would be a large file if we assume that the max number of
88 # lines per file is 10
89
90 class Node:
91 def __init__(self, value, left_most_child, left_sibling):
92 self.value=value
93 self.left_most_child=left_most_child
94 self.left_sibling=left_sibling
95
96 # This is example is just showing what this aspect is about, because
97 # the max number of lines per file is usually 999.
98 """
99 example_language = 'Python 3'
100 importance_reason = """
101 Too long programs (or files) are difficult to read, maintain and
102 understand.
103 """
104 fix_suggestions = """
105 Splitting files into modules, writing shorter methods and classes.
106 """
107 max_file_length = Taste[int](
108 'Maximum number of line for a file',
109 (999,), default=999)
110
111
112 @Formatting.subaspect
113 class Spacing:
114 """
115 All whitespace found between non-whitespace characters.
116 """
117 class docs:
118 example = """
119 # Here is an example of code with spacing issues including
120 # unnecessary blank lines and missing space around operators.
121
122
123
124 def func( ):
125 return 37*-+2
126 """
127 example_language = 'Python'
128 importance_reason = """
129 Useless spacing affects the readability and maintainability of a code.
130 """
131 fix_suggestions = """
132 Removing the trailing spaces and the meaningless blank lines.
133 """
134
135
136 @Spacing.subaspect
137 class TrailingSpace:
138 """
139 Unnecessary whitespace at end of a line.
140
141 Trailing space is all whitespace found after the last non-whitespace
142 character on the line until the newline. This includes tabs "\\\\t",
143 blank lines, blanks etc.
144 """
145 class docs:
146 example = """
147 def func( a ):
148 pass
149
150 """.replace('\n', '\t\n')
151 example_language = 'Python'
152 importance_reason = """
153 Trailing spaces make code less readable and maintainable.
154 """
155 fix_suggestions = """
156 Removing the trailing spaces.
157 """
158 allow_trailing_spaces = Taste[bool](
159 'Determines whether or not trailing spaces should be allowed or not.',
160 (True, False), default=False)
161
162
163 @Spacing.subaspect
164 class BlankLine:
165 """
166 A line with zero characters.
167 """
168 class docs:
169 example = """
170 name = input('What is your name?')
171
172
173 print('Hi, {}'.format(name))
174 """
175 example_language = 'Python 3'
176 importance_reason = """
177 Various programming styles use blank lines in different places.
178 The usage of blank lines affects the readability, maintainability and
179 length of a code i.e blank lines can either make code longer, less
180 readable and maintainable or do the reverse.
181 """
182 fix_suggestions = """
183 Following specific rules about the usage of blank lines: using them
184 only when necessary.
185 """
186
187
188 @BlankLine.subaspect
189 class BlankLineAfterDeclaration:
190 """
191 Those found after declarations.
192 """
193 class docs:
194 example = """
195 #include <stdio.h>
196
197 int main ()
198 {
199 int a;
200 float b;
201
202 scanf("%d%f", &a, &b);
203 printf("a = %d and b = %f", a, b);
204 return 0;
205 }
206 """
207 example_language = 'C'
208 importance_reason = """
209 Having a specific and reasonable number of blank lines after every
210 block of declarations improves on the readability of the code.
211 """
212 fix_suggestions = """
213 `BlankLintAfterDeclaration` issues can be fixed specifying (and of
214 course using) a reasonable number of blank lines to use after block
215 declaration.
216 """
217 blank_lines_after_declarations = Taste[int](
218 'Represents the number of blank lines after declarations',
219 (0, 1, 2), default=0)
220
221
222 @BlankLine.subaspect
223 class BlankLineAfterProcedure:
224 """
225 Those found after procedures or functions.
226 """
227 class docs:
228 example = """
229 #include <stdio.h>
230
231 void proc(void){
232 printf("this does nothing");
233 } int add(float a, float b){
234 return a + b;
235 }
236 """
237 example_language = 'C'
238 importance_reason = """
239 Having a specific and reasonable number of blank lines after every
240 procedures improves on the readability of the code.
241 """
242 fix_suggestions = """
243 `BlankLintAfterProcedure` issues can be fixed specifying (and of
244 course using) a reasonable number of blank lines to use after
245 procedures' definition.
246 """
247 blank_lines_after_procedures = Taste[int](
248 'Represents the number of blank lines to use after a procedure or'
249 'a function', (0, 1, 2), default=1)
250
251
252 @BlankLine.subaspect
253 class BlankLineAfterClass:
254 """
255 Those found after classes' definitions.
256 """
257 class docs:
258 example = """
259 class SomeClass:
260 def __init__(self):
261 raise RuntimeError('Never instantiate this class')
262
263
264 def func():
265 pass
266 """
267 example_language = 'Python 3'
268 importance_reason = """
269 Having a specific number of blank lines after every classes'
270 definitions declarations improves on the readability of the code.
271 """
272 fix_suggestions = """
273 """
274 blank_lines_after_class = Taste[int](
275 'Represents the number of blank lines to use after a class'
276 'definition.', (1, 2), default=2)
277
278
279 @BlankLine.subaspect
280 class NewlineAtEOF:
281 """
282 Newline character (usually '\\\\n', aka CR) found at the end of file.
283 """
284 class docs:
285 example = """
286 def do_nothing():
287 pass
288 """ + ('\n')
289 example_language = 'Python'
290 importance_reason = """
291 A text file consists of a series of lines, each of which ends with a
292 newline character (\\\\n). A file that is not empty and does not end
293 with a newline is therefore not a text file.
294
295 It's not just bad style, it can lead to unexpected behavior, utilities
296 that are supposed to operate on text files may not cope well with files
297 that don't end with a newline.
298 """
299 fix_suggestions = """
300 `NewlineAtEOF` issues can be fixed by simply adding a newline at the
301 end of the file.
302 """
303 newline_at_EOF = Taste[bool](
304 'If ``True``, enforce a newline at End Of File.',
305 (True, False), default=True)
306
307
308 @Spacing.subaspect
309 class SpacesAroundOperator:
310 """
311 Spacing around operators.
312 """
313 class docs:
314 example = """
315 def f(a, x):
316 return 37+a[42 - x]
317 """
318 example_language = 'Python'
319 importance_reason = """
320 Having a specific and reasonable number of whitespace (blank) around
321 operators improves on the readability of the code.
322 """
323 fix_suggestions = """
324 `SpacesAroundOperator` issues can be fixed by simply specifying and
325 the number of whitespace to be used after each operator.
326 """
327 spaces_around_operators = Taste[int](
328 'Represents the number of space to be used around operators.',
329 (0, 1), default=1)
330 spaces_before_colon = Taste[int](
331 'Represents the number of blank spaces before colons.',
332 (0, 1), default=0)
333 spaces_after_colon = Taste[int](
334 'Represents the number of blank spaces after colons.',
335 (0, 1), default=1)
336
337
338 @Formatting.subaspect
339 class Quotation:
340 """
341 Quotation mark used for strings and docstrings.
342 """
343 class docs:
344 example = """
345 # Here is an example of code where both '' and "" quotation mark
346 # Are used.
347
348 string = 'coala is always written with lowercase c.'
349 string = "coala is always written with lowercase c."
350 """
351 example_language = 'Python'
352 importance_reason = """
353 Using the same quotation whenever possible in the code, improve on its
354 readability by introducing consistency.
355 """
356 fix_suggestions = """
357 Choosing a preferred quotation and using it everywhere (if possible).
358 """
359 preferred_quotation = Taste[str](
360 'Represents the preferred quotation',
361 ('\'', '"'), default='\'')
362
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/coalib/bearlib/aspects/Formatting.py b/coalib/bearlib/aspects/Formatting.py
--- a/coalib/bearlib/aspects/Formatting.py
+++ b/coalib/bearlib/aspects/Formatting.py
@@ -133,6 +133,40 @@
"""
+@Spacing.subaspect
+class Indentation:
+ """
+ Spaces/tabs used before blocks of code to convey a program's structure.
+ """
+ class docs:
+ example = """
+ # If this code was written on an editor that defined a tab as 2
+ # spaces, mixing tabs and spaces would look like this on a different
+ # editor defining tabs as four spaces.
+
+ def spaces():
+ pass
+
+ def tabs():
+ pass
+ """
+ example_language = 'Python'
+ importance_reason = """
+ Mixing tabs and spaces can cause issues when collaborating on
+ code, as well as during testing and compilation.
+ """
+ fix_suggestions = """
+ Using either tabs or spaces consistently.
+ If using spaces, by using a suitable number of spaces, preferably four.
+ """
+ indent_type = Taste[int](
+ 'Represents the type of indent used.',
+ ('tab', 'space'), default='tab')
+ indent_size = Taste[int](
+ 'Represents the number of spaces per indentation level.',
+ (2, 3, 4, 5, 6), default=4)
+
+
@Spacing.subaspect
class TrailingSpace:
"""
|
{"golden_diff": "diff --git a/coalib/bearlib/aspects/Formatting.py b/coalib/bearlib/aspects/Formatting.py\n--- a/coalib/bearlib/aspects/Formatting.py\n+++ b/coalib/bearlib/aspects/Formatting.py\n@@ -133,6 +133,40 @@\n \"\"\"\n \n \[email protected]\n+class Indentation:\n+ \"\"\"\n+ Spaces/tabs used before blocks of code to convey a program's structure.\n+ \"\"\"\n+ class docs:\n+ example = \"\"\"\n+ # If this code was written on an editor that defined a tab as 2\n+ # spaces, mixing tabs and spaces would look like this on a different\n+ # editor defining tabs as four spaces.\n+\n+ def spaces():\n+ pass\n+\n+ def tabs():\n+ pass\n+ \"\"\"\n+ example_language = 'Python'\n+ importance_reason = \"\"\"\n+ Mixing tabs and spaces can cause issues when collaborating on\n+ code, as well as during testing and compilation.\n+ \"\"\"\n+ fix_suggestions = \"\"\"\n+ Using either tabs or spaces consistently.\n+ If using spaces, by using a suitable number of spaces, preferably four.\n+ \"\"\"\n+ indent_type = Taste[int](\n+ 'Represents the type of indent used.',\n+ ('tab', 'space'), default='tab')\n+ indent_size = Taste[int](\n+ 'Represents the number of spaces per indentation level.',\n+ (2, 3, 4, 5, 6), default=4)\n+\n+\n @Spacing.subaspect\n class TrailingSpace:\n \"\"\"\n", "issue": "Create Indentation aspects under Formatting.Spacing\nCreate an aspects named `Indentation` in files `Formatting.py`. The new aspects should have fullname of `root.Formatting.Spacing.Indentation`. It should have atleast the following taste:\r\n\r\n- `indent_type` - what type of indentation used, could be `tab` or `space`\r\n- `indent_size` - number of spaces per indentation level\n", "before_files": [{"content": "from coalib.bearlib.aspects import Root, Taste\n\n\[email protected]\nclass Formatting:\n \"\"\"\n The visual appearance of source code.\n \"\"\"\n class docs:\n example = \"\"\"\n # Here is an example of Python code with lots of\n # formatting issues including: trailing spaces, missing spaces\n # around operators, strange and inconsistent indentation etc.\n\n z = 'hello'+'world'\n def f ( a):\n pass\n \"\"\"\n example_language = 'Python'\n importance_reason = \"\"\"\n A coding style (the of rules or guidelines used when writing the\n source code) can drastically affect the readability, and\n maintainability of a program and might as well introduce bugs.\n \"\"\"\n fix_suggestions = \"\"\"\n Defining a clearly and thoughtful coding style (based on the available\n ones given the programming language in use) and strictly respect it or\n apply it through out the implementation of a project.\n \"\"\"\n\n\[email protected]\nclass Length:\n \"\"\"\n Hold sub-aspects for file and line length.\n \"\"\"\n class docs:\n example = \"\"\"\n # We assume that the maximum number of characters per line is 10\n # and that the maximum number of lines per files is 3.\n\n def run(bear, file, filename, aspectlist):\n return bear.run(file, filename, aspectlist)\n \"\"\"\n example_language = 'Python'\n importance_reason = \"\"\"\n Too long lines of code and too large files result in code difficult to\n read, understand and maintain.\n \"\"\"\n fix_suggestions = \"\"\"\n Length issues can be fixed by writing shorter lines of code (splitting\n long lines into multiple shorter lines); writing shorter files\n (splitting files into modules, writing shorter methods and classes.).\n \"\"\"\n\n\[email protected]\nclass LineLength:\n \"\"\"\n Number of characters found in a line of code.\n \"\"\"\n class docs:\n example = \"\"\"\n print('The 
length of this line is 38')\n \"\"\"\n example_langague = 'Python'\n importance_reason = \"\"\"\n Too long lines make code very difficult to read and maintain.\n \"\"\"\n fix_suggestions = \"\"\"\n Splitting long lines of code into multiple shorter lines whenever\n possible. Avoiding the usage of in-line language specific constructs\n whenever they result in too long lines.\n \"\"\"\n max_line_length = Taste[int](\n 'Maximum number of character for a line.',\n (79, 80, 100, 120, 160), default=80)\n\n\[email protected]\nclass FileLength:\n \"\"\"\n Number of lines found in a file.\n \"\"\"\n class docs:\n example = \"\"\"\n # This file would be a large file if we assume that the max number of\n # lines per file is 10\n\n class Node:\n def __init__(self, value, left_most_child, left_sibling):\n self.value=value\n self.left_most_child=left_most_child\n self.left_sibling=left_sibling\n\n # This is example is just showing what this aspect is about, because\n # the max number of lines per file is usually 999.\n \"\"\"\n example_language = 'Python 3'\n importance_reason = \"\"\"\n Too long programs (or files) are difficult to read, maintain and\n understand.\n \"\"\"\n fix_suggestions = \"\"\"\n Splitting files into modules, writing shorter methods and classes.\n \"\"\"\n max_file_length = Taste[int](\n 'Maximum number of line for a file',\n (999,), default=999)\n\n\[email protected]\nclass Spacing:\n \"\"\"\n All whitespace found between non-whitespace characters.\n \"\"\"\n class docs:\n example = \"\"\"\n # Here is an example of code with spacing issues including\n # unnecessary blank lines and missing space around operators.\n\n\n\n def func( ):\n return 37*-+2\n \"\"\"\n example_language = 'Python'\n importance_reason = \"\"\"\n Useless spacing affects the readability and maintainability of a code.\n \"\"\"\n fix_suggestions = \"\"\"\n Removing the trailing spaces and the meaningless blank lines.\n \"\"\"\n\n\[email protected]\nclass TrailingSpace:\n \"\"\"\n Unnecessary whitespace at end of a line.\n\n Trailing space is all whitespace found after the last non-whitespace\n character on the line until the newline. 
This includes tabs \"\\\\\\\\t\",\n blank lines, blanks etc.\n \"\"\"\n class docs:\n example = \"\"\"\n def func( a ):\n pass\n\n \"\"\".replace('\\n', '\\t\\n')\n example_language = 'Python'\n importance_reason = \"\"\"\n Trailing spaces make code less readable and maintainable.\n \"\"\"\n fix_suggestions = \"\"\"\n Removing the trailing spaces.\n \"\"\"\n allow_trailing_spaces = Taste[bool](\n 'Determines whether or not trailing spaces should be allowed or not.',\n (True, False), default=False)\n\n\[email protected]\nclass BlankLine:\n \"\"\"\n A line with zero characters.\n \"\"\"\n class docs:\n example = \"\"\"\n name = input('What is your name?')\n\n\n print('Hi, {}'.format(name))\n \"\"\"\n example_language = 'Python 3'\n importance_reason = \"\"\"\n Various programming styles use blank lines in different places.\n The usage of blank lines affects the readability, maintainability and\n length of a code i.e blank lines can either make code longer, less\n readable and maintainable or do the reverse.\n \"\"\"\n fix_suggestions = \"\"\"\n Following specific rules about the usage of blank lines: using them\n only when necessary.\n \"\"\"\n\n\[email protected]\nclass BlankLineAfterDeclaration:\n \"\"\"\n Those found after declarations.\n \"\"\"\n class docs:\n example = \"\"\"\n #include <stdio.h>\n\n int main ()\n {\n int a;\n float b;\n\n scanf(\"%d%f\", &a, &b);\n printf(\"a = %d and b = %f\", a, b);\n return 0;\n }\n \"\"\"\n example_language = 'C'\n importance_reason = \"\"\"\n Having a specific and reasonable number of blank lines after every\n block of declarations improves on the readability of the code.\n \"\"\"\n fix_suggestions = \"\"\"\n `BlankLintAfterDeclaration` issues can be fixed specifying (and of\n course using) a reasonable number of blank lines to use after block\n declaration.\n \"\"\"\n blank_lines_after_declarations = Taste[int](\n 'Represents the number of blank lines after declarations',\n (0, 1, 2), default=0)\n\n\[email protected]\nclass BlankLineAfterProcedure:\n \"\"\"\n Those found after procedures or functions.\n \"\"\"\n class docs:\n example = \"\"\"\n #include <stdio.h>\n\n void proc(void){\n printf(\"this does nothing\");\n } int add(float a, float b){\n return a + b;\n }\n \"\"\"\n example_language = 'C'\n importance_reason = \"\"\"\n Having a specific and reasonable number of blank lines after every\n procedures improves on the readability of the code.\n \"\"\"\n fix_suggestions = \"\"\"\n `BlankLintAfterProcedure` issues can be fixed specifying (and of\n course using) a reasonable number of blank lines to use after\n procedures' definition.\n \"\"\"\n blank_lines_after_procedures = Taste[int](\n 'Represents the number of blank lines to use after a procedure or'\n 'a function', (0, 1, 2), default=1)\n\n\[email protected]\nclass BlankLineAfterClass:\n \"\"\"\n Those found after classes' definitions.\n \"\"\"\n class docs:\n example = \"\"\"\n class SomeClass:\n def __init__(self):\n raise RuntimeError('Never instantiate this class')\n\n\n def func():\n pass\n \"\"\"\n example_language = 'Python 3'\n importance_reason = \"\"\"\n Having a specific number of blank lines after every classes'\n definitions declarations improves on the readability of the code.\n \"\"\"\n fix_suggestions = \"\"\"\n \"\"\"\n blank_lines_after_class = Taste[int](\n 'Represents the number of blank lines to use after a class'\n 'definition.', (1, 2), default=2)\n\n\[email protected]\nclass NewlineAtEOF:\n \"\"\"\n Newline character (usually '\\\\\\\\n', aka CR) found at the end 
of file.\n \"\"\"\n class docs:\n example = \"\"\"\n def do_nothing():\n pass\n \"\"\" + ('\\n')\n example_language = 'Python'\n importance_reason = \"\"\"\n A text file consists of a series of lines, each of which ends with a\n newline character (\\\\\\\\n). A file that is not empty and does not end\n with a newline is therefore not a text file.\n\n It's not just bad style, it can lead to unexpected behavior, utilities\n that are supposed to operate on text files may not cope well with files\n that don't end with a newline.\n \"\"\"\n fix_suggestions = \"\"\"\n `NewlineAtEOF` issues can be fixed by simply adding a newline at the\n end of the file.\n \"\"\"\n newline_at_EOF = Taste[bool](\n 'If ``True``, enforce a newline at End Of File.',\n (True, False), default=True)\n\n\[email protected]\nclass SpacesAroundOperator:\n \"\"\"\n Spacing around operators.\n \"\"\"\n class docs:\n example = \"\"\"\n def f(a, x):\n return 37+a[42 - x]\n \"\"\"\n example_language = 'Python'\n importance_reason = \"\"\"\n Having a specific and reasonable number of whitespace (blank) around\n operators improves on the readability of the code.\n \"\"\"\n fix_suggestions = \"\"\"\n `SpacesAroundOperator` issues can be fixed by simply specifying and\n the number of whitespace to be used after each operator.\n \"\"\"\n spaces_around_operators = Taste[int](\n 'Represents the number of space to be used around operators.',\n (0, 1), default=1)\n spaces_before_colon = Taste[int](\n 'Represents the number of blank spaces before colons.',\n (0, 1), default=0)\n spaces_after_colon = Taste[int](\n 'Represents the number of blank spaces after colons.',\n (0, 1), default=1)\n\n\[email protected]\nclass Quotation:\n \"\"\"\n Quotation mark used for strings and docstrings.\n \"\"\"\n class docs:\n example = \"\"\"\n # Here is an example of code where both '' and \"\" quotation mark\n # Are used.\n\n string = 'coala is always written with lowercase c.'\n string = \"coala is always written with lowercase c.\"\n \"\"\"\n example_language = 'Python'\n importance_reason = \"\"\"\n Using the same quotation whenever possible in the code, improve on its\n readability by introducing consistency.\n \"\"\"\n fix_suggestions = \"\"\"\n Choosing a preferred quotation and using it everywhere (if possible).\n \"\"\"\n preferred_quotation = Taste[str](\n 'Represents the preferred quotation',\n ('\\'', '\"'), default='\\'')\n", "path": "coalib/bearlib/aspects/Formatting.py"}], "after_files": [{"content": "from coalib.bearlib.aspects import Root, Taste\n\n\[email protected]\nclass Formatting:\n \"\"\"\n The visual appearance of source code.\n \"\"\"\n class docs:\n example = \"\"\"\n # Here is an example of Python code with lots of\n # formatting issues including: trailing spaces, missing spaces\n # around operators, strange and inconsistent indentation etc.\n\n z = 'hello'+'world'\n def f ( a):\n pass\n \"\"\"\n example_language = 'Python'\n importance_reason = \"\"\"\n A coding style (the of rules or guidelines used when writing the\n source code) can drastically affect the readability, and\n maintainability of a program and might as well introduce bugs.\n \"\"\"\n fix_suggestions = \"\"\"\n Defining a clearly and thoughtful coding style (based on the available\n ones given the programming language in use) and strictly respect it or\n apply it through out the implementation of a project.\n \"\"\"\n\n\[email protected]\nclass Length:\n \"\"\"\n Hold sub-aspects for file and line length.\n \"\"\"\n class docs:\n example = \"\"\"\n # We assume 
that the maximum number of characters per line is 10\n # and that the maximum number of lines per files is 3.\n\n def run(bear, file, filename, aspectlist):\n return bear.run(file, filename, aspectlist)\n \"\"\"\n example_language = 'Python'\n importance_reason = \"\"\"\n Too long lines of code and too large files result in code difficult to\n read, understand and maintain.\n \"\"\"\n fix_suggestions = \"\"\"\n Length issues can be fixed by writing shorter lines of code (splitting\n long lines into multiple shorter lines); writing shorter files\n (splitting files into modules, writing shorter methods and classes.).\n \"\"\"\n\n\[email protected]\nclass LineLength:\n \"\"\"\n Number of characters found in a line of code.\n \"\"\"\n class docs:\n example = \"\"\"\n print('The length of this line is 38')\n \"\"\"\n example_langague = 'Python'\n importance_reason = \"\"\"\n Too long lines make code very difficult to read and maintain.\n \"\"\"\n fix_suggestions = \"\"\"\n Splitting long lines of code into multiple shorter lines whenever\n possible. Avoiding the usage of in-line language specific constructs\n whenever they result in too long lines.\n \"\"\"\n max_line_length = Taste[int](\n 'Maximum number of character for a line.',\n (79, 80, 100, 120, 160), default=80)\n\n\[email protected]\nclass FileLength:\n \"\"\"\n Number of lines found in a file.\n \"\"\"\n class docs:\n example = \"\"\"\n # This file would be a large file if we assume that the max number of\n # lines per file is 10\n\n class Node:\n def __init__(self, value, left_most_child, left_sibling):\n self.value=value\n self.left_most_child=left_most_child\n self.left_sibling=left_sibling\n\n # This is example is just showing what this aspect is about, because\n # the max number of lines per file is usually 999.\n \"\"\"\n example_language = 'Python 3'\n importance_reason = \"\"\"\n Too long programs (or files) are difficult to read, maintain and\n understand.\n \"\"\"\n fix_suggestions = \"\"\"\n Splitting files into modules, writing shorter methods and classes.\n \"\"\"\n max_file_length = Taste[int](\n 'Maximum number of line for a file',\n (999,), default=999)\n\n\[email protected]\nclass Spacing:\n \"\"\"\n All whitespace found between non-whitespace characters.\n \"\"\"\n class docs:\n example = \"\"\"\n # Here is an example of code with spacing issues including\n # unnecessary blank lines and missing space around operators.\n\n\n\n def func( ):\n return 37*-+2\n \"\"\"\n example_language = 'Python'\n importance_reason = \"\"\"\n Useless spacing affects the readability and maintainability of a code.\n \"\"\"\n fix_suggestions = \"\"\"\n Removing the trailing spaces and the meaningless blank lines.\n \"\"\"\n\n\[email protected]\nclass Indentation:\n \"\"\"\n Spaces/tabs used before blocks of code to convey a program's structure.\n \"\"\"\n class docs:\n example = \"\"\"\n # If this code was written on an editor that defined a tab as 2\n # spaces, mixing tabs and spaces would look like this on a different\n # editor defining tabs as four spaces.\n\n def spaces():\n pass\n\n def tabs():\n pass\n \"\"\"\n example_language = 'Python'\n importance_reason = \"\"\"\n Mixing tabs and spaces can cause issues when collaborating on\n code, as well as during testing and compilation.\n \"\"\"\n fix_suggestions = \"\"\"\n Using either tabs or spaces consistently.\n If using spaces, by using a suitable number of spaces, preferably four.\n \"\"\"\n indent_type = Taste[int](\n 'Represents the type of indent used.',\n ('tab', 'space'), 
default='tab')\n indent_size = Taste[int](\n 'Represents the number of spaces per indentation level.',\n (2, 3, 4, 5, 6), default=4)\n\n\[email protected]\nclass TrailingSpace:\n \"\"\"\n Unnecessary whitespace at end of a line.\n\n Trailing space is all whitespace found after the last non-whitespace\n character on the line until the newline. This includes tabs \"\\\\\\\\t\",\n blank lines, blanks etc.\n \"\"\"\n class docs:\n example = \"\"\"\n def func( a ):\n pass\n\n \"\"\".replace('\\n', '\\t\\n')\n example_language = 'Python'\n importance_reason = \"\"\"\n Trailing spaces make code less readable and maintainable.\n \"\"\"\n fix_suggestions = \"\"\"\n Removing the trailing spaces.\n \"\"\"\n allow_trailing_spaces = Taste[bool](\n 'Determines whether or not trailing spaces should be allowed or not.',\n (True, False), default=False)\n\n\[email protected]\nclass BlankLine:\n \"\"\"\n A line with zero characters.\n \"\"\"\n class docs:\n example = \"\"\"\n name = input('What is your name?')\n\n\n print('Hi, {}'.format(name))\n \"\"\"\n example_language = 'Python 3'\n importance_reason = \"\"\"\n Various programming styles use blank lines in different places.\n The usage of blank lines affects the readability, maintainability and\n length of a code i.e blank lines can either make code longer, less\n readable and maintainable or do the reverse.\n \"\"\"\n fix_suggestions = \"\"\"\n Following specific rules about the usage of blank lines: using them\n only when necessary.\n \"\"\"\n\n\[email protected]\nclass BlankLineAfterDeclaration:\n \"\"\"\n Those found after declarations.\n \"\"\"\n class docs:\n example = \"\"\"\n #include <stdio.h>\n\n int main ()\n {\n int a;\n float b;\n\n scanf(\"%d%f\", &a, &b);\n printf(\"a = %d and b = %f\", a, b);\n return 0;\n }\n \"\"\"\n example_language = 'C'\n importance_reason = \"\"\"\n Having a specific and reasonable number of blank lines after every\n block of declarations improves on the readability of the code.\n \"\"\"\n fix_suggestions = \"\"\"\n `BlankLintAfterDeclaration` issues can be fixed specifying (and of\n course using) a reasonable number of blank lines to use after block\n declaration.\n \"\"\"\n blank_lines_after_declarations = Taste[int](\n 'Represents the number of blank lines after declarations',\n (0, 1, 2), default=0)\n\n\[email protected]\nclass BlankLineAfterProcedure:\n \"\"\"\n Those found after procedures or functions.\n \"\"\"\n class docs:\n example = \"\"\"\n #include <stdio.h>\n\n void proc(void){\n printf(\"this does nothing\");\n } int add(float a, float b){\n return a + b;\n }\n \"\"\"\n example_language = 'C'\n importance_reason = \"\"\"\n Having a specific and reasonable number of blank lines after every\n procedures improves on the readability of the code.\n \"\"\"\n fix_suggestions = \"\"\"\n `BlankLintAfterProcedure` issues can be fixed specifying (and of\n course using) a reasonable number of blank lines to use after\n procedures' definition.\n \"\"\"\n blank_lines_after_procedures = Taste[int](\n 'Represents the number of blank lines to use after a procedure or'\n 'a function', (0, 1, 2), default=1)\n\n\[email protected]\nclass BlankLineAfterClass:\n \"\"\"\n Those found after classes' definitions.\n \"\"\"\n class docs:\n example = \"\"\"\n class SomeClass:\n def __init__(self):\n raise RuntimeError('Never instantiate this class')\n\n\n def func():\n pass\n \"\"\"\n example_language = 'Python 3'\n importance_reason = \"\"\"\n Having a specific number of blank lines after every classes'\n definitions 
declarations improves on the readability of the code.\n \"\"\"\n fix_suggestions = \"\"\"\n \"\"\"\n blank_lines_after_class = Taste[int](\n 'Represents the number of blank lines to use after a class'\n 'definition.', (1, 2), default=2)\n\n\[email protected]\nclass NewlineAtEOF:\n \"\"\"\n Newline character (usually '\\\\\\\\n', aka CR) found at the end of file.\n \"\"\"\n class docs:\n example = \"\"\"\n def do_nothing():\n pass\n \"\"\" + ('\\n')\n example_language = 'Python'\n importance_reason = \"\"\"\n A text file consists of a series of lines, each of which ends with a\n newline character (\\\\\\\\n). A file that is not empty and does not end\n with a newline is therefore not a text file.\n\n It's not just bad style, it can lead to unexpected behavior, utilities\n that are supposed to operate on text files may not cope well with files\n that don't end with a newline.\n \"\"\"\n fix_suggestions = \"\"\"\n `NewlineAtEOF` issues can be fixed by simply adding a newline at the\n end of the file.\n \"\"\"\n newline_at_EOF = Taste[bool](\n 'If ``True``, enforce a newline at End Of File.',\n (True, False), default=True)\n\n\[email protected]\nclass SpacesAroundOperator:\n \"\"\"\n Spacing around operators.\n \"\"\"\n class docs:\n example = \"\"\"\n def f(a, x):\n return 37+a[42 - x]\n \"\"\"\n example_language = 'Python'\n importance_reason = \"\"\"\n Having a specific and reasonable number of whitespace (blank) around\n operators improves on the readability of the code.\n \"\"\"\n fix_suggestions = \"\"\"\n `SpacesAroundOperator` issues can be fixed by simply specifying and\n the number of whitespace to be used after each operator.\n \"\"\"\n spaces_around_operators = Taste[int](\n 'Represents the number of space to be used around operators.',\n (0, 1), default=1)\n spaces_before_colon = Taste[int](\n 'Represents the number of blank spaces before colons.',\n (0, 1), default=0)\n spaces_after_colon = Taste[int](\n 'Represents the number of blank spaces after colons.',\n (0, 1), default=1)\n\n\[email protected]\nclass Quotation:\n \"\"\"\n Quotation mark used for strings and docstrings.\n \"\"\"\n class docs:\n example = \"\"\"\n # Here is an example of code where both '' and \"\" quotation mark\n # Are used.\n\n string = 'coala is always written with lowercase c.'\n string = \"coala is always written with lowercase c.\"\n \"\"\"\n example_language = 'Python'\n importance_reason = \"\"\"\n Using the same quotation whenever possible in the code, improve on its\n readability by introducing consistency.\n \"\"\"\n fix_suggestions = \"\"\"\n Choosing a preferred quotation and using it everywhere (if possible).\n \"\"\"\n preferred_quotation = Taste[str](\n 'Represents the preferred quotation',\n ('\\'', '\"'), default='\\'')\n", "path": "coalib/bearlib/aspects/Formatting.py"}]}
| 3,722 | 353 |
gh_patches_debug_9115
|
rasdani/github-patches
|
git_diff
|
alltheplaces__alltheplaces-2637
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Spider cvs is broken
During the global build at 2021-08-18-14-42-26, spider **cvs** failed with **0 features** and **9870 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/logs/cvs.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/output/cvs.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/output/cvs.geojson))
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/cvs.py`
Content:
```
1 import json
2 import scrapy
3 import re
4 from locations.items import GeojsonPointItem
5 from locations.hours import OpeningHours
6
7 DAYS = [
8 'Mo',
9 'Tu',
10 'We',
11 'Th',
12 'Fr',
13 'Sa',
14 'Su'
15 ]
16
17
18 class CVSSpider(scrapy.Spider):
19
20 name = "cvs"
21 item_attributes = { 'brand': "CVS", 'brand_wikidata': "Q2078880" }
22 allowed_domains = ["www.cvs.com"]
23 download_delay = 0.5
24 start_urls = (
25 'https://www.cvs.com/store-locator/cvs-pharmacy-locations',
26 )
27
28 def parse_hours(self, hours):
29 opening_hours = OpeningHours()
30
31 for group in hours:
32 if 'closed' in group:
33 continue
34 if 'open 24 hours' in group:
35 days = re.search(r'([a-zA-Z\-]+)\s+open 24 hours', group).groups()[0]
36 open_time, close_time = '00:00:00', '23:59:00'
37 else:
38 try:
39 days, open_time, close_time = re.search(r'([a-zA-Z\-]+)\s+([\d:\sapm]+)-([\d:\sapm]+)', group).groups()
40 except AttributeError:
41 continue # no hours listed, just day
42 try:
43 start_day, end_day = days.split('-')
44 except ValueError:
45 start_day, end_day = days, days
46 for day in DAYS[DAYS.index(start_day):DAYS.index(end_day) + 1]:
47 if 'm' in open_time:
48 open_time = open_time.strip(' apm') + ":00"
49 if 'm' in close_time:
50 close_time = close_time.strip(' apm') + ":00"
51 opening_hours.add_range(day=day,
52 open_time=open_time.strip(),
53 close_time=close_time.strip(),
54 time_format='%H:%M:%S')
55
56 return opening_hours.as_opening_hours()
57
58 def parse_stores(self, response):
59 try:
60 data = json.loads(response.xpath('//script[@type="application/ld+json" and contains(text(), "streetAddress")]/text()').extract_first())[0]
61 except json.decoder.JSONDecodeError:
62 # one malformed json body on this store:
63 # https://www.cvs.com/store-locator/cvs-pharmacy-address/84+South+Avenue+tops+Plaza+-Hilton-NY-14468/storeid=5076
64 data = response.xpath('//script[@type="application/ld+json" and contains(text(), "streetAddress")]/text()').extract_first()
65 data = re.sub(r'"tops Plaza\s*"', '', data)
66 data = json.loads(data)[0]
67 except TypeError:
68 return # empty store page
69
70 properties = {
71 'name': data["name"],
72 'ref': re.search(r'.+/?storeid=(.+)', response.url).group(1),
73 'addr_full': data["address"]["streetAddress"].strip(', '),
74 'city': data["address"]["addressLocality"],
75 'state': data["address"]["addressRegion"],
76 'postcode': data["address"]["postalCode"],
77 'country': data["address"]["addressCountry"],
78 'phone': data["address"].get("telephone"),
79 'website': data.get("url") or response.url,
80 'lat': float(data["geo"]["latitude"]),
81 'lon': float(data["geo"]["longitude"]),
82 }
83
84 hours = self.parse_hours(data["openingHours"])
85 if hours:
86 properties["opening_hours"] = hours
87
88 yield GeojsonPointItem(**properties)
89
90 def parse_city_stores(self, response):
91 stores = response.xpath('//div[@class="each-store"]')
92
93 for store in stores:
94
95 direction = store.xpath('normalize-space(.//span[@class="store-number"]/a/@href)').extract_first()
96 if direction:
97 yield scrapy.Request(response.urljoin(direction), callback=self.parse_stores)
98
99 def parse_state(self, response):
100 city_urls = response.xpath('//div[@class="states"]/ul/li/a/@href').extract()
101 for path in city_urls:
102 yield scrapy.Request(response.urljoin(path), callback=self.parse_city_stores)
103
104 def parse(self, response):
105 urls = response.xpath('//div[@class="states"]/ul/li/a/@href').extract()
106 for path in urls:
107 yield scrapy.Request(response.urljoin(path), callback=self.parse_state)
108
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/locations/spiders/cvs.py b/locations/spiders/cvs.py
--- a/locations/spiders/cvs.py
+++ b/locations/spiders/cvs.py
@@ -77,8 +77,8 @@
'country': data["address"]["addressCountry"],
'phone': data["address"].get("telephone"),
'website': data.get("url") or response.url,
- 'lat': float(data["geo"]["latitude"]),
- 'lon': float(data["geo"]["longitude"]),
+ 'lat': data["geo"]["latitude"] or None,
+ 'lon': data["geo"]["longitude"] or None,
}
hours = self.parse_hours(data["openingHours"])
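The change above boils down to not calling `float()` on geo values that may be empty. Below is a minimal sketch of the failure mode and of what the `or None` guard does instead; the sample payloads are invented for illustration, not captured cvs.com responses:

```python
# Sketch only: shows why float() on a blank/None geo field crashes every
# item, and what the patched `or None` expression yields instead.

def extract_coords_old(geo):
    # Pre-patch behaviour: raises ValueError for "" and TypeError for None.
    return float(geo["latitude"]), float(geo["longitude"])

def extract_coords_new(geo):
    # Patched behaviour: falsy values ("" or None) become None, so the
    # item is still yielded, just without coordinates.
    return geo["latitude"] or None, geo["longitude"] or None

good = {"latitude": "41.88", "longitude": "-87.63"}   # assumed example values
bad = {"latitude": "", "longitude": None}

print(extract_coords_new(good))   # ('41.88', '-87.63')
print(extract_coords_new(bad))    # (None, None)

try:
    extract_coords_old(bad)
except (TypeError, ValueError) as exc:
    print(f"old code fails: {exc!r}")
```

Note that the patched expression also stops converting the values to `float`, matching the diff: the raw string from the JSON-LD block is passed through unchanged.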
|
{"golden_diff": "diff --git a/locations/spiders/cvs.py b/locations/spiders/cvs.py\n--- a/locations/spiders/cvs.py\n+++ b/locations/spiders/cvs.py\n@@ -77,8 +77,8 @@\n 'country': data[\"address\"][\"addressCountry\"],\n 'phone': data[\"address\"].get(\"telephone\"),\n 'website': data.get(\"url\") or response.url,\n- 'lat': float(data[\"geo\"][\"latitude\"]),\n- 'lon': float(data[\"geo\"][\"longitude\"]),\n+ 'lat': data[\"geo\"][\"latitude\"] or None,\n+ 'lon': data[\"geo\"][\"longitude\"] or None,\n }\n \n hours = self.parse_hours(data[\"openingHours\"])\n", "issue": "Spider cvs is broken\nDuring the global build at 2021-08-18-14-42-26, spider **cvs** failed with **0 features** and **9870 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/logs/cvs.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/output/cvs.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/output/cvs.geojson))\n", "before_files": [{"content": "import json\nimport scrapy\nimport re\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\nDAYS = [\n 'Mo',\n 'Tu',\n 'We',\n 'Th',\n 'Fr',\n 'Sa',\n 'Su'\n]\n\n\nclass CVSSpider(scrapy.Spider):\n\n name = \"cvs\"\n item_attributes = { 'brand': \"CVS\", 'brand_wikidata': \"Q2078880\" }\n allowed_domains = [\"www.cvs.com\"]\n download_delay = 0.5\n start_urls = (\n 'https://www.cvs.com/store-locator/cvs-pharmacy-locations',\n )\n\n def parse_hours(self, hours):\n opening_hours = OpeningHours()\n\n for group in hours:\n if 'closed' in group:\n continue\n if 'open 24 hours' in group:\n days = re.search(r'([a-zA-Z\\-]+)\\s+open 24 hours', group).groups()[0]\n open_time, close_time = '00:00:00', '23:59:00'\n else:\n try:\n days, open_time, close_time = re.search(r'([a-zA-Z\\-]+)\\s+([\\d:\\sapm]+)-([\\d:\\sapm]+)', group).groups()\n except AttributeError:\n continue # no hours listed, just day\n try:\n start_day, end_day = days.split('-')\n except ValueError:\n start_day, end_day = days, days\n for day in DAYS[DAYS.index(start_day):DAYS.index(end_day) + 1]:\n if 'm' in open_time:\n open_time = open_time.strip(' apm') + \":00\"\n if 'm' in close_time:\n close_time = close_time.strip(' apm') + \":00\"\n opening_hours.add_range(day=day,\n open_time=open_time.strip(),\n close_time=close_time.strip(),\n time_format='%H:%M:%S')\n\n return opening_hours.as_opening_hours()\n\n def parse_stores(self, response):\n try:\n data = json.loads(response.xpath('//script[@type=\"application/ld+json\" and contains(text(), \"streetAddress\")]/text()').extract_first())[0]\n except json.decoder.JSONDecodeError:\n # one malformed json body on this store:\n # https://www.cvs.com/store-locator/cvs-pharmacy-address/84+South+Avenue+tops+Plaza+-Hilton-NY-14468/storeid=5076\n data = response.xpath('//script[@type=\"application/ld+json\" and contains(text(), \"streetAddress\")]/text()').extract_first()\n data = re.sub(r'\"tops Plaza\\s*\"', '', data)\n data = json.loads(data)[0]\n except TypeError:\n return # empty store page\n\n properties = {\n 'name': data[\"name\"],\n 'ref': re.search(r'.+/?storeid=(.+)', response.url).group(1),\n 'addr_full': data[\"address\"][\"streetAddress\"].strip(', '),\n 'city': data[\"address\"][\"addressLocality\"],\n 'state': data[\"address\"][\"addressRegion\"],\n 'postcode': data[\"address\"][\"postalCode\"],\n 'country': data[\"address\"][\"addressCountry\"],\n 'phone': data[\"address\"].get(\"telephone\"),\n 'website': 
data.get(\"url\") or response.url,\n 'lat': float(data[\"geo\"][\"latitude\"]),\n 'lon': float(data[\"geo\"][\"longitude\"]),\n }\n\n hours = self.parse_hours(data[\"openingHours\"])\n if hours:\n properties[\"opening_hours\"] = hours\n\n yield GeojsonPointItem(**properties)\n\n def parse_city_stores(self, response):\n stores = response.xpath('//div[@class=\"each-store\"]')\n\n for store in stores:\n\n direction = store.xpath('normalize-space(.//span[@class=\"store-number\"]/a/@href)').extract_first()\n if direction:\n yield scrapy.Request(response.urljoin(direction), callback=self.parse_stores)\n\n def parse_state(self, response):\n city_urls = response.xpath('//div[@class=\"states\"]/ul/li/a/@href').extract()\n for path in city_urls:\n yield scrapy.Request(response.urljoin(path), callback=self.parse_city_stores)\n\n def parse(self, response):\n urls = response.xpath('//div[@class=\"states\"]/ul/li/a/@href').extract()\n for path in urls:\n yield scrapy.Request(response.urljoin(path), callback=self.parse_state)\n", "path": "locations/spiders/cvs.py"}], "after_files": [{"content": "import json\nimport scrapy\nimport re\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\nDAYS = [\n 'Mo',\n 'Tu',\n 'We',\n 'Th',\n 'Fr',\n 'Sa',\n 'Su'\n]\n\n\nclass CVSSpider(scrapy.Spider):\n\n name = \"cvs\"\n item_attributes = { 'brand': \"CVS\", 'brand_wikidata': \"Q2078880\" }\n allowed_domains = [\"www.cvs.com\"]\n download_delay = 0.5\n start_urls = (\n 'https://www.cvs.com/store-locator/cvs-pharmacy-locations',\n )\n\n def parse_hours(self, hours):\n opening_hours = OpeningHours()\n\n for group in hours:\n if 'closed' in group:\n continue\n if 'open 24 hours' in group:\n days = re.search(r'([a-zA-Z\\-]+)\\s+open 24 hours', group).groups()[0]\n open_time, close_time = '00:00:00', '23:59:00'\n else:\n try:\n days, open_time, close_time = re.search(r'([a-zA-Z\\-]+)\\s+([\\d:\\sapm]+)-([\\d:\\sapm]+)', group).groups()\n except AttributeError:\n continue # no hours listed, just day\n try:\n start_day, end_day = days.split('-')\n except ValueError:\n start_day, end_day = days, days\n for day in DAYS[DAYS.index(start_day):DAYS.index(end_day) + 1]:\n if 'm' in open_time:\n open_time = open_time.strip(' apm') + \":00\"\n if 'm' in close_time:\n close_time = close_time.strip(' apm') + \":00\"\n opening_hours.add_range(day=day,\n open_time=open_time.strip(),\n close_time=close_time.strip(),\n time_format='%H:%M:%S')\n\n return opening_hours.as_opening_hours()\n\n def parse_stores(self, response):\n try:\n data = json.loads(response.xpath('//script[@type=\"application/ld+json\" and contains(text(), \"streetAddress\")]/text()').extract_first())[0]\n except json.decoder.JSONDecodeError:\n # one malformed json body on this store:\n # https://www.cvs.com/store-locator/cvs-pharmacy-address/84+South+Avenue+tops+Plaza+-Hilton-NY-14468/storeid=5076\n data = response.xpath('//script[@type=\"application/ld+json\" and contains(text(), \"streetAddress\")]/text()').extract_first()\n data = re.sub(r'\"tops Plaza\\s*\"', '', data)\n data = json.loads(data)[0]\n except TypeError:\n return # empty store page\n\n properties = {\n 'name': data[\"name\"],\n 'ref': re.search(r'.+/?storeid=(.+)', response.url).group(1),\n 'addr_full': data[\"address\"][\"streetAddress\"].strip(', '),\n 'city': data[\"address\"][\"addressLocality\"],\n 'state': data[\"address\"][\"addressRegion\"],\n 'postcode': data[\"address\"][\"postalCode\"],\n 'country': data[\"address\"][\"addressCountry\"],\n 'phone': 
data[\"address\"].get(\"telephone\"),\n 'website': data.get(\"url\") or response.url,\n 'lat': data[\"geo\"][\"latitude\"] or None,\n 'lon': data[\"geo\"][\"longitude\"] or None,\n }\n\n hours = self.parse_hours(data[\"openingHours\"])\n if hours:\n properties[\"opening_hours\"] = hours\n\n yield GeojsonPointItem(**properties)\n\n def parse_city_stores(self, response):\n stores = response.xpath('//div[@class=\"each-store\"]')\n\n for store in stores:\n\n direction = store.xpath('normalize-space(.//span[@class=\"store-number\"]/a/@href)').extract_first()\n if direction:\n yield scrapy.Request(response.urljoin(direction), callback=self.parse_stores)\n\n def parse_state(self, response):\n city_urls = response.xpath('//div[@class=\"states\"]/ul/li/a/@href').extract()\n for path in city_urls:\n yield scrapy.Request(response.urljoin(path), callback=self.parse_city_stores)\n\n def parse(self, response):\n urls = response.xpath('//div[@class=\"states\"]/ul/li/a/@href').extract()\n for path in urls:\n yield scrapy.Request(response.urljoin(path), callback=self.parse_state)\n", "path": "locations/spiders/cvs.py"}]}
| 1,654 | 154 |
gh_patches_debug_40464
|
rasdani/github-patches
|
git_diff
|
streamlink__streamlink-4729
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
plugins.picarto: Could not find server netloc
### Checklist
- [X] This is a plugin issue and not a different kind of issue
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
Latest stable release
### Description
Plugin suddenly stopped working today.
Checked on multiple streams as well as on Linux and Windows 10 with the same result.
I can still manually watch the streams on VLC with "https://1-edge1-eu-west.picarto.tv/stream/hls/golive%2bUSERNAME/index.m3u8" as URL source.
### Debug log
```text
C:\PICARTO>streamlink https://picarto.tv/USERNAME best -l debug
[cli][debug] OS: Windows 10
[cli][debug] Python: 3.10.5
[cli][debug] Streamlink: 4.2.0
[cli][debug] Dependencies:
[cli][debug] isodate: 0.6.1
[cli][debug] lxml: 4.9.1
[cli][debug] pycountry: 22.3.5
[cli][debug] pycryptodome: 3.15.0
[cli][debug] PySocks: 1.7.1
[cli][debug] requests: 2.28.1
[cli][debug] websocket-client: 1.3.3
[cli][debug] Arguments:
[cli][debug] url=https://picarto.tv/USERNAME
[cli][debug] stream=['best']
[cli][debug] --loglevel=debug
[cli][debug] --ffmpeg-ffmpeg=C:\Program Files\Streamlink\ffmpeg\ffmpeg.exe
[cli][info] Found matching plugin picarto for URL https://picarto.tv/USERNAME
[plugins.picarto][debug] Type=Live
[plugins.picarto][error] Could not find server netloc
error: No playable streams found on this URL: https://picarto.tv/USERNAME
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/picarto.py`
Content:
```
1 """
2 $description Global live streaming and video hosting platform for the creative community.
3 $url picarto.tv
4 $type live, vod
5 """
6
7 import logging
8 import re
9 from urllib.parse import urlparse
10
11 from streamlink.plugin import Plugin, pluginmatcher
12 from streamlink.plugin.api import validate
13 from streamlink.stream.hls import HLSStream
14
15 log = logging.getLogger(__name__)
16
17
18 @pluginmatcher(re.compile(r"""
19 https?://(?:www\.)?picarto\.tv/
20 (?:
21 streampopout/(?P<po_user>[^/]+)/public
22 |
23 videopopout/(?P<po_vod_id>\d+)
24 |
25 [^/]+/videos/(?P<vod_id>\d+)
26 |
27 (?P<user>[^/?&]+)
28 )$
29 """, re.VERBOSE))
30 class Picarto(Plugin):
31 API_URL_LIVE = "https://ptvintern.picarto.tv/api/channel/detail/{username}"
32 API_URL_VOD = "https://ptvintern.picarto.tv/ptvapi"
33 HLS_URL = "https://{netloc}/stream/hls/{file_name}/index.m3u8"
34
35 def get_live(self, username):
36 netloc = self.session.http.get(self.url, schema=validate.Schema(
37 validate.parse_html(),
38 validate.xml_xpath_string(".//script[contains(@src,'/stream/player.js')][1]/@src"),
39 validate.any(None, validate.transform(lambda src: urlparse(src).netloc))
40 ))
41 if not netloc:
42 log.error("Could not find server netloc")
43 return
44
45 channel, multistreams = self.session.http.get(self.API_URL_LIVE.format(username=username), schema=validate.Schema(
46 validate.parse_json(),
47 {
48 "channel": validate.any(None, {
49 "stream_name": str,
50 "title": str,
51 "online": bool,
52 "private": bool,
53 "categories": [{"label": str}],
54 }),
55 "getMultiStreams": validate.any(None, {
56 "multistream": bool,
57 "streams": [{
58 "name": str,
59 "online": bool,
60 }],
61 }),
62 },
63 validate.union_get("channel", "getMultiStreams")
64 ))
65 if not channel or not multistreams:
66 log.debug("Missing channel or streaming data")
67 return
68
69 log.trace(f"netloc={netloc!r}")
70 log.trace(f"channel={channel!r}")
71 log.trace(f"multistreams={multistreams!r}")
72
73 if not channel["online"]:
74 log.error("User is not online")
75 return
76
77 if channel["private"]:
78 log.info("This is a private stream")
79 return
80
81 self.author = username
82 self.category = channel["categories"][0]["label"]
83 self.title = channel["title"]
84
85 hls_url = self.HLS_URL.format(
86 netloc=netloc,
87 file_name=channel["stream_name"]
88 )
89
90 return HLSStream.parse_variant_playlist(self.session, hls_url)
91
92 def get_vod(self, vod_id):
93 data = {
94 'query': (
95 'query ($videoId: ID!) {\n'
96 ' video(id: $videoId) {\n'
97 ' id\n'
98 ' title\n'
99 ' file_name\n'
100 ' video_recording_image_url\n'
101 ' channel {\n'
102 ' name\n'
103 ' }'
104 ' }\n'
105 '}\n'
106 ),
107 'variables': {'videoId': vod_id},
108 }
109 vod_data = self.session.http.post(self.API_URL_VOD, json=data, schema=validate.Schema(
110 validate.parse_json(),
111 {"data": {
112 "video": validate.any(None, {
113 "id": str,
114 "title": str,
115 "file_name": str,
116 "video_recording_image_url": str,
117 "channel": {"name": str},
118 }),
119 }},
120 validate.get(("data", "video"))
121 ))
122
123 if not vod_data:
124 log.debug("Missing video data")
125 return
126
127 log.trace(f"vod_data={vod_data!r}")
128
129 self.author = vod_data["channel"]["name"]
130 self.category = "VOD"
131 self.title = vod_data["title"]
132
133 netloc = urlparse(vod_data["video_recording_image_url"]).netloc
134 hls_url = self.HLS_URL.format(
135 netloc=netloc,
136 file_name=vod_data["file_name"]
137 )
138
139 return HLSStream.parse_variant_playlist(self.session, hls_url)
140
141 def _get_streams(self):
142 m = self.match.groupdict()
143
144 if m['po_vod_id'] or m['vod_id']:
145 log.debug('Type=VOD')
146 return self.get_vod(m['po_vod_id'] or m['vod_id'])
147 elif m['po_user'] or m['user']:
148 log.debug('Type=Live')
149 return self.get_live(m['po_user'] or m['user'])
150
151
152 __plugin__ = Picarto
153
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/streamlink/plugins/picarto.py b/src/streamlink/plugins/picarto.py
--- a/src/streamlink/plugins/picarto.py
+++ b/src/streamlink/plugins/picarto.py
@@ -33,40 +33,37 @@
HLS_URL = "https://{netloc}/stream/hls/{file_name}/index.m3u8"
def get_live(self, username):
- netloc = self.session.http.get(self.url, schema=validate.Schema(
- validate.parse_html(),
- validate.xml_xpath_string(".//script[contains(@src,'/stream/player.js')][1]/@src"),
- validate.any(None, validate.transform(lambda src: urlparse(src).netloc))
- ))
- if not netloc:
- log.error("Could not find server netloc")
- return
-
- channel, multistreams = self.session.http.get(self.API_URL_LIVE.format(username=username), schema=validate.Schema(
- validate.parse_json(),
- {
- "channel": validate.any(None, {
- "stream_name": str,
- "title": str,
- "online": bool,
- "private": bool,
- "categories": [{"label": str}],
- }),
- "getMultiStreams": validate.any(None, {
- "multistream": bool,
- "streams": [{
- "name": str,
+ channel, multistreams, loadbalancer = self.session.http.get(
+ self.API_URL_LIVE.format(username=username),
+ schema=validate.Schema(
+ validate.parse_json(),
+ {
+ "channel": validate.any(None, {
+ "stream_name": str,
+ "title": str,
"online": bool,
- }],
- }),
- },
- validate.union_get("channel", "getMultiStreams")
- ))
- if not channel or not multistreams:
+ "private": bool,
+ "categories": [{"label": str}],
+ }),
+ "getMultiStreams": validate.any(None, {
+ "multistream": bool,
+ "streams": [{
+ "name": str,
+ "online": bool,
+ }],
+ }),
+ "getLoadBalancerUrl": validate.any(None, {
+ "url": validate.any(None, validate.transform(lambda url: urlparse(url).netloc))
+ })
+ },
+ validate.union_get("channel", "getMultiStreams", "getLoadBalancerUrl"),
+ )
+ )
+ if not channel or not multistreams or not loadbalancer:
log.debug("Missing channel or streaming data")
return
- log.trace(f"netloc={netloc!r}")
+ log.trace(f"loadbalancer={loadbalancer!r}")
log.trace(f"channel={channel!r}")
log.trace(f"multistreams={multistreams!r}")
@@ -83,7 +80,7 @@
self.title = channel["title"]
hls_url = self.HLS_URL.format(
- netloc=netloc,
+ netloc=loadbalancer["url"],
file_name=channel["stream_name"]
)
@@ -110,7 +107,7 @@
validate.parse_json(),
{"data": {
"video": validate.any(None, {
- "id": str,
+ "id": int,
"title": str,
"file_name": str,
"video_recording_image_url": str,
|
{"golden_diff": "diff --git a/src/streamlink/plugins/picarto.py b/src/streamlink/plugins/picarto.py\n--- a/src/streamlink/plugins/picarto.py\n+++ b/src/streamlink/plugins/picarto.py\n@@ -33,40 +33,37 @@\n HLS_URL = \"https://{netloc}/stream/hls/{file_name}/index.m3u8\"\n \n def get_live(self, username):\n- netloc = self.session.http.get(self.url, schema=validate.Schema(\n- validate.parse_html(),\n- validate.xml_xpath_string(\".//script[contains(@src,'/stream/player.js')][1]/@src\"),\n- validate.any(None, validate.transform(lambda src: urlparse(src).netloc))\n- ))\n- if not netloc:\n- log.error(\"Could not find server netloc\")\n- return\n-\n- channel, multistreams = self.session.http.get(self.API_URL_LIVE.format(username=username), schema=validate.Schema(\n- validate.parse_json(),\n- {\n- \"channel\": validate.any(None, {\n- \"stream_name\": str,\n- \"title\": str,\n- \"online\": bool,\n- \"private\": bool,\n- \"categories\": [{\"label\": str}],\n- }),\n- \"getMultiStreams\": validate.any(None, {\n- \"multistream\": bool,\n- \"streams\": [{\n- \"name\": str,\n+ channel, multistreams, loadbalancer = self.session.http.get(\n+ self.API_URL_LIVE.format(username=username),\n+ schema=validate.Schema(\n+ validate.parse_json(),\n+ {\n+ \"channel\": validate.any(None, {\n+ \"stream_name\": str,\n+ \"title\": str,\n \"online\": bool,\n- }],\n- }),\n- },\n- validate.union_get(\"channel\", \"getMultiStreams\")\n- ))\n- if not channel or not multistreams:\n+ \"private\": bool,\n+ \"categories\": [{\"label\": str}],\n+ }),\n+ \"getMultiStreams\": validate.any(None, {\n+ \"multistream\": bool,\n+ \"streams\": [{\n+ \"name\": str,\n+ \"online\": bool,\n+ }],\n+ }),\n+ \"getLoadBalancerUrl\": validate.any(None, {\n+ \"url\": validate.any(None, validate.transform(lambda url: urlparse(url).netloc))\n+ })\n+ },\n+ validate.union_get(\"channel\", \"getMultiStreams\", \"getLoadBalancerUrl\"),\n+ )\n+ )\n+ if not channel or not multistreams or not loadbalancer:\n log.debug(\"Missing channel or streaming data\")\n return\n \n- log.trace(f\"netloc={netloc!r}\")\n+ log.trace(f\"loadbalancer={loadbalancer!r}\")\n log.trace(f\"channel={channel!r}\")\n log.trace(f\"multistreams={multistreams!r}\")\n \n@@ -83,7 +80,7 @@\n self.title = channel[\"title\"]\n \n hls_url = self.HLS_URL.format(\n- netloc=netloc,\n+ netloc=loadbalancer[\"url\"],\n file_name=channel[\"stream_name\"]\n )\n \n@@ -110,7 +107,7 @@\n validate.parse_json(),\n {\"data\": {\n \"video\": validate.any(None, {\n- \"id\": str,\n+ \"id\": int,\n \"title\": str,\n \"file_name\": str,\n \"video_recording_image_url\": str,\n", "issue": "plugins.picarto: Could not find server netloc\n### Checklist\n\n- [X] This is a plugin issue and not a different kind of issue\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\n\n### Streamlink version\n\nLatest stable release\n\n### Description\n\nPlugin suddenly stopped working today. 
\r\nChecked on multiple streams as well as on Linux and Windows 10 with the same result.\r\nI can still manually watch the streams on VLC with \"https://1-edge1-eu-west.picarto.tv/stream/hls/golive%2bUSERNAME/index.m3u8\" as URL source.\n\n### Debug log\n\n```text\nC:\\PICARTO>streamlink https://picarto.tv/USERNAME best -l debug\r\n[cli][debug] OS: Windows 10\r\n[cli][debug] Python: 3.10.5\r\n[cli][debug] Streamlink: 4.2.0\r\n[cli][debug] Dependencies:\r\n[cli][debug] isodate: 0.6.1\r\n[cli][debug] lxml: 4.9.1\r\n[cli][debug] pycountry: 22.3.5\r\n[cli][debug] pycryptodome: 3.15.0\r\n[cli][debug] PySocks: 1.7.1\r\n[cli][debug] requests: 2.28.1\r\n[cli][debug] websocket-client: 1.3.3\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://picarto.tv/USERNAME\r\n[cli][debug] stream=['best']\r\n[cli][debug] --loglevel=debug\r\n[cli][debug] --ffmpeg-ffmpeg=C:\\Program Files\\Streamlink\\ffmpeg\\ffmpeg.exe\r\n[cli][info] Found matching plugin picarto for URL https://picarto.tv/USERNAME\r\n[plugins.picarto][debug] Type=Live\r\n[plugins.picarto][error] Could not find server netloc\r\nerror: No playable streams found on this URL: https://picarto.tv/USERNAME\n```\n\n", "before_files": [{"content": "\"\"\"\n$description Global live streaming and video hosting platform for the creative community.\n$url picarto.tv\n$type live, vod\n\"\"\"\n\nimport logging\nimport re\nfrom urllib.parse import urlparse\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(r\"\"\"\n https?://(?:www\\.)?picarto\\.tv/\n (?:\n streampopout/(?P<po_user>[^/]+)/public\n |\n videopopout/(?P<po_vod_id>\\d+)\n |\n [^/]+/videos/(?P<vod_id>\\d+)\n |\n (?P<user>[^/?&]+)\n )$\n\"\"\", re.VERBOSE))\nclass Picarto(Plugin):\n API_URL_LIVE = \"https://ptvintern.picarto.tv/api/channel/detail/{username}\"\n API_URL_VOD = \"https://ptvintern.picarto.tv/ptvapi\"\n HLS_URL = \"https://{netloc}/stream/hls/{file_name}/index.m3u8\"\n\n def get_live(self, username):\n netloc = self.session.http.get(self.url, schema=validate.Schema(\n validate.parse_html(),\n validate.xml_xpath_string(\".//script[contains(@src,'/stream/player.js')][1]/@src\"),\n validate.any(None, validate.transform(lambda src: urlparse(src).netloc))\n ))\n if not netloc:\n log.error(\"Could not find server netloc\")\n return\n\n channel, multistreams = self.session.http.get(self.API_URL_LIVE.format(username=username), schema=validate.Schema(\n validate.parse_json(),\n {\n \"channel\": validate.any(None, {\n \"stream_name\": str,\n \"title\": str,\n \"online\": bool,\n \"private\": bool,\n \"categories\": [{\"label\": str}],\n }),\n \"getMultiStreams\": validate.any(None, {\n \"multistream\": bool,\n \"streams\": [{\n \"name\": str,\n \"online\": bool,\n }],\n }),\n },\n validate.union_get(\"channel\", \"getMultiStreams\")\n ))\n if not channel or not multistreams:\n log.debug(\"Missing channel or streaming data\")\n return\n\n log.trace(f\"netloc={netloc!r}\")\n log.trace(f\"channel={channel!r}\")\n log.trace(f\"multistreams={multistreams!r}\")\n\n if not channel[\"online\"]:\n log.error(\"User is not online\")\n return\n\n if channel[\"private\"]:\n log.info(\"This is a private stream\")\n return\n\n self.author = username\n self.category = channel[\"categories\"][0][\"label\"]\n self.title = channel[\"title\"]\n\n hls_url = self.HLS_URL.format(\n netloc=netloc,\n file_name=channel[\"stream_name\"]\n )\n\n return 
HLSStream.parse_variant_playlist(self.session, hls_url)\n\n def get_vod(self, vod_id):\n data = {\n 'query': (\n 'query ($videoId: ID!) {\\n'\n ' video(id: $videoId) {\\n'\n ' id\\n'\n ' title\\n'\n ' file_name\\n'\n ' video_recording_image_url\\n'\n ' channel {\\n'\n ' name\\n'\n ' }'\n ' }\\n'\n '}\\n'\n ),\n 'variables': {'videoId': vod_id},\n }\n vod_data = self.session.http.post(self.API_URL_VOD, json=data, schema=validate.Schema(\n validate.parse_json(),\n {\"data\": {\n \"video\": validate.any(None, {\n \"id\": str,\n \"title\": str,\n \"file_name\": str,\n \"video_recording_image_url\": str,\n \"channel\": {\"name\": str},\n }),\n }},\n validate.get((\"data\", \"video\"))\n ))\n\n if not vod_data:\n log.debug(\"Missing video data\")\n return\n\n log.trace(f\"vod_data={vod_data!r}\")\n\n self.author = vod_data[\"channel\"][\"name\"]\n self.category = \"VOD\"\n self.title = vod_data[\"title\"]\n\n netloc = urlparse(vod_data[\"video_recording_image_url\"]).netloc\n hls_url = self.HLS_URL.format(\n netloc=netloc,\n file_name=vod_data[\"file_name\"]\n )\n\n return HLSStream.parse_variant_playlist(self.session, hls_url)\n\n def _get_streams(self):\n m = self.match.groupdict()\n\n if m['po_vod_id'] or m['vod_id']:\n log.debug('Type=VOD')\n return self.get_vod(m['po_vod_id'] or m['vod_id'])\n elif m['po_user'] or m['user']:\n log.debug('Type=Live')\n return self.get_live(m['po_user'] or m['user'])\n\n\n__plugin__ = Picarto\n", "path": "src/streamlink/plugins/picarto.py"}], "after_files": [{"content": "\"\"\"\n$description Global live streaming and video hosting platform for the creative community.\n$url picarto.tv\n$type live, vod\n\"\"\"\n\nimport logging\nimport re\nfrom urllib.parse import urlparse\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(r\"\"\"\n https?://(?:www\\.)?picarto\\.tv/\n (?:\n streampopout/(?P<po_user>[^/]+)/public\n |\n videopopout/(?P<po_vod_id>\\d+)\n |\n [^/]+/videos/(?P<vod_id>\\d+)\n |\n (?P<user>[^/?&]+)\n )$\n\"\"\", re.VERBOSE))\nclass Picarto(Plugin):\n API_URL_LIVE = \"https://ptvintern.picarto.tv/api/channel/detail/{username}\"\n API_URL_VOD = \"https://ptvintern.picarto.tv/ptvapi\"\n HLS_URL = \"https://{netloc}/stream/hls/{file_name}/index.m3u8\"\n\n def get_live(self, username):\n channel, multistreams, loadbalancer = self.session.http.get(\n self.API_URL_LIVE.format(username=username),\n schema=validate.Schema(\n validate.parse_json(),\n {\n \"channel\": validate.any(None, {\n \"stream_name\": str,\n \"title\": str,\n \"online\": bool,\n \"private\": bool,\n \"categories\": [{\"label\": str}],\n }),\n \"getMultiStreams\": validate.any(None, {\n \"multistream\": bool,\n \"streams\": [{\n \"name\": str,\n \"online\": bool,\n }],\n }),\n \"getLoadBalancerUrl\": validate.any(None, {\n \"url\": validate.any(None, validate.transform(lambda url: urlparse(url).netloc))\n })\n },\n validate.union_get(\"channel\", \"getMultiStreams\", \"getLoadBalancerUrl\"),\n )\n )\n if not channel or not multistreams or not loadbalancer:\n log.debug(\"Missing channel or streaming data\")\n return\n\n log.trace(f\"loadbalancer={loadbalancer!r}\")\n log.trace(f\"channel={channel!r}\")\n log.trace(f\"multistreams={multistreams!r}\")\n\n if not channel[\"online\"]:\n log.error(\"User is not online\")\n return\n\n if channel[\"private\"]:\n log.info(\"This is a private stream\")\n return\n\n self.author = username\n 
self.category = channel[\"categories\"][0][\"label\"]\n self.title = channel[\"title\"]\n\n hls_url = self.HLS_URL.format(\n netloc=loadbalancer[\"url\"],\n file_name=channel[\"stream_name\"]\n )\n\n return HLSStream.parse_variant_playlist(self.session, hls_url)\n\n def get_vod(self, vod_id):\n data = {\n 'query': (\n 'query ($videoId: ID!) {\\n'\n ' video(id: $videoId) {\\n'\n ' id\\n'\n ' title\\n'\n ' file_name\\n'\n ' video_recording_image_url\\n'\n ' channel {\\n'\n ' name\\n'\n ' }'\n ' }\\n'\n '}\\n'\n ),\n 'variables': {'videoId': vod_id},\n }\n vod_data = self.session.http.post(self.API_URL_VOD, json=data, schema=validate.Schema(\n validate.parse_json(),\n {\"data\": {\n \"video\": validate.any(None, {\n \"id\": int,\n \"title\": str,\n \"file_name\": str,\n \"video_recording_image_url\": str,\n \"channel\": {\"name\": str},\n }),\n }},\n validate.get((\"data\", \"video\"))\n ))\n\n if not vod_data:\n log.debug(\"Missing video data\")\n return\n\n log.trace(f\"vod_data={vod_data!r}\")\n\n self.author = vod_data[\"channel\"][\"name\"]\n self.category = \"VOD\"\n self.title = vod_data[\"title\"]\n\n netloc = urlparse(vod_data[\"video_recording_image_url\"]).netloc\n hls_url = self.HLS_URL.format(\n netloc=netloc,\n file_name=vod_data[\"file_name\"]\n )\n\n return HLSStream.parse_variant_playlist(self.session, hls_url)\n\n def _get_streams(self):\n m = self.match.groupdict()\n\n if m['po_vod_id'] or m['vod_id']:\n log.debug('Type=VOD')\n return self.get_vod(m['po_vod_id'] or m['vod_id'])\n elif m['po_user'] or m['user']:\n log.debug('Type=Live')\n return self.get_live(m['po_user'] or m['user'])\n\n\n__plugin__ = Picarto\n", "path": "src/streamlink/plugins/picarto.py"}]}
| 2,258 | 760 |
gh_patches_debug_25010
|
rasdani/github-patches
|
git_diff
|
beetbox__beets-908
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
mbsync: Deal with albums that have multiple copies of the same recording
The current way the mbsync plugin obtains the track mapping list is to use the MusicBrainz recording ID from each track; it's a workaround to handle "missing or extra tracks". This method is based on the assumption that within a single MB release there are no multiple tracks with the same MB recording ID. It usually works, and in my case only 4 out of 700+ albums disobey this assumption. But for these four albums, I have to fix the track number tags by hand and re-import.
Considering it's called "mbsync", why not assume that the track number in the metadata is not corrupt and use it if possible, falling back to the MB recording ID method if it is corrupted (missing or extra tracks detected)?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `beetsplug/mbsync.py`
Content:
```
1 # This file is part of beets.
2 # Copyright 2014, Jakob Schnitzer.
3 #
4 # Permission is hereby granted, free of charge, to any person obtaining
5 # a copy of this software and associated documentation files (the
6 # "Software"), to deal in the Software without restriction, including
7 # without limitation the rights to use, copy, modify, merge, publish,
8 # distribute, sublicense, and/or sell copies of the Software, and to
9 # permit persons to whom the Software is furnished to do so, subject to
10 # the following conditions:
11 #
12 # The above copyright notice and this permission notice shall be
13 # included in all copies or substantial portions of the Software.
14
15 """Update library's tags using MusicBrainz.
16 """
17 import logging
18
19 from beets.plugins import BeetsPlugin
20 from beets import autotag, library, ui, util
21 from beets.autotag import hooks
22 from beets import config
23
24 log = logging.getLogger('beets')
25
26
27 def mbsync_singletons(lib, query, move, pretend, write):
28 """Retrieve and apply info from the autotagger for items matched by
29 query.
30 """
31 for item in lib.items(query + ['singleton:true']):
32 if not item.mb_trackid:
33 log.info(u'Skipping singleton {0}: has no mb_trackid'
34 .format(item.title))
35 continue
36
37 # Get the MusicBrainz recording info.
38 track_info = hooks.track_for_mbid(item.mb_trackid)
39 if not track_info:
40 log.info(u'Recording ID not found: {0}'.format(item.mb_trackid))
41 continue
42
43 # Apply.
44 with lib.transaction():
45 autotag.apply_item_metadata(item, track_info)
46 apply_item_changes(lib, item, move, pretend, write)
47
48
49 def mbsync_albums(lib, query, move, pretend, write):
50 """Retrieve and apply info from the autotagger for albums matched by
51 query and their items.
52 """
53 # Process matching albums.
54 for a in lib.albums(query):
55 if not a.mb_albumid:
56 log.info(u'Skipping album {0}: has no mb_albumid'.format(a.id))
57 continue
58
59 items = list(a.items())
60
61 # Get the MusicBrainz album information.
62 album_info = hooks.album_for_mbid(a.mb_albumid)
63 if not album_info:
64 log.info(u'Release ID not found: {0}'.format(a.mb_albumid))
65 continue
66
67 # Construct a track mapping according to MBIDs. This should work
68 # for albums that have missing or extra tracks.
69 mapping = {}
70 for item in items:
71 for track_info in album_info.tracks:
72 if item.mb_trackid == track_info.track_id:
73 mapping[item] = track_info
74 break
75
76 # Apply.
77 with lib.transaction():
78 autotag.apply_metadata(album_info, mapping)
79 changed = False
80 for item in items:
81 item_changed = ui.show_model_changes(item)
82 changed |= item_changed
83 if item_changed:
84 apply_item_changes(lib, item, move, pretend, write)
85
86 if not changed:
87 # No change to any item.
88 continue
89
90 if not pretend:
91 # Update album structure to reflect an item in it.
92 for key in library.Album.item_keys:
93 a[key] = items[0][key]
94 a.store()
95
96 # Move album art (and any inconsistent items).
97 if move and lib.directory in util.ancestry(items[0].path):
98 log.debug(u'moving album {0}'.format(a.id))
99 a.move()
100
101
102 def apply_item_changes(lib, item, move, pretend, write):
103 """Store, move and write the item according to the arguments.
104 """
105 if not pretend:
106 # Move the item if it's in the library.
107 if move and lib.directory in util.ancestry(item.path):
108 item.move(with_album=False)
109
110 if write:
111 item.try_write()
112 item.store()
113
114
115 def mbsync_func(lib, opts, args):
116 """Command handler for the mbsync function.
117 """
118 move = opts.move
119 pretend = opts.pretend
120 write = opts.write
121 query = ui.decargs(args)
122
123 mbsync_singletons(lib, query, move, pretend, write)
124 mbsync_albums(lib, query, move, pretend, write)
125
126
127 class MBSyncPlugin(BeetsPlugin):
128 def __init__(self):
129 super(MBSyncPlugin, self).__init__()
130
131 def commands(self):
132 cmd = ui.Subcommand('mbsync',
133 help='update metadata from musicbrainz')
134 cmd.parser.add_option('-p', '--pretend', action='store_true',
135 help='show all changes but do nothing')
136 cmd.parser.add_option('-M', '--nomove', action='store_false',
137 default=True, dest='move',
138 help="don't move files in library")
139 cmd.parser.add_option('-W', '--nowrite', action='store_false',
140 default=config['import']['write'], dest='write',
141 help="don't write updated metadata to files")
142 cmd.func = mbsync_func
143 return [cmd]
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/beetsplug/mbsync.py b/beetsplug/mbsync.py
--- a/beetsplug/mbsync.py
+++ b/beetsplug/mbsync.py
@@ -64,13 +64,29 @@
log.info(u'Release ID not found: {0}'.format(a.mb_albumid))
continue
+ # Construct an hash mapping recording MBIDs to their information. A
+ # release can have recording MBIDs that appear multiple times in the
+ # same release.
+ track_index = {}
+ for track_info in album_info.tracks:
+ if track_info.track_id in track_index:
+ track_index[track_info.track_id].append(track_info)
+ else:
+ track_index[track_info.track_id] = [track_info]
+
# Construct a track mapping according to MBIDs. This should work
- # for albums that have missing or extra tracks.
+ # for albums that have missing or extra tracks. If a mapping is
+ # ambiguous, the items' disc and track number need to match in order
+ # for an item to be mapped.
mapping = {}
for item in items:
- for track_info in album_info.tracks:
- if item.mb_trackid == track_info.track_id:
- mapping[item] = track_info
+ candidates = track_index.get(item.mb_trackid, [])
+ if len(candidates) == 1:
+ mapping[item] = candidates[0]
+ continue
+ for c in candidates:
+ if c.medium_index == item.track and c.medium == item.disc:
+ mapping[item] = c
break
# Apply.
|
{"golden_diff": "diff --git a/beetsplug/mbsync.py b/beetsplug/mbsync.py\n--- a/beetsplug/mbsync.py\n+++ b/beetsplug/mbsync.py\n@@ -64,13 +64,29 @@\n log.info(u'Release ID not found: {0}'.format(a.mb_albumid))\n continue\n \n+ # Construct an hash mapping recording MBIDs to their information. A\n+ # release can have recording MBIDs that appear multiple times in the\n+ # same release.\n+ track_index = {}\n+ for track_info in album_info.tracks:\n+ if track_info.track_id in track_index:\n+ track_index[track_info.track_id].append(track_info)\n+ else:\n+ track_index[track_info.track_id] = [track_info]\n+\n # Construct a track mapping according to MBIDs. This should work\n- # for albums that have missing or extra tracks.\n+ # for albums that have missing or extra tracks. If a mapping is\n+ # ambiguous, the items' disc and track number need to match in order\n+ # for an item to be mapped.\n mapping = {}\n for item in items:\n- for track_info in album_info.tracks:\n- if item.mb_trackid == track_info.track_id:\n- mapping[item] = track_info\n+ candidates = track_index.get(item.mb_trackid, [])\n+ if len(candidates) == 1:\n+ mapping[item] = candidates[0]\n+ continue\n+ for c in candidates:\n+ if c.medium_index == item.track and c.medium == item.disc:\n+ mapping[item] = c\n break\n \n # Apply.\n", "issue": "mbsync: Deal with albums that have multiple copies of the same recording\nthe current way mbsync plugin used to obtain track mapping list is to use the MusicBrainz recoding ID from each track, it's a workaround to handle \"missing or extra tracks\". This method is based on an assumption that for each MB release, there are no multiple tracks with same MB recording ID. It usually works, and in my case, only 4 out of 700+ albums disobey this assumption. But for this four albums, I have to fix them by tag track number by hand and re-import.\n\nConsidering it's called \"mbsync\", Why not make an assumption that track number in metadata is not corrupt and use it if possible, or fallback to MB recording ID way if it's corrupted(missing or extra track detected)\n\n", "before_files": [{"content": "# This file is part of beets.\n# Copyright 2014, Jakob Schnitzer.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"Update library's tags using MusicBrainz.\n\"\"\"\nimport logging\n\nfrom beets.plugins import BeetsPlugin\nfrom beets import autotag, library, ui, util\nfrom beets.autotag import hooks\nfrom beets import config\n\nlog = logging.getLogger('beets')\n\n\ndef mbsync_singletons(lib, query, move, pretend, write):\n \"\"\"Retrieve and apply info from the autotagger for items matched by\n query.\n \"\"\"\n for item in lib.items(query + ['singleton:true']):\n if not item.mb_trackid:\n log.info(u'Skipping singleton {0}: has no mb_trackid'\n .format(item.title))\n continue\n\n # Get the MusicBrainz recording info.\n track_info = hooks.track_for_mbid(item.mb_trackid)\n if not track_info:\n log.info(u'Recording ID not found: {0}'.format(item.mb_trackid))\n continue\n\n # 
Apply.\n with lib.transaction():\n autotag.apply_item_metadata(item, track_info)\n apply_item_changes(lib, item, move, pretend, write)\n\n\ndef mbsync_albums(lib, query, move, pretend, write):\n \"\"\"Retrieve and apply info from the autotagger for albums matched by\n query and their items.\n \"\"\"\n # Process matching albums.\n for a in lib.albums(query):\n if not a.mb_albumid:\n log.info(u'Skipping album {0}: has no mb_albumid'.format(a.id))\n continue\n\n items = list(a.items())\n\n # Get the MusicBrainz album information.\n album_info = hooks.album_for_mbid(a.mb_albumid)\n if not album_info:\n log.info(u'Release ID not found: {0}'.format(a.mb_albumid))\n continue\n\n # Construct a track mapping according to MBIDs. This should work\n # for albums that have missing or extra tracks.\n mapping = {}\n for item in items:\n for track_info in album_info.tracks:\n if item.mb_trackid == track_info.track_id:\n mapping[item] = track_info\n break\n\n # Apply.\n with lib.transaction():\n autotag.apply_metadata(album_info, mapping)\n changed = False\n for item in items:\n item_changed = ui.show_model_changes(item)\n changed |= item_changed\n if item_changed:\n apply_item_changes(lib, item, move, pretend, write)\n\n if not changed:\n # No change to any item.\n continue\n\n if not pretend:\n # Update album structure to reflect an item in it.\n for key in library.Album.item_keys:\n a[key] = items[0][key]\n a.store()\n\n # Move album art (and any inconsistent items).\n if move and lib.directory in util.ancestry(items[0].path):\n log.debug(u'moving album {0}'.format(a.id))\n a.move()\n\n\ndef apply_item_changes(lib, item, move, pretend, write):\n \"\"\"Store, move and write the item according to the arguments.\n \"\"\"\n if not pretend:\n # Move the item if it's in the library.\n if move and lib.directory in util.ancestry(item.path):\n item.move(with_album=False)\n\n if write:\n item.try_write()\n item.store()\n\n\ndef mbsync_func(lib, opts, args):\n \"\"\"Command handler for the mbsync function.\n \"\"\"\n move = opts.move\n pretend = opts.pretend\n write = opts.write\n query = ui.decargs(args)\n\n mbsync_singletons(lib, query, move, pretend, write)\n mbsync_albums(lib, query, move, pretend, write)\n\n\nclass MBSyncPlugin(BeetsPlugin):\n def __init__(self):\n super(MBSyncPlugin, self).__init__()\n\n def commands(self):\n cmd = ui.Subcommand('mbsync',\n help='update metadata from musicbrainz')\n cmd.parser.add_option('-p', '--pretend', action='store_true',\n help='show all changes but do nothing')\n cmd.parser.add_option('-M', '--nomove', action='store_false',\n default=True, dest='move',\n help=\"don't move files in library\")\n cmd.parser.add_option('-W', '--nowrite', action='store_false',\n default=config['import']['write'], dest='write',\n help=\"don't write updated metadata to files\")\n cmd.func = mbsync_func\n return [cmd]\n", "path": "beetsplug/mbsync.py"}], "after_files": [{"content": "# This file is part of beets.\n# Copyright 2014, Jakob Schnitzer.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies 
or substantial portions of the Software.\n\n\"\"\"Update library's tags using MusicBrainz.\n\"\"\"\nimport logging\n\nfrom beets.plugins import BeetsPlugin\nfrom beets import autotag, library, ui, util\nfrom beets.autotag import hooks\nfrom beets import config\n\nlog = logging.getLogger('beets')\n\n\ndef mbsync_singletons(lib, query, move, pretend, write):\n \"\"\"Retrieve and apply info from the autotagger for items matched by\n query.\n \"\"\"\n for item in lib.items(query + ['singleton:true']):\n if not item.mb_trackid:\n log.info(u'Skipping singleton {0}: has no mb_trackid'\n .format(item.title))\n continue\n\n # Get the MusicBrainz recording info.\n track_info = hooks.track_for_mbid(item.mb_trackid)\n if not track_info:\n log.info(u'Recording ID not found: {0}'.format(item.mb_trackid))\n continue\n\n # Apply.\n with lib.transaction():\n autotag.apply_item_metadata(item, track_info)\n apply_item_changes(lib, item, move, pretend, write)\n\n\ndef mbsync_albums(lib, query, move, pretend, write):\n \"\"\"Retrieve and apply info from the autotagger for albums matched by\n query and their items.\n \"\"\"\n # Process matching albums.\n for a in lib.albums(query):\n if not a.mb_albumid:\n log.info(u'Skipping album {0}: has no mb_albumid'.format(a.id))\n continue\n\n items = list(a.items())\n\n # Get the MusicBrainz album information.\n album_info = hooks.album_for_mbid(a.mb_albumid)\n if not album_info:\n log.info(u'Release ID not found: {0}'.format(a.mb_albumid))\n continue\n\n # Construct an hash mapping recording MBIDs to their information. A\n # release can have recording MBIDs that appear multiple times in the\n # same release.\n track_index = {}\n for track_info in album_info.tracks:\n if track_info.track_id in track_index:\n track_index[track_info.track_id].append(track_info)\n else:\n track_index[track_info.track_id] = [track_info]\n\n # Construct a track mapping according to MBIDs. This should work\n # for albums that have missing or extra tracks. 
If a mapping is\n # ambiguous, the items' disc and track number need to match in order\n # for an item to be mapped.\n mapping = {}\n for item in items:\n candidates = track_index.get(item.mb_trackid, [])\n if len(candidates) == 1:\n mapping[item] = candidates[0]\n continue\n for c in candidates:\n if c.medium_index == item.track and c.medium == item.disc:\n mapping[item] = c\n break\n\n # Apply.\n with lib.transaction():\n autotag.apply_metadata(album_info, mapping)\n changed = False\n for item in items:\n item_changed = ui.show_model_changes(item)\n changed |= item_changed\n if item_changed:\n apply_item_changes(lib, item, move, pretend, write)\n\n if not changed:\n # No change to any item.\n continue\n\n if not pretend:\n # Update album structure to reflect an item in it.\n for key in library.Album.item_keys:\n a[key] = items[0][key]\n a.store()\n\n # Move album art (and any inconsistent items).\n if move and lib.directory in util.ancestry(items[0].path):\n log.debug(u'moving album {0}'.format(a.id))\n a.move()\n\n\ndef apply_item_changes(lib, item, move, pretend, write):\n \"\"\"Store, move and write the item according to the arguments.\n \"\"\"\n if not pretend:\n # Move the item if it's in the library.\n if move and lib.directory in util.ancestry(item.path):\n item.move(with_album=False)\n\n if write:\n item.try_write()\n item.store()\n\n\ndef mbsync_func(lib, opts, args):\n \"\"\"Command handler for the mbsync function.\n \"\"\"\n move = opts.move\n pretend = opts.pretend\n write = opts.write\n query = ui.decargs(args)\n\n mbsync_singletons(lib, query, move, pretend, write)\n mbsync_albums(lib, query, move, pretend, write)\n\n\nclass MBSyncPlugin(BeetsPlugin):\n def __init__(self):\n super(MBSyncPlugin, self).__init__()\n\n def commands(self):\n cmd = ui.Subcommand('mbsync',\n help='update metadata from musicbrainz')\n cmd.parser.add_option('-p', '--pretend', action='store_true',\n help='show all changes but do nothing')\n cmd.parser.add_option('-M', '--nomove', action='store_false',\n default=True, dest='move',\n help=\"don't move files in library\")\n cmd.parser.add_option('-W', '--nowrite', action='store_false',\n default=config['import']['write'], dest='write',\n help=\"don't write updated metadata to files\")\n cmd.func = mbsync_func\n return [cmd]\n", "path": "beetsplug/mbsync.py"}]}
| 1,873 | 365 |
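The beets/mbsync record above patches the album-matching loop: instead of assuming each MusicBrainz recording ID appears at most once per release, the golden diff builds an index from recording MBID to all matching tracks and falls back to disc/track numbers when the MBID is ambiguous. A minimal standalone sketch of that mapping logic follows; the plain dicts stand in for beets' `Item` and `TrackInfo` objects, and the sample data is invented for illustration.

```python
# Minimal stand-ins for beets' TrackInfo and Item objects; only the fields
# used by the mapping logic are modelled, and the sample data is invented.
album_tracks = [
    {"track_id": "mbid-a", "medium": 1, "medium_index": 1},
    {"track_id": "mbid-b", "medium": 1, "medium_index": 2},
    {"track_id": "mbid-b", "medium": 2, "medium_index": 1},  # same recording reused
]
items = [
    {"mb_trackid": "mbid-a", "disc": 1, "track": 1},
    {"mb_trackid": "mbid-b", "disc": 1, "track": 2},
    {"mb_trackid": "mbid-b", "disc": 2, "track": 1},
]

# Index album tracks by recording MBID; a recording may map to several tracks.
track_index = {}
for track_info in album_tracks:
    track_index.setdefault(track_info["track_id"], []).append(track_info)

# Unambiguous MBIDs map directly; duplicates fall back to disc/track matching.
mapping = {}
for idx, item in enumerate(items):
    candidates = track_index.get(item["mb_trackid"], [])
    if len(candidates) == 1:
        mapping[idx] = candidates[0]
        continue
    for c in candidates:
        if c["medium"] == item["disc"] and c["medium_index"] == item["track"]:
            mapping[idx] = c
            break

for idx, t in mapping.items():
    print(f"item {idx} -> disc {t['medium']}, track {t['medium_index']}")
```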
| gh_patches_debug_38226 | rasdani/github-patches | git_diff | TabbycatDebate__tabbycat-1644 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Preformed panels supporting outrounds
I have been having a play with preformed panels and from my quick attempt to generate them for outrounds, it seems to generate preformed panels as if it was generating panels for an additional preliminary round rather than a break round.
For example this is the preformed panels that generated when I generated preformed panels for quarter finals for one of our tournaments.

We did end up changing some thing to do with the round sequence for these rounds (we added 2 additional in rounds, deleted the octo finals and edited the sequence numbers, but this round is set up as per the settings below:

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tabbycat/adjallocation/preformed/anticipated.py`
Content:
```
1 """Functions for computing an anticipated draw."""
2
3 import itertools
4
5 from breakqual.utils import calculate_live_thresholds, determine_liveness
6 from participants.prefetch import populate_win_counts
7
8
9 def calculate_anticipated_draw(round):
10 """Calculates an anticipated draw for the next round, based on the draw for
11 the last round. Returns a list of tuples
12 `(bracket_min, bracket_max, liveness)`,
13 being the minimum and maximum brackets possible for that room, and the
14 maximum number of teams that might be live in it. If the previous round's
15 draw doesn't exist, it will just return an empty list.
16
17 Procedure:
18 1. Take the (actual) draw of the last round, with team points
19 2. For each room, compute a (min, max) of outcomes for each team.
20 3. Take the min, divide into rooms to make the `bracket_min` for each room.
21 4. Take the max, divide into rooms to make the `bracket_max` for each room.
22
23 `round` should be the round for which you want an anticipated draw (the
24 "next round").
25 """
26
27 nteamsindebate = 4 if round.tournament.pref('teams_in_debate') == 'bp' else 2
28
29 if round.prev is None or not round.prev.debate_set.exists():
30 # Special case: If this is the first round, everyone will be on zero.
31 # Just take all teams, rounded down -- if this is done, it'll typically
32 # be done before availability is locked down. Also do this if the last
33 # round hasn't yet been drawn, since that's premature for bracket
34 # predictions.
35 npanels = round.tournament.team_set.count() // nteamsindebate
36 return [(0, 0, 0) for i in range(npanels)]
37
38 # 1. Take the (actual) draw of the last round, with team points
39 debates = round.prev.debate_set_with_prefetches(ordering=('room_rank',),
40 teams=True, adjudicators=False, speakers=False, venues=False)
41 if round.prev.prev:
42 populate_win_counts([team for debate in debates for team in debate.teams],
43 round=round.prev.prev)
44 else:
45 # just say everyone is on zero (since no rounds have finished yet)
46 for debate in debates:
47 for team in debate.teams:
48 team._points = 0
49
50 # 2. Compute a (min, max) of outcomes for each team
51 team_points_after = []
52 points_available = [round.prev.weight * i for i in range(nteamsindebate)]
53 for debate in debates:
54 points_now = [team.points_count for team in debate.teams]
55 highest = max(points_now)
56 lowest = min(points_now)
57
58 # Most cases will be single-point rooms or rooms with pull-ups from only
59 # one bracket; in these cases it's easy to prove this closed-form
60 # guarantee for what the teams in that room will look like afterwards.
61 if highest - lowest <= 1:
62 points_after = [(lowest+i, highest+i) for i in points_available]
63
64 # For more complicated rooms (e.g. [9, 8, 8, 7]), it gets harder; just
65 # use brute force. For few enough rooms this won't be too bad a hit.
66 else:
67 possible_outcomes = []
68 for result in itertools.permutations(points_available):
69 outcome = [n + r for n, r in zip(points_now, result)]
70 outcome.sort(reverse=True)
71 possible_outcomes.append(outcome)
72 points_after = [(min(team_after), max(team_after)) for team_after in zip(*possible_outcomes)]
73
74 team_points_after.extend(points_after)
75
76 # 3. Take the min, divide into rooms to make the `bracket_min` for each room.
77 # 4. Take the max, divide into rooms to make the `bracket_max` for each room.
78 lowers, uppers = [sorted(x, reverse=True) for x in zip(*team_points_after)]
79 brackets_min = [max(r) for r in zip(*([iter(lowers)] * nteamsindebate))]
80 brackets_max = [max(r) for r in zip(*([iter(uppers)] * nteamsindebate))]
81
82 open_category = round.tournament.breakcategory_set.filter(is_general=True).first()
83 if open_category:
84 live_thresholds = calculate_live_thresholds(open_category, round.tournament, round)
85 liveness_by_lower = [determine_liveness(live_thresholds, x) for x in lowers]
86 liveness_by_upper = [determine_liveness(live_thresholds, x) for x in uppers]
87 liveness_by_team = [x == 'live' or y == 'live' for x, y in zip(liveness_by_lower, liveness_by_upper)]
88 liveness = [x.count(True) for x in zip(*([iter(liveness_by_team)] * nteamsindebate))]
89 else:
90 liveness = [0] * len(debates)
91
92 return zip(brackets_min, brackets_max, liveness)
93
```
Path: `tabbycat/draw/generator/utils.py`
Content:
```
1 """Miscellaneous utilities for the draw."""
2
3
4 def ispow2(n):
5 """Returns True if n is a power of 2. Works for positive integers only."""
6 return n & (n - 1) == 0
7
8
9 def nextpow2(n):
10 return 1 << (n-1).bit_length()
11
12
13 def partial_break_round_split(break_size):
14 """Returns a tuple `(debating, bypassing)`, where `debating` is how many
15 teams will debate in the first break round, and `bypassing` is how many
16 teams will bypass the first break round, qualifying directly for the
17 second."""
18
19 assert break_size > 1, "break rounds only make sense for break_size > 1 (found %d)" % (break_size,)
20
21 teams_in_second_break_round = nextpow2(break_size) // 2
22 debates = break_size - teams_in_second_break_round
23 bypassing = teams_in_second_break_round - debates
24
25 assert 2*debates + bypassing == break_size, "2 * %d teams debating + %d teams bypassing doesn't add to break size %d" % (debates, bypassing, break_size)
26 assert debates > 0, "%d <= 0 debates in first break round (%d teams bypassing)" % (debates, bypassing)
27 assert bypassing >= 0, "%d < 0 teams bypassing (%d debates)" % (bypassing, debates)
28 return debates, bypassing
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/tabbycat/adjallocation/preformed/anticipated.py b/tabbycat/adjallocation/preformed/anticipated.py
--- a/tabbycat/adjallocation/preformed/anticipated.py
+++ b/tabbycat/adjallocation/preformed/anticipated.py
@@ -3,6 +3,7 @@
import itertools
from breakqual.utils import calculate_live_thresholds, determine_liveness
+from draw.generator.utils import ispow2, partial_break_round_split
from participants.prefetch import populate_win_counts
@@ -26,13 +27,35 @@
nteamsindebate = 4 if round.tournament.pref('teams_in_debate') == 'bp' else 2
- if round.prev is None or not round.prev.debate_set.exists():
- # Special case: If this is the first round, everyone will be on zero.
+ if round.prev is None or not round.prev.debate_set.exists() or round.is_break_round:
+ # Special cases: If this is the first round, everyone will be on zero.
# Just take all teams, rounded down -- if this is done, it'll typically
# be done before availability is locked down. Also do this if the last
# round hasn't yet been drawn, since that's premature for bracket
# predictions.
- npanels = round.tournament.team_set.count() // nteamsindebate
+ #
+ # Also occurs for elimination rounds as everyone is just as live.
+
+ nteams = 0
+ if round.is_break_round:
+ break_size = round.break_category.break_size
+ nprev_rounds = round.break_category.round_set.filter(seq__lt=round.seq).count()
+ partial_two = nteamsindebate == 2 and not ispow2(break_size)
+ partial_bp = nteamsindebate == 4 and ispow2(break_size // 6)
+ if nprev_rounds > 0 and (partial_two or partial_bp):
+ # If using partial elimination rounds, the second round is the first for
+ # the powers of two, so start counting from here.
+ nprev_rounds -= 1
+
+ if nprev_rounds == 0 and nteamsindebate == 2:
+ nteams = partial_break_round_split(break_size)[0] * 2
+ else:
+ # Subsequent rounds are half the previous, but always a power of 2
+ nteams = 1 << (break_size.bit_length() - 1 - nprev_rounds)
+ else:
+ nteams = round.tournament.team_set.count()
+
+ npanels = nteams // nteamsindebate
return [(0, 0, 0) for i in range(npanels)]
# 1. Take the (actual) draw of the last round, with team points
diff --git a/tabbycat/draw/generator/utils.py b/tabbycat/draw/generator/utils.py
--- a/tabbycat/draw/generator/utils.py
+++ b/tabbycat/draw/generator/utils.py
@@ -11,8 +11,8 @@
def partial_break_round_split(break_size):
- """Returns a tuple `(debating, bypassing)`, where `debating` is how many
- teams will debate in the first break round, and `bypassing` is how many
+ """Returns a tuple `(debates, bypassing)`, where `debating` is how many
+ debates there is in the first break round, and `bypassing` is how many
teams will bypass the first break round, qualifying directly for the
second."""
|
{"golden_diff": "diff --git a/tabbycat/adjallocation/preformed/anticipated.py b/tabbycat/adjallocation/preformed/anticipated.py\n--- a/tabbycat/adjallocation/preformed/anticipated.py\n+++ b/tabbycat/adjallocation/preformed/anticipated.py\n@@ -3,6 +3,7 @@\n import itertools\n \n from breakqual.utils import calculate_live_thresholds, determine_liveness\n+from draw.generator.utils import ispow2, partial_break_round_split\n from participants.prefetch import populate_win_counts\n \n \n@@ -26,13 +27,35 @@\n \n nteamsindebate = 4 if round.tournament.pref('teams_in_debate') == 'bp' else 2\n \n- if round.prev is None or not round.prev.debate_set.exists():\n- # Special case: If this is the first round, everyone will be on zero.\n+ if round.prev is None or not round.prev.debate_set.exists() or round.is_break_round:\n+ # Special cases: If this is the first round, everyone will be on zero.\n # Just take all teams, rounded down -- if this is done, it'll typically\n # be done before availability is locked down. Also do this if the last\n # round hasn't yet been drawn, since that's premature for bracket\n # predictions.\n- npanels = round.tournament.team_set.count() // nteamsindebate\n+ #\n+ # Also occurs for elimination rounds as everyone is just as live.\n+\n+ nteams = 0\n+ if round.is_break_round:\n+ break_size = round.break_category.break_size\n+ nprev_rounds = round.break_category.round_set.filter(seq__lt=round.seq).count()\n+ partial_two = nteamsindebate == 2 and not ispow2(break_size)\n+ partial_bp = nteamsindebate == 4 and ispow2(break_size // 6)\n+ if nprev_rounds > 0 and (partial_two or partial_bp):\n+ # If using partial elimination rounds, the second round is the first for\n+ # the powers of two, so start counting from here.\n+ nprev_rounds -= 1\n+\n+ if nprev_rounds == 0 and nteamsindebate == 2:\n+ nteams = partial_break_round_split(break_size)[0] * 2\n+ else:\n+ # Subsequent rounds are half the previous, but always a power of 2\n+ nteams = 1 << (break_size.bit_length() - 1 - nprev_rounds)\n+ else:\n+ nteams = round.tournament.team_set.count()\n+\n+ npanels = nteams // nteamsindebate\n return [(0, 0, 0) for i in range(npanels)]\n \n # 1. Take the (actual) draw of the last round, with team points\ndiff --git a/tabbycat/draw/generator/utils.py b/tabbycat/draw/generator/utils.py\n--- a/tabbycat/draw/generator/utils.py\n+++ b/tabbycat/draw/generator/utils.py\n@@ -11,8 +11,8 @@\n \n \n def partial_break_round_split(break_size):\n- \"\"\"Returns a tuple `(debating, bypassing)`, where `debating` is how many\n- teams will debate in the first break round, and `bypassing` is how many\n+ \"\"\"Returns a tuple `(debates, bypassing)`, where `debating` is how many\n+ debates there is in the first break round, and `bypassing` is how many\n teams will bypass the first break round, qualifying directly for the\n second.\"\"\"\n", "issue": "Preformed panels supporting outrounds\nI have been having a play with preformed panels and from my quick attempt to generate them for outrounds, it seems to generate preformed panels as if it was generating panels for an additional preliminary round rather than a break round. 
\r\n\r\nFor example this is the preformed panels that generated when I generated preformed panels for quarter finals for one of our tournaments.\r\n\r\n\r\n\r\nWe did end up changing some thing to do with the round sequence for these rounds (we added 2 additional in rounds, deleted the octo finals and edited the sequence numbers, but this round is set up as per the settings below:\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"Functions for computing an anticipated draw.\"\"\"\n\nimport itertools\n\nfrom breakqual.utils import calculate_live_thresholds, determine_liveness\nfrom participants.prefetch import populate_win_counts\n\n\ndef calculate_anticipated_draw(round):\n \"\"\"Calculates an anticipated draw for the next round, based on the draw for\n the last round. Returns a list of tuples\n `(bracket_min, bracket_max, liveness)`,\n being the minimum and maximum brackets possible for that room, and the\n maximum number of teams that might be live in it. If the previous round's\n draw doesn't exist, it will just return an empty list.\n\n Procedure:\n 1. Take the (actual) draw of the last round, with team points\n 2. For each room, compute a (min, max) of outcomes for each team.\n 3. Take the min, divide into rooms to make the `bracket_min` for each room.\n 4. Take the max, divide into rooms to make the `bracket_max` for each room.\n\n `round` should be the round for which you want an anticipated draw (the\n \"next round\").\n \"\"\"\n\n nteamsindebate = 4 if round.tournament.pref('teams_in_debate') == 'bp' else 2\n\n if round.prev is None or not round.prev.debate_set.exists():\n # Special case: If this is the first round, everyone will be on zero.\n # Just take all teams, rounded down -- if this is done, it'll typically\n # be done before availability is locked down. Also do this if the last\n # round hasn't yet been drawn, since that's premature for bracket\n # predictions.\n npanels = round.tournament.team_set.count() // nteamsindebate\n return [(0, 0, 0) for i in range(npanels)]\n\n # 1. Take the (actual) draw of the last round, with team points\n debates = round.prev.debate_set_with_prefetches(ordering=('room_rank',),\n teams=True, adjudicators=False, speakers=False, venues=False)\n if round.prev.prev:\n populate_win_counts([team for debate in debates for team in debate.teams],\n round=round.prev.prev)\n else:\n # just say everyone is on zero (since no rounds have finished yet)\n for debate in debates:\n for team in debate.teams:\n team._points = 0\n\n # 2. Compute a (min, max) of outcomes for each team\n team_points_after = []\n points_available = [round.prev.weight * i for i in range(nteamsindebate)]\n for debate in debates:\n points_now = [team.points_count for team in debate.teams]\n highest = max(points_now)\n lowest = min(points_now)\n\n # Most cases will be single-point rooms or rooms with pull-ups from only\n # one bracket; in these cases it's easy to prove this closed-form\n # guarantee for what the teams in that room will look like afterwards.\n if highest - lowest <= 1:\n points_after = [(lowest+i, highest+i) for i in points_available]\n\n # For more complicated rooms (e.g. [9, 8, 8, 7]), it gets harder; just\n # use brute force. 
For few enough rooms this won't be too bad a hit.\n else:\n possible_outcomes = []\n for result in itertools.permutations(points_available):\n outcome = [n + r for n, r in zip(points_now, result)]\n outcome.sort(reverse=True)\n possible_outcomes.append(outcome)\n points_after = [(min(team_after), max(team_after)) for team_after in zip(*possible_outcomes)]\n\n team_points_after.extend(points_after)\n\n # 3. Take the min, divide into rooms to make the `bracket_min` for each room.\n # 4. Take the max, divide into rooms to make the `bracket_max` for each room.\n lowers, uppers = [sorted(x, reverse=True) for x in zip(*team_points_after)]\n brackets_min = [max(r) for r in zip(*([iter(lowers)] * nteamsindebate))]\n brackets_max = [max(r) for r in zip(*([iter(uppers)] * nteamsindebate))]\n\n open_category = round.tournament.breakcategory_set.filter(is_general=True).first()\n if open_category:\n live_thresholds = calculate_live_thresholds(open_category, round.tournament, round)\n liveness_by_lower = [determine_liveness(live_thresholds, x) for x in lowers]\n liveness_by_upper = [determine_liveness(live_thresholds, x) for x in uppers]\n liveness_by_team = [x == 'live' or y == 'live' for x, y in zip(liveness_by_lower, liveness_by_upper)]\n liveness = [x.count(True) for x in zip(*([iter(liveness_by_team)] * nteamsindebate))]\n else:\n liveness = [0] * len(debates)\n\n return zip(brackets_min, brackets_max, liveness)\n", "path": "tabbycat/adjallocation/preformed/anticipated.py"}, {"content": "\"\"\"Miscellaneous utilities for the draw.\"\"\"\n\n\ndef ispow2(n):\n \"\"\"Returns True if n is a power of 2. Works for positive integers only.\"\"\"\n return n & (n - 1) == 0\n\n\ndef nextpow2(n):\n return 1 << (n-1).bit_length()\n\n\ndef partial_break_round_split(break_size):\n \"\"\"Returns a tuple `(debating, bypassing)`, where `debating` is how many\n teams will debate in the first break round, and `bypassing` is how many\n teams will bypass the first break round, qualifying directly for the\n second.\"\"\"\n\n assert break_size > 1, \"break rounds only make sense for break_size > 1 (found %d)\" % (break_size,)\n\n teams_in_second_break_round = nextpow2(break_size) // 2\n debates = break_size - teams_in_second_break_round\n bypassing = teams_in_second_break_round - debates\n\n assert 2*debates + bypassing == break_size, \"2 * %d teams debating + %d teams bypassing doesn't add to break size %d\" % (debates, bypassing, break_size)\n assert debates > 0, \"%d <= 0 debates in first break round (%d teams bypassing)\" % (debates, bypassing)\n assert bypassing >= 0, \"%d < 0 teams bypassing (%d debates)\" % (bypassing, debates)\n return debates, bypassing\n", "path": "tabbycat/draw/generator/utils.py"}], "after_files": [{"content": "\"\"\"Functions for computing an anticipated draw.\"\"\"\n\nimport itertools\n\nfrom breakqual.utils import calculate_live_thresholds, determine_liveness\nfrom draw.generator.utils import ispow2, partial_break_round_split\nfrom participants.prefetch import populate_win_counts\n\n\ndef calculate_anticipated_draw(round):\n \"\"\"Calculates an anticipated draw for the next round, based on the draw for\n the last round. Returns a list of tuples\n `(bracket_min, bracket_max, liveness)`,\n being the minimum and maximum brackets possible for that room, and the\n maximum number of teams that might be live in it. If the previous round's\n draw doesn't exist, it will just return an empty list.\n\n Procedure:\n 1. Take the (actual) draw of the last round, with team points\n 2. 
For each room, compute a (min, max) of outcomes for each team.\n 3. Take the min, divide into rooms to make the `bracket_min` for each room.\n 4. Take the max, divide into rooms to make the `bracket_max` for each room.\n\n `round` should be the round for which you want an anticipated draw (the\n \"next round\").\n \"\"\"\n\n nteamsindebate = 4 if round.tournament.pref('teams_in_debate') == 'bp' else 2\n\n if round.prev is None or not round.prev.debate_set.exists() or round.is_break_round:\n # Special cases: If this is the first round, everyone will be on zero.\n # Just take all teams, rounded down -- if this is done, it'll typically\n # be done before availability is locked down. Also do this if the last\n # round hasn't yet been drawn, since that's premature for bracket\n # predictions.\n #\n # Also occurs for elimination rounds as everyone is just as live.\n\n nteams = 0\n if round.is_break_round:\n break_size = round.break_category.break_size\n nprev_rounds = round.break_category.round_set.filter(seq__lt=round.seq).count()\n partial_two = nteamsindebate == 2 and not ispow2(break_size)\n partial_bp = nteamsindebate == 4 and ispow2(break_size // 6)\n if nprev_rounds > 0 and (partial_two or partial_bp):\n # If using partial elimination rounds, the second round is the first for\n # the powers of two, so start counting from here.\n nprev_rounds -= 1\n\n if nprev_rounds == 0 and nteamsindebate == 2:\n nteams = partial_break_round_split(break_size)[0] * 2\n else:\n # Subsequent rounds are half the previous, but always a power of 2\n nteams = 1 << (break_size.bit_length() - 1 - nprev_rounds)\n else:\n nteams = round.tournament.team_set.count()\n\n npanels = nteams // nteamsindebate\n return [(0, 0, 0) for i in range(npanels)]\n\n # 1. Take the (actual) draw of the last round, with team points\n debates = round.prev.debate_set_with_prefetches(ordering=('room_rank',),\n teams=True, adjudicators=False, speakers=False, venues=False)\n if round.prev.prev:\n populate_win_counts([team for debate in debates for team in debate.teams],\n round=round.prev.prev)\n else:\n # just say everyone is on zero (since no rounds have finished yet)\n for debate in debates:\n for team in debate.teams:\n team._points = 0\n\n # 2. Compute a (min, max) of outcomes for each team\n team_points_after = []\n for debate in debates:\n points_now = [team.points_count for team in debate.teams]\n highest = max(points_now)\n lowest = min(points_now)\n\n # Most cases will be single-point rooms or rooms with pull-ups from only\n # one bracket; in these cases it's easy to prove this closed-form\n # guarantee for what the teams in that room will look like afterwards.\n if highest - lowest <= 1:\n points_after = [(lowest+i, highest+i) for i in range(nteamsindebate)]\n\n # For more complicated rooms (e.g. [9, 8, 8, 7]), it gets harder; just\n # use brute force. For few enough rooms this won't be too bad a hit.\n else:\n possible_outcomes = []\n for result in itertools.permutations(range(nteamsindebate)):\n outcome = [n + r for n, r in zip(points_now, result)]\n outcome.sort(reverse=True)\n possible_outcomes.append(outcome)\n points_after = [(min(team_after), max(team_after)) for team_after in zip(*possible_outcomes)]\n\n team_points_after.extend(points_after)\n\n # 3. Take the min, divide into rooms to make the `bracket_min` for each room.\n # 4. 
Take the max, divide into rooms to make the `bracket_max` for each room.\n lowers, uppers = [sorted(x, reverse=True) for x in zip(*team_points_after)]\n brackets_min = [max(r) for r in zip(*([iter(lowers)] * nteamsindebate))]\n brackets_max = [max(r) for r in zip(*([iter(uppers)] * nteamsindebate))]\n\n open_category = round.tournament.breakcategory_set.filter(is_general=True).first()\n if open_category:\n live_thresholds = calculate_live_thresholds(open_category, round.tournament, round)\n liveness_by_lower = [determine_liveness(live_thresholds, x) for x in lowers]\n liveness_by_upper = [determine_liveness(live_thresholds, x) for x in uppers]\n liveness_by_team = [x == 'live' or y == 'live' for x, y in zip(liveness_by_lower, liveness_by_upper)]\n liveness = [x.count(True) for x in zip(*([iter(liveness_by_team)] * nteamsindebate))]\n else:\n liveness = [0] * len(debates)\n\n return zip(brackets_min, brackets_max, liveness)\n", "path": "tabbycat/adjallocation/preformed/anticipated.py"}, {"content": "\"\"\"Miscellaneous utilities for the draw.\"\"\"\n\n\ndef ispow2(n):\n \"\"\"Returns True if n is a power of 2. Works for positive integers only.\"\"\"\n return n & (n - 1) == 0\n\n\ndef nextpow2(n):\n return 1 << (n-1).bit_length()\n\n\ndef partial_break_round_split(break_size):\n \"\"\"Returns a tuple `(debates, bypassing)`, where `debating` is how many\n debates there is in the first break round, and `bypassing` is how many\n teams will bypass the first break round, qualifying directly for the\n second.\"\"\"\n\n assert break_size > 1, \"break rounds only make sense for break_size > 1 (found %d)\" % (break_size,)\n\n teams_in_second_break_round = nextpow2(break_size) // 2\n debates = break_size - teams_in_second_break_round\n bypassing = teams_in_second_break_round - debates\n\n assert 2*debates + bypassing == break_size, \"2 * %d teams debating + %d teams bypassing doesn't add to break size %d\" % (debates, bypassing, break_size)\n assert debates > 0, \"%d <= 0 debates in first break round (%d teams bypassing)\" % (debates, bypassing)\n assert bypassing >= 0, \"%d < 0 teams bypassing (%d debates)\" % (bypassing, debates)\n return debates, bypassing\n", "path": "tabbycat/draw/generator/utils.py"}]}
| 2,207 | 800 |
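The Tabbycat record above patches `calculate_anticipated_draw` so that elimination rounds size their preformed panels from the break category rather than from the full team count. A rough standalone sketch of that panel-count arithmetic for the two-team format is below; it mirrors the patched branch (the BP partial-round case is omitted), and the 12-team break is an invented example rather than a figure from the issue.

```python
def ispow2(n):
    """True if n is a power of 2 (positive integers only)."""
    return n & (n - 1) == 0

def nextpow2(n):
    return 1 << (n - 1).bit_length()

def partial_break_round_split(break_size):
    """(debates, bypassing) for the first, partial elimination round."""
    teams_in_second_round = nextpow2(break_size) // 2
    debates = break_size - teams_in_second_round
    return debates, teams_in_second_round - debates

def anticipated_panels(break_size, nprev_rounds, teams_per_debate=2):
    """How many preformed panels an elimination round needs, given how many
    elimination rounds in the category precede it (two-team formats only)."""
    partial = teams_per_debate == 2 and not ispow2(break_size)
    if nprev_rounds > 0 and partial:
        nprev_rounds -= 1  # the second round is the first power-of-two round
    if nprev_rounds == 0 and teams_per_debate == 2:
        nteams = partial_break_round_split(break_size)[0] * 2
    else:
        nteams = 1 << (break_size.bit_length() - 1 - nprev_rounds)
    return nteams // teams_per_debate

# A 12-team break: a partial round of 4 debates, then 4, 2 and 1 debates.
print([anticipated_panels(12, r) for r in range(4)])  # [4, 4, 2, 1]
```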
| gh_patches_debug_29204 | rasdani/github-patches | git_diff | microsoft__botbuilder-python-426 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug in 45.state-management sample
To create the user profile property , it should refer the UserState but in the sample its referring the
conversationstate.
Current code : self.user_profile = self.conversation_state.create_property("UserProfile")
Expected code : self.user_profile = self.user_state.create_property("UserProfile")
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `samples/45.state-management/bots/state_management_bot.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 import time
5 import pytz
6 from datetime import datetime
7
8 from botbuilder.core import ActivityHandler, ConversationState, TurnContext, UserState
9 from botbuilder.schema import ChannelAccount
10
11 from data_models import ConversationData, UserProfile
12
13
14 class StateManagementBot(ActivityHandler):
15 def __init__(self, conversation_state: ConversationState, user_state: UserState):
16 if conversation_state is None:
17 raise TypeError(
18 "[StateManagementBot]: Missing parameter. conversation_state is required but None was given"
19 )
20 if user_state is None:
21 raise TypeError(
22 "[StateManagementBot]: Missing parameter. user_state is required but None was given"
23 )
24
25 self.conversation_state = conversation_state
26 self.user_state = user_state
27
28 self.conversation_data = self.conversation_state.create_property(
29 "ConversationData"
30 )
31 self.user_profile = self.conversation_state.create_property("UserProfile")
32
33 async def on_turn(self, turn_context: TurnContext):
34 await super().on_turn(turn_context)
35
36 await self.conversation_state.save_changes(turn_context)
37 await self.user_state.save_changes(turn_context)
38
39 async def on_members_added_activity(
40 self, members_added: [ChannelAccount], turn_context: TurnContext
41 ):
42 for member in members_added:
43 if member.id != turn_context.activity.recipient.id:
44 await turn_context.send_activity(
45 "Welcome to State Bot Sample. Type anything to get started."
46 )
47
48 async def on_message_activity(self, turn_context: TurnContext):
49 # Get the state properties from the turn context.
50 user_profile = await self.user_profile.get(turn_context, UserProfile)
51 conversation_data = await self.conversation_data.get(
52 turn_context, ConversationData
53 )
54
55 if user_profile.name is None:
56 # First time around this is undefined, so we will prompt user for name.
57 if conversation_data.prompted_for_user_name:
58 # Set the name to what the user provided.
59 user_profile.name = turn_context.activity.text
60
61 # Acknowledge that we got their name.
62 await turn_context.send_activity(
63 f"Thanks { user_profile.name }. To see conversation data, type anything."
64 )
65
66 # Reset the flag to allow the bot to go though the cycle again.
67 conversation_data.prompted_for_user_name = False
68 else:
69 # Prompt the user for their name.
70 await turn_context.send_activity("What is your name?")
71
72 # Set the flag to true, so we don't prompt in the next turn.
73 conversation_data.prompted_for_user_name = True
74 else:
75 # Add message details to the conversation data.
76 conversation_data.timestamp = self.__datetime_from_utc_to_local(
77 turn_context.activity.timestamp
78 )
79 conversation_data.channel_id = turn_context.activity.channel_id
80
81 # Display state data.
82 await turn_context.send_activity(
83 f"{ user_profile.name } sent: { turn_context.activity.text }"
84 )
85 await turn_context.send_activity(
86 f"Message received at: { conversation_data.timestamp }"
87 )
88 await turn_context.send_activity(
89 f"Message received from: { conversation_data.channel_id }"
90 )
91
92 def __datetime_from_utc_to_local(self, utc_datetime):
93 now_timestamp = time.time()
94 offset = datetime.fromtimestamp(now_timestamp) - datetime.utcfromtimestamp(
95 now_timestamp
96 )
97 result = utc_datetime + offset
98 return result.strftime("%I:%M:%S %p, %A, %B %d of %Y")
99
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/samples/45.state-management/bots/state_management_bot.py b/samples/45.state-management/bots/state_management_bot.py
--- a/samples/45.state-management/bots/state_management_bot.py
+++ b/samples/45.state-management/bots/state_management_bot.py
@@ -2,7 +2,6 @@
# Licensed under the MIT License.
import time
-import pytz
from datetime import datetime
from botbuilder.core import ActivityHandler, ConversationState, TurnContext, UserState
@@ -25,10 +24,10 @@
self.conversation_state = conversation_state
self.user_state = user_state
- self.conversation_data = self.conversation_state.create_property(
+ self.conversation_data_accessor = self.conversation_state.create_property(
"ConversationData"
)
- self.user_profile = self.conversation_state.create_property("UserProfile")
+ self.user_profile_accessor = self.user_state.create_property("UserProfile")
async def on_turn(self, turn_context: TurnContext):
await super().on_turn(turn_context)
@@ -47,8 +46,8 @@
async def on_message_activity(self, turn_context: TurnContext):
# Get the state properties from the turn context.
- user_profile = await self.user_profile.get(turn_context, UserProfile)
- conversation_data = await self.conversation_data.get(
+ user_profile = await self.user_profile_accessor.get(turn_context, UserProfile)
+ conversation_data = await self.conversation_data_accessor.get(
turn_context, ConversationData
)
|
{"golden_diff": "diff --git a/samples/45.state-management/bots/state_management_bot.py b/samples/45.state-management/bots/state_management_bot.py\n--- a/samples/45.state-management/bots/state_management_bot.py\n+++ b/samples/45.state-management/bots/state_management_bot.py\n@@ -2,7 +2,6 @@\n # Licensed under the MIT License.\n \n import time\n-import pytz\n from datetime import datetime\n \n from botbuilder.core import ActivityHandler, ConversationState, TurnContext, UserState\n@@ -25,10 +24,10 @@\n self.conversation_state = conversation_state\n self.user_state = user_state\n \n- self.conversation_data = self.conversation_state.create_property(\n+ self.conversation_data_accessor = self.conversation_state.create_property(\n \"ConversationData\"\n )\n- self.user_profile = self.conversation_state.create_property(\"UserProfile\")\n+ self.user_profile_accessor = self.user_state.create_property(\"UserProfile\")\n \n async def on_turn(self, turn_context: TurnContext):\n await super().on_turn(turn_context)\n@@ -47,8 +46,8 @@\n \n async def on_message_activity(self, turn_context: TurnContext):\n # Get the state properties from the turn context.\n- user_profile = await self.user_profile.get(turn_context, UserProfile)\n- conversation_data = await self.conversation_data.get(\n+ user_profile = await self.user_profile_accessor.get(turn_context, UserProfile)\n+ conversation_data = await self.conversation_data_accessor.get(\n turn_context, ConversationData\n )\n", "issue": "Bug in 45.state-management sample\n\r\nTo create the user profile property , it should refer the UserState but in the sample its referring the \r\nconversationstate.\r\n\r\nCurrent code : self.user_profile = self.conversation_state.create_property(\"UserProfile\")\r\n\r\nExpected code : self.user_profile = self.user_state.create_property(\"UserProfile\")\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nimport time\nimport pytz\nfrom datetime import datetime\n\nfrom botbuilder.core import ActivityHandler, ConversationState, TurnContext, UserState\nfrom botbuilder.schema import ChannelAccount\n\nfrom data_models import ConversationData, UserProfile\n\n\nclass StateManagementBot(ActivityHandler):\n def __init__(self, conversation_state: ConversationState, user_state: UserState):\n if conversation_state is None:\n raise TypeError(\n \"[StateManagementBot]: Missing parameter. conversation_state is required but None was given\"\n )\n if user_state is None:\n raise TypeError(\n \"[StateManagementBot]: Missing parameter. user_state is required but None was given\"\n )\n\n self.conversation_state = conversation_state\n self.user_state = user_state\n\n self.conversation_data = self.conversation_state.create_property(\n \"ConversationData\"\n )\n self.user_profile = self.conversation_state.create_property(\"UserProfile\")\n\n async def on_turn(self, turn_context: TurnContext):\n await super().on_turn(turn_context)\n\n await self.conversation_state.save_changes(turn_context)\n await self.user_state.save_changes(turn_context)\n\n async def on_members_added_activity(\n self, members_added: [ChannelAccount], turn_context: TurnContext\n ):\n for member in members_added:\n if member.id != turn_context.activity.recipient.id:\n await turn_context.send_activity(\n \"Welcome to State Bot Sample. 
Type anything to get started.\"\n )\n\n async def on_message_activity(self, turn_context: TurnContext):\n # Get the state properties from the turn context.\n user_profile = await self.user_profile.get(turn_context, UserProfile)\n conversation_data = await self.conversation_data.get(\n turn_context, ConversationData\n )\n\n if user_profile.name is None:\n # First time around this is undefined, so we will prompt user for name.\n if conversation_data.prompted_for_user_name:\n # Set the name to what the user provided.\n user_profile.name = turn_context.activity.text\n\n # Acknowledge that we got their name.\n await turn_context.send_activity(\n f\"Thanks { user_profile.name }. To see conversation data, type anything.\"\n )\n\n # Reset the flag to allow the bot to go though the cycle again.\n conversation_data.prompted_for_user_name = False\n else:\n # Prompt the user for their name.\n await turn_context.send_activity(\"What is your name?\")\n\n # Set the flag to true, so we don't prompt in the next turn.\n conversation_data.prompted_for_user_name = True\n else:\n # Add message details to the conversation data.\n conversation_data.timestamp = self.__datetime_from_utc_to_local(\n turn_context.activity.timestamp\n )\n conversation_data.channel_id = turn_context.activity.channel_id\n\n # Display state data.\n await turn_context.send_activity(\n f\"{ user_profile.name } sent: { turn_context.activity.text }\"\n )\n await turn_context.send_activity(\n f\"Message received at: { conversation_data.timestamp }\"\n )\n await turn_context.send_activity(\n f\"Message received from: { conversation_data.channel_id }\"\n )\n\n def __datetime_from_utc_to_local(self, utc_datetime):\n now_timestamp = time.time()\n offset = datetime.fromtimestamp(now_timestamp) - datetime.utcfromtimestamp(\n now_timestamp\n )\n result = utc_datetime + offset\n return result.strftime(\"%I:%M:%S %p, %A, %B %d of %Y\")\n", "path": "samples/45.state-management/bots/state_management_bot.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nimport time\nfrom datetime import datetime\n\nfrom botbuilder.core import ActivityHandler, ConversationState, TurnContext, UserState\nfrom botbuilder.schema import ChannelAccount\n\nfrom data_models import ConversationData, UserProfile\n\n\nclass StateManagementBot(ActivityHandler):\n def __init__(self, conversation_state: ConversationState, user_state: UserState):\n if conversation_state is None:\n raise TypeError(\n \"[StateManagementBot]: Missing parameter. conversation_state is required but None was given\"\n )\n if user_state is None:\n raise TypeError(\n \"[StateManagementBot]: Missing parameter. user_state is required but None was given\"\n )\n\n self.conversation_state = conversation_state\n self.user_state = user_state\n\n self.conversation_data_accessor = self.conversation_state.create_property(\n \"ConversationData\"\n )\n self.user_profile_accessor = self.user_state.create_property(\"UserProfile\")\n\n async def on_turn(self, turn_context: TurnContext):\n await super().on_turn(turn_context)\n\n await self.conversation_state.save_changes(turn_context)\n await self.user_state.save_changes(turn_context)\n\n async def on_members_added_activity(\n self, members_added: [ChannelAccount], turn_context: TurnContext\n ):\n for member in members_added:\n if member.id != turn_context.activity.recipient.id:\n await turn_context.send_activity(\n \"Welcome to State Bot Sample. 
Type anything to get started.\"\n )\n\n async def on_message_activity(self, turn_context: TurnContext):\n # Get the state properties from the turn context.\n user_profile = await self.user_profile_accessor.get(turn_context, UserProfile)\n conversation_data = await self.conversation_data_accessor.get(\n turn_context, ConversationData\n )\n\n if user_profile.name is None:\n # First time around this is undefined, so we will prompt user for name.\n if conversation_data.prompted_for_user_name:\n # Set the name to what the user provided.\n user_profile.name = turn_context.activity.text\n\n # Acknowledge that we got their name.\n await turn_context.send_activity(\n f\"Thanks { user_profile.name }. To see conversation data, type anything.\"\n )\n\n # Reset the flag to allow the bot to go though the cycle again.\n conversation_data.prompted_for_user_name = False\n else:\n # Prompt the user for their name.\n await turn_context.send_activity(\"What is your name?\")\n\n # Set the flag to true, so we don't prompt in the next turn.\n conversation_data.prompted_for_user_name = True\n else:\n # Add message details to the conversation data.\n conversation_data.timestamp = self.__datetime_from_utc_to_local(\n turn_context.activity.timestamp\n )\n conversation_data.channel_id = turn_context.activity.channel_id\n\n # Display state data.\n await turn_context.send_activity(\n f\"{ user_profile.name } sent: { turn_context.activity.text }\"\n )\n await turn_context.send_activity(\n f\"Message received at: { conversation_data.timestamp }\"\n )\n await turn_context.send_activity(\n f\"Message received from: { conversation_data.channel_id }\"\n )\n\n def __datetime_from_utc_to_local(self, utc_datetime):\n now_timestamp = time.time()\n offset = datetime.fromtimestamp(now_timestamp) - datetime.utcfromtimestamp(\n now_timestamp\n )\n result = utc_datetime + offset\n return result.strftime(\"%I:%M:%S %p, %A, %B %d of %Y\")\n", "path": "samples/45.state-management/bots/state_management_bot.py"}]}
| 1,280 | 334 |
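The fix in the record above comes down to which state object backs each property accessor. A minimal sketch of the corrected wiring is below; it assumes the botbuilder SDK is installed and uses `MemoryStorage` purely for illustration, since the record itself does not show how the states are constructed.

```python
from botbuilder.core import ConversationState, MemoryStorage, UserState

storage = MemoryStorage()
conversation_state = ConversationState(storage)
user_state = UserState(storage)

# Conversation-scoped data belongs to ConversationState...
conversation_data_accessor = conversation_state.create_property("ConversationData")

# ...but the user profile must be created on UserState, which is exactly the
# one-line change the issue asks for.
user_profile_accessor = user_state.create_property("UserProfile")
```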
| gh_patches_debug_18487 | rasdani/github-patches | git_diff | kymatio__kymatio-308 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BUG `scattering3d_qm7.py` crashes on GPU
The input is on the GPU, but the scattering object has not been put in GPU mode, see https://github.com/kymatio/kymatio/blob/master/examples/3d/scattering3d_qm7.py#L212.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/3d/scattering3d_qm7.py`
Content:
```
1 """
2 3D scattering quantum chemistry regression
3 ==========================================
4 This uses the 3D scattering on a standard dataset.
5 """
6
7 import numpy as np
8 import time
9 import torch
10 import os
11
12 from sklearn import linear_model, model_selection, preprocessing, pipeline
13 from kymatio.scattering3d import HarmonicScattering3D
14 from kymatio.scattering3d.utils import compute_integrals, generate_weighted_sum_of_gaussians
15 from kymatio.datasets import fetch_qm7
16 from kymatio.caching import get_cache_dir
17 from scipy.spatial.distance import pdist
18
19
20 def evaluate_linear_regression(X, y, n_folds=5):
21 """
22 Evaluates linear ridge regression predictions of y using X.
23
24 Parameters
25 ----------
26 X: numpy array
27 input features, shape (N, D)
28 y: numpy array
29 target value, shape (N, 1)
30
31 """
32 n_datapoints = X.shape[0]
33 P = np.random.permutation(n_datapoints).reshape((n_folds, -1))
34 cross_val_folds = []
35
36 for i_fold in range(n_folds):
37 fold = (np.concatenate(P[np.arange(n_folds) != i_fold], axis=0), P[i_fold])
38 cross_val_folds.append(fold)
39
40 alphas = 10.**(-np.arange(0, 10))
41 for i, alpha in enumerate(alphas):
42 regressor = pipeline.make_pipeline(
43 preprocessing.StandardScaler(), linear_model.Ridge(alpha=alpha))
44 y_prediction = model_selection.cross_val_predict(
45 regressor, X=X, y=y, cv=cross_val_folds)
46 MAE = np.mean(np.abs(y_prediction - y))
47 RMSE = np.sqrt(np.mean((y_prediction - y)**2))
48 print('Ridge regression, alpha: {}, MAE: {}, RMSE: {}'.format(
49 alpha, MAE, RMSE))
50
51
52 def get_valence(charges):
53 """
54 Returns the number valence electrons of a particle given the
55 nuclear charge.
56
57 Parameters
58 ----------
59 charges: numpy array
60 array containing the nuclear charges, arbitrary size
61
62 Returns
63 -------
64 valence_charges : numpy array
65 same size as the input
66 """
67 return (
68 charges * (charges <= 2) +
69 (charges - 2) * np.logical_and(charges > 2, charges <= 10) +
70 (charges - 10) * np.logical_and(charges > 10, charges <= 18))
71
72
73 def get_qm7_energies():
74 """
75 Loads the energies of the molecules of the QM7 dataset.
76
77 Returns
78 -------
79 energies: numpy array
80 array containing the energies of the molecules
81 """
82 qm7 = fetch_qm7()
83 return qm7['energies']
84
85
86
87 def get_qm7_positions_and_charges(sigma, overlapping_precision=1e-1):
88 """
89 Loads the positions and charges of the molecules of the QM7 dataset.
90 QM7 is a dataset of 7165 organic molecules with up to 7 non-hydrogen
91 atoms, whose energies were computed with a quantun chemistry
92 computational method named Density Functional Theory.
93 This dataset has been made available to train machine learning models
94 to predict these energies.
95
96 Parameters
97 ----------
98 sigma : float
99 width parameter of the Gaussian that represents a particle
100
101 overlapping_precision : float, optional
102 affects the scaling of the positions. The positions are re-scaled
103 such that two Gaussian functions of width sigma centerd at the qm7
104 positions overlapp with amplitude <= the overlapping_precision
105
106 Returns
107 -------
108 positions, charges, valence_charges: torch arrays
109 array containing the positions, charges and valence charges
110 of the QM7 database molecules
111 """
112 qm7 = fetch_qm7(align=True)
113 positions = qm7['positions']
114 charges = qm7['charges'].astype('float32')
115 valence_charges = get_valence(charges)
116
117 # normalize positions
118 min_dist = np.inf
119 for i in range(positions.shape[0]):
120 n_atoms = np.sum(charges[i] != 0)
121 pos = positions[i, :n_atoms, :]
122 min_dist = min(min_dist, pdist(pos).min())
123 delta = sigma * np.sqrt(-8 * np.log(overlapping_precision))
124 positions = positions * delta / min_dist
125
126 return (torch.from_numpy(positions),
127 torch.from_numpy(charges),
128 torch.from_numpy(valence_charges))
129
130
131 def compute_qm7_solid_harmonic_scattering_coefficients(
132 M=192, N=128, O=96, sigma=2., J=2, L=3,
133 integral_powers=(0.5, 1., 2., 3.), batch_size=16):
134 """
135 Computes the scattering coefficients of the molecules of the
136 QM7 database. Channels used are full charges, valence charges
137 and core charges. Linear regression of the qm7 energies with
138 the given values gives MAE 2.75, RMSE 4.18 (kcal.mol-1).
139
140 Parameters
141 ----------
142 M, N, O: int
143 dimensions of the numerical grid
144 sigma : float
145 width parameter of the Gaussian that represents a particle
146 J: int
147 maximal scale of the solid harmonic wavelets
148 L: int
149 maximal first order of the solid harmonic wavelets
150 integral_powers: list of int
151 powers for the integrals
152 batch_size: int
153 size of the batch for computations
154
155 Returns
156 -------
157 order_0: torch tensor
158 array containing zeroth-order scattering coefficients
159 orders_1_and_2: torch tensor
160 array containing first- and second-order scattering coefficients
161 """
162 cuda = torch.cuda.is_available()
163 grid = torch.from_numpy(
164 np.fft.ifftshift(
165 np.mgrid[-M//2:-M//2+M, -N//2:-N//2+N, -O//2:-O//2+O].astype('float32'),
166 axes=(1, 2, 3)))
167 pos, full_charges, valence_charges = get_qm7_positions_and_charges(sigma)
168
169 if cuda:
170 grid = grid.cuda()
171 pos = pos.cuda()
172 full_charges = full_charges.cuda()
173 valence_charges = valence_charges.cuda()
174
175 n_molecules = pos.size(0)
176 n_batches = np.ceil(n_molecules / batch_size).astype(int)
177
178 scattering = HarmonicScattering3D(J=J, shape=(M, N, O), L=L, sigma_0=sigma)
179
180 order_0, order_1, order_2 = [], [], []
181 print('Computing solid harmonic scattering coefficients of {} molecules '
182 'of QM7 database on {}'.format(pos.size(0), 'GPU' if cuda else 'CPU'))
183 print('sigma: {}, L: {}, J: {}, integral powers: {}'.format(sigma, L, J, integral_powers))
184
185 this_time = None
186 last_time = None
187 for i in range(n_batches):
188 this_time = time.time()
189 if last_time is not None:
190 dt = this_time - last_time
191 print("Iteration {} ETA: [{:02}:{:02}:{:02}]".format(
192 i + 1, int(((n_batches - i - 1) * dt) // 3600),
193 int((((n_batches - i - 1) * dt) // 60) % 60),
194 int(((n_batches - i - 1) * dt) % 60)), end='\r')
195 else:
196 print("Iteration {} ETA: {}".format(i + 1,'-'),end='\r')
197 last_time = this_time
198 time.sleep(1)
199
200 start, end = i * batch_size, min((i + 1) * batch_size, n_molecules)
201
202 pos_batch = pos[start:end]
203 full_batch = full_charges[start:end]
204 val_batch = valence_charges[start:end]
205
206 full_density_batch = generate_weighted_sum_of_gaussians(
207 grid, pos_batch, full_batch, sigma, cuda=cuda)
208 full_order_0 = compute_integrals(full_density_batch, integral_powers)
209 scattering.max_order = 2
210 scattering.method = 'integral'
211 scattering.integral_powers = integral_powers
212 full_scattering = scattering(full_density_batch)
213
214 val_density_batch = generate_weighted_sum_of_gaussians(
215 grid, pos_batch, val_batch, sigma, cuda=cuda)
216 val_order_0 = compute_integrals(val_density_batch, integral_powers)
217 val_scattering= scattering(val_density_batch)
218
219 core_density_batch = full_density_batch - val_density_batch
220 core_order_0 = compute_integrals(core_density_batch, integral_powers)
221 core_scattering = scattering(core_density_batch)
222
223
224 order_0.append(
225 torch.stack([full_order_0, val_order_0, core_order_0], dim=-1))
226 orders_1_and_2.append(
227 torch.stack(
228 [full_scattering, val_scattering, core_scattering], dim=-1))
229
230 order_0 = torch.cat(order_0, dim=0)
231 orders_1_and_2 = torch.cat(orders_1_and_2, dim=0)
232
233 return order_0, orders_1_and_2
234
235 M, N, O, J, L = 192, 128, 96, 2, 3
236 integral_powers = [0.5, 1., 2., 3.]
237 sigma = 2.
238
239 order_0, orders_1_and_2 = compute_qm7_solid_harmonic_scattering_coefficients(
240 M=M, N=N, O=O, J=J, L=L, integral_powers=integral_powers,
241 sigma=sigma, batch_size=8)
242
243 n_molecules = order_0.size(0)
244
245 np_order_0 = order_0.numpy().reshape((n_molecules, -1))
246 np_orders_1_and_2 = orders_1_and_2.numpy().reshape((n_molecules, -1))
247
248 basename = 'qm7_L_{}_J_{}_sigma_{}_MNO_{}_powers_{}.npy'.format(
249 L, J, sigma, (M, N, O), integral_powers)
250 cachedir = get_cache_dir("qm7/experiments")
251 np.save(os.path.join(cachedir, 'order_0_' + basename), np_order_0)
252 np.save(os.path.join(
253 cachedir, 'orders_1_and_2_' + basename), np_orders_1_and_2)
254
255 scattering_coef = np.concatenate([np_order_0, np_orders_1_and_2], axis=1)
256 target = get_qm7_energies()
257
258 evaluate_linear_regression(scattering_coef, target)
259
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/examples/3d/scattering3d_qm7.py b/examples/3d/scattering3d_qm7.py
--- a/examples/3d/scattering3d_qm7.py
+++ b/examples/3d/scattering3d_qm7.py
@@ -166,16 +166,17 @@
axes=(1, 2, 3)))
pos, full_charges, valence_charges = get_qm7_positions_and_charges(sigma)
+ n_molecules = pos.size(0)
+ n_batches = np.ceil(n_molecules / batch_size).astype(int)
+
+ scattering = HarmonicScattering3D(J=J, shape=(M, N, O), L=L, sigma_0=sigma)
+
if cuda:
grid = grid.cuda()
pos = pos.cuda()
full_charges = full_charges.cuda()
valence_charges = valence_charges.cuda()
-
- n_molecules = pos.size(0)
- n_batches = np.ceil(n_molecules / batch_size).astype(int)
-
- scattering = HarmonicScattering3D(J=J, shape=(M, N, O), L=L, sigma_0=sigma)
+ scattering.cuda()
order_0, order_1, order_2 = [], [], []
print('Computing solid harmonic scattering coefficients of {} molecules '
|
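The patch above follows the usual PyTorch convention that a module and its inputs must live on the same device: since the densities are moved to CUDA, the scattering object needs `scattering.cuda()` as well. A minimal sketch of the fixed call pattern follows; the grid is deliberately tiny and the random tensor merely stands in for the molecular density batches computed in the example script.

```python
import torch
from kymatio.scattering3d import HarmonicScattering3D

M, N, O, J, L, sigma = 32, 32, 32, 2, 3, 2.0   # small illustrative grid
scattering = HarmonicScattering3D(J=J, shape=(M, N, O), L=L, sigma_0=sigma)
scattering.method = 'integral'
scattering.integral_powers = [0.5, 1.0, 2.0]

x = torch.randn(2, M, N, O)                    # stand-in for a density batch
if torch.cuda.is_available():
    x = x.cuda()
    scattering.cuda()   # the step the original example script was missing

print(scattering(x).shape)
```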
{"golden_diff": "diff --git a/examples/3d/scattering3d_qm7.py b/examples/3d/scattering3d_qm7.py\n--- a/examples/3d/scattering3d_qm7.py\n+++ b/examples/3d/scattering3d_qm7.py\n@@ -166,16 +166,17 @@\n axes=(1, 2, 3)))\n pos, full_charges, valence_charges = get_qm7_positions_and_charges(sigma)\n \n+ n_molecules = pos.size(0)\n+ n_batches = np.ceil(n_molecules / batch_size).astype(int)\n+\n+ scattering = HarmonicScattering3D(J=J, shape=(M, N, O), L=L, sigma_0=sigma)\n+\n if cuda:\n grid = grid.cuda()\n pos = pos.cuda()\n full_charges = full_charges.cuda()\n valence_charges = valence_charges.cuda()\n-\n- n_molecules = pos.size(0)\n- n_batches = np.ceil(n_molecules / batch_size).astype(int)\n-\n- scattering = HarmonicScattering3D(J=J, shape=(M, N, O), L=L, sigma_0=sigma)\n+ scattering.cuda()\n \n order_0, order_1, order_2 = [], [], []\n print('Computing solid harmonic scattering coefficients of {} molecules '\n", "issue": "BUG `scattering3d_qm7.py` crashes on GPU\nThe input is on the GPU, but the scattering object has not been put in GPU mode, see https://github.com/kymatio/kymatio/blob/master/examples/3d/scattering3d_qm7.py#L212.\n", "before_files": [{"content": "\"\"\"\n3D scattering quantum chemistry regression\n==========================================\nThis uses the 3D scattering on a standard dataset.\n\"\"\"\n\nimport numpy as np\nimport time\nimport torch\nimport os\n\nfrom sklearn import linear_model, model_selection, preprocessing, pipeline\nfrom kymatio.scattering3d import HarmonicScattering3D\nfrom kymatio.scattering3d.utils import compute_integrals, generate_weighted_sum_of_gaussians\nfrom kymatio.datasets import fetch_qm7\nfrom kymatio.caching import get_cache_dir\nfrom scipy.spatial.distance import pdist\n\n\ndef evaluate_linear_regression(X, y, n_folds=5):\n \"\"\"\n Evaluates linear ridge regression predictions of y using X.\n\n Parameters\n ----------\n X: numpy array\n input features, shape (N, D)\n y: numpy array\n target value, shape (N, 1)\n\n \"\"\"\n n_datapoints = X.shape[0]\n P = np.random.permutation(n_datapoints).reshape((n_folds, -1))\n cross_val_folds = []\n\n for i_fold in range(n_folds):\n fold = (np.concatenate(P[np.arange(n_folds) != i_fold], axis=0), P[i_fold])\n cross_val_folds.append(fold)\n\n alphas = 10.**(-np.arange(0, 10))\n for i, alpha in enumerate(alphas):\n regressor = pipeline.make_pipeline(\n preprocessing.StandardScaler(), linear_model.Ridge(alpha=alpha))\n y_prediction = model_selection.cross_val_predict(\n regressor, X=X, y=y, cv=cross_val_folds)\n MAE = np.mean(np.abs(y_prediction - y))\n RMSE = np.sqrt(np.mean((y_prediction - y)**2))\n print('Ridge regression, alpha: {}, MAE: {}, RMSE: {}'.format(\n alpha, MAE, RMSE))\n\n\ndef get_valence(charges):\n \"\"\"\n Returns the number valence electrons of a particle given the\n nuclear charge.\n\n Parameters\n ----------\n charges: numpy array\n array containing the nuclear charges, arbitrary size\n\n Returns\n -------\n valence_charges : numpy array\n same size as the input\n \"\"\"\n return (\n charges * (charges <= 2) +\n (charges - 2) * np.logical_and(charges > 2, charges <= 10) +\n (charges - 10) * np.logical_and(charges > 10, charges <= 18))\n\n\ndef get_qm7_energies():\n \"\"\"\n Loads the energies of the molecules of the QM7 dataset.\n\n Returns\n -------\n energies: numpy array\n array containing the energies of the molecules\n \"\"\"\n qm7 = fetch_qm7()\n return qm7['energies']\n\n\n\ndef get_qm7_positions_and_charges(sigma, overlapping_precision=1e-1):\n \"\"\"\n Loads the positions and 
charges of the molecules of the QM7 dataset.\n QM7 is a dataset of 7165 organic molecules with up to 7 non-hydrogen\n atoms, whose energies were computed with a quantun chemistry\n computational method named Density Functional Theory.\n This dataset has been made available to train machine learning models\n to predict these energies.\n\n Parameters\n ----------\n sigma : float\n width parameter of the Gaussian that represents a particle\n\n overlapping_precision : float, optional\n affects the scaling of the positions. The positions are re-scaled\n such that two Gaussian functions of width sigma centerd at the qm7\n positions overlapp with amplitude <= the overlapping_precision\n\n Returns\n -------\n positions, charges, valence_charges: torch arrays\n array containing the positions, charges and valence charges\n of the QM7 database molecules\n \"\"\"\n qm7 = fetch_qm7(align=True)\n positions = qm7['positions']\n charges = qm7['charges'].astype('float32')\n valence_charges = get_valence(charges)\n\n # normalize positions\n min_dist = np.inf\n for i in range(positions.shape[0]):\n n_atoms = np.sum(charges[i] != 0)\n pos = positions[i, :n_atoms, :]\n min_dist = min(min_dist, pdist(pos).min())\n delta = sigma * np.sqrt(-8 * np.log(overlapping_precision))\n positions = positions * delta / min_dist\n\n return (torch.from_numpy(positions),\n torch.from_numpy(charges),\n torch.from_numpy(valence_charges))\n\n\ndef compute_qm7_solid_harmonic_scattering_coefficients(\n M=192, N=128, O=96, sigma=2., J=2, L=3,\n integral_powers=(0.5, 1., 2., 3.), batch_size=16):\n \"\"\"\n Computes the scattering coefficients of the molecules of the\n QM7 database. Channels used are full charges, valence charges\n and core charges. Linear regression of the qm7 energies with\n the given values gives MAE 2.75, RMSE 4.18 (kcal.mol-1).\n\n Parameters\n ----------\n M, N, O: int\n dimensions of the numerical grid\n sigma : float\n width parameter of the Gaussian that represents a particle\n J: int\n maximal scale of the solid harmonic wavelets\n L: int\n maximal first order of the solid harmonic wavelets\n integral_powers: list of int\n powers for the integrals\n batch_size: int\n size of the batch for computations\n\n Returns\n -------\n order_0: torch tensor\n array containing zeroth-order scattering coefficients\n orders_1_and_2: torch tensor\n array containing first- and second-order scattering coefficients\n \"\"\"\n cuda = torch.cuda.is_available()\n grid = torch.from_numpy(\n np.fft.ifftshift(\n np.mgrid[-M//2:-M//2+M, -N//2:-N//2+N, -O//2:-O//2+O].astype('float32'),\n axes=(1, 2, 3)))\n pos, full_charges, valence_charges = get_qm7_positions_and_charges(sigma)\n\n if cuda:\n grid = grid.cuda()\n pos = pos.cuda()\n full_charges = full_charges.cuda()\n valence_charges = valence_charges.cuda()\n\n n_molecules = pos.size(0)\n n_batches = np.ceil(n_molecules / batch_size).astype(int)\n\n scattering = HarmonicScattering3D(J=J, shape=(M, N, O), L=L, sigma_0=sigma)\n\n order_0, order_1, order_2 = [], [], []\n print('Computing solid harmonic scattering coefficients of {} molecules '\n 'of QM7 database on {}'.format(pos.size(0), 'GPU' if cuda else 'CPU'))\n print('sigma: {}, L: {}, J: {}, integral powers: {}'.format(sigma, L, J, integral_powers))\n\n this_time = None\n last_time = None\n for i in range(n_batches):\n this_time = time.time()\n if last_time is not None:\n dt = this_time - last_time\n print(\"Iteration {} ETA: [{:02}:{:02}:{:02}]\".format(\n i + 1, int(((n_batches - i - 1) * dt) // 3600),\n int((((n_batches - i 
- 1) * dt) // 60) % 60),\n int(((n_batches - i - 1) * dt) % 60)), end='\\r')\n else:\n print(\"Iteration {} ETA: {}\".format(i + 1,'-'),end='\\r')\n last_time = this_time\n time.sleep(1)\n\n start, end = i * batch_size, min((i + 1) * batch_size, n_molecules)\n\n pos_batch = pos[start:end]\n full_batch = full_charges[start:end]\n val_batch = valence_charges[start:end]\n\n full_density_batch = generate_weighted_sum_of_gaussians(\n grid, pos_batch, full_batch, sigma, cuda=cuda)\n full_order_0 = compute_integrals(full_density_batch, integral_powers)\n scattering.max_order = 2\n scattering.method = 'integral'\n scattering.integral_powers = integral_powers\n full_scattering = scattering(full_density_batch)\n\n val_density_batch = generate_weighted_sum_of_gaussians(\n grid, pos_batch, val_batch, sigma, cuda=cuda)\n val_order_0 = compute_integrals(val_density_batch, integral_powers)\n val_scattering= scattering(val_density_batch)\n\n core_density_batch = full_density_batch - val_density_batch\n core_order_0 = compute_integrals(core_density_batch, integral_powers)\n core_scattering = scattering(core_density_batch)\n\n\n order_0.append(\n torch.stack([full_order_0, val_order_0, core_order_0], dim=-1))\n orders_1_and_2.append(\n torch.stack(\n [full_scattering, val_scattering, core_scattering], dim=-1))\n\n order_0 = torch.cat(order_0, dim=0)\n orders_1_and_2 = torch.cat(orders_1_and_2, dim=0)\n\n return order_0, orders_1_and_2\n\nM, N, O, J, L = 192, 128, 96, 2, 3\nintegral_powers = [0.5, 1., 2., 3.]\nsigma = 2.\n\norder_0, orders_1_and_2 = compute_qm7_solid_harmonic_scattering_coefficients(\n M=M, N=N, O=O, J=J, L=L, integral_powers=integral_powers,\n sigma=sigma, batch_size=8)\n\nn_molecules = order_0.size(0)\n\nnp_order_0 = order_0.numpy().reshape((n_molecules, -1))\nnp_orders_1_and_2 = orders_1_and_2.numpy().reshape((n_molecules, -1))\n\nbasename = 'qm7_L_{}_J_{}_sigma_{}_MNO_{}_powers_{}.npy'.format(\n L, J, sigma, (M, N, O), integral_powers)\ncachedir = get_cache_dir(\"qm7/experiments\")\nnp.save(os.path.join(cachedir, 'order_0_' + basename), np_order_0)\nnp.save(os.path.join(\n cachedir, 'orders_1_and_2_' + basename), np_orders_1_and_2)\n\nscattering_coef = np.concatenate([np_order_0, np_orders_1_and_2], axis=1)\ntarget = get_qm7_energies()\n\nevaluate_linear_regression(scattering_coef, target)\n", "path": "examples/3d/scattering3d_qm7.py"}], "after_files": [{"content": "\"\"\"\n3D scattering quantum chemistry regression\n==========================================\nThis uses the 3D scattering on a standard dataset.\n\"\"\"\n\nimport numpy as np\nimport time\nimport torch\nimport os\n\nfrom sklearn import linear_model, model_selection, preprocessing, pipeline\nfrom kymatio.scattering3d import HarmonicScattering3D\nfrom kymatio.scattering3d.utils import compute_integrals, generate_weighted_sum_of_gaussians\nfrom kymatio.datasets import fetch_qm7\nfrom kymatio.caching import get_cache_dir\nfrom scipy.spatial.distance import pdist\n\n\ndef evaluate_linear_regression(X, y, n_folds=5):\n \"\"\"\n Evaluates linear ridge regression predictions of y using X.\n\n Parameters\n ----------\n X: numpy array\n input features, shape (N, D)\n y: numpy array\n target value, shape (N, 1)\n\n \"\"\"\n n_datapoints = X.shape[0]\n P = np.random.permutation(n_datapoints).reshape((n_folds, -1))\n cross_val_folds = []\n\n for i_fold in range(n_folds):\n fold = (np.concatenate(P[np.arange(n_folds) != i_fold], axis=0), P[i_fold])\n cross_val_folds.append(fold)\n\n alphas = 10.**(-np.arange(0, 10))\n for i, alpha in 
enumerate(alphas):\n regressor = pipeline.make_pipeline(\n preprocessing.StandardScaler(), linear_model.Ridge(alpha=alpha))\n y_prediction = model_selection.cross_val_predict(\n regressor, X=X, y=y, cv=cross_val_folds)\n MAE = np.mean(np.abs(y_prediction - y))\n RMSE = np.sqrt(np.mean((y_prediction - y)**2))\n print('Ridge regression, alpha: {}, MAE: {}, RMSE: {}'.format(\n alpha, MAE, RMSE))\n\n\ndef get_valence(charges):\n \"\"\"\n Returns the number valence electrons of a particle given the\n nuclear charge.\n\n Parameters\n ----------\n charges: numpy array\n array containing the nuclear charges, arbitrary size\n\n Returns\n -------\n valence_charges : numpy array\n same size as the input\n \"\"\"\n return (\n charges * (charges <= 2) +\n (charges - 2) * np.logical_and(charges > 2, charges <= 10) +\n (charges - 10) * np.logical_and(charges > 10, charges <= 18))\n\n\ndef get_qm7_energies():\n \"\"\"\n Loads the energies of the molecules of the QM7 dataset.\n\n Returns\n -------\n energies: numpy array\n array containing the energies of the molecules\n \"\"\"\n qm7 = fetch_qm7()\n return qm7['energies']\n\n\n\ndef get_qm7_positions_and_charges(sigma, overlapping_precision=1e-1):\n \"\"\"\n Loads the positions and charges of the molecules of the QM7 dataset.\n QM7 is a dataset of 7165 organic molecules with up to 7 non-hydrogen\n atoms, whose energies were computed with a quantun chemistry\n computational method named Density Functional Theory.\n This dataset has been made available to train machine learning models\n to predict these energies.\n\n Parameters\n ----------\n sigma : float\n width parameter of the Gaussian that represents a particle\n\n overlapping_precision : float, optional\n affects the scaling of the positions. The positions are re-scaled\n such that two Gaussian functions of width sigma centerd at the qm7\n positions overlapp with amplitude <= the overlapping_precision\n\n Returns\n -------\n positions, charges, valence_charges: torch arrays\n array containing the positions, charges and valence charges\n of the QM7 database molecules\n \"\"\"\n qm7 = fetch_qm7(align=True)\n positions = qm7['positions']\n charges = qm7['charges'].astype('float32')\n valence_charges = get_valence(charges)\n\n # normalize positions\n min_dist = np.inf\n for i in range(positions.shape[0]):\n n_atoms = np.sum(charges[i] != 0)\n pos = positions[i, :n_atoms, :]\n min_dist = min(min_dist, pdist(pos).min())\n delta = sigma * np.sqrt(-8 * np.log(overlapping_precision))\n positions = positions * delta / min_dist\n\n return (torch.from_numpy(positions),\n torch.from_numpy(charges),\n torch.from_numpy(valence_charges))\n\n\ndef compute_qm7_solid_harmonic_scattering_coefficients(\n M=192, N=128, O=96, sigma=2., J=2, L=3,\n integral_powers=(0.5, 1., 2., 3.), batch_size=16):\n \"\"\"\n Computes the scattering coefficients of the molecules of the\n QM7 database. Channels used are full charges, valence charges\n and core charges. 
Linear regression of the qm7 energies with\n the given values gives MAE 2.75, RMSE 4.18 (kcal.mol-1).\n\n Parameters\n ----------\n M, N, O: int\n dimensions of the numerical grid\n sigma : float\n width parameter of the Gaussian that represents a particle\n J: int\n maximal scale of the solid harmonic wavelets\n L: int\n maximal first order of the solid harmonic wavelets\n integral_powers: list of int\n powers for the integrals\n batch_size: int\n size of the batch for computations\n\n Returns\n -------\n order_0: torch tensor\n array containing zeroth-order scattering coefficients\n orders_1_and_2: torch tensor\n array containing first- and second-order scattering coefficients\n \"\"\"\n cuda = torch.cuda.is_available()\n grid = torch.from_numpy(\n np.fft.ifftshift(\n np.mgrid[-M//2:-M//2+M, -N//2:-N//2+N, -O//2:-O//2+O].astype('float32'),\n axes=(1, 2, 3)))\n pos, full_charges, valence_charges = get_qm7_positions_and_charges(sigma)\n\n n_molecules = pos.size(0)\n n_batches = np.ceil(n_molecules / batch_size).astype(int)\n\n scattering = HarmonicScattering3D(J=J, shape=(M, N, O), L=L, sigma_0=sigma)\n\n if cuda:\n grid = grid.cuda()\n pos = pos.cuda()\n full_charges = full_charges.cuda()\n valence_charges = valence_charges.cuda()\n scattering.cuda()\n\n order_0, order_1, order_2 = [], [], []\n print('Computing solid harmonic scattering coefficients of {} molecules '\n 'of QM7 database on {}'.format(pos.size(0), 'GPU' if cuda else 'CPU'))\n print('sigma: {}, L: {}, J: {}, integral powers: {}'.format(sigma, L, J, integral_powers))\n\n this_time = None\n last_time = None\n for i in range(n_batches):\n this_time = time.time()\n if last_time is not None:\n dt = this_time - last_time\n print(\"Iteration {} ETA: [{:02}:{:02}:{:02}]\".format(\n i + 1, int(((n_batches - i - 1) * dt) // 3600),\n int((((n_batches - i - 1) * dt) // 60) % 60),\n int(((n_batches - i - 1) * dt) % 60)), end='\\r')\n else:\n print(\"Iteration {} ETA: {}\".format(i + 1,'-'),end='\\r')\n last_time = this_time\n time.sleep(1)\n\n start, end = i * batch_size, min((i + 1) * batch_size, n_molecules)\n\n pos_batch = pos[start:end]\n full_batch = full_charges[start:end]\n val_batch = valence_charges[start:end]\n\n full_density_batch = generate_weighted_sum_of_gaussians(\n grid, pos_batch, full_batch, sigma, cuda=cuda)\n full_order_0 = compute_integrals(full_density_batch, integral_powers)\n scattering.max_order = 2\n scattering.method = 'integral'\n scattering.integral_powers = integral_powers\n full_scattering = scattering(full_density_batch)\n\n val_density_batch = generate_weighted_sum_of_gaussians(\n grid, pos_batch, val_batch, sigma, cuda=cuda)\n val_order_0 = compute_integrals(val_density_batch, integral_powers)\n val_scattering= scattering(val_density_batch)\n\n core_density_batch = full_density_batch - val_density_batch\n core_order_0 = compute_integrals(core_density_batch, integral_powers)\n core_scattering = scattering(core_density_batch)\n\n\n order_0.append(\n torch.stack([full_order_0, val_order_0, core_order_0], dim=-1))\n orders_1_and_2.append(\n torch.stack(\n [full_scattering, val_scattering, core_scattering], dim=-1))\n\n order_0 = torch.cat(order_0, dim=0)\n orders_1_and_2 = torch.cat(orders_1_and_2, dim=0)\n\n return order_0, orders_1_and_2\n\nM, N, O, J, L = 192, 128, 96, 2, 3\nintegral_powers = [0.5, 1., 2., 3.]\nsigma = 2.\n\norder_0, orders_1_and_2 = compute_qm7_solid_harmonic_scattering_coefficients(\n M=M, N=N, O=O, J=J, L=L, integral_powers=integral_powers,\n sigma=sigma, batch_size=8)\n\nn_molecules = 
order_0.size(0)\n\nnp_order_0 = order_0.numpy().reshape((n_molecules, -1))\nnp_orders_1_and_2 = orders_1_and_2.numpy().reshape((n_molecules, -1))\n\nbasename = 'qm7_L_{}_J_{}_sigma_{}_MNO_{}_powers_{}.npy'.format(\n L, J, sigma, (M, N, O), integral_powers)\ncachedir = get_cache_dir(\"qm7/experiments\")\nnp.save(os.path.join(cachedir, 'order_0_' + basename), np_order_0)\nnp.save(os.path.join(\n cachedir, 'orders_1_and_2_' + basename), np_orders_1_and_2)\n\nscattering_coef = np.concatenate([np_order_0, np_orders_1_and_2], axis=1)\ntarget = get_qm7_energies()\n\nevaluate_linear_regression(scattering_coef, target)\n", "path": "examples/3d/scattering3d_qm7.py"}]}
| 3,429 | 305 |
gh_patches_debug_10133
|
rasdani/github-patches
|
git_diff
|
mozmeao__snippets-service-818
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ASR Snippet Admin: Be able to search for a snippet by Snippet ID in top search bar on list view page
Currently the search bar only allows for searching keywords.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `snippets/base/admin/adminmodels.py`
Content:
```
1 import re
2
3 from django.contrib import admin
4 from django.db.models import TextField, Q
5 from django.template.loader import get_template
6 from django.utils.safestring import mark_safe
7
8 from reversion.admin import VersionAdmin
9 from django_ace import AceWidget
10 from django_statsd.clients import statsd
11 from jinja2.meta import find_undeclared_variables
12 from django_admin_listfilter_dropdown.filters import RelatedDropdownFilter
13
14 from snippets.base import forms, models
15 from snippets.base.models import JINJA_ENV
16 from snippets.base.admin.filters import ModifiedFilter, ReleaseFilter
17 from snippets.base.admin.actions import duplicate_snippets_action
18
19
20 MATCH_LOCALE_REGEX = re.compile('(\w+(?:-\w+)*)')
21 RESERVED_VARIABLES = ('_', 'snippet_id')
22
23
24 class ClientMatchRuleAdmin(VersionAdmin, admin.ModelAdmin):
25 list_display = ('description', 'is_exclusion', 'startpage_version', 'name',
26 'version', 'locale', 'appbuildid', 'build_target',
27 'channel', 'os_version', 'distribution',
28 'distribution_version', 'modified')
29 list_filter = ('name', 'version', 'os_version', 'appbuildid',
30 'build_target', 'channel', 'distribution', 'locale')
31 save_on_top = True
32 search_fields = ('description',)
33
34
35 class LogEntryAdmin(admin.ModelAdmin):
36 list_display = ('user', 'content_type', 'object_id', 'object_repr', 'change_message')
37 list_filter = ('user', 'content_type')
38
39
40 class SnippetTemplateVariableInline(admin.TabularInline):
41 model = models.SnippetTemplateVariable
42 formset = forms.SnippetTemplateVariableInlineFormset
43 max_num = 0
44 can_delete = False
45 readonly_fields = ('name',)
46 fields = ('name', 'type', 'order', 'description')
47
48
49 class SnippetTemplateAdmin(VersionAdmin, admin.ModelAdmin):
50 save_on_top = True
51 list_display = ('name', 'priority', 'hidden')
52 list_filter = ('hidden', 'startpage')
53 inlines = (SnippetTemplateVariableInline,)
54 formfield_overrides = {
55 TextField: {'widget': AceWidget(mode='html', theme='github',
56 width='1200px', height='500px')},
57 }
58
59 class Media:
60 css = {
61 'all': ('css/admin.css',)
62 }
63
64 def save_related(self, request, form, formsets, change):
65 """
66 After saving the related objects, remove and add
67 SnippetTemplateVariables depending on how the template code changed.
68 """
69 super(SnippetTemplateAdmin, self).save_related(request, form, formsets,
70 change)
71
72 # Parse the template code and find any undefined variables.
73 ast = JINJA_ENV.env.parse(form.instance.code)
74 new_vars = find_undeclared_variables(ast)
75 var_manager = form.instance.variable_set
76
77 # Filter out reserved variable names.
78 new_vars = [x for x in new_vars if x not in RESERVED_VARIABLES]
79
80 # Delete variables not in the new set.
81 var_manager.filter(~Q(name__in=new_vars)).delete()
82
83 # Create variables that don't exist.
84 for i, variable in enumerate(new_vars, start=1):
85 obj, _ = models.SnippetTemplateVariable.objects.get_or_create(
86 template=form.instance, name=variable)
87 if obj.order == 0:
88 obj.order = i * 10
89 obj.save()
90
91
92 class UploadedFileAdmin(admin.ModelAdmin):
93 readonly_fields = ('url', 'preview', 'snippets')
94 list_display = ('name', 'url', 'preview', 'modified')
95 prepopulated_fields = {'name': ('file',)}
96 form = forms.UploadedFileAdminForm
97
98 def preview(self, obj):
99 template = get_template('base/uploadedfile_preview.jinja')
100 return mark_safe(template.render({'file': obj}))
101
102 def snippets(self, obj):
103 """Snippets using this file."""
104 template = get_template('base/uploadedfile_snippets.jinja')
105 return mark_safe(template.render({'snippets': obj.snippets}))
106
107
108 class AddonAdmin(admin.ModelAdmin):
109 list_display = ('name', 'guid')
110
111
112 class ASRSnippetAdmin(admin.ModelAdmin):
113 form = forms.ASRSnippetAdminForm
114
115 list_display_links = (
116 'id',
117 'name',
118 )
119 list_display = (
120 'id',
121 'name',
122 'status',
123 'modified',
124 )
125 list_filter = (
126 ModifiedFilter,
127 'status',
128 ReleaseFilter,
129 ('template', RelatedDropdownFilter),
130 )
131 search_fields = (
132 'name',
133 )
134 autocomplete_fields = (
135 'campaign',
136 'target',
137 )
138 preserve_filters = True
139 readonly_fields = (
140 'created',
141 'modified',
142 'uuid',
143 'creator',
144 'preview_url',
145 )
146 filter_horizontal = ('locales',)
147 save_on_top = True
148 save_as = True
149 view_on_site = False
150 actions = (
151 duplicate_snippets_action,
152 )
153
154 fieldsets = (
155 ('ID', {'fields': ('creator', 'name', 'status', 'preview_url')}),
156 ('Content', {
157 'description': (
158 '''
159 <strong>Available deep links:</strong><br/>
160 <ol>
161 <li><code>special:accounts</code> to open Firefox Accounts</li>
162 <li><code>special:appMenu</code> to open the hamburger menu</li>
163 </ol><br/>
164 <strong>Automatically add Snippet ID:</strong><br/>
165 You can use <code>[[snippet_id]]</code> in any field and it
166 will be automatically replaced by Snippet ID when served to users.
167 <br/>
168 Example: This is a <code><a href="https://example.com?utm_term=[[snippet_id]]">link</a></code> # noqa
169 <br/>
170 '''
171 ),
172 'fields': ('template', 'data'),
173 }),
174 ('Publishing Options', {
175 'fields': ('campaign', 'target', ('publish_start', 'publish_end'), 'locales', 'weight',)
176 }),
177 ('Other Info', {
178 'fields': ('uuid', ('created', 'modified')),
179 'classes': ('collapse',)
180 }),
181 )
182
183 class Media:
184 css = {
185 'all': ('css/admin/ASRSnippetAdmin.css',)
186 }
187 js = (
188 'js/admin/clipboard.min.js',
189 'js/admin/copy_preview.js',
190 )
191
192 def save_model(self, request, obj, form, change):
193 if not obj.creator_id:
194 obj.creator = request.user
195 statsd.incr('save.asrsnippet')
196 super().save_model(request, obj, form, change)
197
198 def preview_url(self, obj):
199 text = f'''
200 <span id="previewLinkUrl">{obj.get_preview_url()}</span>
201 <button id="copyPreviewLink" class="btn"
202 data-clipboard-target="#previewLinkUrl"
203 originalText="Copy to Clipboard" type="button">
204 Copy to Clipboard
205 </button>
206 '''
207 return mark_safe(text)
208
209 def change_view(self, request, *args, **kwargs):
210 if request.method == 'POST' and '_saveasnew' in request.POST:
211 # Always saved cloned snippets as un-published and un-check ready for review.
212 post_data = request.POST.copy()
213 post_data['status'] = models.STATUS_CHOICES['Draft']
214 request.POST = post_data
215 return super().change_view(request, *args, **kwargs)
216
217
218 class CampaignAdmin(admin.ModelAdmin):
219 readonly_fields = ('created', 'modified', 'creator',)
220 prepopulated_fields = {'slug': ('name',)}
221
222 fieldsets = (
223 ('ID', {'fields': ('name', 'slug')}),
224 ('Other Info', {
225 'fields': ('creator', ('created', 'modified')),
226 }),
227 )
228 search_fields = (
229 'name',
230 )
231
232 def save_model(self, request, obj, form, change):
233 if not obj.creator_id:
234 obj.creator = request.user
235 statsd.incr('save.campaign')
236 super().save_model(request, obj, form, change)
237
238
239 class TargetAdmin(admin.ModelAdmin):
240 form = forms.TargetAdminForm
241 readonly_fields = ('created', 'modified', 'creator', 'jexl_expr')
242 search_fields = (
243 'name',
244 )
245 fieldsets = (
246 ('ID', {'fields': ('name',)}),
247 ('Product channels', {
248 'description': 'What channels will this snippet be available in?',
249 'fields': (('on_release', 'on_beta', 'on_aurora', 'on_nightly', 'on_esr'),)
250 }),
251 ('Targeting', {
252 'fields': (
253 'filtr_is_default_browser',
254 'filtr_updates_enabled',
255 'filtr_updates_autodownload_enabled',
256 'filtr_profile_age_created',
257 'filtr_firefox_version',
258 'filtr_previous_session_end',
259 'filtr_uses_firefox_sync',
260 'filtr_country',
261 'filtr_is_developer',
262 'filtr_current_search_engine',
263 'filtr_browser_addon',
264 'filtr_total_bookmarks_count',
265 )
266 }),
267 ('Other Info', {
268 'fields': ('creator', ('created', 'modified'), 'jexl_expr'),
269 }),
270 )
271
272 def save_model(self, request, obj, form, change):
273 if not obj.creator_id:
274 obj.creator = request.user
275 statsd.incr('save.target')
276 super().save_model(request, obj, form, change)
277
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/snippets/base/admin/adminmodels.py b/snippets/base/admin/adminmodels.py
--- a/snippets/base/admin/adminmodels.py
+++ b/snippets/base/admin/adminmodels.py
@@ -130,6 +130,9 @@
)
search_fields = (
'name',
+ 'id',
+ 'campaign__name',
+ 'target__name',
)
autocomplete_fields = (
'campaign',
@@ -143,7 +146,9 @@
'creator',
'preview_url',
)
- filter_horizontal = ('locales',)
+ filter_horizontal = (
+ 'locales',
+ )
save_on_top = True
save_as = True
view_on_site = False
|
{"golden_diff": "diff --git a/snippets/base/admin/adminmodels.py b/snippets/base/admin/adminmodels.py\n--- a/snippets/base/admin/adminmodels.py\n+++ b/snippets/base/admin/adminmodels.py\n@@ -130,6 +130,9 @@\n )\n search_fields = (\n 'name',\n+ 'id',\n+ 'campaign__name',\n+ 'target__name',\n )\n autocomplete_fields = (\n 'campaign',\n@@ -143,7 +146,9 @@\n 'creator',\n 'preview_url',\n )\n- filter_horizontal = ('locales',)\n+ filter_horizontal = (\n+ 'locales',\n+ )\n save_on_top = True\n save_as = True\n view_on_site = False\n", "issue": "ASR Snippet Admin: Be able to search for a snippet by Snippet ID in top search bar on list view page\nCurrently the search bar only allows for searching keywords.\n", "before_files": [{"content": "import re\n\nfrom django.contrib import admin\nfrom django.db.models import TextField, Q\nfrom django.template.loader import get_template\nfrom django.utils.safestring import mark_safe\n\nfrom reversion.admin import VersionAdmin\nfrom django_ace import AceWidget\nfrom django_statsd.clients import statsd\nfrom jinja2.meta import find_undeclared_variables\nfrom django_admin_listfilter_dropdown.filters import RelatedDropdownFilter\n\nfrom snippets.base import forms, models\nfrom snippets.base.models import JINJA_ENV\nfrom snippets.base.admin.filters import ModifiedFilter, ReleaseFilter\nfrom snippets.base.admin.actions import duplicate_snippets_action\n\n\nMATCH_LOCALE_REGEX = re.compile('(\\w+(?:-\\w+)*)')\nRESERVED_VARIABLES = ('_', 'snippet_id')\n\n\nclass ClientMatchRuleAdmin(VersionAdmin, admin.ModelAdmin):\n list_display = ('description', 'is_exclusion', 'startpage_version', 'name',\n 'version', 'locale', 'appbuildid', 'build_target',\n 'channel', 'os_version', 'distribution',\n 'distribution_version', 'modified')\n list_filter = ('name', 'version', 'os_version', 'appbuildid',\n 'build_target', 'channel', 'distribution', 'locale')\n save_on_top = True\n search_fields = ('description',)\n\n\nclass LogEntryAdmin(admin.ModelAdmin):\n list_display = ('user', 'content_type', 'object_id', 'object_repr', 'change_message')\n list_filter = ('user', 'content_type')\n\n\nclass SnippetTemplateVariableInline(admin.TabularInline):\n model = models.SnippetTemplateVariable\n formset = forms.SnippetTemplateVariableInlineFormset\n max_num = 0\n can_delete = False\n readonly_fields = ('name',)\n fields = ('name', 'type', 'order', 'description')\n\n\nclass SnippetTemplateAdmin(VersionAdmin, admin.ModelAdmin):\n save_on_top = True\n list_display = ('name', 'priority', 'hidden')\n list_filter = ('hidden', 'startpage')\n inlines = (SnippetTemplateVariableInline,)\n formfield_overrides = {\n TextField: {'widget': AceWidget(mode='html', theme='github',\n width='1200px', height='500px')},\n }\n\n class Media:\n css = {\n 'all': ('css/admin.css',)\n }\n\n def save_related(self, request, form, formsets, change):\n \"\"\"\n After saving the related objects, remove and add\n SnippetTemplateVariables depending on how the template code changed.\n \"\"\"\n super(SnippetTemplateAdmin, self).save_related(request, form, formsets,\n change)\n\n # Parse the template code and find any undefined variables.\n ast = JINJA_ENV.env.parse(form.instance.code)\n new_vars = find_undeclared_variables(ast)\n var_manager = form.instance.variable_set\n\n # Filter out reserved variable names.\n new_vars = [x for x in new_vars if x not in RESERVED_VARIABLES]\n\n # Delete variables not in the new set.\n var_manager.filter(~Q(name__in=new_vars)).delete()\n\n # Create variables that don't exist.\n for i, variable in 
enumerate(new_vars, start=1):\n obj, _ = models.SnippetTemplateVariable.objects.get_or_create(\n template=form.instance, name=variable)\n if obj.order == 0:\n obj.order = i * 10\n obj.save()\n\n\nclass UploadedFileAdmin(admin.ModelAdmin):\n readonly_fields = ('url', 'preview', 'snippets')\n list_display = ('name', 'url', 'preview', 'modified')\n prepopulated_fields = {'name': ('file',)}\n form = forms.UploadedFileAdminForm\n\n def preview(self, obj):\n template = get_template('base/uploadedfile_preview.jinja')\n return mark_safe(template.render({'file': obj}))\n\n def snippets(self, obj):\n \"\"\"Snippets using this file.\"\"\"\n template = get_template('base/uploadedfile_snippets.jinja')\n return mark_safe(template.render({'snippets': obj.snippets}))\n\n\nclass AddonAdmin(admin.ModelAdmin):\n list_display = ('name', 'guid')\n\n\nclass ASRSnippetAdmin(admin.ModelAdmin):\n form = forms.ASRSnippetAdminForm\n\n list_display_links = (\n 'id',\n 'name',\n )\n list_display = (\n 'id',\n 'name',\n 'status',\n 'modified',\n )\n list_filter = (\n ModifiedFilter,\n 'status',\n ReleaseFilter,\n ('template', RelatedDropdownFilter),\n )\n search_fields = (\n 'name',\n )\n autocomplete_fields = (\n 'campaign',\n 'target',\n )\n preserve_filters = True\n readonly_fields = (\n 'created',\n 'modified',\n 'uuid',\n 'creator',\n 'preview_url',\n )\n filter_horizontal = ('locales',)\n save_on_top = True\n save_as = True\n view_on_site = False\n actions = (\n duplicate_snippets_action,\n )\n\n fieldsets = (\n ('ID', {'fields': ('creator', 'name', 'status', 'preview_url')}),\n ('Content', {\n 'description': (\n '''\n <strong>Available deep links:</strong><br/>\n <ol>\n <li><code>special:accounts</code> to open Firefox Accounts</li>\n <li><code>special:appMenu</code> to open the hamburger menu</li>\n </ol><br/>\n <strong>Automatically add Snippet ID:</strong><br/>\n You can use <code>[[snippet_id]]</code> in any field and it\n will be automatically replaced by Snippet ID when served to users.\n <br/>\n Example: This is a <code><a href="https://example.com?utm_term=[[snippet_id]]">link</a></code> # noqa\n <br/>\n '''\n ),\n 'fields': ('template', 'data'),\n }),\n ('Publishing Options', {\n 'fields': ('campaign', 'target', ('publish_start', 'publish_end'), 'locales', 'weight',)\n }),\n ('Other Info', {\n 'fields': ('uuid', ('created', 'modified')),\n 'classes': ('collapse',)\n }),\n )\n\n class Media:\n css = {\n 'all': ('css/admin/ASRSnippetAdmin.css',)\n }\n js = (\n 'js/admin/clipboard.min.js',\n 'js/admin/copy_preview.js',\n )\n\n def save_model(self, request, obj, form, change):\n if not obj.creator_id:\n obj.creator = request.user\n statsd.incr('save.asrsnippet')\n super().save_model(request, obj, form, change)\n\n def preview_url(self, obj):\n text = f'''\n <span id=\"previewLinkUrl\">{obj.get_preview_url()}</span>\n <button id=\"copyPreviewLink\" class=\"btn\"\n data-clipboard-target=\"#previewLinkUrl\"\n originalText=\"Copy to Clipboard\" type=\"button\">\n Copy to Clipboard\n </button>\n '''\n return mark_safe(text)\n\n def change_view(self, request, *args, **kwargs):\n if request.method == 'POST' and '_saveasnew' in request.POST:\n # Always saved cloned snippets as un-published and un-check ready for review.\n post_data = request.POST.copy()\n post_data['status'] = models.STATUS_CHOICES['Draft']\n request.POST = post_data\n return super().change_view(request, *args, **kwargs)\n\n\nclass CampaignAdmin(admin.ModelAdmin):\n readonly_fields = ('created', 'modified', 'creator',)\n prepopulated_fields = 
{'slug': ('name',)}\n\n fieldsets = (\n ('ID', {'fields': ('name', 'slug')}),\n ('Other Info', {\n 'fields': ('creator', ('created', 'modified')),\n }),\n )\n search_fields = (\n 'name',\n )\n\n def save_model(self, request, obj, form, change):\n if not obj.creator_id:\n obj.creator = request.user\n statsd.incr('save.campaign')\n super().save_model(request, obj, form, change)\n\n\nclass TargetAdmin(admin.ModelAdmin):\n form = forms.TargetAdminForm\n readonly_fields = ('created', 'modified', 'creator', 'jexl_expr')\n search_fields = (\n 'name',\n )\n fieldsets = (\n ('ID', {'fields': ('name',)}),\n ('Product channels', {\n 'description': 'What channels will this snippet be available in?',\n 'fields': (('on_release', 'on_beta', 'on_aurora', 'on_nightly', 'on_esr'),)\n }),\n ('Targeting', {\n 'fields': (\n 'filtr_is_default_browser',\n 'filtr_updates_enabled',\n 'filtr_updates_autodownload_enabled',\n 'filtr_profile_age_created',\n 'filtr_firefox_version',\n 'filtr_previous_session_end',\n 'filtr_uses_firefox_sync',\n 'filtr_country',\n 'filtr_is_developer',\n 'filtr_current_search_engine',\n 'filtr_browser_addon',\n 'filtr_total_bookmarks_count',\n )\n }),\n ('Other Info', {\n 'fields': ('creator', ('created', 'modified'), 'jexl_expr'),\n }),\n )\n\n def save_model(self, request, obj, form, change):\n if not obj.creator_id:\n obj.creator = request.user\n statsd.incr('save.target')\n super().save_model(request, obj, form, change)\n", "path": "snippets/base/admin/adminmodels.py"}], "after_files": [{"content": "import re\n\nfrom django.contrib import admin\nfrom django.db.models import TextField, Q\nfrom django.template.loader import get_template\nfrom django.utils.safestring import mark_safe\n\nfrom reversion.admin import VersionAdmin\nfrom django_ace import AceWidget\nfrom django_statsd.clients import statsd\nfrom jinja2.meta import find_undeclared_variables\nfrom django_admin_listfilter_dropdown.filters import RelatedDropdownFilter\n\nfrom snippets.base import forms, models\nfrom snippets.base.models import JINJA_ENV\nfrom snippets.base.admin.filters import ModifiedFilter, ReleaseFilter\nfrom snippets.base.admin.actions import duplicate_snippets_action\n\n\nMATCH_LOCALE_REGEX = re.compile('(\\w+(?:-\\w+)*)')\nRESERVED_VARIABLES = ('_', 'snippet_id')\n\n\nclass ClientMatchRuleAdmin(VersionAdmin, admin.ModelAdmin):\n list_display = ('description', 'is_exclusion', 'startpage_version', 'name',\n 'version', 'locale', 'appbuildid', 'build_target',\n 'channel', 'os_version', 'distribution',\n 'distribution_version', 'modified')\n list_filter = ('name', 'version', 'os_version', 'appbuildid',\n 'build_target', 'channel', 'distribution', 'locale')\n save_on_top = True\n search_fields = ('description',)\n\n\nclass LogEntryAdmin(admin.ModelAdmin):\n list_display = ('user', 'content_type', 'object_id', 'object_repr', 'change_message')\n list_filter = ('user', 'content_type')\n\n\nclass SnippetTemplateVariableInline(admin.TabularInline):\n model = models.SnippetTemplateVariable\n formset = forms.SnippetTemplateVariableInlineFormset\n max_num = 0\n can_delete = False\n readonly_fields = ('name',)\n fields = ('name', 'type', 'order', 'description')\n\n\nclass SnippetTemplateAdmin(VersionAdmin, admin.ModelAdmin):\n save_on_top = True\n list_display = ('name', 'priority', 'hidden')\n list_filter = ('hidden', 'startpage')\n inlines = (SnippetTemplateVariableInline,)\n formfield_overrides = {\n TextField: {'widget': AceWidget(mode='html', theme='github',\n width='1200px', height='500px')},\n }\n\n class 
Media:\n css = {\n 'all': ('css/admin.css',)\n }\n\n def save_related(self, request, form, formsets, change):\n \"\"\"\n After saving the related objects, remove and add\n SnippetTemplateVariables depending on how the template code changed.\n \"\"\"\n super(SnippetTemplateAdmin, self).save_related(request, form, formsets,\n change)\n\n # Parse the template code and find any undefined variables.\n ast = JINJA_ENV.env.parse(form.instance.code)\n new_vars = find_undeclared_variables(ast)\n var_manager = form.instance.variable_set\n\n # Filter out reserved variable names.\n new_vars = [x for x in new_vars if x not in RESERVED_VARIABLES]\n\n # Delete variables not in the new set.\n var_manager.filter(~Q(name__in=new_vars)).delete()\n\n # Create variables that don't exist.\n for i, variable in enumerate(new_vars, start=1):\n obj, _ = models.SnippetTemplateVariable.objects.get_or_create(\n template=form.instance, name=variable)\n if obj.order == 0:\n obj.order = i * 10\n obj.save()\n\n\nclass UploadedFileAdmin(admin.ModelAdmin):\n readonly_fields = ('url', 'preview', 'snippets')\n list_display = ('name', 'url', 'preview', 'modified')\n prepopulated_fields = {'name': ('file',)}\n form = forms.UploadedFileAdminForm\n\n def preview(self, obj):\n template = get_template('base/uploadedfile_preview.jinja')\n return mark_safe(template.render({'file': obj}))\n\n def snippets(self, obj):\n \"\"\"Snippets using this file.\"\"\"\n template = get_template('base/uploadedfile_snippets.jinja')\n return mark_safe(template.render({'snippets': obj.snippets}))\n\n\nclass AddonAdmin(admin.ModelAdmin):\n list_display = ('name', 'guid')\n\n\nclass ASRSnippetAdmin(admin.ModelAdmin):\n form = forms.ASRSnippetAdminForm\n\n list_display_links = (\n 'id',\n 'name',\n )\n list_display = (\n 'id',\n 'name',\n 'status',\n 'modified',\n )\n list_filter = (\n ModifiedFilter,\n 'status',\n ReleaseFilter,\n ('template', RelatedDropdownFilter),\n )\n search_fields = (\n 'name',\n 'id',\n 'campaign__name',\n 'target__name',\n )\n autocomplete_fields = (\n 'campaign',\n 'target',\n )\n preserve_filters = True\n readonly_fields = (\n 'created',\n 'modified',\n 'uuid',\n 'creator',\n 'preview_url',\n )\n filter_horizontal = (\n 'locales',\n )\n save_on_top = True\n save_as = True\n view_on_site = False\n actions = (\n duplicate_snippets_action,\n )\n\n fieldsets = (\n ('ID', {'fields': ('creator', 'name', 'status', 'preview_url')}),\n ('Content', {\n 'description': (\n '''\n <strong>Available deep links:</strong><br/>\n <ol>\n <li><code>special:accounts</code> to open Firefox Accounts</li>\n <li><code>special:appMenu</code> to open the hamburger menu</li>\n </ol><br/>\n <strong>Automatically add Snippet ID:</strong><br/>\n You can use <code>[[snippet_id]]</code> in any field and it\n will be automatically replaced by Snippet ID when served to users.\n <br/>\n Example: This is a <code><a href="https://example.com?utm_term=[[snippet_id]]">link</a></code> # noqa\n <br/>\n '''\n ),\n 'fields': ('template', 'data'),\n }),\n ('Publishing Options', {\n 'fields': ('campaign', 'target', ('publish_start', 'publish_end'), 'locales', 'weight',)\n }),\n ('Other Info', {\n 'fields': ('uuid', ('created', 'modified')),\n 'classes': ('collapse',)\n }),\n )\n\n class Media:\n css = {\n 'all': ('css/admin/ASRSnippetAdmin.css',)\n }\n js = (\n 'js/admin/clipboard.min.js',\n 'js/admin/copy_preview.js',\n )\n\n def save_model(self, request, obj, form, change):\n if not obj.creator_id:\n obj.creator = request.user\n statsd.incr('save.asrsnippet')\n 
super().save_model(request, obj, form, change)\n\n def preview_url(self, obj):\n text = f'''\n <span id=\"previewLinkUrl\">{obj.get_preview_url()}</span>\n <button id=\"copyPreviewLink\" class=\"btn\"\n data-clipboard-target=\"#previewLinkUrl\"\n originalText=\"Copy to Clipboard\" type=\"button\">\n Copy to Clipboard\n </button>\n '''\n return mark_safe(text)\n\n def change_view(self, request, *args, **kwargs):\n if request.method == 'POST' and '_saveasnew' in request.POST:\n # Always saved cloned snippets as un-published and un-check ready for review.\n post_data = request.POST.copy()\n post_data['status'] = models.STATUS_CHOICES['Draft']\n request.POST = post_data\n return super().change_view(request, *args, **kwargs)\n\n\nclass CampaignAdmin(admin.ModelAdmin):\n readonly_fields = ('created', 'modified', 'creator',)\n prepopulated_fields = {'slug': ('name',)}\n\n fieldsets = (\n ('ID', {'fields': ('name', 'slug')}),\n ('Other Info', {\n 'fields': ('creator', ('created', 'modified')),\n }),\n )\n search_fields = (\n 'name',\n )\n\n def save_model(self, request, obj, form, change):\n if not obj.creator_id:\n obj.creator = request.user\n statsd.incr('save.campaign')\n super().save_model(request, obj, form, change)\n\n\nclass TargetAdmin(admin.ModelAdmin):\n form = forms.TargetAdminForm\n readonly_fields = ('created', 'modified', 'creator', 'jexl_expr')\n search_fields = (\n 'name',\n )\n fieldsets = (\n ('ID', {'fields': ('name',)}),\n ('Product channels', {\n 'description': 'What channels will this snippet be available in?',\n 'fields': (('on_release', 'on_beta', 'on_aurora', 'on_nightly', 'on_esr'),)\n }),\n ('Targeting', {\n 'fields': (\n 'filtr_is_default_browser',\n 'filtr_updates_enabled',\n 'filtr_updates_autodownload_enabled',\n 'filtr_profile_age_created',\n 'filtr_firefox_version',\n 'filtr_previous_session_end',\n 'filtr_uses_firefox_sync',\n 'filtr_country',\n 'filtr_is_developer',\n 'filtr_current_search_engine',\n 'filtr_browser_addon',\n 'filtr_total_bookmarks_count',\n )\n }),\n ('Other Info', {\n 'fields': ('creator', ('created', 'modified'), 'jexl_expr'),\n }),\n )\n\n def save_model(self, request, obj, form, change):\n if not obj.creator_id:\n obj.creator = request.user\n statsd.incr('save.target')\n super().save_model(request, obj, form, change)\n", "path": "snippets/base/admin/adminmodels.py"}]}
| 3,100 | 162 |
gh_patches_debug_19517
|
rasdani/github-patches
|
git_diff
|
python-pillow__Pillow-5268
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ImageQt does not work as expected in PyQt5
### What did you do?
Attempted to use ImageQt to load a Pillow image in a PyQt5 application
### What did you expect to happen?
I expected the image to load correctly. There are no errors, but it does not load the image in PyQt5 correctly.
### What actually happened?
The image loaded as a mostly white image with kind of ghost image of the actual photo (see screenshot). I have attached the PyQt5 and PySide6 code, including screenshots from when I ran both files.
**Note: The same code works in PySide6, but not in PyQt5.**
### What are your OS, Python and Pillow versions?
* OS: MacOS Mojave and Windows 10
* Python: Python 3.9
* Pillow: 8.0.0

<!--
Please include **code** that reproduces the issue and whenever possible, an **image** that demonstrates the issue. Please upload images to GitHub, not to third-party file hosting sites. If necessary, add the image to a zip or tar archive.
The best reproductions are self-contained scripts with minimal dependencies. If you are using a framework such as Plone, Django, or Buildout, try to replicate the issue just using Pillow.
-->
```python
import sys
from PIL import Image, ImageQt
from PyQt5.QtGui import QPixmap, QImage
from PyQt5.QtWidgets import QWidget, QLabel
from PyQt5.QtWidgets import QVBoxLayout, QApplication
class ImageViewer(QWidget):
def __init__(self):
QWidget.__init__(self)
self.setWindowTitle("PyQt Image Viewer")
# Open up image in Pillow
image = Image.open("pink_flower.jpg")
qt_image = ImageQt.ImageQt(image)
pixmap = QPixmap.fromImage(qt_image)
self.image_label = QLabel('')
self.image_label.setPixmap(pixmap)
self.main_layout = QVBoxLayout()
self.main_layout.addWidget(self.image_label)
self.setLayout(self.main_layout)
if __name__ == "__main__":
app = QApplication(sys.argv)
viewer = ImageViewer()
viewer.show()
app.exec_()
```
[pyqt_pillow_issue.zip](https://github.com/python-pillow/Pillow/files/5976581/pyqt_pillow_issue.zip)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/PIL/ImageQt.py`
Content:
```
1 #
2 # The Python Imaging Library.
3 # $Id$
4 #
5 # a simple Qt image interface.
6 #
7 # history:
8 # 2006-06-03 fl: created
9 # 2006-06-04 fl: inherit from QImage instead of wrapping it
10 # 2006-06-05 fl: removed toimage helper; move string support to ImageQt
11 # 2013-11-13 fl: add support for Qt5 ([email protected])
12 #
13 # Copyright (c) 2006 by Secret Labs AB
14 # Copyright (c) 2006 by Fredrik Lundh
15 #
16 # See the README file for information on usage and redistribution.
17 #
18
19 import sys
20 from io import BytesIO
21
22 from . import Image
23 from ._util import isPath
24
25 qt_versions = [
26 ["6", "PyQt6"],
27 ["side6", "PySide6"],
28 ["5", "PyQt5"],
29 ["side2", "PySide2"],
30 ]
31
32 # If a version has already been imported, attempt it first
33 qt_versions.sort(key=lambda qt_version: qt_version[1] in sys.modules, reverse=True)
34 for qt_version, qt_module in qt_versions:
35 try:
36 if qt_module == "PyQt6":
37 from PyQt6.QtCore import QBuffer, QIODevice
38 from PyQt6.QtGui import QImage, QPixmap, qRgba
39 elif qt_module == "PySide6":
40 from PySide6.QtCore import QBuffer, QIODevice
41 from PySide6.QtGui import QImage, QPixmap, qRgba
42 elif qt_module == "PyQt5":
43 from PyQt5.QtCore import QBuffer, QIODevice
44 from PyQt5.QtGui import QImage, QPixmap, qRgba
45 elif qt_module == "PySide2":
46 from PySide2.QtCore import QBuffer, QIODevice
47 from PySide2.QtGui import QImage, QPixmap, qRgba
48 except (ImportError, RuntimeError):
49 continue
50 qt_is_installed = True
51 break
52 else:
53 qt_is_installed = False
54 qt_version = None
55
56
57 def rgb(r, g, b, a=255):
58 """(Internal) Turns an RGB color into a Qt compatible color integer."""
59 # use qRgb to pack the colors, and then turn the resulting long
60 # into a negative integer with the same bitpattern.
61 return qRgba(r, g, b, a) & 0xFFFFFFFF
62
63
64 def fromqimage(im):
65 """
66 :param im: QImage or PIL ImageQt object
67 """
68 buffer = QBuffer()
69 qt_openmode = QIODevice.OpenMode if qt_version == "6" else QIODevice
70 buffer.open(qt_openmode.ReadWrite)
71 # preserve alpha channel with png
72 # otherwise ppm is more friendly with Image.open
73 if im.hasAlphaChannel():
74 im.save(buffer, "png")
75 else:
76 im.save(buffer, "ppm")
77
78 b = BytesIO()
79 b.write(buffer.data())
80 buffer.close()
81 b.seek(0)
82
83 return Image.open(b)
84
85
86 def fromqpixmap(im):
87 return fromqimage(im)
88 # buffer = QBuffer()
89 # buffer.open(QIODevice.ReadWrite)
90 # # im.save(buffer)
91 # # What if png doesn't support some image features like animation?
92 # im.save(buffer, 'ppm')
93 # bytes_io = BytesIO()
94 # bytes_io.write(buffer.data())
95 # buffer.close()
96 # bytes_io.seek(0)
97 # return Image.open(bytes_io)
98
99
100 def align8to32(bytes, width, mode):
101 """
102 converts each scanline of data from 8 bit to 32 bit aligned
103 """
104
105 bits_per_pixel = {"1": 1, "L": 8, "P": 8}[mode]
106
107 # calculate bytes per line and the extra padding if needed
108 bits_per_line = bits_per_pixel * width
109 full_bytes_per_line, remaining_bits_per_line = divmod(bits_per_line, 8)
110 bytes_per_line = full_bytes_per_line + (1 if remaining_bits_per_line else 0)
111
112 extra_padding = -bytes_per_line % 4
113
114 # already 32 bit aligned by luck
115 if not extra_padding:
116 return bytes
117
118 new_data = []
119 for i in range(len(bytes) // bytes_per_line):
120 new_data.append(
121 bytes[i * bytes_per_line : (i + 1) * bytes_per_line]
122 + b"\x00" * extra_padding
123 )
124
125 return b"".join(new_data)
126
127
128 def _toqclass_helper(im):
129 data = None
130 colortable = None
131 exclusive_fp = False
132
133 # handle filename, if given instead of image name
134 if hasattr(im, "toUtf8"):
135 # FIXME - is this really the best way to do this?
136 im = str(im.toUtf8(), "utf-8")
137 if isPath(im):
138 im = Image.open(im)
139 exclusive_fp = True
140
141 qt_format = QImage.Format if qt_version == "6" else QImage
142 if im.mode == "1":
143 format = qt_format.Format_Mono
144 elif im.mode == "L":
145 format = qt_format.Format_Indexed8
146 colortable = []
147 for i in range(256):
148 colortable.append(rgb(i, i, i))
149 elif im.mode == "P":
150 format = qt_format.Format_Indexed8
151 colortable = []
152 palette = im.getpalette()
153 for i in range(0, len(palette), 3):
154 colortable.append(rgb(*palette[i : i + 3]))
155 elif im.mode == "RGB":
156 data = im.tobytes("raw", "BGRX")
157 format = qt_format.Format_RGB32
158 elif im.mode == "RGBA":
159 data = im.tobytes("raw", "BGRA")
160 format = qt_format.Format_ARGB32
161 else:
162 if exclusive_fp:
163 im.close()
164 raise ValueError(f"unsupported image mode {repr(im.mode)}")
165
166 size = im.size
167 __data = data or align8to32(im.tobytes(), size[0], im.mode)
168 if exclusive_fp:
169 im.close()
170 return {"data": __data, "size": size, "format": format, "colortable": colortable}
171
172
173 if qt_is_installed:
174
175 class ImageQt(QImage):
176 def __init__(self, im):
177 """
178 An PIL image wrapper for Qt. This is a subclass of PyQt's QImage
179 class.
180
181 :param im: A PIL Image object, or a file name (given either as
182 Python string or a PyQt string object).
183 """
184 im_data = _toqclass_helper(im)
185 # must keep a reference, or Qt will crash!
186 # All QImage constructors that take data operate on an existing
187 # buffer, so this buffer has to hang on for the life of the image.
188 # Fixes https://github.com/python-pillow/Pillow/issues/1370
189 self.__data = im_data["data"]
190 super().__init__(
191 self.__data,
192 im_data["size"][0],
193 im_data["size"][1],
194 im_data["format"],
195 )
196 if im_data["colortable"]:
197 self.setColorTable(im_data["colortable"])
198
199
200 def toqimage(im):
201 return ImageQt(im)
202
203
204 def toqpixmap(im):
205 # # This doesn't work. For now using a dumb approach.
206 # im_data = _toqclass_helper(im)
207 # result = QPixmap(im_data["size"][0], im_data["size"][1])
208 # result.loadFromData(im_data["data"])
209 # Fix some strange bug that causes
210 if im.mode == "RGB":
211 im = im.convert("RGBA")
212
213 qimage = toqimage(im)
214 return QPixmap.fromImage(qimage)
215
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/PIL/ImageQt.py b/src/PIL/ImageQt.py
--- a/src/PIL/ImageQt.py
+++ b/src/PIL/ImageQt.py
@@ -153,7 +153,10 @@
for i in range(0, len(palette), 3):
colortable.append(rgb(*palette[i : i + 3]))
elif im.mode == "RGB":
- data = im.tobytes("raw", "BGRX")
+ # Populate the 4th channel with 255
+ im = im.convert("RGBA")
+
+ data = im.tobytes("raw", "BGRA")
format = qt_format.Format_RGB32
elif im.mode == "RGBA":
data = im.tobytes("raw", "BGRA")
@@ -206,9 +209,5 @@
# im_data = _toqclass_helper(im)
# result = QPixmap(im_data["size"][0], im_data["size"][1])
# result.loadFromData(im_data["data"])
- # Fix some strange bug that causes
- if im.mode == "RGB":
- im = im.convert("RGBA")
-
qimage = toqimage(im)
return QPixmap.fromImage(qimage)
|
{"golden_diff": "diff --git a/src/PIL/ImageQt.py b/src/PIL/ImageQt.py\n--- a/src/PIL/ImageQt.py\n+++ b/src/PIL/ImageQt.py\n@@ -153,7 +153,10 @@\n for i in range(0, len(palette), 3):\n colortable.append(rgb(*palette[i : i + 3]))\n elif im.mode == \"RGB\":\n- data = im.tobytes(\"raw\", \"BGRX\")\n+ # Populate the 4th channel with 255\n+ im = im.convert(\"RGBA\")\n+\n+ data = im.tobytes(\"raw\", \"BGRA\")\n format = qt_format.Format_RGB32\n elif im.mode == \"RGBA\":\n data = im.tobytes(\"raw\", \"BGRA\")\n@@ -206,9 +209,5 @@\n # im_data = _toqclass_helper(im)\n # result = QPixmap(im_data[\"size\"][0], im_data[\"size\"][1])\n # result.loadFromData(im_data[\"data\"])\n- # Fix some strange bug that causes\n- if im.mode == \"RGB\":\n- im = im.convert(\"RGBA\")\n-\n qimage = toqimage(im)\n return QPixmap.fromImage(qimage)\n", "issue": "ImageQt does not work as expected in PyQt5\n\r\n### What did you do?\r\n\r\nAttempted to use ImageQt to load a Pillow image in a PyQt5 application\r\n\r\n### What did you expect to happen?\r\n\r\nI expected the image to load correctly. There are no errors, but it does not load the image in PyQt5 correctly.\r\n\r\n### What actually happened?\r\n\r\nThe image loaded as a mostly white image with kind of ghost image of the actual photo (see screenshot). I have attached the PyQt5 and PySide6 code, including screenshots from when I ran both files.\r\n\r\n**Note: The same code works in PySide6, but not in PyQt5.**\r\n\r\n### What are your OS, Python and Pillow versions?\r\n\r\n* OS: MacOS Mojave and Windows 10\r\n* Python: Python 3.9\r\n* Pillow: 8.0.0\r\n\r\n\r\n\r\n\r\n\r\n<!--\r\nPlease include **code** that reproduces the issue and whenever possible, an **image** that demonstrates the issue. Please upload images to GitHub, not to third-party file hosting sites. If necessary, add the image to a zip or tar archive.\r\n\r\nThe best reproductions are self-contained scripts with minimal dependencies. 
If you are using a framework such as Plone, Django, or Buildout, try to replicate the issue just using Pillow.\r\n-->\r\n\r\n```python\r\nimport sys\r\n\r\nfrom PIL import Image, ImageQt\r\nfrom PyQt5.QtGui import QPixmap, QImage\r\nfrom PyQt5.QtWidgets import QWidget, QLabel\r\nfrom PyQt5.QtWidgets import QVBoxLayout, QApplication\r\n\r\n\r\nclass ImageViewer(QWidget):\r\n\r\n def __init__(self):\r\n QWidget.__init__(self)\r\n self.setWindowTitle(\"PyQt Image Viewer\")\r\n\r\n # Open up image in Pillow\r\n image = Image.open(\"pink_flower.jpg\")\r\n qt_image = ImageQt.ImageQt(image)\r\n pixmap = QPixmap.fromImage(qt_image)\r\n\r\n self.image_label = QLabel('')\r\n self.image_label.setPixmap(pixmap)\r\n\r\n self.main_layout = QVBoxLayout()\r\n self.main_layout.addWidget(self.image_label)\r\n self.setLayout(self.main_layout)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n app = QApplication(sys.argv)\r\n viewer = ImageViewer()\r\n viewer.show()\r\n app.exec_()\r\n```\r\n\r\n[pyqt_pillow_issue.zip](https://github.com/python-pillow/Pillow/files/5976581/pyqt_pillow_issue.zip)\n", "before_files": [{"content": "#\n# The Python Imaging Library.\n# $Id$\n#\n# a simple Qt image interface.\n#\n# history:\n# 2006-06-03 fl: created\n# 2006-06-04 fl: inherit from QImage instead of wrapping it\n# 2006-06-05 fl: removed toimage helper; move string support to ImageQt\n# 2013-11-13 fl: add support for Qt5 ([email protected])\n#\n# Copyright (c) 2006 by Secret Labs AB\n# Copyright (c) 2006 by Fredrik Lundh\n#\n# See the README file for information on usage and redistribution.\n#\n\nimport sys\nfrom io import BytesIO\n\nfrom . import Image\nfrom ._util import isPath\n\nqt_versions = [\n [\"6\", \"PyQt6\"],\n [\"side6\", \"PySide6\"],\n [\"5\", \"PyQt5\"],\n [\"side2\", \"PySide2\"],\n]\n\n# If a version has already been imported, attempt it first\nqt_versions.sort(key=lambda qt_version: qt_version[1] in sys.modules, reverse=True)\nfor qt_version, qt_module in qt_versions:\n try:\n if qt_module == \"PyQt6\":\n from PyQt6.QtCore import QBuffer, QIODevice\n from PyQt6.QtGui import QImage, QPixmap, qRgba\n elif qt_module == \"PySide6\":\n from PySide6.QtCore import QBuffer, QIODevice\n from PySide6.QtGui import QImage, QPixmap, qRgba\n elif qt_module == \"PyQt5\":\n from PyQt5.QtCore import QBuffer, QIODevice\n from PyQt5.QtGui import QImage, QPixmap, qRgba\n elif qt_module == \"PySide2\":\n from PySide2.QtCore import QBuffer, QIODevice\n from PySide2.QtGui import QImage, QPixmap, qRgba\n except (ImportError, RuntimeError):\n continue\n qt_is_installed = True\n break\nelse:\n qt_is_installed = False\n qt_version = None\n\n\ndef rgb(r, g, b, a=255):\n \"\"\"(Internal) Turns an RGB color into a Qt compatible color integer.\"\"\"\n # use qRgb to pack the colors, and then turn the resulting long\n # into a negative integer with the same bitpattern.\n return qRgba(r, g, b, a) & 0xFFFFFFFF\n\n\ndef fromqimage(im):\n \"\"\"\n :param im: QImage or PIL ImageQt object\n \"\"\"\n buffer = QBuffer()\n qt_openmode = QIODevice.OpenMode if qt_version == \"6\" else QIODevice\n buffer.open(qt_openmode.ReadWrite)\n # preserve alpha channel with png\n # otherwise ppm is more friendly with Image.open\n if im.hasAlphaChannel():\n im.save(buffer, \"png\")\n else:\n im.save(buffer, \"ppm\")\n\n b = BytesIO()\n b.write(buffer.data())\n buffer.close()\n b.seek(0)\n\n return Image.open(b)\n\n\ndef fromqpixmap(im):\n return fromqimage(im)\n # buffer = QBuffer()\n # buffer.open(QIODevice.ReadWrite)\n # # im.save(buffer)\n # # What if png 
doesn't support some image features like animation?\n # im.save(buffer, 'ppm')\n # bytes_io = BytesIO()\n # bytes_io.write(buffer.data())\n # buffer.close()\n # bytes_io.seek(0)\n # return Image.open(bytes_io)\n\n\ndef align8to32(bytes, width, mode):\n \"\"\"\n converts each scanline of data from 8 bit to 32 bit aligned\n \"\"\"\n\n bits_per_pixel = {\"1\": 1, \"L\": 8, \"P\": 8}[mode]\n\n # calculate bytes per line and the extra padding if needed\n bits_per_line = bits_per_pixel * width\n full_bytes_per_line, remaining_bits_per_line = divmod(bits_per_line, 8)\n bytes_per_line = full_bytes_per_line + (1 if remaining_bits_per_line else 0)\n\n extra_padding = -bytes_per_line % 4\n\n # already 32 bit aligned by luck\n if not extra_padding:\n return bytes\n\n new_data = []\n for i in range(len(bytes) // bytes_per_line):\n new_data.append(\n bytes[i * bytes_per_line : (i + 1) * bytes_per_line]\n + b\"\\x00\" * extra_padding\n )\n\n return b\"\".join(new_data)\n\n\ndef _toqclass_helper(im):\n data = None\n colortable = None\n exclusive_fp = False\n\n # handle filename, if given instead of image name\n if hasattr(im, \"toUtf8\"):\n # FIXME - is this really the best way to do this?\n im = str(im.toUtf8(), \"utf-8\")\n if isPath(im):\n im = Image.open(im)\n exclusive_fp = True\n\n qt_format = QImage.Format if qt_version == \"6\" else QImage\n if im.mode == \"1\":\n format = qt_format.Format_Mono\n elif im.mode == \"L\":\n format = qt_format.Format_Indexed8\n colortable = []\n for i in range(256):\n colortable.append(rgb(i, i, i))\n elif im.mode == \"P\":\n format = qt_format.Format_Indexed8\n colortable = []\n palette = im.getpalette()\n for i in range(0, len(palette), 3):\n colortable.append(rgb(*palette[i : i + 3]))\n elif im.mode == \"RGB\":\n data = im.tobytes(\"raw\", \"BGRX\")\n format = qt_format.Format_RGB32\n elif im.mode == \"RGBA\":\n data = im.tobytes(\"raw\", \"BGRA\")\n format = qt_format.Format_ARGB32\n else:\n if exclusive_fp:\n im.close()\n raise ValueError(f\"unsupported image mode {repr(im.mode)}\")\n\n size = im.size\n __data = data or align8to32(im.tobytes(), size[0], im.mode)\n if exclusive_fp:\n im.close()\n return {\"data\": __data, \"size\": size, \"format\": format, \"colortable\": colortable}\n\n\nif qt_is_installed:\n\n class ImageQt(QImage):\n def __init__(self, im):\n \"\"\"\n An PIL image wrapper for Qt. This is a subclass of PyQt's QImage\n class.\n\n :param im: A PIL Image object, or a file name (given either as\n Python string or a PyQt string object).\n \"\"\"\n im_data = _toqclass_helper(im)\n # must keep a reference, or Qt will crash!\n # All QImage constructors that take data operate on an existing\n # buffer, so this buffer has to hang on for the life of the image.\n # Fixes https://github.com/python-pillow/Pillow/issues/1370\n self.__data = im_data[\"data\"]\n super().__init__(\n self.__data,\n im_data[\"size\"][0],\n im_data[\"size\"][1],\n im_data[\"format\"],\n )\n if im_data[\"colortable\"]:\n self.setColorTable(im_data[\"colortable\"])\n\n\ndef toqimage(im):\n return ImageQt(im)\n\n\ndef toqpixmap(im):\n # # This doesn't work. 
For now using a dumb approach.\n # im_data = _toqclass_helper(im)\n # result = QPixmap(im_data[\"size\"][0], im_data[\"size\"][1])\n # result.loadFromData(im_data[\"data\"])\n # Fix some strange bug that causes\n if im.mode == \"RGB\":\n im = im.convert(\"RGBA\")\n\n qimage = toqimage(im)\n return QPixmap.fromImage(qimage)\n", "path": "src/PIL/ImageQt.py"}], "after_files": [{"content": "#\n# The Python Imaging Library.\n# $Id$\n#\n# a simple Qt image interface.\n#\n# history:\n# 2006-06-03 fl: created\n# 2006-06-04 fl: inherit from QImage instead of wrapping it\n# 2006-06-05 fl: removed toimage helper; move string support to ImageQt\n# 2013-11-13 fl: add support for Qt5 ([email protected])\n#\n# Copyright (c) 2006 by Secret Labs AB\n# Copyright (c) 2006 by Fredrik Lundh\n#\n# See the README file for information on usage and redistribution.\n#\n\nimport sys\nfrom io import BytesIO\n\nfrom . import Image\nfrom ._util import isPath\n\nqt_versions = [\n [\"6\", \"PyQt6\"],\n [\"side6\", \"PySide6\"],\n [\"5\", \"PyQt5\"],\n [\"side2\", \"PySide2\"],\n]\n\n# If a version has already been imported, attempt it first\nqt_versions.sort(key=lambda qt_version: qt_version[1] in sys.modules, reverse=True)\nfor qt_version, qt_module in qt_versions:\n try:\n if qt_module == \"PyQt6\":\n from PyQt6.QtCore import QBuffer, QIODevice\n from PyQt6.QtGui import QImage, QPixmap, qRgba\n elif qt_module == \"PySide6\":\n from PySide6.QtCore import QBuffer, QIODevice\n from PySide6.QtGui import QImage, QPixmap, qRgba\n elif qt_module == \"PyQt5\":\n from PyQt5.QtCore import QBuffer, QIODevice\n from PyQt5.QtGui import QImage, QPixmap, qRgba\n elif qt_module == \"PySide2\":\n from PySide2.QtCore import QBuffer, QIODevice\n from PySide2.QtGui import QImage, QPixmap, qRgba\n except (ImportError, RuntimeError):\n continue\n qt_is_installed = True\n break\nelse:\n qt_is_installed = False\n qt_version = None\n\n\ndef rgb(r, g, b, a=255):\n \"\"\"(Internal) Turns an RGB color into a Qt compatible color integer.\"\"\"\n # use qRgb to pack the colors, and then turn the resulting long\n # into a negative integer with the same bitpattern.\n return qRgba(r, g, b, a) & 0xFFFFFFFF\n\n\ndef fromqimage(im):\n \"\"\"\n :param im: QImage or PIL ImageQt object\n \"\"\"\n buffer = QBuffer()\n qt_openmode = QIODevice.OpenMode if qt_version == \"6\" else QIODevice\n buffer.open(qt_openmode.ReadWrite)\n # preserve alpha channel with png\n # otherwise ppm is more friendly with Image.open\n if im.hasAlphaChannel():\n im.save(buffer, \"png\")\n else:\n im.save(buffer, \"ppm\")\n\n b = BytesIO()\n b.write(buffer.data())\n buffer.close()\n b.seek(0)\n\n return Image.open(b)\n\n\ndef fromqpixmap(im):\n return fromqimage(im)\n # buffer = QBuffer()\n # buffer.open(QIODevice.ReadWrite)\n # # im.save(buffer)\n # # What if png doesn't support some image features like animation?\n # im.save(buffer, 'ppm')\n # bytes_io = BytesIO()\n # bytes_io.write(buffer.data())\n # buffer.close()\n # bytes_io.seek(0)\n # return Image.open(bytes_io)\n\n\ndef align8to32(bytes, width, mode):\n \"\"\"\n converts each scanline of data from 8 bit to 32 bit aligned\n \"\"\"\n\n bits_per_pixel = {\"1\": 1, \"L\": 8, \"P\": 8}[mode]\n\n # calculate bytes per line and the extra padding if needed\n bits_per_line = bits_per_pixel * width\n full_bytes_per_line, remaining_bits_per_line = divmod(bits_per_line, 8)\n bytes_per_line = full_bytes_per_line + (1 if remaining_bits_per_line else 0)\n\n extra_padding = -bytes_per_line % 4\n\n # already 32 bit aligned by luck\n if not 
extra_padding:\n return bytes\n\n new_data = []\n for i in range(len(bytes) // bytes_per_line):\n new_data.append(\n bytes[i * bytes_per_line : (i + 1) * bytes_per_line]\n + b\"\\x00\" * extra_padding\n )\n\n return b\"\".join(new_data)\n\n\ndef _toqclass_helper(im):\n data = None\n colortable = None\n exclusive_fp = False\n\n # handle filename, if given instead of image name\n if hasattr(im, \"toUtf8\"):\n # FIXME - is this really the best way to do this?\n im = str(im.toUtf8(), \"utf-8\")\n if isPath(im):\n im = Image.open(im)\n exclusive_fp = True\n\n qt_format = QImage.Format if qt_version == \"6\" else QImage\n if im.mode == \"1\":\n format = qt_format.Format_Mono\n elif im.mode == \"L\":\n format = qt_format.Format_Indexed8\n colortable = []\n for i in range(256):\n colortable.append(rgb(i, i, i))\n elif im.mode == \"P\":\n format = qt_format.Format_Indexed8\n colortable = []\n palette = im.getpalette()\n for i in range(0, len(palette), 3):\n colortable.append(rgb(*palette[i : i + 3]))\n elif im.mode == \"RGB\":\n # Populate the 4th channel with 255\n im = im.convert(\"RGBA\")\n\n data = im.tobytes(\"raw\", \"BGRA\")\n format = qt_format.Format_RGB32\n elif im.mode == \"RGBA\":\n data = im.tobytes(\"raw\", \"BGRA\")\n format = qt_format.Format_ARGB32\n else:\n if exclusive_fp:\n im.close()\n raise ValueError(f\"unsupported image mode {repr(im.mode)}\")\n\n size = im.size\n __data = data or align8to32(im.tobytes(), size[0], im.mode)\n if exclusive_fp:\n im.close()\n return {\"data\": __data, \"size\": size, \"format\": format, \"colortable\": colortable}\n\n\nif qt_is_installed:\n\n class ImageQt(QImage):\n def __init__(self, im):\n \"\"\"\n An PIL image wrapper for Qt. This is a subclass of PyQt's QImage\n class.\n\n :param im: A PIL Image object, or a file name (given either as\n Python string or a PyQt string object).\n \"\"\"\n im_data = _toqclass_helper(im)\n # must keep a reference, or Qt will crash!\n # All QImage constructors that take data operate on an existing\n # buffer, so this buffer has to hang on for the life of the image.\n # Fixes https://github.com/python-pillow/Pillow/issues/1370\n self.__data = im_data[\"data\"]\n super().__init__(\n self.__data,\n im_data[\"size\"][0],\n im_data[\"size\"][1],\n im_data[\"format\"],\n )\n if im_data[\"colortable\"]:\n self.setColorTable(im_data[\"colortable\"])\n\n\ndef toqimage(im):\n return ImageQt(im)\n\n\ndef toqpixmap(im):\n # # This doesn't work. For now using a dumb approach.\n # im_data = _toqclass_helper(im)\n # result = QPixmap(im_data[\"size\"][0], im_data[\"size\"][1])\n # result.loadFromData(im_data[\"data\"])\n qimage = toqimage(im)\n return QPixmap.fromImage(qimage)\n", "path": "src/PIL/ImageQt.py"}]}
| 3,100 | 279 |
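The fix captured in the record above moves the RGB-to-RGBA conversion out of `toqpixmap` and into `_toqclass_helper`, so RGB images are given an opaque alpha channel and packed as BGRA before the `QImage` is constructed. A minimal sketch of just that packing step is shown below; it needs only Pillow (no Qt binding), and the helper name `pack_bgra` is an illustrative stand-in rather than part of Pillow's API.

```python
from PIL import Image


def pack_bgra(im: Image.Image) -> bytes:
    """Pack an RGB or RGBA Pillow image as BGRA bytes.

    Mirrors the patched _toqclass_helper: RGB frames first gain an
    opaque alpha channel, then both modes are serialised as BGRA.
    """
    if im.mode == "RGB":
        # Populate the 4th channel with 255 by converting to RGBA.
        im = im.convert("RGBA")
    if im.mode != "RGBA":
        raise ValueError(f"unsupported image mode {im.mode!r}")
    return im.tobytes("raw", "BGRA")


if __name__ == "__main__":
    demo = Image.new("RGB", (2, 1), (10, 20, 30))
    data = pack_bgra(demo)
    # Each pixel is four bytes in B, G, R, A order.
    assert data[:4] == bytes([30, 20, 10, 255])
```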
gh_patches_debug_10354 | rasdani/github-patches | git_diff | holoviz__panel-1167 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Panel.serve does not work for panel.Templates without instantiation
#### System Info
- Panel: 0.8.0
- Bokeh: 1.4.0
- Tornado 6.0.3
- Python: 3.7.4
- OS: Windows 8.1
- Browser: Chrome
#### My Pain
I'm trying to serve a list of apps and one of them uses the Panel Templating System.
If I provide a function that returns a Template to `pn.serve`, the app is not shown.
#### Additional Info
If I provide a function that returns a Column to `pn.serve`, the app is shown.
If I provide an instance of the Template to `pn.serve`, the app is shown.
#### Screenshot

#### Code
````python
import holoviews as hv
import panel as pn
TEMPLATE = """
<!-- This template is inspired by
- Bokeh Template. See https://panel.pyviz.org/user_guide/Templates.html
-->
{% extends base %}
<!-- goes in head -->
{% block postamble %}
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
{% endblock %}
<!-- goes in body -->
{% block contents %}
<section id="page">
<header>
{{ embed(roots.header) }}
</header>
<main>
{{ embed(roots.main) }}
</main>
</section>
{% endblock %}
"""
class DashboardTemplate(pn.Template):
"""A Basic App Template"""
def __init__(self, **params):
template = TEMPLATE
self.header = pn.Row()
self.main = pn.Column()
items = {
"header": self.header,
"main": self.main,
}
super().__init__(template=template, items=items, **params)
def app_template():
app = DashboardTemplate()
component = pn.Column("# App Template",)
app.main[:] = [component]
return app
def app_column():
return pn.Column("# App Column")
APP_ROUTES = {
"app_column": app_column,
"app_template": app_template,
"app_template_instance": app_template(),
}
pn.serve(APP_ROUTES, port=14033, dev=True)
````
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `panel/io/server.py`
Content:
```
1 """
2 Utilities for creating bokeh Server instances.
3 """
4 from __future__ import absolute_import, division, unicode_literals
5
6 import os
7 import signal
8 import threading
9 import uuid
10
11 from contextlib import contextmanager
12 from functools import partial
13 from types import FunctionType
14
15 from bokeh.document.events import ModelChangedEvent
16 from bokeh.server.server import Server
17 from tornado.websocket import WebSocketHandler
18
19 from .state import state
20
21
22 #---------------------------------------------------------------------
23 # Private API
24 #---------------------------------------------------------------------
25
26 INDEX_HTML = os.path.join(os.path.dirname(__file__), '..', '_templates', "index.html")
27
28 def _origin_url(url):
29 if url.startswith("http"):
30 url = url.split("//")[1]
31 return url
32
33
34 def _server_url(url, port):
35 if url.startswith("http"):
36 return '%s:%d%s' % (url.rsplit(':', 1)[0], port, "/")
37 else:
38 return 'http://%s:%d%s' % (url.split(':')[0], port, "/")
39
40 def _eval_panel(panel, server_id, title, doc):
41 from ..template import Template
42 from ..pane import panel as as_panel
43
44 if isinstance(panel, Template):
45 return panel._modify_doc(server_id, title, doc)
46 elif isinstance(panel, FunctionType):
47 panel = panel()
48 return as_panel(panel)._modify_doc(server_id, title, doc)
49
50 #---------------------------------------------------------------------
51 # Public API
52 #---------------------------------------------------------------------
53
54
55 @contextmanager
56 def unlocked():
57 """
58 Context manager which unlocks a Document and dispatches
59 ModelChangedEvents triggered in the context body to all sockets
60 on current sessions.
61 """
62 curdoc = state.curdoc
63 if curdoc is None or curdoc.session_context is None:
64 yield
65 return
66 connections = curdoc.session_context.session._subscribed_connections
67
68 hold = curdoc._hold
69 if hold:
70 old_events = list(curdoc._held_events)
71 else:
72 old_events = []
73 curdoc.hold()
74 try:
75 yield
76 events = []
77 for conn in connections:
78 socket = conn._socket
79 for event in curdoc._held_events:
80 if (isinstance(event, ModelChangedEvent) and event not in old_events
81 and hasattr(socket, 'write_message')):
82 msg = conn.protocol.create('PATCH-DOC', [event])
83 WebSocketHandler.write_message(socket, msg.header_json)
84 WebSocketHandler.write_message(socket, msg.metadata_json)
85 WebSocketHandler.write_message(socket, msg.content_json)
86 for header, payload in msg._buffers:
87 WebSocketHandler.write_message(socket, header)
88 WebSocketHandler.write_message(socket, payload, binary=True)
89 elif event not in events:
90 events.append(event)
91 curdoc._held_events = events
92 finally:
93 if not hold:
94 curdoc.unhold()
95
96
97 def serve(panels, port=0, websocket_origin=None, loop=None, show=True,
98 start=True, title=None, verbose=True, **kwargs):
99 """
100 Allows serving one or more panel objects on a single server.
101 The panels argument should be either a Panel object or a function
102 returning a Panel object or a dictionary of these two. If a
103 dictionary is supplied the keys represent the slugs at which
104 each app is served, e.g. `serve({'app': panel1, 'app2': panel2})`
105 will serve apps at /app and /app2 on the server.
106
107 Arguments
108 ---------
109 panel: Viewable, function or {str: Viewable}
110 A Panel object, a function returning a Panel object or a
111 dictionary mapping from the URL slug to either.
112 port: int (optional, default=0)
113 Allows specifying a specific port
114 websocket_origin: str or list(str) (optional)
115 A list of hosts that can connect to the websocket.
116
117 This is typically required when embedding a server app in
118 an external web site.
119
120 If None, "localhost" is used.
121 loop : tornado.ioloop.IOLoop (optional, default=IOLoop.current())
122 The tornado IOLoop to run the Server on
123 show : boolean (optional, default=False)
124 Whether to open the server in a new browser tab on start
125 start : boolean(optional, default=False)
126 Whether to start the Server
127 title: str (optional, default=None)
128 An HTML title for the application
129 verbose: boolean (optional, default=True)
130 Whether to print the address and port
131 kwargs: dict
132 Additional keyword arguments to pass to Server instance
133 """
134 return get_server(panels, port, websocket_origin, loop, show, start,
135 title, verbose, **kwargs)
136
137
138 def get_server(panel, port=0, websocket_origin=None, loop=None,
139 show=False, start=False, title=None, verbose=False, **kwargs):
140 """
141 Returns a Server instance with this panel attached as the root
142 app.
143
144 Arguments
145 ---------
146 panel: Viewable, function or {str: Viewable}
147 A Panel object, a function returning a Panel object or a
148 dictionary mapping from the URL slug to either.
149 port: int (optional, default=0)
150 Allows specifying a specific port
151 websocket_origin: str or list(str) (optional)
152 A list of hosts that can connect to the websocket.
153
154 This is typically required when embedding a server app in
155 an external web site.
156
157 If None, "localhost" is used.
158 loop : tornado.ioloop.IOLoop (optional, default=IOLoop.current())
159 The tornado IOLoop to run the Server on
160 show : boolean (optional, default=False)
161 Whether to open the server in a new browser tab on start
162 start : boolean(optional, default=False)
163 Whether to start the Server
164 title: str (optional, default=None)
165 An HTML title for the application
166 verbose: boolean (optional, default=False)
167 Whether to report the address and port
168 kwargs: dict
169 Additional keyword arguments to pass to Server instance
170
171 Returns
172 -------
173 server : bokeh.server.server.Server
174 Bokeh Server instance running this panel
175 """
176 from tornado.ioloop import IOLoop
177
178 server_id = kwargs.pop('server_id', uuid.uuid4().hex)
179 if isinstance(panel, dict):
180 apps = {slug if slug.startswith('/') else '/'+slug:
181 partial(_eval_panel, p, server_id, title)
182 for slug, p in panel.items()}
183 else:
184 apps = {'/': partial(_eval_panel, panel, server_id, title)}
185
186 opts = dict(kwargs)
187 if loop:
188 loop.make_current()
189 opts['io_loop'] = loop
190 else:
191 opts['io_loop'] = IOLoop.current()
192
193 if 'index' not in opts:
194 opts['index'] = INDEX_HTML
195
196 if websocket_origin:
197 if not isinstance(websocket_origin, list):
198 websocket_origin = [websocket_origin]
199 opts['allow_websocket_origin'] = websocket_origin
200
201 server = Server(apps, port=port, **opts)
202 if verbose:
203 address = server.address or 'localhost'
204 print("Launching server at http://%s:%s" % (address, server.port))
205
206 state._servers[server_id] = (server, panel, [])
207
208 if show:
209 def show_callback():
210 server.show('/')
211 server.io_loop.add_callback(show_callback)
212
213 def sig_exit(*args, **kwargs):
214 server.io_loop.add_callback_from_signal(do_stop)
215
216 def do_stop(*args, **kwargs):
217 server.io_loop.stop()
218
219 try:
220 signal.signal(signal.SIGINT, sig_exit)
221 except ValueError:
222 pass # Can't use signal on a thread
223
224 if start:
225 server.start()
226 try:
227 server.io_loop.start()
228 except RuntimeError:
229 pass
230 return server
231
232
233 class StoppableThread(threading.Thread):
234 """Thread class with a stop() method."""
235
236 def __init__(self, io_loop=None, timeout=1000, **kwargs):
237 from tornado import ioloop
238 super(StoppableThread, self).__init__(**kwargs)
239 self._stop_event = threading.Event()
240 self.io_loop = io_loop
241 self._cb = ioloop.PeriodicCallback(self._check_stopped, timeout)
242 self._cb.start()
243
244 def _check_stopped(self):
245 if self.stopped:
246 self._cb.stop()
247 self.io_loop.stop()
248
249 def run(self):
250 if hasattr(self, '_target'):
251 target, args, kwargs = self._target, self._args, self._kwargs
252 else:
253 target, args, kwargs = self._Thread__target, self._Thread__args, self._Thread__kwargs
254 if not target:
255 return
256 bokeh_server = None
257 try:
258 bokeh_server = target(*args, **kwargs)
259 finally:
260 if isinstance(bokeh_server, Server):
261 bokeh_server.stop()
262 if hasattr(self, '_target'):
263 del self._target, self._args, self._kwargs
264 else:
265 del self._Thread__target, self._Thread__args, self._Thread__kwargs
266
267 def stop(self):
268 self._stop_event.set()
269
270 @property
271 def stopped(self):
272 return self._stop_event.is_set()
273
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/panel/io/server.py b/panel/io/server.py
--- a/panel/io/server.py
+++ b/panel/io/server.py
@@ -41,10 +41,10 @@
from ..template import Template
from ..pane import panel as as_panel
+ if isinstance(panel, FunctionType):
+ panel = panel()
if isinstance(panel, Template):
return panel._modify_doc(server_id, title, doc)
- elif isinstance(panel, FunctionType):
- panel = panel()
return as_panel(panel)._modify_doc(server_id, title, doc)
#---------------------------------------------------------------------
|
{"golden_diff": "diff --git a/panel/io/server.py b/panel/io/server.py\n--- a/panel/io/server.py\n+++ b/panel/io/server.py\n@@ -41,10 +41,10 @@\n from ..template import Template\n from ..pane import panel as as_panel\n \n+ if isinstance(panel, FunctionType):\n+ panel = panel()\n if isinstance(panel, Template):\n return panel._modify_doc(server_id, title, doc)\n- elif isinstance(panel, FunctionType):\n- panel = panel()\n return as_panel(panel)._modify_doc(server_id, title, doc)\n \n #---------------------------------------------------------------------\n", "issue": "Panel.serve does not work for panel.Templates without instantiation\n#### System Info\r\n\r\n- Panel: 0.8.0\r\n- Bokeh: 1.4.0\r\n- Tornado 6.0.3\r\n- Python: 3.7.4\r\n- OS: Windows 8.1\r\n- Browser: Chrome\r\n\r\n#### My Pain\r\n\r\nI'm trying to serve a list of apps and one of them uses the Panel Templating System.\r\n\r\nIf I provide a function that returns a Template to `pn.Serve` the app is not shown.\r\n\r\n#### Additional Info\r\n\r\nIf I provide a function that returns a Column to `pn.Serve` the app is shown\r\nIf I provide an instance of the Template to `pn.Serve` the app is shown\r\n\r\n#### Screenshot\r\n\r\n\r\n\r\n#### Code\r\n\r\n````bash\r\nimport holoviews as hv\r\nimport panel as pn\r\n\r\nTEMPLATE = \"\"\"\r\n<!-- This template is inspired by\r\n- Bokeh Template. See https://panel.pyviz.org/user_guide/Templates.html\r\n-->\r\n{% extends base %}\r\n\r\n<!-- goes in head -->\r\n{% block postamble %}\r\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1, shrink-to-fit=no\">\r\n{% endblock %}\r\n\r\n<!-- goes in body -->\r\n{% block contents %}\r\n\r\n<section id=\"page\">\r\n <header>\r\n {{ embed(roots.header) }}\r\n </header>\r\n <main>\r\n {{ embed(roots.main) }}\r\n </main>\r\n</section>\r\n\r\n{% endblock %}\r\n\"\"\"\r\n\r\n\r\nclass DashboardTemplate(pn.Template):\r\n \"\"\"A Basic App Template\"\"\"\r\n\r\n def __init__(self, **params):\r\n template = TEMPLATE\r\n\r\n self.header = pn.Row()\r\n self.main = pn.Column()\r\n\r\n items = {\r\n \"header\": self.header,\r\n \"main\": self.main,\r\n }\r\n super().__init__(template=template, items=items, **params)\r\n\r\n\r\ndef app_template():\r\n app = DashboardTemplate()\r\n component = pn.Column(\"# App Template\",)\r\n app.main[:] = [component]\r\n return app\r\n\r\n\r\ndef app_column():\r\n return pn.Column(\"# App Column\")\r\n\r\n\r\nAPP_ROUTES = {\r\n \"app_column\": app_column,\r\n \"app_template\": app_template,\r\n \"app_template_instance\": app_template(),\r\n}\r\n\r\npn.serve(APP_ROUTES, port=14033, dev=True)\r\n````\r\n\n", "before_files": [{"content": "\"\"\"\nUtilities for creating bokeh Server instances.\n\"\"\"\nfrom __future__ import absolute_import, division, unicode_literals\n\nimport os\nimport signal\nimport threading\nimport uuid\n\nfrom contextlib import contextmanager\nfrom functools import partial\nfrom types import FunctionType\n\nfrom bokeh.document.events import ModelChangedEvent\nfrom bokeh.server.server import Server\nfrom tornado.websocket import WebSocketHandler\n\nfrom .state import state\n\n\n#---------------------------------------------------------------------\n# Private API\n#---------------------------------------------------------------------\n\nINDEX_HTML = os.path.join(os.path.dirname(__file__), '..', '_templates', \"index.html\")\n\ndef _origin_url(url):\n if url.startswith(\"http\"):\n url = url.split(\"//\")[1]\n return url\n\n\ndef _server_url(url, port):\n if url.startswith(\"http\"):\n return 
'%s:%d%s' % (url.rsplit(':', 1)[0], port, \"/\")\n else:\n return 'http://%s:%d%s' % (url.split(':')[0], port, \"/\")\n\ndef _eval_panel(panel, server_id, title, doc):\n from ..template import Template\n from ..pane import panel as as_panel\n\n if isinstance(panel, Template):\n return panel._modify_doc(server_id, title, doc)\n elif isinstance(panel, FunctionType):\n panel = panel()\n return as_panel(panel)._modify_doc(server_id, title, doc)\n \n#---------------------------------------------------------------------\n# Public API\n#---------------------------------------------------------------------\n\n\n@contextmanager\ndef unlocked():\n \"\"\"\n Context manager which unlocks a Document and dispatches\n ModelChangedEvents triggered in the context body to all sockets\n on current sessions.\n \"\"\"\n curdoc = state.curdoc\n if curdoc is None or curdoc.session_context is None:\n yield\n return\n connections = curdoc.session_context.session._subscribed_connections\n\n hold = curdoc._hold\n if hold:\n old_events = list(curdoc._held_events)\n else:\n old_events = []\n curdoc.hold()\n try:\n yield\n events = []\n for conn in connections:\n socket = conn._socket\n for event in curdoc._held_events:\n if (isinstance(event, ModelChangedEvent) and event not in old_events\n and hasattr(socket, 'write_message')):\n msg = conn.protocol.create('PATCH-DOC', [event])\n WebSocketHandler.write_message(socket, msg.header_json)\n WebSocketHandler.write_message(socket, msg.metadata_json)\n WebSocketHandler.write_message(socket, msg.content_json)\n for header, payload in msg._buffers:\n WebSocketHandler.write_message(socket, header)\n WebSocketHandler.write_message(socket, payload, binary=True)\n elif event not in events:\n events.append(event)\n curdoc._held_events = events\n finally:\n if not hold:\n curdoc.unhold()\n\n\ndef serve(panels, port=0, websocket_origin=None, loop=None, show=True,\n start=True, title=None, verbose=True, **kwargs):\n \"\"\"\n Allows serving one or more panel objects on a single server.\n The panels argument should be either a Panel object or a function\n returning a Panel object or a dictionary of these two. If a \n dictionary is supplied the keys represent the slugs at which\n each app is served, e.g. 
`serve({'app': panel1, 'app2': panel2})`\n will serve apps at /app and /app2 on the server.\n\n Arguments\n ---------\n panel: Viewable, function or {str: Viewable}\n A Panel object, a function returning a Panel object or a\n dictionary mapping from the URL slug to either.\n port: int (optional, default=0)\n Allows specifying a specific port\n websocket_origin: str or list(str) (optional)\n A list of hosts that can connect to the websocket.\n\n This is typically required when embedding a server app in\n an external web site.\n\n If None, \"localhost\" is used.\n loop : tornado.ioloop.IOLoop (optional, default=IOLoop.current())\n The tornado IOLoop to run the Server on\n show : boolean (optional, default=False)\n Whether to open the server in a new browser tab on start\n start : boolean(optional, default=False)\n Whether to start the Server\n title: str (optional, default=None)\n An HTML title for the application\n verbose: boolean (optional, default=True)\n Whether to print the address and port\n kwargs: dict\n Additional keyword arguments to pass to Server instance\n \"\"\"\n return get_server(panels, port, websocket_origin, loop, show, start,\n title, verbose, **kwargs)\n\n\ndef get_server(panel, port=0, websocket_origin=None, loop=None,\n show=False, start=False, title=None, verbose=False, **kwargs):\n \"\"\"\n Returns a Server instance with this panel attached as the root\n app.\n\n Arguments\n ---------\n panel: Viewable, function or {str: Viewable}\n A Panel object, a function returning a Panel object or a\n dictionary mapping from the URL slug to either.\n port: int (optional, default=0)\n Allows specifying a specific port\n websocket_origin: str or list(str) (optional)\n A list of hosts that can connect to the websocket.\n\n This is typically required when embedding a server app in\n an external web site.\n\n If None, \"localhost\" is used.\n loop : tornado.ioloop.IOLoop (optional, default=IOLoop.current())\n The tornado IOLoop to run the Server on\n show : boolean (optional, default=False)\n Whether to open the server in a new browser tab on start\n start : boolean(optional, default=False)\n Whether to start the Server\n title: str (optional, default=None)\n An HTML title for the application\n verbose: boolean (optional, default=False)\n Whether to report the address and port\n kwargs: dict\n Additional keyword arguments to pass to Server instance\n\n Returns\n -------\n server : bokeh.server.server.Server\n Bokeh Server instance running this panel\n \"\"\"\n from tornado.ioloop import IOLoop\n\n server_id = kwargs.pop('server_id', uuid.uuid4().hex)\n if isinstance(panel, dict):\n apps = {slug if slug.startswith('/') else '/'+slug:\n partial(_eval_panel, p, server_id, title)\n for slug, p in panel.items()}\n else:\n apps = {'/': partial(_eval_panel, panel, server_id, title)}\n\n opts = dict(kwargs)\n if loop:\n loop.make_current()\n opts['io_loop'] = loop\n else:\n opts['io_loop'] = IOLoop.current()\n\n if 'index' not in opts:\n opts['index'] = INDEX_HTML\n\n if websocket_origin:\n if not isinstance(websocket_origin, list):\n websocket_origin = [websocket_origin]\n opts['allow_websocket_origin'] = websocket_origin\n\n server = Server(apps, port=port, **opts)\n if verbose:\n address = server.address or 'localhost'\n print(\"Launching server at http://%s:%s\" % (address, server.port))\n\n state._servers[server_id] = (server, panel, [])\n\n if show:\n def show_callback():\n server.show('/')\n server.io_loop.add_callback(show_callback)\n\n def sig_exit(*args, **kwargs):\n 
server.io_loop.add_callback_from_signal(do_stop)\n\n def do_stop(*args, **kwargs):\n server.io_loop.stop()\n\n try:\n signal.signal(signal.SIGINT, sig_exit)\n except ValueError:\n pass # Can't use signal on a thread\n\n if start:\n server.start()\n try:\n server.io_loop.start()\n except RuntimeError:\n pass\n return server\n\n\nclass StoppableThread(threading.Thread):\n \"\"\"Thread class with a stop() method.\"\"\"\n\n def __init__(self, io_loop=None, timeout=1000, **kwargs):\n from tornado import ioloop\n super(StoppableThread, self).__init__(**kwargs)\n self._stop_event = threading.Event()\n self.io_loop = io_loop\n self._cb = ioloop.PeriodicCallback(self._check_stopped, timeout)\n self._cb.start()\n\n def _check_stopped(self):\n if self.stopped:\n self._cb.stop()\n self.io_loop.stop()\n\n def run(self):\n if hasattr(self, '_target'):\n target, args, kwargs = self._target, self._args, self._kwargs\n else:\n target, args, kwargs = self._Thread__target, self._Thread__args, self._Thread__kwargs\n if not target:\n return\n bokeh_server = None\n try:\n bokeh_server = target(*args, **kwargs)\n finally:\n if isinstance(bokeh_server, Server):\n bokeh_server.stop()\n if hasattr(self, '_target'):\n del self._target, self._args, self._kwargs\n else:\n del self._Thread__target, self._Thread__args, self._Thread__kwargs\n\n def stop(self):\n self._stop_event.set()\n\n @property\n def stopped(self):\n return self._stop_event.is_set()\n", "path": "panel/io/server.py"}], "after_files": [{"content": "\"\"\"\nUtilities for creating bokeh Server instances.\n\"\"\"\nfrom __future__ import absolute_import, division, unicode_literals\n\nimport os\nimport signal\nimport threading\nimport uuid\n\nfrom contextlib import contextmanager\nfrom functools import partial\nfrom types import FunctionType\n\nfrom bokeh.document.events import ModelChangedEvent\nfrom bokeh.server.server import Server\nfrom tornado.websocket import WebSocketHandler\n\nfrom .state import state\n\n\n#---------------------------------------------------------------------\n# Private API\n#---------------------------------------------------------------------\n\nINDEX_HTML = os.path.join(os.path.dirname(__file__), '..', '_templates', \"index.html\")\n\ndef _origin_url(url):\n if url.startswith(\"http\"):\n url = url.split(\"//\")[1]\n return url\n\n\ndef _server_url(url, port):\n if url.startswith(\"http\"):\n return '%s:%d%s' % (url.rsplit(':', 1)[0], port, \"/\")\n else:\n return 'http://%s:%d%s' % (url.split(':')[0], port, \"/\")\n\ndef _eval_panel(panel, server_id, title, doc):\n from ..template import Template\n from ..pane import panel as as_panel\n\n if isinstance(panel, FunctionType):\n panel = panel()\n if isinstance(panel, Template):\n return panel._modify_doc(server_id, title, doc)\n return as_panel(panel)._modify_doc(server_id, title, doc)\n \n#---------------------------------------------------------------------\n# Public API\n#---------------------------------------------------------------------\n\n\n@contextmanager\ndef unlocked():\n \"\"\"\n Context manager which unlocks a Document and dispatches\n ModelChangedEvents triggered in the context body to all sockets\n on current sessions.\n \"\"\"\n curdoc = state.curdoc\n if curdoc is None or curdoc.session_context is None:\n yield\n return\n connections = curdoc.session_context.session._subscribed_connections\n\n hold = curdoc._hold\n if hold:\n old_events = list(curdoc._held_events)\n else:\n old_events = []\n curdoc.hold()\n try:\n yield\n events = []\n for conn in connections:\n 
socket = conn._socket\n for event in curdoc._held_events:\n if (isinstance(event, ModelChangedEvent) and event not in old_events\n and hasattr(socket, 'write_message')):\n msg = conn.protocol.create('PATCH-DOC', [event])\n WebSocketHandler.write_message(socket, msg.header_json)\n WebSocketHandler.write_message(socket, msg.metadata_json)\n WebSocketHandler.write_message(socket, msg.content_json)\n for header, payload in msg._buffers:\n WebSocketHandler.write_message(socket, header)\n WebSocketHandler.write_message(socket, payload, binary=True)\n elif event not in events:\n events.append(event)\n curdoc._held_events = events\n finally:\n if not hold:\n curdoc.unhold()\n\n\ndef serve(panels, port=0, websocket_origin=None, loop=None, show=True,\n start=True, title=None, verbose=True, **kwargs):\n \"\"\"\n Allows serving one or more panel objects on a single server.\n The panels argument should be either a Panel object or a function\n returning a Panel object or a dictionary of these two. If a \n dictionary is supplied the keys represent the slugs at which\n each app is served, e.g. `serve({'app': panel1, 'app2': panel2})`\n will serve apps at /app and /app2 on the server.\n\n Arguments\n ---------\n panel: Viewable, function or {str: Viewable}\n A Panel object, a function returning a Panel object or a\n dictionary mapping from the URL slug to either.\n port: int (optional, default=0)\n Allows specifying a specific port\n websocket_origin: str or list(str) (optional)\n A list of hosts that can connect to the websocket.\n\n This is typically required when embedding a server app in\n an external web site.\n\n If None, \"localhost\" is used.\n loop : tornado.ioloop.IOLoop (optional, default=IOLoop.current())\n The tornado IOLoop to run the Server on\n show : boolean (optional, default=False)\n Whether to open the server in a new browser tab on start\n start : boolean(optional, default=False)\n Whether to start the Server\n title: str (optional, default=None)\n An HTML title for the application\n verbose: boolean (optional, default=True)\n Whether to print the address and port\n kwargs: dict\n Additional keyword arguments to pass to Server instance\n \"\"\"\n return get_server(panels, port, websocket_origin, loop, show, start,\n title, verbose, **kwargs)\n\n\ndef get_server(panel, port=0, websocket_origin=None, loop=None,\n show=False, start=False, title=None, verbose=False, **kwargs):\n \"\"\"\n Returns a Server instance with this panel attached as the root\n app.\n\n Arguments\n ---------\n panel: Viewable, function or {str: Viewable}\n A Panel object, a function returning a Panel object or a\n dictionary mapping from the URL slug to either.\n port: int (optional, default=0)\n Allows specifying a specific port\n websocket_origin: str or list(str) (optional)\n A list of hosts that can connect to the websocket.\n\n This is typically required when embedding a server app in\n an external web site.\n\n If None, \"localhost\" is used.\n loop : tornado.ioloop.IOLoop (optional, default=IOLoop.current())\n The tornado IOLoop to run the Server on\n show : boolean (optional, default=False)\n Whether to open the server in a new browser tab on start\n start : boolean(optional, default=False)\n Whether to start the Server\n title: str (optional, default=None)\n An HTML title for the application\n verbose: boolean (optional, default=False)\n Whether to report the address and port\n kwargs: dict\n Additional keyword arguments to pass to Server instance\n\n Returns\n -------\n server : 
bokeh.server.server.Server\n Bokeh Server instance running this panel\n \"\"\"\n from tornado.ioloop import IOLoop\n\n server_id = kwargs.pop('server_id', uuid.uuid4().hex)\n if isinstance(panel, dict):\n apps = {slug if slug.startswith('/') else '/'+slug:\n partial(_eval_panel, p, server_id, title)\n for slug, p in panel.items()}\n else:\n apps = {'/': partial(_eval_panel, panel, server_id, title)}\n\n opts = dict(kwargs)\n if loop:\n loop.make_current()\n opts['io_loop'] = loop\n else:\n opts['io_loop'] = IOLoop.current()\n\n if 'index' not in opts:\n opts['index'] = INDEX_HTML\n\n if websocket_origin:\n if not isinstance(websocket_origin, list):\n websocket_origin = [websocket_origin]\n opts['allow_websocket_origin'] = websocket_origin\n\n server = Server(apps, port=port, **opts)\n if verbose:\n address = server.address or 'localhost'\n print(\"Launching server at http://%s:%s\" % (address, server.port))\n\n state._servers[server_id] = (server, panel, [])\n\n if show:\n def show_callback():\n server.show('/')\n server.io_loop.add_callback(show_callback)\n\n def sig_exit(*args, **kwargs):\n server.io_loop.add_callback_from_signal(do_stop)\n\n def do_stop(*args, **kwargs):\n server.io_loop.stop()\n\n try:\n signal.signal(signal.SIGINT, sig_exit)\n except ValueError:\n pass # Can't use signal on a thread\n\n if start:\n server.start()\n try:\n server.io_loop.start()\n except RuntimeError:\n pass\n return server\n\n\nclass StoppableThread(threading.Thread):\n \"\"\"Thread class with a stop() method.\"\"\"\n\n def __init__(self, io_loop=None, timeout=1000, **kwargs):\n from tornado import ioloop\n super(StoppableThread, self).__init__(**kwargs)\n self._stop_event = threading.Event()\n self.io_loop = io_loop\n self._cb = ioloop.PeriodicCallback(self._check_stopped, timeout)\n self._cb.start()\n\n def _check_stopped(self):\n if self.stopped:\n self._cb.stop()\n self.io_loop.stop()\n\n def run(self):\n if hasattr(self, '_target'):\n target, args, kwargs = self._target, self._args, self._kwargs\n else:\n target, args, kwargs = self._Thread__target, self._Thread__args, self._Thread__kwargs\n if not target:\n return\n bokeh_server = None\n try:\n bokeh_server = target(*args, **kwargs)\n finally:\n if isinstance(bokeh_server, Server):\n bokeh_server.stop()\n if hasattr(self, '_target'):\n del self._target, self._args, self._kwargs\n else:\n del self._Thread__target, self._Thread__args, self._Thread__kwargs\n\n def stop(self):\n self._stop_event.set()\n\n @property\n def stopped(self):\n return self._stop_event.is_set()\n", "path": "panel/io/server.py"}]}
| 3,523 | 132 |
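The golden diff in the record above is purely an ordering change inside `_eval_panel`: a callable is evaluated before the `Template` check, so an app factory that returns a `Template` takes the template branch instead of falling through to `as_panel`. The standalone sketch below mirrors that control flow with stand-in classes; `DummyTemplate` and `DummyPane` are hypothetical names used only for illustration, not Panel API.

```python
from types import FunctionType


class DummyTemplate:
    """Stand-in for panel.Template in this sketch."""

    def modify_doc(self, doc):
        return f"template rendered into {doc}"


class DummyPane:
    """Stand-in for the object panel.pane.panel() would wrap."""

    def __init__(self, obj):
        self.obj = obj

    def modify_doc(self, doc):
        return f"pane({self.obj!r}) rendered into {doc}"


def eval_panel(panel, doc):
    # Evaluate app factories *before* the Template check; this ordering
    # is the essence of the fix -- otherwise a function returning a
    # Template gets wrapped as an ordinary pane.
    if isinstance(panel, FunctionType):
        panel = panel()
    if isinstance(panel, DummyTemplate):
        return panel.modify_doc(doc)
    return DummyPane(panel).modify_doc(doc)


if __name__ == "__main__":
    print(eval_panel(lambda: DummyTemplate(), "doc"))  # template branch
    print(eval_panel("# App Column", "doc"))           # pane branch
```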
gh_patches_debug_6897 | rasdani/github-patches | git_diff | ydataai__ydata-profiling-736 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Negative exponents appear positive
**Describe the bug**
Negative exponents do not appear in a profiling report
**To Reproduce**
```python
import numpy as np
import pandas as pd
from pandas_profiling import ProfileReport
data = { 'some_numbers' : (0.0001, 0.00001, 0.00000001, 0.002, 0.0002, 0.00003) * 100}
df = pd.DataFrame(data)
profile = ProfileReport(df, 'No Negative Exponents')
profile.to_file('NoNegativeExponents.html')
```

Minimum should be 1 x 10<sup>-8</sup> rather than 1 x 10<sup>8</sup>. The issue also arises for Mean and Maximum.
**Version information:**
Python 3.7.7 (default, May 6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pandas_profiling/report/formatters.py`
Content:
```
1 """Formatters are mappings from object(s) to a string."""
2 from typing import Callable, Dict
3
4 import numpy as np
5 from jinja2.utils import escape
6
7
8 def fmt_color(text: str, color: str) -> str:
9 """Format a string in a certain color (`<span>`).
10
11 Args:
12 text: The text to format.
13 color: Any valid CSS color.
14
15 Returns:
16 A `<span>` that contains the colored text.
17 """
18 return f'<span style="color:{color}">{text}</span>'
19
20
21 def fmt_class(text: str, cls: str) -> str:
22 """Format a string in a certain class (`<span>`).
23
24 Args:
25 text: The text to format.
26 cls: The name of the class.
27
28 Returns:
29 A `<span>` with a class added.
30 """
31 return f'<span class="{cls}">{text}</span>'
32
33
34 def fmt_bytesize(num: float, suffix: str = "B") -> str:
35 """Change a number of bytes in a human readable format.
36
37 Args:
38 num: number to format
39 suffix: (Default value = 'B')
40
41 Returns:
42 The value formatted in human readable format (e.g. KiB).
43 """
44 for unit in ["", "Ki", "Mi", "Gi", "Ti", "Pi", "Ei", "Zi"]:
45 if abs(num) < 1024.0:
46 return f"{num:3.1f} {unit}{suffix}"
47 num /= 1024.0
48 return f"{num:.1f} Yi{suffix}"
49
50
51 def fmt_percent(value: float, edge_cases: bool = True) -> str:
52 """Format a ratio as a percentage.
53
54 Args:
55 edge_cases: Check for edge cases?
56 value: The ratio.
57
58 Returns:
59 The percentage with 1 point precision.
60 """
61 if not (1.0 >= value >= 0.0):
62 raise ValueError(f"Value '{value}' should be a ratio between 1 and 0.")
63 if edge_cases and round(value, 3) == 0 and value > 0:
64 return "< 0.1%"
65 if edge_cases and round(value, 3) == 1 and value < 1:
66 return "> 99.9%"
67
68 return f"{value*100:2.1f}%"
69
70
71 def fmt_timespan(num_seconds, detailed=False, max_units=3):
72 # From the `humanfriendly` module (without additional dependency)
73 # https://github.com/xolox/python-humanfriendly/
74 # Author: Peter Odding <[email protected]>
75 # URL: https://humanfriendly.readthedocs.io
76
77 import decimal
78 import math
79 import numbers
80 import re
81 from datetime import datetime, timedelta
82
83 time_units = (
84 dict(
85 divider=1e-9,
86 singular="nanosecond",
87 plural="nanoseconds",
88 abbreviations=["ns"],
89 ),
90 dict(
91 divider=1e-6,
92 singular="microsecond",
93 plural="microseconds",
94 abbreviations=["us"],
95 ),
96 dict(
97 divider=1e-3,
98 singular="millisecond",
99 plural="milliseconds",
100 abbreviations=["ms"],
101 ),
102 dict(
103 divider=1,
104 singular="second",
105 plural="seconds",
106 abbreviations=["s", "sec", "secs"],
107 ),
108 dict(
109 divider=60,
110 singular="minute",
111 plural="minutes",
112 abbreviations=["m", "min", "mins"],
113 ),
114 dict(divider=60 * 60, singular="hour", plural="hours", abbreviations=["h"]),
115 dict(divider=60 * 60 * 24, singular="day", plural="days", abbreviations=["d"]),
116 dict(
117 divider=60 * 60 * 24 * 7,
118 singular="week",
119 plural="weeks",
120 abbreviations=["w"],
121 ),
122 dict(
123 divider=60 * 60 * 24 * 7 * 52,
124 singular="year",
125 plural="years",
126 abbreviations=["y"],
127 ),
128 )
129
130 def round_number(count, keep_width=False):
131 text = "%.2f" % float(count)
132 if not keep_width:
133 text = re.sub("0+$", "", text)
134 text = re.sub(r"\.$", "", text)
135 return text
136
137 def coerce_seconds(value):
138 if isinstance(value, timedelta):
139 return value.total_seconds()
140 if not isinstance(value, numbers.Number):
141 msg = "Failed to coerce value to number of seconds! (%r)"
142 raise ValueError(format(msg, value))
143 return value
144
145 def concatenate(items):
146 items = list(items)
147 if len(items) > 1:
148 return ", ".join(items[:-1]) + " and " + items[-1]
149 elif items:
150 return items[0]
151 else:
152 return ""
153
154 def pluralize(count, singular, plural=None):
155 if not plural:
156 plural = singular + "s"
157 return "{} {}".format(
158 count, singular if math.floor(float(count)) == 1 else plural
159 )
160
161 num_seconds = coerce_seconds(num_seconds)
162 if num_seconds < 60 and not detailed:
163 # Fast path.
164 return pluralize(round_number(num_seconds), "second")
165 else:
166 # Slow path.
167 result = []
168 num_seconds = decimal.Decimal(str(num_seconds))
169 relevant_units = list(reversed(time_units[0 if detailed else 3 :]))
170 for unit in relevant_units:
171 # Extract the unit count from the remaining time.
172 divider = decimal.Decimal(str(unit["divider"]))
173 count = num_seconds / divider
174 num_seconds %= divider
175 # Round the unit count appropriately.
176 if unit != relevant_units[-1]:
177 # Integer rounding for all but the smallest unit.
178 count = int(count)
179 else:
180 # Floating point rounding for the smallest unit.
181 count = round_number(count)
182 # Only include relevant units in the result.
183 if count not in (0, "0"):
184 result.append(pluralize(count, unit["singular"], unit["plural"]))
185 if len(result) == 1:
186 # A single count/unit combination.
187 return result[0]
188 else:
189 if not detailed:
190 # Remove `insignificant' data from the formatted timespan.
191 result = result[:max_units]
192 # Format the timespan in a readable way.
193 return concatenate(result)
194
195
196 def fmt_numeric(value: float, precision=10) -> str:
197 """Format any numeric value.
198
199 Args:
200 value: The numeric value to format.
201 precision: The numeric precision
202
203 Returns:
204 The numeric value with the given precision.
205 """
206 fmtted = f"{{:.{precision}g}}".format(value)
207 for v in ["e+", "e-"]:
208 if v in fmtted:
209 fmtted = fmtted.replace(v, " × 10<sup>") + "</sup>"
210 fmtted = fmtted.replace("<sup>0", "<sup>")
211
212 return fmtted
213
214
215 def fmt_number(value: int) -> str:
216 """Format any numeric value.
217
218 Args:
219 value: The numeric value to format.
220
221 Returns:
222 The numeric value with the given precision.
223 """
224 return f"{value:n}"
225
226
227 def fmt_array(value: np.ndarray, threshold=np.nan) -> str:
228 """Format numpy arrays.
229
230 Args:
231 value: Array to format.
232 threshold: Threshold at which to show ellipsis
233
234 Returns:
235 The string representation of the numpy array.
236 """
237 with np.printoptions(threshold=3, edgeitems=threshold):
238 return_value = str(value)
239
240 return return_value
241
242
243 def fmt(value) -> str:
244 """Format any value.
245
246 Args:
247 value: The value to format.
248
249 Returns:
250 The numeric formatting if the value is float or int, the string formatting otherwise.
251 """
252 if type(value) in [float, int]:
253 return fmt_numeric(value)
254 else:
255 return str(escape(value))
256
257
258 def help(title, url=None) -> str:
259 """Creat help badge
260
261 Args:
262 title: help text
263 url: url to open in new tab (optional)
264
265 Returns:
266 HTML formatted help badge
267 """
268 if url is not None:
269 return f'<a title="{title}" href="{url}" target="_blank"><span class="badge pull-right" style="color:#fff;background-color:#337ab7;" title="{title}">?</span></a>'
270 else:
271 return f'<span class="badge pull-right" style="color:#fff;background-color:#337ab7;" title="{title}">?</span>'
272
273
274 def get_fmt_mapping() -> Dict[str, Callable]:
275 """Get a mapping from formatter name to the function
276
277 Returns: formatter mapping
278 """
279 return {
280 "fmt_percent": fmt_percent,
281 "fmt_bytesize": fmt_bytesize,
282 "fmt_timespan": fmt_timespan,
283 "fmt_numeric": fmt_numeric,
284 "fmt_number": fmt_number,
285 "fmt_array": fmt_array,
286 "fmt": fmt,
287 "raw": lambda x: x,
288 }
289
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/pandas_profiling/report/formatters.py b/src/pandas_profiling/report/formatters.py
--- a/src/pandas_profiling/report/formatters.py
+++ b/src/pandas_profiling/report/formatters.py
@@ -206,8 +206,10 @@
fmtted = f"{{:.{precision}g}}".format(value)
for v in ["e+", "e-"]:
if v in fmtted:
+ sign = "-" if v in "e-" else ""
fmtted = fmtted.replace(v, " × 10<sup>") + "</sup>"
fmtted = fmtted.replace("<sup>0", "<sup>")
+ fmtted = fmtted.replace("<sup>", f"<sup>{sign}")
return fmtted
|
{"golden_diff": "diff --git a/src/pandas_profiling/report/formatters.py b/src/pandas_profiling/report/formatters.py\n--- a/src/pandas_profiling/report/formatters.py\n+++ b/src/pandas_profiling/report/formatters.py\n@@ -206,8 +206,10 @@\n fmtted = f\"{{:.{precision}g}}\".format(value)\n for v in [\"e+\", \"e-\"]:\n if v in fmtted:\n+ sign = \"-\" if v in \"e-\" else \"\"\n fmtted = fmtted.replace(v, \" \u00d7 10<sup>\") + \"</sup>\"\n fmtted = fmtted.replace(\"<sup>0\", \"<sup>\")\n+ fmtted = fmtted.replace(\"<sup>\", f\"<sup>{sign}\")\n \n return fmtted\n", "issue": "Negative exponents appear positive\n**Describe the bug**\r\n\r\nNegative exponents do not appear in a profiling report\r\n\r\n**To Reproduce**\r\n\r\n```python\r\nimport numpy as np\r\nimport pandas as pd\r\nfrom pandas_profiling import ProfileReport\r\n\r\ndata = { 'some_numbers' : (0.0001, 0.00001, 0.00000001, 0.002, 0.0002, 0.00003) * 100}\r\ndf = pd.DataFrame(data)\r\nprofile = ProfileReport(df, 'No Negative Exponents')\r\nprofile.to_file('NoNegativeExponents.html')\r\n```\r\n\r\n\r\n\r\nMinimum should be 1 x 10<sup>-8</sup> rather than 1 x 10<sup>8</sup>. The issue also arises for Mean and Maximum.\r\n\r\n**Version information:**\r\n\r\nPython 3.7.7 (default, May 6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32\r\n\r\n\r\n\r\n**Additional context**\r\n\r\n<!--\r\nAdd any other context about the problem here.\r\n-->\r\n\n", "before_files": [{"content": "\"\"\"Formatters are mappings from object(s) to a string.\"\"\"\nfrom typing import Callable, Dict\n\nimport numpy as np\nfrom jinja2.utils import escape\n\n\ndef fmt_color(text: str, color: str) -> str:\n \"\"\"Format a string in a certain color (`<span>`).\n\n Args:\n text: The text to format.\n color: Any valid CSS color.\n\n Returns:\n A `<span>` that contains the colored text.\n \"\"\"\n return f'<span style=\"color:{color}\">{text}</span>'\n\n\ndef fmt_class(text: str, cls: str) -> str:\n \"\"\"Format a string in a certain class (`<span>`).\n\n Args:\n text: The text to format.\n cls: The name of the class.\n\n Returns:\n A `<span>` with a class added.\n \"\"\"\n return f'<span class=\"{cls}\">{text}</span>'\n\n\ndef fmt_bytesize(num: float, suffix: str = \"B\") -> str:\n \"\"\"Change a number of bytes in a human readable format.\n\n Args:\n num: number to format\n suffix: (Default value = 'B')\n\n Returns:\n The value formatted in human readable format (e.g. 
KiB).\n \"\"\"\n for unit in [\"\", \"Ki\", \"Mi\", \"Gi\", \"Ti\", \"Pi\", \"Ei\", \"Zi\"]:\n if abs(num) < 1024.0:\n return f\"{num:3.1f} {unit}{suffix}\"\n num /= 1024.0\n return f\"{num:.1f} Yi{suffix}\"\n\n\ndef fmt_percent(value: float, edge_cases: bool = True) -> str:\n \"\"\"Format a ratio as a percentage.\n\n Args:\n edge_cases: Check for edge cases?\n value: The ratio.\n\n Returns:\n The percentage with 1 point precision.\n \"\"\"\n if not (1.0 >= value >= 0.0):\n raise ValueError(f\"Value '{value}' should be a ratio between 1 and 0.\")\n if edge_cases and round(value, 3) == 0 and value > 0:\n return \"< 0.1%\"\n if edge_cases and round(value, 3) == 1 and value < 1:\n return \"> 99.9%\"\n\n return f\"{value*100:2.1f}%\"\n\n\ndef fmt_timespan(num_seconds, detailed=False, max_units=3):\n # From the `humanfriendly` module (without additional dependency)\n # https://github.com/xolox/python-humanfriendly/\n # Author: Peter Odding <[email protected]>\n # URL: https://humanfriendly.readthedocs.io\n\n import decimal\n import math\n import numbers\n import re\n from datetime import datetime, timedelta\n\n time_units = (\n dict(\n divider=1e-9,\n singular=\"nanosecond\",\n plural=\"nanoseconds\",\n abbreviations=[\"ns\"],\n ),\n dict(\n divider=1e-6,\n singular=\"microsecond\",\n plural=\"microseconds\",\n abbreviations=[\"us\"],\n ),\n dict(\n divider=1e-3,\n singular=\"millisecond\",\n plural=\"milliseconds\",\n abbreviations=[\"ms\"],\n ),\n dict(\n divider=1,\n singular=\"second\",\n plural=\"seconds\",\n abbreviations=[\"s\", \"sec\", \"secs\"],\n ),\n dict(\n divider=60,\n singular=\"minute\",\n plural=\"minutes\",\n abbreviations=[\"m\", \"min\", \"mins\"],\n ),\n dict(divider=60 * 60, singular=\"hour\", plural=\"hours\", abbreviations=[\"h\"]),\n dict(divider=60 * 60 * 24, singular=\"day\", plural=\"days\", abbreviations=[\"d\"]),\n dict(\n divider=60 * 60 * 24 * 7,\n singular=\"week\",\n plural=\"weeks\",\n abbreviations=[\"w\"],\n ),\n dict(\n divider=60 * 60 * 24 * 7 * 52,\n singular=\"year\",\n plural=\"years\",\n abbreviations=[\"y\"],\n ),\n )\n\n def round_number(count, keep_width=False):\n text = \"%.2f\" % float(count)\n if not keep_width:\n text = re.sub(\"0+$\", \"\", text)\n text = re.sub(r\"\\.$\", \"\", text)\n return text\n\n def coerce_seconds(value):\n if isinstance(value, timedelta):\n return value.total_seconds()\n if not isinstance(value, numbers.Number):\n msg = \"Failed to coerce value to number of seconds! 
(%r)\"\n raise ValueError(format(msg, value))\n return value\n\n def concatenate(items):\n items = list(items)\n if len(items) > 1:\n return \", \".join(items[:-1]) + \" and \" + items[-1]\n elif items:\n return items[0]\n else:\n return \"\"\n\n def pluralize(count, singular, plural=None):\n if not plural:\n plural = singular + \"s\"\n return \"{} {}\".format(\n count, singular if math.floor(float(count)) == 1 else plural\n )\n\n num_seconds = coerce_seconds(num_seconds)\n if num_seconds < 60 and not detailed:\n # Fast path.\n return pluralize(round_number(num_seconds), \"second\")\n else:\n # Slow path.\n result = []\n num_seconds = decimal.Decimal(str(num_seconds))\n relevant_units = list(reversed(time_units[0 if detailed else 3 :]))\n for unit in relevant_units:\n # Extract the unit count from the remaining time.\n divider = decimal.Decimal(str(unit[\"divider\"]))\n count = num_seconds / divider\n num_seconds %= divider\n # Round the unit count appropriately.\n if unit != relevant_units[-1]:\n # Integer rounding for all but the smallest unit.\n count = int(count)\n else:\n # Floating point rounding for the smallest unit.\n count = round_number(count)\n # Only include relevant units in the result.\n if count not in (0, \"0\"):\n result.append(pluralize(count, unit[\"singular\"], unit[\"plural\"]))\n if len(result) == 1:\n # A single count/unit combination.\n return result[0]\n else:\n if not detailed:\n # Remove `insignificant' data from the formatted timespan.\n result = result[:max_units]\n # Format the timespan in a readable way.\n return concatenate(result)\n\n\ndef fmt_numeric(value: float, precision=10) -> str:\n \"\"\"Format any numeric value.\n\n Args:\n value: The numeric value to format.\n precision: The numeric precision\n\n Returns:\n The numeric value with the given precision.\n \"\"\"\n fmtted = f\"{{:.{precision}g}}\".format(value)\n for v in [\"e+\", \"e-\"]:\n if v in fmtted:\n fmtted = fmtted.replace(v, \" \u00d7 10<sup>\") + \"</sup>\"\n fmtted = fmtted.replace(\"<sup>0\", \"<sup>\")\n\n return fmtted\n\n\ndef fmt_number(value: int) -> str:\n \"\"\"Format any numeric value.\n\n Args:\n value: The numeric value to format.\n\n Returns:\n The numeric value with the given precision.\n \"\"\"\n return f\"{value:n}\"\n\n\ndef fmt_array(value: np.ndarray, threshold=np.nan) -> str:\n \"\"\"Format numpy arrays.\n\n Args:\n value: Array to format.\n threshold: Threshold at which to show ellipsis\n\n Returns:\n The string representation of the numpy array.\n \"\"\"\n with np.printoptions(threshold=3, edgeitems=threshold):\n return_value = str(value)\n\n return return_value\n\n\ndef fmt(value) -> str:\n \"\"\"Format any value.\n\n Args:\n value: The value to format.\n\n Returns:\n The numeric formatting if the value is float or int, the string formatting otherwise.\n \"\"\"\n if type(value) in [float, int]:\n return fmt_numeric(value)\n else:\n return str(escape(value))\n\n\ndef help(title, url=None) -> str:\n \"\"\"Creat help badge\n\n Args:\n title: help text\n url: url to open in new tab (optional)\n\n Returns:\n HTML formatted help badge\n \"\"\"\n if url is not None:\n return f'<a title=\"{title}\" href=\"{url}\" target=\"_blank\"><span class=\"badge pull-right\" style=\"color:#fff;background-color:#337ab7;\" title=\"{title}\">?</span></a>'\n else:\n return f'<span class=\"badge pull-right\" style=\"color:#fff;background-color:#337ab7;\" title=\"{title}\">?</span>'\n\n\ndef get_fmt_mapping() -> Dict[str, Callable]:\n \"\"\"Get a mapping from formatter name to the 
function\n\n Returns: formatter mapping\n \"\"\"\n return {\n \"fmt_percent\": fmt_percent,\n \"fmt_bytesize\": fmt_bytesize,\n \"fmt_timespan\": fmt_timespan,\n \"fmt_numeric\": fmt_numeric,\n \"fmt_number\": fmt_number,\n \"fmt_array\": fmt_array,\n \"fmt\": fmt,\n \"raw\": lambda x: x,\n }\n", "path": "src/pandas_profiling/report/formatters.py"}], "after_files": [{"content": "\"\"\"Formatters are mappings from object(s) to a string.\"\"\"\nfrom typing import Callable, Dict\n\nimport numpy as np\nfrom jinja2.utils import escape\n\n\ndef fmt_color(text: str, color: str) -> str:\n \"\"\"Format a string in a certain color (`<span>`).\n\n Args:\n text: The text to format.\n color: Any valid CSS color.\n\n Returns:\n A `<span>` that contains the colored text.\n \"\"\"\n return f'<span style=\"color:{color}\">{text}</span>'\n\n\ndef fmt_class(text: str, cls: str) -> str:\n \"\"\"Format a string in a certain class (`<span>`).\n\n Args:\n text: The text to format.\n cls: The name of the class.\n\n Returns:\n A `<span>` with a class added.\n \"\"\"\n return f'<span class=\"{cls}\">{text}</span>'\n\n\ndef fmt_bytesize(num: float, suffix: str = \"B\") -> str:\n \"\"\"Change a number of bytes in a human readable format.\n\n Args:\n num: number to format\n suffix: (Default value = 'B')\n\n Returns:\n The value formatted in human readable format (e.g. KiB).\n \"\"\"\n for unit in [\"\", \"Ki\", \"Mi\", \"Gi\", \"Ti\", \"Pi\", \"Ei\", \"Zi\"]:\n if abs(num) < 1024.0:\n return f\"{num:3.1f} {unit}{suffix}\"\n num /= 1024.0\n return f\"{num:.1f} Yi{suffix}\"\n\n\ndef fmt_percent(value: float, edge_cases: bool = True) -> str:\n \"\"\"Format a ratio as a percentage.\n\n Args:\n edge_cases: Check for edge cases?\n value: The ratio.\n\n Returns:\n The percentage with 1 point precision.\n \"\"\"\n if not (1.0 >= value >= 0.0):\n raise ValueError(f\"Value '{value}' should be a ratio between 1 and 0.\")\n if edge_cases and round(value, 3) == 0 and value > 0:\n return \"< 0.1%\"\n if edge_cases and round(value, 3) == 1 and value < 1:\n return \"> 99.9%\"\n\n return f\"{value*100:2.1f}%\"\n\n\ndef fmt_timespan(num_seconds, detailed=False, max_units=3):\n # From the `humanfriendly` module (without additional dependency)\n # https://github.com/xolox/python-humanfriendly/\n # Author: Peter Odding <[email protected]>\n # URL: https://humanfriendly.readthedocs.io\n\n import decimal\n import math\n import numbers\n import re\n from datetime import datetime, timedelta\n\n time_units = (\n dict(\n divider=1e-9,\n singular=\"nanosecond\",\n plural=\"nanoseconds\",\n abbreviations=[\"ns\"],\n ),\n dict(\n divider=1e-6,\n singular=\"microsecond\",\n plural=\"microseconds\",\n abbreviations=[\"us\"],\n ),\n dict(\n divider=1e-3,\n singular=\"millisecond\",\n plural=\"milliseconds\",\n abbreviations=[\"ms\"],\n ),\n dict(\n divider=1,\n singular=\"second\",\n plural=\"seconds\",\n abbreviations=[\"s\", \"sec\", \"secs\"],\n ),\n dict(\n divider=60,\n singular=\"minute\",\n plural=\"minutes\",\n abbreviations=[\"m\", \"min\", \"mins\"],\n ),\n dict(divider=60 * 60, singular=\"hour\", plural=\"hours\", abbreviations=[\"h\"]),\n dict(divider=60 * 60 * 24, singular=\"day\", plural=\"days\", abbreviations=[\"d\"]),\n dict(\n divider=60 * 60 * 24 * 7,\n singular=\"week\",\n plural=\"weeks\",\n abbreviations=[\"w\"],\n ),\n dict(\n divider=60 * 60 * 24 * 7 * 52,\n singular=\"year\",\n plural=\"years\",\n abbreviations=[\"y\"],\n ),\n )\n\n def round_number(count, keep_width=False):\n text = \"%.2f\" % float(count)\n if not 
keep_width:\n text = re.sub(\"0+$\", \"\", text)\n text = re.sub(r\"\\.$\", \"\", text)\n return text\n\n def coerce_seconds(value):\n if isinstance(value, timedelta):\n return value.total_seconds()\n if not isinstance(value, numbers.Number):\n msg = \"Failed to coerce value to number of seconds! (%r)\"\n raise ValueError(format(msg, value))\n return value\n\n def concatenate(items):\n items = list(items)\n if len(items) > 1:\n return \", \".join(items[:-1]) + \" and \" + items[-1]\n elif items:\n return items[0]\n else:\n return \"\"\n\n def pluralize(count, singular, plural=None):\n if not plural:\n plural = singular + \"s\"\n return \"{} {}\".format(\n count, singular if math.floor(float(count)) == 1 else plural\n )\n\n num_seconds = coerce_seconds(num_seconds)\n if num_seconds < 60 and not detailed:\n # Fast path.\n return pluralize(round_number(num_seconds), \"second\")\n else:\n # Slow path.\n result = []\n num_seconds = decimal.Decimal(str(num_seconds))\n relevant_units = list(reversed(time_units[0 if detailed else 3 :]))\n for unit in relevant_units:\n # Extract the unit count from the remaining time.\n divider = decimal.Decimal(str(unit[\"divider\"]))\n count = num_seconds / divider\n num_seconds %= divider\n # Round the unit count appropriately.\n if unit != relevant_units[-1]:\n # Integer rounding for all but the smallest unit.\n count = int(count)\n else:\n # Floating point rounding for the smallest unit.\n count = round_number(count)\n # Only include relevant units in the result.\n if count not in (0, \"0\"):\n result.append(pluralize(count, unit[\"singular\"], unit[\"plural\"]))\n if len(result) == 1:\n # A single count/unit combination.\n return result[0]\n else:\n if not detailed:\n # Remove `insignificant' data from the formatted timespan.\n result = result[:max_units]\n # Format the timespan in a readable way.\n return concatenate(result)\n\n\ndef fmt_numeric(value: float, precision=10) -> str:\n \"\"\"Format any numeric value.\n\n Args:\n value: The numeric value to format.\n precision: The numeric precision\n\n Returns:\n The numeric value with the given precision.\n \"\"\"\n fmtted = f\"{{:.{precision}g}}\".format(value)\n for v in [\"e+\", \"e-\"]:\n if v in fmtted:\n sign = \"-\" if v in \"e-\" else \"\"\n fmtted = fmtted.replace(v, \" \u00d7 10<sup>\") + \"</sup>\"\n fmtted = fmtted.replace(\"<sup>0\", \"<sup>\")\n fmtted = fmtted.replace(\"<sup>\", f\"<sup>{sign}\")\n\n return fmtted\n\n\ndef fmt_number(value: int) -> str:\n \"\"\"Format any numeric value.\n\n Args:\n value: The numeric value to format.\n\n Returns:\n The numeric value with the given precision.\n \"\"\"\n return f\"{value:n}\"\n\n\ndef fmt_array(value: np.ndarray, threshold=np.nan) -> str:\n \"\"\"Format numpy arrays.\n\n Args:\n value: Array to format.\n threshold: Threshold at which to show ellipsis\n\n Returns:\n The string representation of the numpy array.\n \"\"\"\n with np.printoptions(threshold=3, edgeitems=threshold):\n return_value = str(value)\n\n return return_value\n\n\ndef fmt(value) -> str:\n \"\"\"Format any value.\n\n Args:\n value: The value to format.\n\n Returns:\n The numeric formatting if the value is float or int, the string formatting otherwise.\n \"\"\"\n if type(value) in [float, int]:\n return fmt_numeric(value)\n else:\n return str(escape(value))\n\n\ndef help(title, url=None) -> str:\n \"\"\"Creat help badge\n\n Args:\n title: help text\n url: url to open in new tab (optional)\n\n Returns:\n HTML formatted help badge\n \"\"\"\n if url is not None:\n return f'<a 
title=\"{title}\" href=\"{url}\" target=\"_blank\"><span class=\"badge pull-right\" style=\"color:#fff;background-color:#337ab7;\" title=\"{title}\">?</span></a>'\n else:\n return f'<span class=\"badge pull-right\" style=\"color:#fff;background-color:#337ab7;\" title=\"{title}\">?</span>'\n\n\ndef get_fmt_mapping() -> Dict[str, Callable]:\n \"\"\"Get a mapping from formatter name to the function\n\n Returns: formatter mapping\n \"\"\"\n return {\n \"fmt_percent\": fmt_percent,\n \"fmt_bytesize\": fmt_bytesize,\n \"fmt_timespan\": fmt_timespan,\n \"fmt_numeric\": fmt_numeric,\n \"fmt_number\": fmt_number,\n \"fmt_array\": fmt_array,\n \"fmt\": fmt,\n \"raw\": lambda x: x,\n }\n", "path": "src/pandas_profiling/report/formatters.py"}]}
| 3,382 | 170 |
gh_patches_debug_9870
|
rasdani/github-patches
|
git_diff
|
scikit-image__scikit-image-7266
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Faster map_array function
### Description:
In another project (BiAPoL/napari-clusters-plotter#283) we found that the map_array function is quite slow for large arrays. We came up with a faster method that might be worth considering as a replacement for map_array. Here is a script comparing the old and new methods:
```python
import numpy as np
from skimage.util import map_array
from time import perf_counter
shape = (1,1024,1024)
total = shape[0]*shape[1]*shape[2]
NUM_LABELS=100000
input_data = np.random.randint(NUM_LABELS,size=total).reshape(shape).astype("int64")
from_values = np.arange(NUM_LABELS)
to_values = np.copy(from_values)
np.random.shuffle(to_values)
def generate_cluster_image(label_image, label_list, predictionlist):
"""
Generates a clusters image from a label image and a list of cluster predictions,
where each label value corresponds to the cluster identity.
It is assumed that len(predictionlist) == max(label_image)
Parameters
----------
label_image: ndarray or dask array
Label image used for cluster predictions
predictionlist: Array-like
An array containing cluster identities for each label
Returns
----------
ndarray: The clusters image as a numpy array.
"""
predictionlist_new = np.array(predictionlist) + 1
plist = np.zeros(np.max(label_image) + 1, dtype=np.uint32)
plist[label_list] = predictionlist_new
predictionlist_new = plist
return predictionlist_new[label_image]
def generate_cluster_image_old(label_image, label_list, predictionlist):
"""
Generates a clusters image from a label image and a list of cluster predictions,
where each label value corresponds to the cluster identity.
It is assumed that len(predictionlist) == max(label_image)
Parameters
----------
label_image: ndarray or dask array
Label image used for cluster predictions
predictionlist: Array-like
An array containing cluster identities for each label
Returns
----------
ndarray: The clusters image as a numpy array.
"""
from skimage.util import map_array
# reforming the prediction list, this is done to account
# for cluster labels that start at 0, conveniently hdbscan
# labelling starts at -1 for noise, removing these from the labels
predictionlist_new = np.array(predictionlist) + 1
label_list = np.array(label_list)
return map_array(np.asarray(label_image), label_list, predictionlist_new).astype(
"uint32"
)
t1 = perf_counter()
res_new = generate_cluster_image(input_data, from_values, to_values)
t_new = perf_counter() - t1
print(t_new)
t1 = perf_counter()
res_old = generate_cluster_image_old(input_data, from_values, to_values)
t_old = perf_counter() - t1
print(t_old)
print(f"Speedup {t_old/t_new}")
print(np.array_equal(res_new,res_old))
```
The new method is 15x-30x faster than the old one.
If you guys think that this is worth it, I can come up with a PR.
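For readers skimming the benchmark above, the entire speedup comes down to building a dense lookup table once and remapping with NumPy fancy indexing. Here is a minimal sketch of just that idea (the function name is mine, and it assumes non-negative integer labels small enough to hold a dense table in memory):
```python
import numpy as np

def remap_with_lut(label_image, from_values, to_values):
    # Dense lookup table: index i holds the output value for label i.
    to_values = np.asarray(to_values)
    lut = np.zeros(int(label_image.max()) + 1, dtype=to_values.dtype)
    lut[np.asarray(from_values)] = to_values
    # One vectorized pass remaps the whole image.
    return lut[label_image]
```
Labels that never appear in `from_values` come out as 0, which matches how `map_array` treats unmapped values.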
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `skimage/util/_map_array.py`
Content:
```
1 import numpy as np
2
3
4 def map_array(input_arr, input_vals, output_vals, out=None):
5 """Map values from input array from input_vals to output_vals.
6
7 Parameters
8 ----------
9 input_arr : array of int, shape (M[, ...])
10 The input label image.
11 input_vals : array of int, shape (K,)
12 The values to map from.
13 output_vals : array, shape (K,)
14 The values to map to.
15 out: array, same shape as `input_arr`
16 The output array. Will be created if not provided. It should
17 have the same dtype as `output_vals`.
18
19 Returns
20 -------
21 out : array, same shape as `input_arr`
22 The array of mapped values.
23 """
24 from ._remap import _map_array
25
26 if not np.issubdtype(input_arr.dtype, np.integer):
27 raise TypeError('The dtype of an array to be remapped should be integer.')
28 # We ravel the input array for simplicity of iteration in Cython:
29 orig_shape = input_arr.shape
30 # NumPy docs for `np.ravel()` says:
31 # "When a view is desired in as many cases as possible,
32 # arr.reshape(-1) may be preferable."
33 input_arr = input_arr.reshape(-1)
34 if out is None:
35 out = np.empty(orig_shape, dtype=output_vals.dtype)
36 elif out.shape != orig_shape:
37 raise ValueError(
38 'If out array is provided, it should have the same shape as '
39 f'the input array. Input array has shape {orig_shape}, provided '
40 f'output array has shape {out.shape}.'
41 )
42 try:
43 out_view = out.view()
44 out_view.shape = (-1,) # no-copy reshape/ravel
45 except AttributeError: # if out strides are not compatible with 0-copy
46 raise ValueError(
47 'If out array is provided, it should be either contiguous '
48 f'or 1-dimensional. Got array with shape {out.shape} and '
49 f'strides {out.strides}.'
50 )
51
52 # ensure all arrays have matching types before sending to Cython
53 input_vals = input_vals.astype(input_arr.dtype, copy=False)
54 output_vals = output_vals.astype(out.dtype, copy=False)
55 _map_array(input_arr, out_view, input_vals, output_vals)
56 return out
57
58
59 class ArrayMap:
60 """Class designed to mimic mapping by NumPy array indexing.
61
62 This class is designed to replicate the use of NumPy arrays for mapping
63 values with indexing:
64
65 >>> values = np.array([0.25, 0.5, 1.0])
66 >>> indices = np.array([[0, 0, 1], [2, 2, 1]])
67 >>> values[indices]
68 array([[0.25, 0.25, 0.5 ],
69 [1. , 1. , 0.5 ]])
70
71 The issue with this indexing is that you need a very large ``values``
72 array if the values in the ``indices`` array are large.
73
74 >>> values = np.array([0.25, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0])
75 >>> indices = np.array([[0, 0, 10], [0, 10, 10]])
76 >>> values[indices]
77 array([[0.25, 0.25, 1. ],
78 [0.25, 1. , 1. ]])
79
80 Using this class, the approach is similar, but there is no need to
81 create a large values array:
82
83 >>> in_indices = np.array([0, 10])
84 >>> out_values = np.array([0.25, 1.0])
85 >>> values = ArrayMap(in_indices, out_values)
86 >>> values
87 ArrayMap(array([ 0, 10]), array([0.25, 1. ]))
88 >>> print(values)
89 ArrayMap:
90 0 → 0.25
91 10 → 1.0
92 >>> indices = np.array([[0, 0, 10], [0, 10, 10]])
93 >>> values[indices]
94 array([[0.25, 0.25, 1. ],
95 [0.25, 1. , 1. ]])
96
97 Parameters
98 ----------
99 in_values : array of int, shape (K,)
100 The source values from which to map.
101 out_values : array, shape (K,)
102 The destination values from which to map.
103 """
104
105 def __init__(self, in_values, out_values):
106 self.in_values = in_values
107 self.out_values = out_values
108 self._max_str_lines = 4
109 self._array = None
110
111 def __len__(self):
112 """Return one more than the maximum label value being remapped."""
113 return np.max(self.in_values) + 1
114
115 def __array__(self, dtype=None):
116 """Return an array that behaves like the arraymap when indexed.
117
118 This array can be very large: it is the size of the largest value
119 in the ``in_vals`` array, plus one.
120 """
121 if dtype is None:
122 dtype = self.out_values.dtype
123 output = np.zeros(np.max(self.in_values) + 1, dtype=dtype)
124 output[self.in_values] = self.out_values
125 return output
126
127 @property
128 def dtype(self):
129 return self.out_values.dtype
130
131 def __repr__(self):
132 return f'ArrayMap({repr(self.in_values)}, {repr(self.out_values)})'
133
134 def __str__(self):
135 if len(self.in_values) <= self._max_str_lines + 1:
136 rows = range(len(self.in_values))
137 string = '\n'.join(
138 ['ArrayMap:']
139 + [f' {self.in_values[i]} → {self.out_values[i]}' for i in rows]
140 )
141 else:
142 rows0 = list(range(0, self._max_str_lines // 2))
143 rows1 = list(range(-self._max_str_lines // 2, 0))
144 string = '\n'.join(
145 ['ArrayMap:']
146 + [f' {self.in_values[i]} → {self.out_values[i]}' for i in rows0]
147 + [' ...']
148 + [f' {self.in_values[i]} → {self.out_values[i]}' for i in rows1]
149 )
150 return string
151
152 def __call__(self, arr):
153 return self.__getitem__(arr)
154
155 def __getitem__(self, index):
156 scalar = np.isscalar(index)
157 if scalar:
158 index = np.array([index])
159 elif isinstance(index, slice):
160 start = index.start or 0 # treat None or 0 the same way
161 stop = index.stop if index.stop is not None else len(self)
162 step = index.step
163 index = np.arange(start, stop, step)
164 if index.dtype == bool:
165 index = np.flatnonzero(index)
166
167 out = map_array(
168 index,
169 self.in_values.astype(index.dtype, copy=False),
170 self.out_values,
171 )
172
173 if scalar:
174 out = out[0]
175 return out
176
177 def __setitem__(self, indices, values):
178 if self._array is None:
179 self._array = self.__array__()
180 self._array[indices] = values
181 self.in_values = np.flatnonzero(self._array)
182 self.out_values = self._array[self.in_values]
183
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/skimage/util/_map_array.py b/skimage/util/_map_array.py
--- a/skimage/util/_map_array.py
+++ b/skimage/util/_map_array.py
@@ -20,6 +20,23 @@
-------
out : array, same shape as `input_arr`
The array of mapped values.
+
+ Notes
+ -----
+ If `input_arr` contains values that aren't covered by `input_vals`, they
+ are set to 0.
+
+ Examples
+ --------
+ >>> import numpy as np
+ >>> import skimage as ski
+ >>> ski.util.map_array(
+ ... input_arr=np.array([[0, 2, 2, 0], [3, 4, 5, 0]]),
+ ... input_vals=np.array([1, 2, 3, 4, 6]),
+ ... output_vals=np.array([6, 7, 8, 9, 10]),
+ ... )
+ array([[0, 7, 7, 0],
+ [8, 9, 0, 0]])
"""
from ._remap import _map_array
|
{"golden_diff": "diff --git a/skimage/util/_map_array.py b/skimage/util/_map_array.py\n--- a/skimage/util/_map_array.py\n+++ b/skimage/util/_map_array.py\n@@ -20,6 +20,23 @@\n -------\n out : array, same shape as `input_arr`\n The array of mapped values.\n+\n+ Notes\n+ -----\n+ If `input_arr` contains values that aren't covered by `input_vals`, they\n+ are set to 0.\n+\n+ Examples\n+ --------\n+ >>> import numpy as np\n+ >>> import skimage as ski\n+ >>> ski.util.map_array(\n+ ... input_arr=np.array([[0, 2, 2, 0], [3, 4, 5, 0]]),\n+ ... input_vals=np.array([1, 2, 3, 4, 6]),\n+ ... output_vals=np.array([6, 7, 8, 9, 10]),\n+ ... )\n+ array([[0, 7, 7, 0],\n+ [8, 9, 0, 0]])\n \"\"\"\n from ._remap import _map_array\n", "issue": "Faster map_array function\n### Description:\r\n\r\nIn another project ( BiAPoL/napari-clusters-plotter#283 ) we faced the problem that the map_array function is quite slow when it comes to large arrays. We came up with a faster method and I thought the new method might be worth considering as a replacement for map_array. Here is a script comparing the old and new methods:\r\n\r\n```python\r\nimport numpy as np\r\nfrom skimage.util import map_array\r\nfrom time import perf_counter\r\n\r\nshape = (1,1024,1024)\r\ntotal = shape[0]*shape[1]*shape[2]\r\nNUM_LABELS=100000\r\ninput_data = np.random.randint(NUM_LABELS,size=total).reshape(shape).astype(\"int64\")\r\nfrom_values = np.arange(NUM_LABELS)\r\nto_values = np.copy(from_values)\r\nnp.random.shuffle(to_values)\r\n\r\ndef generate_cluster_image(label_image, label_list, predictionlist):\r\n \"\"\"\r\n Generates a clusters image from a label image and a list of cluster predictions,\r\n where each label value corresponds to the cluster identity.\r\n It is assumed that len(predictionlist) == max(label_image)\r\n\r\n Parameters\r\n ----------\r\n label_image: ndarray or dask array\r\n Label image used for cluster predictions\r\n predictionlist: Array-like\r\n An array containing cluster identities for each label\r\n\r\n Returns\r\n ----------\r\n ndarray: The clusters image as a numpy array.\r\n \"\"\"\r\n\r\n predictionlist_new = np.array(predictionlist) + 1\r\n plist = np.zeros(np.max(label_image) + 1, dtype=np.uint32)\r\n plist[label_list] = predictionlist_new\r\n\r\n predictionlist_new = plist\r\n\r\n return predictionlist_new[label_image]\r\n\r\ndef generate_cluster_image_old(label_image, label_list, predictionlist):\r\n \"\"\"\r\n Generates a clusters image from a label image and a list of cluster predictions,\r\n where each label value corresponds to the cluster identity.\r\n It is assumed that len(predictionlist) == max(label_image)\r\n\r\n Parameters\r\n ----------\r\n label_image: ndarray or dask array\r\n Label image used for cluster predictions\r\n predictionlist: Array-like\r\n An array containing cluster identities for each label\r\n\r\n Returns\r\n ----------\r\n ndarray: The clusters image as a numpy array.\r\n \"\"\"\r\n from skimage.util import map_array\r\n\r\n # reforming the prediction list, this is done to account\r\n # for cluster labels that start at 0, conveniently hdbscan\r\n # labelling starts at -1 for noise, removing these from the labels\r\n predictionlist_new = np.array(predictionlist) + 1\r\n label_list = np.array(label_list)\r\n\r\n return map_array(np.asarray(label_image), label_list, predictionlist_new).astype(\r\n \"uint32\"\r\n )\r\n\r\nt1 = perf_counter()\r\nres_new = generate_cluster_image(input_data, from_values, to_values)\r\nt_new = perf_counter() - t1\r\nprint(t_new)\r\n\r\nt1 = 
perf_counter()\r\nres_old = generate_cluster_image_old(input_data, from_values, to_values)\r\nt_old = perf_counter() - t1\r\nprint(t_old)\r\n\r\nprint(f\"Speedup {t_old/t_new}\")\r\n\r\nprint(np.array_equal(res_new,res_old))\r\n```\r\n\r\nThe new method is 15x-30x faster than the old one. \r\n\r\nIf you guys think that this is worth it, I can come up with a PR.\n", "before_files": [{"content": "import numpy as np\n\n\ndef map_array(input_arr, input_vals, output_vals, out=None):\n \"\"\"Map values from input array from input_vals to output_vals.\n\n Parameters\n ----------\n input_arr : array of int, shape (M[, ...])\n The input label image.\n input_vals : array of int, shape (K,)\n The values to map from.\n output_vals : array, shape (K,)\n The values to map to.\n out: array, same shape as `input_arr`\n The output array. Will be created if not provided. It should\n have the same dtype as `output_vals`.\n\n Returns\n -------\n out : array, same shape as `input_arr`\n The array of mapped values.\n \"\"\"\n from ._remap import _map_array\n\n if not np.issubdtype(input_arr.dtype, np.integer):\n raise TypeError('The dtype of an array to be remapped should be integer.')\n # We ravel the input array for simplicity of iteration in Cython:\n orig_shape = input_arr.shape\n # NumPy docs for `np.ravel()` says:\n # \"When a view is desired in as many cases as possible,\n # arr.reshape(-1) may be preferable.\"\n input_arr = input_arr.reshape(-1)\n if out is None:\n out = np.empty(orig_shape, dtype=output_vals.dtype)\n elif out.shape != orig_shape:\n raise ValueError(\n 'If out array is provided, it should have the same shape as '\n f'the input array. Input array has shape {orig_shape}, provided '\n f'output array has shape {out.shape}.'\n )\n try:\n out_view = out.view()\n out_view.shape = (-1,) # no-copy reshape/ravel\n except AttributeError: # if out strides are not compatible with 0-copy\n raise ValueError(\n 'If out array is provided, it should be either contiguous '\n f'or 1-dimensional. Got array with shape {out.shape} and '\n f'strides {out.strides}.'\n )\n\n # ensure all arrays have matching types before sending to Cython\n input_vals = input_vals.astype(input_arr.dtype, copy=False)\n output_vals = output_vals.astype(out.dtype, copy=False)\n _map_array(input_arr, out_view, input_vals, output_vals)\n return out\n\n\nclass ArrayMap:\n \"\"\"Class designed to mimic mapping by NumPy array indexing.\n\n This class is designed to replicate the use of NumPy arrays for mapping\n values with indexing:\n\n >>> values = np.array([0.25, 0.5, 1.0])\n >>> indices = np.array([[0, 0, 1], [2, 2, 1]])\n >>> values[indices]\n array([[0.25, 0.25, 0.5 ],\n [1. , 1. , 0.5 ]])\n\n The issue with this indexing is that you need a very large ``values``\n array if the values in the ``indices`` array are large.\n\n >>> values = np.array([0.25, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0])\n >>> indices = np.array([[0, 0, 10], [0, 10, 10]])\n >>> values[indices]\n array([[0.25, 0.25, 1. ],\n [0.25, 1. , 1. ]])\n\n Using this class, the approach is similar, but there is no need to\n create a large values array:\n\n >>> in_indices = np.array([0, 10])\n >>> out_values = np.array([0.25, 1.0])\n >>> values = ArrayMap(in_indices, out_values)\n >>> values\n ArrayMap(array([ 0, 10]), array([0.25, 1. ]))\n >>> print(values)\n ArrayMap:\n 0 \u2192 0.25\n 10 \u2192 1.0\n >>> indices = np.array([[0, 0, 10], [0, 10, 10]])\n >>> values[indices]\n array([[0.25, 0.25, 1. ],\n [0.25, 1. , 1. 
]])\n\n Parameters\n ----------\n in_values : array of int, shape (K,)\n The source values from which to map.\n out_values : array, shape (K,)\n The destination values from which to map.\n \"\"\"\n\n def __init__(self, in_values, out_values):\n self.in_values = in_values\n self.out_values = out_values\n self._max_str_lines = 4\n self._array = None\n\n def __len__(self):\n \"\"\"Return one more than the maximum label value being remapped.\"\"\"\n return np.max(self.in_values) + 1\n\n def __array__(self, dtype=None):\n \"\"\"Return an array that behaves like the arraymap when indexed.\n\n This array can be very large: it is the size of the largest value\n in the ``in_vals`` array, plus one.\n \"\"\"\n if dtype is None:\n dtype = self.out_values.dtype\n output = np.zeros(np.max(self.in_values) + 1, dtype=dtype)\n output[self.in_values] = self.out_values\n return output\n\n @property\n def dtype(self):\n return self.out_values.dtype\n\n def __repr__(self):\n return f'ArrayMap({repr(self.in_values)}, {repr(self.out_values)})'\n\n def __str__(self):\n if len(self.in_values) <= self._max_str_lines + 1:\n rows = range(len(self.in_values))\n string = '\\n'.join(\n ['ArrayMap:']\n + [f' {self.in_values[i]} \u2192 {self.out_values[i]}' for i in rows]\n )\n else:\n rows0 = list(range(0, self._max_str_lines // 2))\n rows1 = list(range(-self._max_str_lines // 2, 0))\n string = '\\n'.join(\n ['ArrayMap:']\n + [f' {self.in_values[i]} \u2192 {self.out_values[i]}' for i in rows0]\n + [' ...']\n + [f' {self.in_values[i]} \u2192 {self.out_values[i]}' for i in rows1]\n )\n return string\n\n def __call__(self, arr):\n return self.__getitem__(arr)\n\n def __getitem__(self, index):\n scalar = np.isscalar(index)\n if scalar:\n index = np.array([index])\n elif isinstance(index, slice):\n start = index.start or 0 # treat None or 0 the same way\n stop = index.stop if index.stop is not None else len(self)\n step = index.step\n index = np.arange(start, stop, step)\n if index.dtype == bool:\n index = np.flatnonzero(index)\n\n out = map_array(\n index,\n self.in_values.astype(index.dtype, copy=False),\n self.out_values,\n )\n\n if scalar:\n out = out[0]\n return out\n\n def __setitem__(self, indices, values):\n if self._array is None:\n self._array = self.__array__()\n self._array[indices] = values\n self.in_values = np.flatnonzero(self._array)\n self.out_values = self._array[self.in_values]\n", "path": "skimage/util/_map_array.py"}], "after_files": [{"content": "import numpy as np\n\n\ndef map_array(input_arr, input_vals, output_vals, out=None):\n \"\"\"Map values from input array from input_vals to output_vals.\n\n Parameters\n ----------\n input_arr : array of int, shape (M[, ...])\n The input label image.\n input_vals : array of int, shape (K,)\n The values to map from.\n output_vals : array, shape (K,)\n The values to map to.\n out: array, same shape as `input_arr`\n The output array. Will be created if not provided. It should\n have the same dtype as `output_vals`.\n\n Returns\n -------\n out : array, same shape as `input_arr`\n The array of mapped values.\n\n Notes\n -----\n If `input_arr` contains values that aren't covered by `input_vals`, they\n are set to 0.\n\n Examples\n --------\n >>> import numpy as np\n >>> import skimage as ski\n >>> ski.util.map_array(\n ... input_arr=np.array([[0, 2, 2, 0], [3, 4, 5, 0]]),\n ... input_vals=np.array([1, 2, 3, 4, 6]),\n ... output_vals=np.array([6, 7, 8, 9, 10]),\n ... 
)\n array([[0, 7, 7, 0],\n [8, 9, 0, 0]])\n \"\"\"\n from ._remap import _map_array\n\n if not np.issubdtype(input_arr.dtype, np.integer):\n raise TypeError('The dtype of an array to be remapped should be integer.')\n # We ravel the input array for simplicity of iteration in Cython:\n orig_shape = input_arr.shape\n # NumPy docs for `np.ravel()` says:\n # \"When a view is desired in as many cases as possible,\n # arr.reshape(-1) may be preferable.\"\n input_arr = input_arr.reshape(-1)\n if out is None:\n out = np.empty(orig_shape, dtype=output_vals.dtype)\n elif out.shape != orig_shape:\n raise ValueError(\n 'If out array is provided, it should have the same shape as '\n f'the input array. Input array has shape {orig_shape}, provided '\n f'output array has shape {out.shape}.'\n )\n try:\n out_view = out.view()\n out_view.shape = (-1,) # no-copy reshape/ravel\n except AttributeError: # if out strides are not compatible with 0-copy\n raise ValueError(\n 'If out array is provided, it should be either contiguous '\n f'or 1-dimensional. Got array with shape {out.shape} and '\n f'strides {out.strides}.'\n )\n\n # ensure all arrays have matching types before sending to Cython\n input_vals = input_vals.astype(input_arr.dtype, copy=False)\n output_vals = output_vals.astype(out.dtype, copy=False)\n _map_array(input_arr, out_view, input_vals, output_vals)\n return out\n\n\nclass ArrayMap:\n \"\"\"Class designed to mimic mapping by NumPy array indexing.\n\n This class is designed to replicate the use of NumPy arrays for mapping\n values with indexing:\n\n >>> values = np.array([0.25, 0.5, 1.0])\n >>> indices = np.array([[0, 0, 1], [2, 2, 1]])\n >>> values[indices]\n array([[0.25, 0.25, 0.5 ],\n [1. , 1. , 0.5 ]])\n\n The issue with this indexing is that you need a very large ``values``\n array if the values in the ``indices`` array are large.\n\n >>> values = np.array([0.25, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0])\n >>> indices = np.array([[0, 0, 10], [0, 10, 10]])\n >>> values[indices]\n array([[0.25, 0.25, 1. ],\n [0.25, 1. , 1. ]])\n\n Using this class, the approach is similar, but there is no need to\n create a large values array:\n\n >>> in_indices = np.array([0, 10])\n >>> out_values = np.array([0.25, 1.0])\n >>> values = ArrayMap(in_indices, out_values)\n >>> values\n ArrayMap(array([ 0, 10]), array([0.25, 1. ]))\n >>> print(values)\n ArrayMap:\n 0 \u2192 0.25\n 10 \u2192 1.0\n >>> indices = np.array([[0, 0, 10], [0, 10, 10]])\n >>> values[indices]\n array([[0.25, 0.25, 1. ],\n [0.25, 1. , 1. 
]])\n\n Parameters\n ----------\n in_values : array of int, shape (K,)\n The source values from which to map.\n out_values : array, shape (K,)\n The destination values from which to map.\n \"\"\"\n\n def __init__(self, in_values, out_values):\n self.in_values = in_values\n self.out_values = out_values\n self._max_str_lines = 4\n self._array = None\n\n def __len__(self):\n \"\"\"Return one more than the maximum label value being remapped.\"\"\"\n return np.max(self.in_values) + 1\n\n def __array__(self, dtype=None):\n \"\"\"Return an array that behaves like the arraymap when indexed.\n\n This array can be very large: it is the size of the largest value\n in the ``in_vals`` array, plus one.\n \"\"\"\n if dtype is None:\n dtype = self.out_values.dtype\n output = np.zeros(np.max(self.in_values) + 1, dtype=dtype)\n output[self.in_values] = self.out_values\n return output\n\n @property\n def dtype(self):\n return self.out_values.dtype\n\n def __repr__(self):\n return f'ArrayMap({repr(self.in_values)}, {repr(self.out_values)})'\n\n def __str__(self):\n if len(self.in_values) <= self._max_str_lines + 1:\n rows = range(len(self.in_values))\n string = '\\n'.join(\n ['ArrayMap:']\n + [f' {self.in_values[i]} \u2192 {self.out_values[i]}' for i in rows]\n )\n else:\n rows0 = list(range(0, self._max_str_lines // 2))\n rows1 = list(range(-self._max_str_lines // 2, 0))\n string = '\\n'.join(\n ['ArrayMap:']\n + [f' {self.in_values[i]} \u2192 {self.out_values[i]}' for i in rows0]\n + [' ...']\n + [f' {self.in_values[i]} \u2192 {self.out_values[i]}' for i in rows1]\n )\n return string\n\n def __call__(self, arr):\n return self.__getitem__(arr)\n\n def __getitem__(self, index):\n scalar = np.isscalar(index)\n if scalar:\n index = np.array([index])\n elif isinstance(index, slice):\n start = index.start or 0 # treat None or 0 the same way\n stop = index.stop if index.stop is not None else len(self)\n step = index.step\n index = np.arange(start, stop, step)\n if index.dtype == bool:\n index = np.flatnonzero(index)\n\n out = map_array(\n index,\n self.in_values.astype(index.dtype, copy=False),\n self.out_values,\n )\n\n if scalar:\n out = out[0]\n return out\n\n def __setitem__(self, indices, values):\n if self._array is None:\n self._array = self.__array__()\n self._array[indices] = values\n self.in_values = np.flatnonzero(self._array)\n self.out_values = self._array[self.in_values]\n", "path": "skimage/util/_map_array.py"}]}
| 3,105 | 271 |
gh_patches_debug_30775
|
rasdani/github-patches
|
git_diff
|
vispy__vispy-154
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Why is there no assert_in in nose.tools on py2.7?
This makes me jump through hoops when writing tests.
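For context, the hoop-jumping usually means a per-module shim along these lines (purely illustrative; the fallback body is a simplified stand-in, not vispy code):
```python
# Use nose's assert_in when it exists; otherwise fall back to a hand-rolled
# check (older nose / Python 2.6-era unittest). Simplified stand-in only.
try:
    from nose.tools import assert_in
except ImportError:
    def assert_in(member, container, msg=None):
        if member not in container:
            raise AssertionError(msg or '%r not found in %r' % (member, container))
```
The patch below takes the other route and ships small backports in `vispy.util` instead.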
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vispy/util/misc.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright (c) 2014, Vispy Development Team.
3 # Distributed under the (new) BSD License. See LICENSE.txt for more info.
4
5 """Miscellaneous functions
6 """
7
8 import numpy as np
9 import tempfile
10 import atexit
11 from shutil import rmtree
12 import sys
13 import platform
14 import getopt
15 from os import path as op
16 import traceback
17
18 from .six import string_types
19 from .event import EmitterGroup, EventEmitter, Event
20 from ._logging import logger, set_log_level, use_log_level
21
22
23 class _TempDir(str):
24
25 """Class for creating and auto-destroying temp dir
26
27 This is designed to be used with testing modules.
28
29 We cannot simply use __del__() method for cleanup here because the rmtree
30 function may be cleaned up before this object, so we use the atexit module
31 instead.
32 """
33
34 def __new__(self):
35 new = str.__new__(self, tempfile.mkdtemp())
36 return new
37
38 def __init__(self):
39 self._path = self.__str__()
40 atexit.register(self.cleanup)
41
42 def cleanup(self):
43 rmtree(self._path, ignore_errors=True)
44
45
46 def is_string(s):
47 return isinstance(s, string_types)
48
49
50 ###############################################################################
51 # These fast normal calculation routines are adapted from mne-python
52
53 def _fast_cross_3d(x, y):
54 """Compute cross product between list of 3D vectors
55
56 Much faster than np.cross() when the number of cross products
57 becomes large (>500). This is because np.cross() methods become
58 less memory efficient at this stage.
59
60 Parameters
61 ----------
62 x : array
63 Input array 1.
64 y : array
65 Input array 2.
66
67 Returns
68 -------
69 z : array
70 Cross product of x and y.
71
72 Notes
73 -----
74 x and y must both be 2D row vectors. One must have length 1, or both
75 lengths must match.
76 """
77 assert x.ndim == 2
78 assert y.ndim == 2
79 assert x.shape[1] == 3
80 assert y.shape[1] == 3
81 assert (x.shape[0] == 1 or y.shape[0] == 1) or x.shape[0] == y.shape[0]
82 if max([x.shape[0], y.shape[0]]) >= 500:
83 return np.c_[x[:, 1] * y[:, 2] - x[:, 2] * y[:, 1],
84 x[:, 2] * y[:, 0] - x[:, 0] * y[:, 2],
85 x[:, 0] * y[:, 1] - x[:, 1] * y[:, 0]]
86 else:
87 return np.cross(x, y)
88
89
90 def _calculate_normals(rr, tris):
91 """Efficiently compute vertex normals for triangulated surface"""
92 # ensure highest precision for our summation/vectorization "trick"
93 rr = rr.astype(np.float64)
94 # first, compute triangle normals
95 r1 = rr[tris[:, 0], :]
96 r2 = rr[tris[:, 1], :]
97 r3 = rr[tris[:, 2], :]
98 tri_nn = _fast_cross_3d((r2 - r1), (r3 - r1))
99
100 # Triangle normals and areas
101 size = np.sqrt(np.sum(tri_nn * tri_nn, axis=1))
102 size[size == 0] = 1.0 # prevent ugly divide-by-zero
103 tri_nn /= size[:, np.newaxis]
104
105 npts = len(rr)
106
107 # the following code replaces this, but is faster (vectorized):
108 #
109 # for p, verts in enumerate(tris):
110 # nn[verts, :] += tri_nn[p, :]
111 #
112 nn = np.zeros((npts, 3))
113 for verts in tris.T: # note this only loops 3x (number of verts per tri)
114 for idx in range(3): # x, y, z
115 nn[:, idx] += np.bincount(verts, tri_nn[:, idx], minlength=npts)
116 size = np.sqrt(np.sum(nn * nn, axis=1))
117 size[size == 0] = 1.0 # prevent ugly divide-by-zero
118 nn /= size[:, np.newaxis]
119 return nn
120
121
122 ###############################################################################
123 # CONFIG
124
125 class ConfigEvent(Event):
126
127 """ Event indicating a configuration change.
128
129 This class has a 'changes' attribute which is a dict of all name:value
130 pairs that have changed in the configuration.
131 """
132
133 def __init__(self, changes):
134 Event.__init__(self, type='config_change')
135 self.changes = changes
136
137
138 class Config(object):
139
140 """ Container for global settings used application-wide in vispy.
141
142 Events:
143 -------
144 Config.events.changed - Emits ConfigEvent whenever the configuration
145 changes.
146 """
147
148 def __init__(self):
149 self.events = EmitterGroup(source=self)
150 self.events['changed'] = EventEmitter(
151 event_class=ConfigEvent,
152 source=self)
153 self._config = {}
154
155 def __getitem__(self, item):
156 return self._config[item]
157
158 def __setitem__(self, item, val):
159 self._config[item] = val
160 # inform any listeners that a configuration option has changed
161 self.events.changed(changes={item: val})
162
163 def update(self, **kwds):
164 self._config.update(kwds)
165 self.events.changed(changes=kwds)
166
167 def __repr__(self):
168 return repr(self._config)
169
170 config = Config()
171 config.update(
172 default_backend='qt',
173 qt_lib='any', # options are 'pyqt', 'pyside', or 'any'
174 show_warnings=False,
175 gl_debug=False,
176 logging_level='info',
177 )
178
179 set_log_level(config['logging_level'])
180
181
182 def parse_command_line_arguments():
183 """ Transform vispy specific command line args to vispy config.
184 Put into a function so that any variables dont leak in the vispy namespace.
185 """
186 # Get command line args for vispy
187 argnames = ['vispy-backend', 'vispy-gl-debug']
188 try:
189 opts, args = getopt.getopt(sys.argv[1:], '', argnames)
190 except getopt.GetoptError:
191 opts = []
192 # Use them to set the config values
193 for o, a in opts:
194 if o.startswith('--vispy'):
195 if o == '--vispy-backend':
196 config['default_backend'] = a
197 logger.info('backend', a)
198 elif o == '--vispy-gl-debug':
199 config['gl_debug'] = True
200 else:
201 logger.warning("Unsupported vispy flag: %s" % o)
202
203
204 def sys_info(fname=None, overwrite=False):
205 """Get relevant system and debugging information
206
207 Parameters
208 ----------
209 fname : str | None
210 Filename to dump info to. Use None to simply print.
211 overwrite : bool
212 If True, overwrite file (if it exists).
213
214 Returns
215 -------
216 out : str
217 The system information as a string.
218 """
219 if fname is not None and op.isfile(fname) and not overwrite:
220 raise IOError('file exists, use overwrite=True to overwrite')
221
222 out = ''
223 try:
224 # Nest all imports here to avoid any circular imports
225 from ..app import Application, Canvas, backends
226 from ..gloo import gl
227 # get default app
228 this_app = Application()
229 with use_log_level('warning'):
230 this_app.use() # suppress unnecessary messages
231 out += 'Platform: %s\n' % platform.platform()
232 out += 'Python: %s\n' % str(sys.version).replace('\n', ' ')
233 out += 'Backend: %s\n' % this_app.backend_name
234 out += 'Qt: %s\n' % backends.has_qt(return_which=True)[1]
235 out += 'Pyglet: %s\n' % backends.has_pyglet(return_which=True)[1]
236 out += 'glfw: %s\n' % backends.has_glfw(return_which=True)[1]
237 out += 'glut: %s\n' % backends.has_glut(return_which=True)[1]
238 out += '\n'
239 # We need an OpenGL context to get GL info
240 if 'glut' in this_app.backend_name.lower():
241 # glut causes problems
242 out += 'OpenGL information omitted for glut backend\n'
243 else:
244 canvas = Canvas('Test', (10, 10), show=False, app=this_app)
245 canvas._backend._vispy_set_current()
246 out += 'GL version: %s\n' % gl.glGetString(gl.GL_VERSION)
247 x_ = gl.GL_MAX_TEXTURE_SIZE
248 out += 'MAX_TEXTURE_SIZE: %d\n' % gl.glGetIntegerv(x_)
249 x_ = gl.ext.GL_MAX_3D_TEXTURE_SIZE
250 out += 'MAX_3D_TEXTURE_SIZE: %d\n\n' % gl.glGetIntegerv(x_)
251 out += 'Extensions: %s\n' % gl.glGetString(gl.GL_EXTENSIONS)
252 canvas.close()
253 except Exception: # don't stop printing info
254 out += '\nInfo-gathering error:\n%s' % traceback.format_exc()
255 pass
256 if fname is not None:
257 with open(fname, 'w') as fid:
258 fid.write(out)
259 return out
260
```
Path: `vispy/util/__init__.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright (c) 2014, Vispy Development Team. All Rights Reserved.
3 # Distributed under the (new) BSD License. See LICENSE.txt for more info.
4
5 """ Utilities for Vispy. A collection of modules that are used in
6 one or more Vispy sub-packages.
7 """
8
9 from .misc import (_TempDir, is_string, parse_command_line_arguments, # noqa
10 config, sys_info) # noqa
11 from ._logging import logger, set_log_level, use_log_level # noqa
12
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/vispy/util/__init__.py b/vispy/util/__init__.py
--- a/vispy/util/__init__.py
+++ b/vispy/util/__init__.py
@@ -7,5 +7,6 @@
"""
from .misc import (_TempDir, is_string, parse_command_line_arguments, # noqa
- config, sys_info) # noqa
+ config, sys_info, assert_in, assert_not_in, # noqa
+ assert_is) # noqa
from ._logging import logger, set_log_level, use_log_level # noqa
diff --git a/vispy/util/misc.py b/vispy/util/misc.py
--- a/vispy/util/misc.py
+++ b/vispy/util/misc.py
@@ -257,3 +257,60 @@
with open(fname, 'w') as fid:
fid.write(out)
return out
+
+
+# Adapted from Python's unittest2 (which is wrapped by nose)
+# http://docs.python.org/2/license.html
+def _safe_rep(obj, short=False):
+ """Helper for assert_* ports"""
+ try:
+ result = repr(obj)
+ except Exception:
+ result = object.__repr__(obj)
+ if not short or len(result) < 80:
+ return result
+ return result[:80] + ' [truncated]...'
+
+
+def _safe_str(obj):
+ """Helper for assert_* ports"""
+ try:
+ return str(obj)
+ except Exception:
+ return object.__str__(obj)
+
+
+def _format_msg(msg, std_msg):
+ """Helper for assert_* ports"""
+ if msg is None:
+ msg = std_msg
+ else:
+ try:
+ msg = '%s : %s' % (std_msg, msg)
+ except UnicodeDecodeError:
+ msg = '%s : %s' % (_safe_str(std_msg), _safe_str(msg))
+
+
+def assert_in(member, container, msg=None):
+ """Backport for old nose.tools"""
+ if member in container:
+ return
+ std_msg = '%s not found in %s' % (_safe_rep(member), _safe_rep(container))
+ msg = _format_msg(msg, std_msg)
+ raise AssertionError(msg)
+
+
+def assert_not_in(member, container, msg=None):
+ """Backport for old nose.tools"""
+ if member not in container:
+ return
+ std_msg = '%s found in %s' % (_safe_rep(member), _safe_rep(container))
+ msg = _format_msg(msg, std_msg)
+ raise AssertionError(msg)
+
+
+def assert_is(expr1, expr2, msg=None):
+ """Backport for old nose.tools"""
+ if expr1 is not expr2:
+ std_msg = '%s is not %s' % (_safe_rep(expr1), _safe_rep(expr2))
+ raise AssertionError(_format_msg(msg, std_msg))
|
{"golden_diff": "diff --git a/vispy/util/__init__.py b/vispy/util/__init__.py\n--- a/vispy/util/__init__.py\n+++ b/vispy/util/__init__.py\n@@ -7,5 +7,6 @@\n \"\"\"\n \n from .misc import (_TempDir, is_string, parse_command_line_arguments, # noqa\n- config, sys_info) # noqa\n+ config, sys_info, assert_in, assert_not_in, # noqa\n+ assert_is) # noqa\n from ._logging import logger, set_log_level, use_log_level # noqa\ndiff --git a/vispy/util/misc.py b/vispy/util/misc.py\n--- a/vispy/util/misc.py\n+++ b/vispy/util/misc.py\n@@ -257,3 +257,60 @@\n with open(fname, 'w') as fid:\n fid.write(out)\n return out\n+\n+\n+# Adapted from Python's unittest2 (which is wrapped by nose)\n+# http://docs.python.org/2/license.html\n+def _safe_rep(obj, short=False):\n+ \"\"\"Helper for assert_* ports\"\"\"\n+ try:\n+ result = repr(obj)\n+ except Exception:\n+ result = object.__repr__(obj)\n+ if not short or len(result) < 80:\n+ return result\n+ return result[:80] + ' [truncated]...'\n+\n+\n+def _safe_str(obj):\n+ \"\"\"Helper for assert_* ports\"\"\"\n+ try:\n+ return str(obj)\n+ except Exception:\n+ return object.__str__(obj)\n+\n+\n+def _format_msg(msg, std_msg):\n+ \"\"\"Helper for assert_* ports\"\"\"\n+ if msg is None:\n+ msg = std_msg\n+ else:\n+ try:\n+ msg = '%s : %s' % (std_msg, msg)\n+ except UnicodeDecodeError:\n+ msg = '%s : %s' % (_safe_str(std_msg), _safe_str(msg))\n+\n+\n+def assert_in(member, container, msg=None):\n+ \"\"\"Backport for old nose.tools\"\"\"\n+ if member in container:\n+ return\n+ std_msg = '%s not found in %s' % (_safe_rep(member), _safe_rep(container))\n+ msg = _format_msg(msg, std_msg)\n+ raise AssertionError(msg)\n+\n+\n+def assert_not_in(member, container, msg=None):\n+ \"\"\"Backport for old nose.tools\"\"\"\n+ if member not in container:\n+ return\n+ std_msg = '%s found in %s' % (_safe_rep(member), _safe_rep(container))\n+ msg = _format_msg(msg, std_msg)\n+ raise AssertionError(msg)\n+\n+\n+def assert_is(expr1, expr2, msg=None):\n+ \"\"\"Backport for old nose.tools\"\"\"\n+ if expr1 is not expr2:\n+ std_msg = '%s is not %s' % (_safe_rep(expr1), _safe_rep(expr2))\n+ raise AssertionError(_format_msg(msg, std_msg))\n", "issue": "Why is there no asser_in in nose.tools on py2.7?\nThis makes me jump through hoops when writing tests.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2014, Vispy Development Team.\n# Distributed under the (new) BSD License. 
See LICENSE.txt for more info.\n\n\"\"\"Miscellaneous functions\n\"\"\"\n\nimport numpy as np\nimport tempfile\nimport atexit\nfrom shutil import rmtree\nimport sys\nimport platform\nimport getopt\nfrom os import path as op\nimport traceback\n\nfrom .six import string_types\nfrom .event import EmitterGroup, EventEmitter, Event\nfrom ._logging import logger, set_log_level, use_log_level\n\n\nclass _TempDir(str):\n\n \"\"\"Class for creating and auto-destroying temp dir\n\n This is designed to be used with testing modules.\n\n We cannot simply use __del__() method for cleanup here because the rmtree\n function may be cleaned up before this object, so we use the atexit module\n instead.\n \"\"\"\n\n def __new__(self):\n new = str.__new__(self, tempfile.mkdtemp())\n return new\n\n def __init__(self):\n self._path = self.__str__()\n atexit.register(self.cleanup)\n\n def cleanup(self):\n rmtree(self._path, ignore_errors=True)\n\n\ndef is_string(s):\n return isinstance(s, string_types)\n\n\n###############################################################################\n# These fast normal calculation routines are adapted from mne-python\n\ndef _fast_cross_3d(x, y):\n \"\"\"Compute cross product between list of 3D vectors\n\n Much faster than np.cross() when the number of cross products\n becomes large (>500). This is because np.cross() methods become\n less memory efficient at this stage.\n\n Parameters\n ----------\n x : array\n Input array 1.\n y : array\n Input array 2.\n\n Returns\n -------\n z : array\n Cross product of x and y.\n\n Notes\n -----\n x and y must both be 2D row vectors. One must have length 1, or both\n lengths must match.\n \"\"\"\n assert x.ndim == 2\n assert y.ndim == 2\n assert x.shape[1] == 3\n assert y.shape[1] == 3\n assert (x.shape[0] == 1 or y.shape[0] == 1) or x.shape[0] == y.shape[0]\n if max([x.shape[0], y.shape[0]]) >= 500:\n return np.c_[x[:, 1] * y[:, 2] - x[:, 2] * y[:, 1],\n x[:, 2] * y[:, 0] - x[:, 0] * y[:, 2],\n x[:, 0] * y[:, 1] - x[:, 1] * y[:, 0]]\n else:\n return np.cross(x, y)\n\n\ndef _calculate_normals(rr, tris):\n \"\"\"Efficiently compute vertex normals for triangulated surface\"\"\"\n # ensure highest precision for our summation/vectorization \"trick\"\n rr = rr.astype(np.float64)\n # first, compute triangle normals\n r1 = rr[tris[:, 0], :]\n r2 = rr[tris[:, 1], :]\n r3 = rr[tris[:, 2], :]\n tri_nn = _fast_cross_3d((r2 - r1), (r3 - r1))\n\n # Triangle normals and areas\n size = np.sqrt(np.sum(tri_nn * tri_nn, axis=1))\n size[size == 0] = 1.0 # prevent ugly divide-by-zero\n tri_nn /= size[:, np.newaxis]\n\n npts = len(rr)\n\n # the following code replaces this, but is faster (vectorized):\n #\n # for p, verts in enumerate(tris):\n # nn[verts, :] += tri_nn[p, :]\n #\n nn = np.zeros((npts, 3))\n for verts in tris.T: # note this only loops 3x (number of verts per tri)\n for idx in range(3): # x, y, z\n nn[:, idx] += np.bincount(verts, tri_nn[:, idx], minlength=npts)\n size = np.sqrt(np.sum(nn * nn, axis=1))\n size[size == 0] = 1.0 # prevent ugly divide-by-zero\n nn /= size[:, np.newaxis]\n return nn\n\n\n###############################################################################\n# CONFIG\n\nclass ConfigEvent(Event):\n\n \"\"\" Event indicating a configuration change.\n\n This class has a 'changes' attribute which is a dict of all name:value\n pairs that have changed in the configuration.\n \"\"\"\n\n def __init__(self, changes):\n Event.__init__(self, type='config_change')\n self.changes = changes\n\n\nclass Config(object):\n\n \"\"\" Container 
for global settings used application-wide in vispy.\n\n Events:\n -------\n Config.events.changed - Emits ConfigEvent whenever the configuration\n changes.\n \"\"\"\n\n def __init__(self):\n self.events = EmitterGroup(source=self)\n self.events['changed'] = EventEmitter(\n event_class=ConfigEvent,\n source=self)\n self._config = {}\n\n def __getitem__(self, item):\n return self._config[item]\n\n def __setitem__(self, item, val):\n self._config[item] = val\n # inform any listeners that a configuration option has changed\n self.events.changed(changes={item: val})\n\n def update(self, **kwds):\n self._config.update(kwds)\n self.events.changed(changes=kwds)\n\n def __repr__(self):\n return repr(self._config)\n\nconfig = Config()\nconfig.update(\n default_backend='qt',\n qt_lib='any', # options are 'pyqt', 'pyside', or 'any'\n show_warnings=False,\n gl_debug=False,\n logging_level='info',\n)\n\nset_log_level(config['logging_level'])\n\n\ndef parse_command_line_arguments():\n \"\"\" Transform vispy specific command line args to vispy config.\n Put into a function so that any variables dont leak in the vispy namespace.\n \"\"\"\n # Get command line args for vispy\n argnames = ['vispy-backend', 'vispy-gl-debug']\n try:\n opts, args = getopt.getopt(sys.argv[1:], '', argnames)\n except getopt.GetoptError:\n opts = []\n # Use them to set the config values\n for o, a in opts:\n if o.startswith('--vispy'):\n if o == '--vispy-backend':\n config['default_backend'] = a\n logger.info('backend', a)\n elif o == '--vispy-gl-debug':\n config['gl_debug'] = True\n else:\n logger.warning(\"Unsupported vispy flag: %s\" % o)\n\n\ndef sys_info(fname=None, overwrite=False):\n \"\"\"Get relevant system and debugging information\n\n Parameters\n ----------\n fname : str | None\n Filename to dump info to. 
Use None to simply print.\n overwrite : bool\n If True, overwrite file (if it exists).\n\n Returns\n -------\n out : str\n The system information as a string.\n \"\"\"\n if fname is not None and op.isfile(fname) and not overwrite:\n raise IOError('file exists, use overwrite=True to overwrite')\n\n out = ''\n try:\n # Nest all imports here to avoid any circular imports\n from ..app import Application, Canvas, backends\n from ..gloo import gl\n # get default app\n this_app = Application()\n with use_log_level('warning'):\n this_app.use() # suppress unnecessary messages\n out += 'Platform: %s\\n' % platform.platform()\n out += 'Python: %s\\n' % str(sys.version).replace('\\n', ' ')\n out += 'Backend: %s\\n' % this_app.backend_name\n out += 'Qt: %s\\n' % backends.has_qt(return_which=True)[1]\n out += 'Pyglet: %s\\n' % backends.has_pyglet(return_which=True)[1]\n out += 'glfw: %s\\n' % backends.has_glfw(return_which=True)[1]\n out += 'glut: %s\\n' % backends.has_glut(return_which=True)[1]\n out += '\\n'\n # We need an OpenGL context to get GL info\n if 'glut' in this_app.backend_name.lower():\n # glut causes problems\n out += 'OpenGL information omitted for glut backend\\n'\n else:\n canvas = Canvas('Test', (10, 10), show=False, app=this_app)\n canvas._backend._vispy_set_current()\n out += 'GL version: %s\\n' % gl.glGetString(gl.GL_VERSION)\n x_ = gl.GL_MAX_TEXTURE_SIZE\n out += 'MAX_TEXTURE_SIZE: %d\\n' % gl.glGetIntegerv(x_)\n x_ = gl.ext.GL_MAX_3D_TEXTURE_SIZE\n out += 'MAX_3D_TEXTURE_SIZE: %d\\n\\n' % gl.glGetIntegerv(x_)\n out += 'Extensions: %s\\n' % gl.glGetString(gl.GL_EXTENSIONS)\n canvas.close()\n except Exception: # don't stop printing info\n out += '\\nInfo-gathering error:\\n%s' % traceback.format_exc()\n pass\n if fname is not None:\n with open(fname, 'w') as fid:\n fid.write(out)\n return out\n", "path": "vispy/util/misc.py"}, {"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2014, Vispy Development Team. All Rights Reserved.\n# Distributed under the (new) BSD License. See LICENSE.txt for more info.\n\n\"\"\" Utilities for Vispy. A collection of modules that are used in\none or more Vispy sub-packages.\n\"\"\"\n\nfrom .misc import (_TempDir, is_string, parse_command_line_arguments, # noqa\n config, sys_info) # noqa\nfrom ._logging import logger, set_log_level, use_log_level # noqa\n", "path": "vispy/util/__init__.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2014, Vispy Development Team.\n# Distributed under the (new) BSD License. 
See LICENSE.txt for more info.\n\n\"\"\"Miscellaneous functions\n\"\"\"\n\nimport numpy as np\nimport tempfile\nimport atexit\nfrom shutil import rmtree\nimport sys\nimport platform\nimport getopt\nfrom os import path as op\nimport traceback\n\nfrom .six import string_types\nfrom .event import EmitterGroup, EventEmitter, Event\nfrom ._logging import logger, set_log_level, use_log_level\n\n\nclass _TempDir(str):\n\n \"\"\"Class for creating and auto-destroying temp dir\n\n This is designed to be used with testing modules.\n\n We cannot simply use __del__() method for cleanup here because the rmtree\n function may be cleaned up before this object, so we use the atexit module\n instead.\n \"\"\"\n\n def __new__(self):\n new = str.__new__(self, tempfile.mkdtemp())\n return new\n\n def __init__(self):\n self._path = self.__str__()\n atexit.register(self.cleanup)\n\n def cleanup(self):\n rmtree(self._path, ignore_errors=True)\n\n\ndef is_string(s):\n return isinstance(s, string_types)\n\n\n###############################################################################\n# These fast normal calculation routines are adapted from mne-python\n\ndef _fast_cross_3d(x, y):\n \"\"\"Compute cross product between list of 3D vectors\n\n Much faster than np.cross() when the number of cross products\n becomes large (>500). This is because np.cross() methods become\n less memory efficient at this stage.\n\n Parameters\n ----------\n x : array\n Input array 1.\n y : array\n Input array 2.\n\n Returns\n -------\n z : array\n Cross product of x and y.\n\n Notes\n -----\n x and y must both be 2D row vectors. One must have length 1, or both\n lengths must match.\n \"\"\"\n assert x.ndim == 2\n assert y.ndim == 2\n assert x.shape[1] == 3\n assert y.shape[1] == 3\n assert (x.shape[0] == 1 or y.shape[0] == 1) or x.shape[0] == y.shape[0]\n if max([x.shape[0], y.shape[0]]) >= 500:\n return np.c_[x[:, 1] * y[:, 2] - x[:, 2] * y[:, 1],\n x[:, 2] * y[:, 0] - x[:, 0] * y[:, 2],\n x[:, 0] * y[:, 1] - x[:, 1] * y[:, 0]]\n else:\n return np.cross(x, y)\n\n\ndef _calculate_normals(rr, tris):\n \"\"\"Efficiently compute vertex normals for triangulated surface\"\"\"\n # ensure highest precision for our summation/vectorization \"trick\"\n rr = rr.astype(np.float64)\n # first, compute triangle normals\n r1 = rr[tris[:, 0], :]\n r2 = rr[tris[:, 1], :]\n r3 = rr[tris[:, 2], :]\n tri_nn = _fast_cross_3d((r2 - r1), (r3 - r1))\n\n # Triangle normals and areas\n size = np.sqrt(np.sum(tri_nn * tri_nn, axis=1))\n size[size == 0] = 1.0 # prevent ugly divide-by-zero\n tri_nn /= size[:, np.newaxis]\n\n npts = len(rr)\n\n # the following code replaces this, but is faster (vectorized):\n #\n # for p, verts in enumerate(tris):\n # nn[verts, :] += tri_nn[p, :]\n #\n nn = np.zeros((npts, 3))\n for verts in tris.T: # note this only loops 3x (number of verts per tri)\n for idx in range(3): # x, y, z\n nn[:, idx] += np.bincount(verts, tri_nn[:, idx], minlength=npts)\n size = np.sqrt(np.sum(nn * nn, axis=1))\n size[size == 0] = 1.0 # prevent ugly divide-by-zero\n nn /= size[:, np.newaxis]\n return nn\n\n\n###############################################################################\n# CONFIG\n\nclass ConfigEvent(Event):\n\n \"\"\" Event indicating a configuration change.\n\n This class has a 'changes' attribute which is a dict of all name:value\n pairs that have changed in the configuration.\n \"\"\"\n\n def __init__(self, changes):\n Event.__init__(self, type='config_change')\n self.changes = changes\n\n\nclass Config(object):\n\n \"\"\" Container 
for global settings used application-wide in vispy.\n\n Events:\n -------\n Config.events.changed - Emits ConfigEvent whenever the configuration\n changes.\n \"\"\"\n\n def __init__(self):\n self.events = EmitterGroup(source=self)\n self.events['changed'] = EventEmitter(\n event_class=ConfigEvent,\n source=self)\n self._config = {}\n\n def __getitem__(self, item):\n return self._config[item]\n\n def __setitem__(self, item, val):\n self._config[item] = val\n # inform any listeners that a configuration option has changed\n self.events.changed(changes={item: val})\n\n def update(self, **kwds):\n self._config.update(kwds)\n self.events.changed(changes=kwds)\n\n def __repr__(self):\n return repr(self._config)\n\nconfig = Config()\nconfig.update(\n default_backend='qt',\n qt_lib='any', # options are 'pyqt', 'pyside', or 'any'\n show_warnings=False,\n gl_debug=False,\n logging_level='info',\n)\n\nset_log_level(config['logging_level'])\n\n\ndef parse_command_line_arguments():\n \"\"\" Transform vispy specific command line args to vispy config.\n Put into a function so that any variables dont leak in the vispy namespace.\n \"\"\"\n # Get command line args for vispy\n argnames = ['vispy-backend', 'vispy-gl-debug']\n try:\n opts, args = getopt.getopt(sys.argv[1:], '', argnames)\n except getopt.GetoptError:\n opts = []\n # Use them to set the config values\n for o, a in opts:\n if o.startswith('--vispy'):\n if o == '--vispy-backend':\n config['default_backend'] = a\n logger.info('backend', a)\n elif o == '--vispy-gl-debug':\n config['gl_debug'] = True\n else:\n logger.warning(\"Unsupported vispy flag: %s\" % o)\n\n\ndef sys_info(fname=None, overwrite=False):\n \"\"\"Get relevant system and debugging information\n\n Parameters\n ----------\n fname : str | None\n Filename to dump info to. 
Use None to simply print.\n overwrite : bool\n If True, overwrite file (if it exists).\n\n Returns\n -------\n out : str\n The system information as a string.\n \"\"\"\n if fname is not None and op.isfile(fname) and not overwrite:\n raise IOError('file exists, use overwrite=True to overwrite')\n\n out = ''\n try:\n # Nest all imports here to avoid any circular imports\n from ..app import Application, Canvas, backends\n from ..gloo import gl\n # get default app\n this_app = Application()\n with use_log_level('warning'):\n this_app.use() # suppress unnecessary messages\n out += 'Platform: %s\\n' % platform.platform()\n out += 'Python: %s\\n' % str(sys.version).replace('\\n', ' ')\n out += 'Backend: %s\\n' % this_app.backend_name\n out += 'Qt: %s\\n' % backends.has_qt(return_which=True)[1]\n out += 'Pyglet: %s\\n' % backends.has_pyglet(return_which=True)[1]\n out += 'glfw: %s\\n' % backends.has_glfw(return_which=True)[1]\n out += 'glut: %s\\n' % backends.has_glut(return_which=True)[1]\n out += '\\n'\n # We need an OpenGL context to get GL info\n if 'glut' in this_app.backend_name.lower():\n # glut causes problems\n out += 'OpenGL information omitted for glut backend\\n'\n else:\n canvas = Canvas('Test', (10, 10), show=False, app=this_app)\n canvas._backend._vispy_set_current()\n out += 'GL version: %s\\n' % gl.glGetString(gl.GL_VERSION)\n x_ = gl.GL_MAX_TEXTURE_SIZE\n out += 'MAX_TEXTURE_SIZE: %d\\n' % gl.glGetIntegerv(x_)\n x_ = gl.ext.GL_MAX_3D_TEXTURE_SIZE\n out += 'MAX_3D_TEXTURE_SIZE: %d\\n\\n' % gl.glGetIntegerv(x_)\n out += 'Extensions: %s\\n' % gl.glGetString(gl.GL_EXTENSIONS)\n canvas.close()\n except Exception: # don't stop printing info\n out += '\\nInfo-gathering error:\\n%s' % traceback.format_exc()\n pass\n if fname is not None:\n with open(fname, 'w') as fid:\n fid.write(out)\n return out\n\n\n# Adapted from Python's unittest2 (which is wrapped by nose)\n# http://docs.python.org/2/license.html\ndef _safe_rep(obj, short=False):\n \"\"\"Helper for assert_* ports\"\"\"\n try:\n result = repr(obj)\n except Exception:\n result = object.__repr__(obj)\n if not short or len(result) < 80:\n return result\n return result[:80] + ' [truncated]...'\n\n\ndef _safe_str(obj):\n \"\"\"Helper for assert_* ports\"\"\"\n try:\n return str(obj)\n except Exception:\n return object.__str__(obj)\n\n\ndef _format_msg(msg, std_msg):\n \"\"\"Helper for assert_* ports\"\"\"\n if msg is None:\n msg = std_msg\n else:\n try:\n msg = '%s : %s' % (std_msg, msg)\n except UnicodeDecodeError:\n msg = '%s : %s' % (_safe_str(std_msg), _safe_str(msg))\n\n\ndef assert_in(member, container, msg=None):\n \"\"\"Backport for old nose.tools\"\"\"\n if member in container:\n return\n std_msg = '%s not found in %s' % (_safe_rep(member), _safe_rep(container))\n msg = _format_msg(msg, std_msg)\n raise AssertionError(msg)\n\n\ndef assert_not_in(member, container, msg=None):\n \"\"\"Backport for old nose.tools\"\"\"\n if member not in container:\n return\n std_msg = '%s found in %s' % (_safe_rep(member), _safe_rep(container))\n msg = _format_msg(msg, std_msg)\n raise AssertionError(msg)\n\n\ndef assert_is(expr1, expr2, msg=None):\n \"\"\"Backport for old nose.tools\"\"\"\n if expr1 is not expr2:\n std_msg = '%s is not %s' % (_safe_rep(expr1), _safe_rep(expr2))\n raise AssertionError(_format_msg(msg, std_msg))\n", "path": "vispy/util/misc.py"}, {"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2014, Vispy Development Team. All Rights Reserved.\n# Distributed under the (new) BSD License. 
See LICENSE.txt for more info.\n\n\"\"\" Utilities for Vispy. A collection of modules that are used in\none or more Vispy sub-packages.\n\"\"\"\n\nfrom .misc import (_TempDir, is_string, parse_command_line_arguments, # noqa\n config, sys_info, assert_in, assert_not_in, # noqa\n assert_is) # noqa\nfrom ._logging import logger, set_log_level, use_log_level # noqa\n", "path": "vispy/util/__init__.py"}]}
| 3,188 | 665 |
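The vispy record above centers on the `_calculate_normals` helper, which builds per-triangle normals with a cross product and accumulates them onto vertices with `np.bincount`. Below is a minimal, self-contained sketch of that same idea; the standalone function name `vertex_normals` and the tetrahedron test data are illustrative additions, not vispy code.

```python
# Sketch of the vertex-normal computation shown in the vispy snippet above:
# per-triangle normals via a cross product, accumulated per vertex with
# np.bincount, then normalized.
import numpy as np

def vertex_normals(rr, tris):
    rr = rr.astype(np.float64)
    r1, r2, r3 = rr[tris[:, 0]], rr[tris[:, 1]], rr[tris[:, 2]]
    tri_nn = np.cross(r2 - r1, r3 - r1)            # one normal per triangle
    size = np.linalg.norm(tri_nn, axis=1)
    size[size == 0] = 1.0                          # avoid divide-by-zero
    tri_nn = tri_nn / size[:, np.newaxis]
    nn = np.zeros_like(rr)
    for verts in tris.T:                           # 3 columns: one vertex slot per triangle
        for idx in range(3):                       # x, y, z components
            nn[:, idx] += np.bincount(verts, tri_nn[:, idx], minlength=len(rr))
    size = np.linalg.norm(nn, axis=1)
    size[size == 0] = 1.0
    return nn / size[:, np.newaxis]

# unit tetrahedron: every resulting vertex normal should have unit length
pts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
print(np.linalg.norm(vertex_normals(pts, faces), axis=1))  # all ~1.0
```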
gh_patches_debug_5466
|
rasdani/github-patches
|
git_diff
|
docker__docker-py-820
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
POST /volumes is now POST /volumes/create
https://github.com/docker/docker/pull/17136
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docker/api/volume.py`
Content:
```
1 from .. import utils
2
3
4 class VolumeApiMixin(object):
5 @utils.minimum_version('1.21')
6 def volumes(self, filters=None):
7 params = {
8 'filter': utils.convert_filters(filters) if filters else None
9 }
10 url = self._url('/volumes')
11 return self._result(self._get(url, params=params), True)
12
13 @utils.minimum_version('1.21')
14 def create_volume(self, name, driver=None, driver_opts=None):
15 url = self._url('/volumes')
16 if driver_opts is not None and not isinstance(driver_opts, dict):
17 raise TypeError('driver_opts must be a dictionary')
18
19 data = {
20 'Name': name,
21 'Driver': driver,
22 'DriverOpts': driver_opts,
23 }
24 return self._result(self._post_json(url, data=data), True)
25
26 @utils.minimum_version('1.21')
27 def inspect_volume(self, name):
28 url = self._url('/volumes/{0}', name)
29 return self._result(self._get(url), True)
30
31 @utils.minimum_version('1.21')
32 def remove_volume(self, name):
33 url = self._url('/volumes/{0}', name)
34 resp = self._delete(url)
35 self._raise_for_status(resp)
36 return True
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docker/api/volume.py b/docker/api/volume.py
--- a/docker/api/volume.py
+++ b/docker/api/volume.py
@@ -12,7 +12,7 @@
@utils.minimum_version('1.21')
def create_volume(self, name, driver=None, driver_opts=None):
- url = self._url('/volumes')
+ url = self._url('/volumes/create')
if driver_opts is not None and not isinstance(driver_opts, dict):
raise TypeError('driver_opts must be a dictionary')
|
{"golden_diff": "diff --git a/docker/api/volume.py b/docker/api/volume.py\n--- a/docker/api/volume.py\n+++ b/docker/api/volume.py\n@@ -12,7 +12,7 @@\n \n @utils.minimum_version('1.21')\n def create_volume(self, name, driver=None, driver_opts=None):\n- url = self._url('/volumes')\n+ url = self._url('/volumes/create')\n if driver_opts is not None and not isinstance(driver_opts, dict):\n raise TypeError('driver_opts must be a dictionary')\n", "issue": "POST /volumes is now POST /volumes/create\nhttps://github.com/docker/docker/pull/17136\n\n", "before_files": [{"content": "from .. import utils\n\n\nclass VolumeApiMixin(object):\n @utils.minimum_version('1.21')\n def volumes(self, filters=None):\n params = {\n 'filter': utils.convert_filters(filters) if filters else None\n }\n url = self._url('/volumes')\n return self._result(self._get(url, params=params), True)\n\n @utils.minimum_version('1.21')\n def create_volume(self, name, driver=None, driver_opts=None):\n url = self._url('/volumes')\n if driver_opts is not None and not isinstance(driver_opts, dict):\n raise TypeError('driver_opts must be a dictionary')\n\n data = {\n 'Name': name,\n 'Driver': driver,\n 'DriverOpts': driver_opts,\n }\n return self._result(self._post_json(url, data=data), True)\n\n @utils.minimum_version('1.21')\n def inspect_volume(self, name):\n url = self._url('/volumes/{0}', name)\n return self._result(self._get(url), True)\n\n @utils.minimum_version('1.21')\n def remove_volume(self, name):\n url = self._url('/volumes/{0}', name)\n resp = self._delete(url)\n self._raise_for_status(resp)\n return True\n", "path": "docker/api/volume.py"}], "after_files": [{"content": "from .. import utils\n\n\nclass VolumeApiMixin(object):\n @utils.minimum_version('1.21')\n def volumes(self, filters=None):\n params = {\n 'filter': utils.convert_filters(filters) if filters else None\n }\n url = self._url('/volumes')\n return self._result(self._get(url, params=params), True)\n\n @utils.minimum_version('1.21')\n def create_volume(self, name, driver=None, driver_opts=None):\n url = self._url('/volumes/create')\n if driver_opts is not None and not isinstance(driver_opts, dict):\n raise TypeError('driver_opts must be a dictionary')\n\n data = {\n 'Name': name,\n 'Driver': driver,\n 'DriverOpts': driver_opts,\n }\n return self._result(self._post_json(url, data=data), True)\n\n @utils.minimum_version('1.21')\n def inspect_volume(self, name):\n url = self._url('/volumes/{0}', name)\n return self._result(self._get(url), True)\n\n @utils.minimum_version('1.21')\n def remove_volume(self, name):\n url = self._url('/volumes/{0}', name)\n resp = self._delete(url)\n self._raise_for_status(resp)\n return True\n", "path": "docker/api/volume.py"}]}
| 634 | 120 |
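The fix in the record above is a one-line endpoint change: Docker moved volume creation from `POST /volumes` to `POST /volumes/create`, so the client must build the new URL. The snippet below only illustrates that URL-building behaviour; `FakeClient` is a stand-in for docker-py's API client, not real library code.

```python
# Illustration of the endpoint change in the golden diff above. The _url
# helper mirrors docker-py's format-string style URL builder.
class FakeClient:
    base = "http+docker://localunixsocket/v1.21"

    def _url(self, fmt, *args):
        return self.base + fmt.format(*args)

    def create_volume_url(self):
        # after the patch: '/volumes/create' instead of '/volumes'
        return self._url('/volumes/create')

if __name__ == "__main__":
    assert FakeClient().create_volume_url().endswith("/volumes/create")
    print("volume creation now targets POST /volumes/create")
```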
gh_patches_debug_21856
|
rasdani/github-patches
|
git_diff
|
arviz-devs__arviz-759
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CmdStanPy tests
Add tests for CmdStanPy. See if we can read cmdstan output csv to PosteriorSample which then can be tested.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `arviz/data/io_cmdstanpy.py`
Content:
```
1 """CmdStan-specific conversion code."""
2 from collections import defaultdict
3 from copy import deepcopy
4 import logging
5 import re
6
7
8 import numpy as np
9 import xarray as xr
10
11 from .inference_data import InferenceData
12 from .base import requires, dict_to_dataset, generate_dims_coords, make_attrs
13
14
15 _log = logging.getLogger(__name__)
16
17
18 class CmdStanPyConverter:
19 """Encapsulate CmdStanPy specific logic."""
20
21 # pylint: disable=too-many-instance-attributes
22
23 def __init__(
24 self,
25 *,
26 posterior=None,
27 posterior_predictive=None,
28 prior=None,
29 prior_predictive=None,
30 observed_data=None,
31 log_likelihood=None,
32 coords=None,
33 dims=None
34 ):
35 self.posterior = posterior
36 self.posterior_predictive = posterior_predictive
37 self.prior = prior
38 self.prior_predictive = prior_predictive
39 self.observed_data = observed_data
40 self.log_likelihood = log_likelihood
41 self.coords = coords
42 self.dims = dims
43
44 import cmdstanpy # pylint: disable=import-error
45
46 self.cmdstanpy = cmdstanpy
47
48 @requires("posterior")
49 def posterior_to_xarray(self):
50 """Extract posterior samples from output csv."""
51 columns = self.posterior.column_names
52
53 # filter posterior_predictive and log_likelihood
54 posterior_predictive = self.posterior_predictive
55 if posterior_predictive is None:
56 posterior_predictive = []
57 elif isinstance(posterior_predictive, str):
58 posterior_predictive = [
59 col for col in columns if posterior_predictive == col.split(".")[0]
60 ]
61 else:
62 posterior_predictive = [
63 col
64 for col in columns
65 if any(item == col.split(".")[0] for item in posterior_predictive)
66 ]
67
68 log_likelihood = self.log_likelihood
69 if log_likelihood is None:
70 log_likelihood = []
71 else:
72 log_likelihood = [col for col in columns if log_likelihood == col.split(".")[0]]
73
74 invalid_cols = (
75 posterior_predictive + log_likelihood + [col for col in columns if col.endswith("__")]
76 )
77 valid_cols = [col for col in columns if col not in invalid_cols]
78 data = _unpack_frame(self.posterior.sample, columns, valid_cols)
79 return dict_to_dataset(data, library=self.cmdstanpy, coords=self.coords, dims=self.dims)
80
81 @requires("posterior")
82 def sample_stats_to_xarray(self):
83 """Extract sample_stats from fit."""
84 dtypes = {"divergent__": bool, "n_leapfrog__": np.int64, "treedepth__": np.int64}
85
86 columns = self.posterior.column_names
87 valid_cols = [col for col in columns if col.endswith("__")]
88 # copy dims and coords
89 dims = deepcopy(self.dims) if self.dims is not None else {}
90 coords = deepcopy(self.coords) if self.coords is not None else {}
91
92 log_likelihood = self.log_likelihood
93 if isinstance(log_likelihood, str):
94 valid_cols.append(log_likelihood)
95
96 log_likelihood_cols = [col for col in columns if log_likelihood == col.split(".")[0]]
97 # change dims and coords for log_likelihood if defined
98 if log_likelihood in dims:
99 dims["log_likelihood"] = dims.pop(log_likelihood)
100
101 log_likelihood_dims = np.array(
102 [list(map(int, col.split(".")[1:])) for col in log_likelihood_cols]
103 )
104 max_dims = log_likelihood_dims.max(0)
105 max_dims = max_dims if hasattr(max_dims, "__iter__") else (max_dims,)
106 default_dim_names, _ = generate_dims_coords(shape=max_dims, var_name=log_likelihood)
107 log_likelihood_dim_names, _ = generate_dims_coords(
108 shape=max_dims, var_name="log_likelihood"
109 )
110 for default_dim_name, log_likelihood_dim_name in zip(
111 default_dim_names, log_likelihood_dim_names
112 ):
113 if default_dim_name in coords:
114 coords[log_likelihood_dim_name] = coords.pop(default_dim_name)
115
116 data = _unpack_frame(self.posterior.sample, columns, valid_cols)
117 for s_param in list(data.keys()):
118 s_param_, *_ = s_param.split(".")
119 name = re.sub("__$", "", s_param_)
120 name = "diverging" if name == "divergent" else name
121 data[name] = data.pop(s_param).astype(dtypes.get(s_param, float))
122 return dict_to_dataset(data, library=self.cmdstanpy, coords=coords, dims=dims)
123
124 @requires("posterior")
125 @requires("posterior_predictive")
126 def posterior_predictive_to_xarray(self):
127 """Convert posterior_predictive samples to xarray."""
128 posterior_predictive = self.posterior_predictive
129 columns = self.posterior.column_names
130
131 if isinstance(posterior_predictive, str):
132 posterior_predictive = [posterior_predictive]
133 valid_cols = [col for col in columns if col.split(".")[0] in set(posterior_predictive)]
134 data = _unpack_frame(self.posterior.sample, columns, valid_cols)
135 return dict_to_dataset(data, library=self.cmdstanpy, coords=self.coords, dims=self.dims)
136
137 @requires("prior")
138 def prior_to_xarray(self):
139 """Convert prior samples to xarray."""
140 # filter prior_predictive
141 columns = self.prior.column_names
142
143 # filter posterior_predictive and log_likelihood
144 prior_predictive = self.prior_predictive
145 if prior_predictive is None:
146 prior_predictive = []
147 elif isinstance(prior_predictive, str):
148 prior_predictive = [col for col in columns if prior_predictive == col.split(".")[0]]
149 else:
150 prior_predictive = [
151 col for col in columns if col.split(".")[0] in set(prior_predictive)
152 ]
153
154 valid_cols = [col for col in columns if col not in set(prior_predictive)]
155 data = _unpack_frame(self.posterior.sample, columns, valid_cols)
156 return dict_to_dataset(data, library=self.cmdstanpy, coords=self.coords, dims=self.dims)
157
158 @requires("prior")
159 def sample_stats_prior_to_xarray(self):
160 """Extract sample_stats from fit."""
161 dtypes = {"divergent__": bool, "n_leapfrog__": np.int64, "treedepth__": np.int64}
162
163 columns = self.prior.column_names
164 valid_cols = [col for col in columns if col.endswith("__")]
165 # copy dims and coords
166 dims = deepcopy(self.dims) if self.dims is not None else {}
167 coords = deepcopy(self.coords) if self.coords is not None else {}
168
169 data = _unpack_frame(self.prior.sample, columns, valid_cols)
170 for s_param in list(data.keys()):
171 s_param_, *_ = s_param.split(".")
172 name = re.sub("__$", "", s_param_)
173 name = "diverging" if name == "divergent" else name
174 data[name] = data.pop(s_param).astype(dtypes.get(s_param, float))
175 return dict_to_dataset(data, library=self.cmdstanpy, coords=coords, dims=dims)
176
177 @requires("prior")
178 @requires("prior_predictive")
179 def prior_predictive_to_xarray(self):
180 """Convert prior_predictive samples to xarray."""
181 prior_predictive = self.prior_predictive
182 columns = self.prior.column_names
183
184 if isinstance(prior_predictive, str):
185 prior_predictive = [prior_predictive]
186 valid_cols = [col for col in columns if col.split(".")[0] in set(prior_predictive)]
187 data = _unpack_frame(self.prior.sample, columns, valid_cols)
188 return dict_to_dataset(data, library=self.cmdstanpy, coords=self.coords, dims=self.dims)
189
190 @requires("observed_data")
191 def observed_data_to_xarray(self):
192 """Convert observed data to xarray."""
193 observed_data = {}
194 for key, vals in self.observed_data.items():
195 vals = np.atleast_1d(vals)
196 val_dims = self.dims.get(key)
197 val_dims, coords = generate_dims_coords(
198 vals.shape, key, dims=val_dims, coords=self.coords
199 )
200 observed_data[key] = xr.DataArray(vals, dims=val_dims, coords=coords)
201 return xr.Dataset(data_vars=observed_data, attrs=make_attrs(library=self.cmdstanpy))
202
203 def to_inference_data(self):
204 """Convert all available data to an InferenceData object.
205
206 Note that if groups can not be created (i.e., there is no `output`, so
207 the `posterior` and `sample_stats` can not be extracted), then the InferenceData
208 will not have those groups.
209 """
210 return InferenceData(
211 **{
212 "posterior": self.posterior_to_xarray(),
213 "sample_stats": self.sample_stats_to_xarray(),
214 "posterior_predictive": self.posterior_predictive_to_xarray(),
215 "prior": self.prior_to_xarray(),
216 "sample_stats_prior": self.sample_stats_prior_to_xarray(),
217 "prior_predictive": self.prior_predictive_to_xarray(),
218 "observed_data": self.observed_data_to_xarray(),
219 }
220 )
221
222
223 def _unpack_frame(data, columns, valid_cols):
224 """Transform a list of pandas.DataFrames to dictionary containing ndarrays.
225
226 Parameters
227 ----------
228 data : np.ndarray
229 columns : list
230 valid_cols : list
231
232 Returns
233 -------
234 dict
235 key, values pairs. Values are formatted to shape = (chains, draws, *shape)
236 """
237 draws, chains, *_ = data.shape
238
239 column_groups = defaultdict(list)
240 column_locs = defaultdict(list)
241 # iterate flat column names
242 for i, col in enumerate(columns):
243 # parse parameter names e.g. X.1.2 --> X, (1,2)
244 col_base, *col_tail = col.split(".")
245 if len(col_tail):
246 # gather nD array locations
247 column_groups[col_base].append(tuple(map(int, col_tail)))
248 # gather raw data locations for each parameter
249 column_locs[col_base].append(i)
250 dims = {}
251 for colname, col_dims in column_groups.items():
252 # gather parameter dimensions (assumes dense arrays)
253 dims[colname] = tuple(np.array(col_dims).max(0))
254 sample = {}
255 valid_base_cols = []
256 # get list of parameters for extraction (basename) X.1.2 --> X
257 for col in valid_cols:
258 base_col, *_ = col.split(".")
259 if base_col not in valid_base_cols:
260 valid_base_cols.append(base_col)
261
262 # extract each wanted parameter to ndarray with correct shape
263 for key in valid_base_cols:
264 ndim = dims.get(key, None)
265 shape_location = column_groups.get(key, None)
266 if ndim is not None:
267 sample[key] = np.full((chains, draws, *ndim), np.nan)
268 if shape_location is None:
269 # reorder draw, chain -> chain, draw
270 i, = column_locs[key]
271 sample[key] = np.swapaxes(data[..., i], 0, 1)
272 else:
273 for i, shape_loc in zip(column_locs[key], shape_location):
274 # location to insert extracted array
275 shape_loc = tuple([Ellipsis] + [j - 1 for j in shape_loc])
276 # reorder draw, chain -> chain, draw and insert to ndarray
277 sample[key][shape_loc] = np.swapaxes(data[..., i], 0, 1)
278 return sample
279
280
281 def from_cmdstanpy(
282 posterior=None,
283 *,
284 posterior_predictive=None,
285 prior=None,
286 prior_predictive=None,
287 observed_data=None,
288 log_likelihood=None,
289 coords=None,
290 dims=None
291 ):
292 """Convert CmdStan data into an InferenceData object.
293
294 Parameters
295 ----------
296 posterior : PosteriorSample object
297 PyCmdStan PosteriorSample
298 posterior_predictive : str, list of str
299 Posterior predictive samples for the fit.
300 prior : PosteriorSample
301 PyCmdStan PosteriorSample
302 prior_predictive : str, list of str
303 Prior predictive samples for the fit.
304 observed_data : dict
305 Observed data used in the sampling.
306 log_likelihood : str
307 Pointwise log_likelihood for the data.
308 coords : dict of str or dict of iterable
309 A dictionary containing the values that are used as index. The key
310 is the name of the dimension, the values are the index values.
311 dims : dict of str or list of str
312 A mapping from variables to a list of coordinate names for the variable.
313
314 Returns
315 -------
316 InferenceData object
317 """
318 return CmdStanPyConverter(
319 posterior=posterior,
320 posterior_predictive=posterior_predictive,
321 prior=prior,
322 prior_predictive=prior_predictive,
323 observed_data=observed_data,
324 log_likelihood=log_likelihood,
325 coords=coords,
326 dims=dims,
327 ).to_inference_data()
328
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/arviz/data/io_cmdstanpy.py b/arviz/data/io_cmdstanpy.py
--- a/arviz/data/io_cmdstanpy.py
+++ b/arviz/data/io_cmdstanpy.py
@@ -114,6 +114,8 @@
coords[log_likelihood_dim_name] = coords.pop(default_dim_name)
data = _unpack_frame(self.posterior.sample, columns, valid_cols)
+ if log_likelihood in data:
+ data["log_likelihood"] = data.pop(log_likelihood)
for s_param in list(data.keys()):
s_param_, *_ = s_param.split(".")
name = re.sub("__$", "", s_param_)
@@ -193,7 +195,7 @@
observed_data = {}
for key, vals in self.observed_data.items():
vals = np.atleast_1d(vals)
- val_dims = self.dims.get(key)
+ val_dims = self.dims.get(key) if self.dims is not None else None
val_dims, coords = generate_dims_coords(
vals.shape, key, dims=val_dims, coords=self.coords
)
|
{"golden_diff": "diff --git a/arviz/data/io_cmdstanpy.py b/arviz/data/io_cmdstanpy.py\n--- a/arviz/data/io_cmdstanpy.py\n+++ b/arviz/data/io_cmdstanpy.py\n@@ -114,6 +114,8 @@\n coords[log_likelihood_dim_name] = coords.pop(default_dim_name)\n \n data = _unpack_frame(self.posterior.sample, columns, valid_cols)\n+ if log_likelihood in data:\n+ data[\"log_likelihood\"] = data.pop(log_likelihood)\n for s_param in list(data.keys()):\n s_param_, *_ = s_param.split(\".\")\n name = re.sub(\"__$\", \"\", s_param_)\n@@ -193,7 +195,7 @@\n observed_data = {}\n for key, vals in self.observed_data.items():\n vals = np.atleast_1d(vals)\n- val_dims = self.dims.get(key)\n+ val_dims = self.dims.get(key) if self.dims is not None else None\n val_dims, coords = generate_dims_coords(\n vals.shape, key, dims=val_dims, coords=self.coords\n )\n", "issue": "CmdStanPy tests\nAdd tests for CmdStanPy. See if we can read cmdstan output csv to PosterierSample which then can be tested.\n", "before_files": [{"content": "\"\"\"CmdStan-specific conversion code.\"\"\"\nfrom collections import defaultdict\nfrom copy import deepcopy\nimport logging\nimport re\n\n\nimport numpy as np\nimport xarray as xr\n\nfrom .inference_data import InferenceData\nfrom .base import requires, dict_to_dataset, generate_dims_coords, make_attrs\n\n\n_log = logging.getLogger(__name__)\n\n\nclass CmdStanPyConverter:\n \"\"\"Encapsulate CmdStanPy specific logic.\"\"\"\n\n # pylint: disable=too-many-instance-attributes\n\n def __init__(\n self,\n *,\n posterior=None,\n posterior_predictive=None,\n prior=None,\n prior_predictive=None,\n observed_data=None,\n log_likelihood=None,\n coords=None,\n dims=None\n ):\n self.posterior = posterior\n self.posterior_predictive = posterior_predictive\n self.prior = prior\n self.prior_predictive = prior_predictive\n self.observed_data = observed_data\n self.log_likelihood = log_likelihood\n self.coords = coords\n self.dims = dims\n\n import cmdstanpy # pylint: disable=import-error\n\n self.cmdstanpy = cmdstanpy\n\n @requires(\"posterior\")\n def posterior_to_xarray(self):\n \"\"\"Extract posterior samples from output csv.\"\"\"\n columns = self.posterior.column_names\n\n # filter posterior_predictive and log_likelihood\n posterior_predictive = self.posterior_predictive\n if posterior_predictive is None:\n posterior_predictive = []\n elif isinstance(posterior_predictive, str):\n posterior_predictive = [\n col for col in columns if posterior_predictive == col.split(\".\")[0]\n ]\n else:\n posterior_predictive = [\n col\n for col in columns\n if any(item == col.split(\".\")[0] for item in posterior_predictive)\n ]\n\n log_likelihood = self.log_likelihood\n if log_likelihood is None:\n log_likelihood = []\n else:\n log_likelihood = [col for col in columns if log_likelihood == col.split(\".\")[0]]\n\n invalid_cols = (\n posterior_predictive + log_likelihood + [col for col in columns if col.endswith(\"__\")]\n )\n valid_cols = [col for col in columns if col not in invalid_cols]\n data = _unpack_frame(self.posterior.sample, columns, valid_cols)\n return dict_to_dataset(data, library=self.cmdstanpy, coords=self.coords, dims=self.dims)\n\n @requires(\"posterior\")\n def sample_stats_to_xarray(self):\n \"\"\"Extract sample_stats from fit.\"\"\"\n dtypes = {\"divergent__\": bool, \"n_leapfrog__\": np.int64, \"treedepth__\": np.int64}\n\n columns = self.posterior.column_names\n valid_cols = [col for col in columns if col.endswith(\"__\")]\n # copy dims and coords\n dims = deepcopy(self.dims) if self.dims is not None else {}\n 
coords = deepcopy(self.coords) if self.coords is not None else {}\n\n log_likelihood = self.log_likelihood\n if isinstance(log_likelihood, str):\n valid_cols.append(log_likelihood)\n\n log_likelihood_cols = [col for col in columns if log_likelihood == col.split(\".\")[0]]\n # change dims and coords for log_likelihood if defined\n if log_likelihood in dims:\n dims[\"log_likelihood\"] = dims.pop(log_likelihood)\n\n log_likelihood_dims = np.array(\n [list(map(int, col.split(\".\")[1:])) for col in log_likelihood_cols]\n )\n max_dims = log_likelihood_dims.max(0)\n max_dims = max_dims if hasattr(max_dims, \"__iter__\") else (max_dims,)\n default_dim_names, _ = generate_dims_coords(shape=max_dims, var_name=log_likelihood)\n log_likelihood_dim_names, _ = generate_dims_coords(\n shape=max_dims, var_name=\"log_likelihood\"\n )\n for default_dim_name, log_likelihood_dim_name in zip(\n default_dim_names, log_likelihood_dim_names\n ):\n if default_dim_name in coords:\n coords[log_likelihood_dim_name] = coords.pop(default_dim_name)\n\n data = _unpack_frame(self.posterior.sample, columns, valid_cols)\n for s_param in list(data.keys()):\n s_param_, *_ = s_param.split(\".\")\n name = re.sub(\"__$\", \"\", s_param_)\n name = \"diverging\" if name == \"divergent\" else name\n data[name] = data.pop(s_param).astype(dtypes.get(s_param, float))\n return dict_to_dataset(data, library=self.cmdstanpy, coords=coords, dims=dims)\n\n @requires(\"posterior\")\n @requires(\"posterior_predictive\")\n def posterior_predictive_to_xarray(self):\n \"\"\"Convert posterior_predictive samples to xarray.\"\"\"\n posterior_predictive = self.posterior_predictive\n columns = self.posterior.column_names\n\n if isinstance(posterior_predictive, str):\n posterior_predictive = [posterior_predictive]\n valid_cols = [col for col in columns if col.split(\".\")[0] in set(posterior_predictive)]\n data = _unpack_frame(self.posterior.sample, columns, valid_cols)\n return dict_to_dataset(data, library=self.cmdstanpy, coords=self.coords, dims=self.dims)\n\n @requires(\"prior\")\n def prior_to_xarray(self):\n \"\"\"Convert prior samples to xarray.\"\"\"\n # filter prior_predictive\n columns = self.prior.column_names\n\n # filter posterior_predictive and log_likelihood\n prior_predictive = self.prior_predictive\n if prior_predictive is None:\n prior_predictive = []\n elif isinstance(prior_predictive, str):\n prior_predictive = [col for col in columns if prior_predictive == col.split(\".\")[0]]\n else:\n prior_predictive = [\n col for col in columns if col.split(\".\")[0] in set(prior_predictive)\n ]\n\n valid_cols = [col for col in columns if col not in set(prior_predictive)]\n data = _unpack_frame(self.posterior.sample, columns, valid_cols)\n return dict_to_dataset(data, library=self.cmdstanpy, coords=self.coords, dims=self.dims)\n\n @requires(\"prior\")\n def sample_stats_prior_to_xarray(self):\n \"\"\"Extract sample_stats from fit.\"\"\"\n dtypes = {\"divergent__\": bool, \"n_leapfrog__\": np.int64, \"treedepth__\": np.int64}\n\n columns = self.prior.column_names\n valid_cols = [col for col in columns if col.endswith(\"__\")]\n # copy dims and coords\n dims = deepcopy(self.dims) if self.dims is not None else {}\n coords = deepcopy(self.coords) if self.coords is not None else {}\n\n data = _unpack_frame(self.prior.sample, columns, valid_cols)\n for s_param in list(data.keys()):\n s_param_, *_ = s_param.split(\".\")\n name = re.sub(\"__$\", \"\", s_param_)\n name = \"diverging\" if name == \"divergent\" else name\n data[name] = 
data.pop(s_param).astype(dtypes.get(s_param, float))\n return dict_to_dataset(data, library=self.cmdstanpy, coords=coords, dims=dims)\n\n @requires(\"prior\")\n @requires(\"prior_predictive\")\n def prior_predictive_to_xarray(self):\n \"\"\"Convert prior_predictive samples to xarray.\"\"\"\n prior_predictive = self.prior_predictive\n columns = self.prior.column_names\n\n if isinstance(prior_predictive, str):\n prior_predictive = [prior_predictive]\n valid_cols = [col for col in columns if col.split(\".\")[0] in set(prior_predictive)]\n data = _unpack_frame(self.prior.sample, columns, valid_cols)\n return dict_to_dataset(data, library=self.cmdstanpy, coords=self.coords, dims=self.dims)\n\n @requires(\"observed_data\")\n def observed_data_to_xarray(self):\n \"\"\"Convert observed data to xarray.\"\"\"\n observed_data = {}\n for key, vals in self.observed_data.items():\n vals = np.atleast_1d(vals)\n val_dims = self.dims.get(key)\n val_dims, coords = generate_dims_coords(\n vals.shape, key, dims=val_dims, coords=self.coords\n )\n observed_data[key] = xr.DataArray(vals, dims=val_dims, coords=coords)\n return xr.Dataset(data_vars=observed_data, attrs=make_attrs(library=self.cmdstanpy))\n\n def to_inference_data(self):\n \"\"\"Convert all available data to an InferenceData object.\n\n Note that if groups can not be created (i.e., there is no `output`, so\n the `posterior` and `sample_stats` can not be extracted), then the InferenceData\n will not have those groups.\n \"\"\"\n return InferenceData(\n **{\n \"posterior\": self.posterior_to_xarray(),\n \"sample_stats\": self.sample_stats_to_xarray(),\n \"posterior_predictive\": self.posterior_predictive_to_xarray(),\n \"prior\": self.prior_to_xarray(),\n \"sample_stats_prior\": self.sample_stats_prior_to_xarray(),\n \"prior_predictive\": self.prior_predictive_to_xarray(),\n \"observed_data\": self.observed_data_to_xarray(),\n }\n )\n\n\ndef _unpack_frame(data, columns, valid_cols):\n \"\"\"Transform a list of pandas.DataFrames to dictionary containing ndarrays.\n\n Parameters\n ----------\n data : np.ndarray\n columns : list\n valid_cols : list\n\n Returns\n -------\n dict\n key, values pairs. Values are formatted to shape = (chains, draws, *shape)\n \"\"\"\n draws, chains, *_ = data.shape\n\n column_groups = defaultdict(list)\n column_locs = defaultdict(list)\n # iterate flat column names\n for i, col in enumerate(columns):\n # parse parameter names e.g. 
X.1.2 --> X, (1,2)\n col_base, *col_tail = col.split(\".\")\n if len(col_tail):\n # gather nD array locations\n column_groups[col_base].append(tuple(map(int, col_tail)))\n # gather raw data locations for each parameter\n column_locs[col_base].append(i)\n dims = {}\n for colname, col_dims in column_groups.items():\n # gather parameter dimensions (assumes dense arrays)\n dims[colname] = tuple(np.array(col_dims).max(0))\n sample = {}\n valid_base_cols = []\n # get list of parameters for extraction (basename) X.1.2 --> X\n for col in valid_cols:\n base_col, *_ = col.split(\".\")\n if base_col not in valid_base_cols:\n valid_base_cols.append(base_col)\n\n # extract each wanted parameter to ndarray with correct shape\n for key in valid_base_cols:\n ndim = dims.get(key, None)\n shape_location = column_groups.get(key, None)\n if ndim is not None:\n sample[key] = np.full((chains, draws, *ndim), np.nan)\n if shape_location is None:\n # reorder draw, chain -> chain, draw\n i, = column_locs[key]\n sample[key] = np.swapaxes(data[..., i], 0, 1)\n else:\n for i, shape_loc in zip(column_locs[key], shape_location):\n # location to insert extracted array\n shape_loc = tuple([Ellipsis] + [j - 1 for j in shape_loc])\n # reorder draw, chain -> chain, draw and insert to ndarray\n sample[key][shape_loc] = np.swapaxes(data[..., i], 0, 1)\n return sample\n\n\ndef from_cmdstanpy(\n posterior=None,\n *,\n posterior_predictive=None,\n prior=None,\n prior_predictive=None,\n observed_data=None,\n log_likelihood=None,\n coords=None,\n dims=None\n):\n \"\"\"Convert CmdStan data into an InferenceData object.\n\n Parameters\n ----------\n posterior : PosteriorSample object\n PyCmdStan PosteriorSample\n posterior_predictive : str, list of str\n Posterior predictive samples for the fit.\n prior : PosteriorSample\n PyCmdStan PosteriorSample\n prior_predictive : str, list of str\n Prior predictive samples for the fit.\n observed_data : dict\n Observed data used in the sampling.\n log_likelihood : str\n Pointwise log_likelihood for the data.\n coords : dict of str or dict of iterable\n A dictionary containing the values that are used as index. 
The key\n is the name of the dimension, the values are the index values.\n dims : dict of str or list of str\n A mapping from variables to a list of coordinate names for the variable.\n\n Returns\n -------\n InferenceData object\n \"\"\"\n return CmdStanPyConverter(\n posterior=posterior,\n posterior_predictive=posterior_predictive,\n prior=prior,\n prior_predictive=prior_predictive,\n observed_data=observed_data,\n log_likelihood=log_likelihood,\n coords=coords,\n dims=dims,\n ).to_inference_data()\n", "path": "arviz/data/io_cmdstanpy.py"}], "after_files": [{"content": "\"\"\"CmdStan-specific conversion code.\"\"\"\nfrom collections import defaultdict\nfrom copy import deepcopy\nimport logging\nimport re\n\n\nimport numpy as np\nimport xarray as xr\n\nfrom .inference_data import InferenceData\nfrom .base import requires, dict_to_dataset, generate_dims_coords, make_attrs\n\n\n_log = logging.getLogger(__name__)\n\n\nclass CmdStanPyConverter:\n \"\"\"Encapsulate CmdStanPy specific logic.\"\"\"\n\n # pylint: disable=too-many-instance-attributes\n\n def __init__(\n self,\n *,\n posterior=None,\n posterior_predictive=None,\n prior=None,\n prior_predictive=None,\n observed_data=None,\n log_likelihood=None,\n coords=None,\n dims=None\n ):\n self.posterior = posterior\n self.posterior_predictive = posterior_predictive\n self.prior = prior\n self.prior_predictive = prior_predictive\n self.observed_data = observed_data\n self.log_likelihood = log_likelihood\n self.coords = coords\n self.dims = dims\n\n import cmdstanpy # pylint: disable=import-error\n\n self.cmdstanpy = cmdstanpy\n\n @requires(\"posterior\")\n def posterior_to_xarray(self):\n \"\"\"Extract posterior samples from output csv.\"\"\"\n columns = self.posterior.column_names\n\n # filter posterior_predictive and log_likelihood\n posterior_predictive = self.posterior_predictive\n if posterior_predictive is None:\n posterior_predictive = []\n elif isinstance(posterior_predictive, str):\n posterior_predictive = [\n col for col in columns if posterior_predictive == col.split(\".\")[0]\n ]\n else:\n posterior_predictive = [\n col\n for col in columns\n if any(item == col.split(\".\")[0] for item in posterior_predictive)\n ]\n\n log_likelihood = self.log_likelihood\n if log_likelihood is None:\n log_likelihood = []\n else:\n log_likelihood = [col for col in columns if log_likelihood == col.split(\".\")[0]]\n\n invalid_cols = (\n posterior_predictive + log_likelihood + [col for col in columns if col.endswith(\"__\")]\n )\n valid_cols = [col for col in columns if col not in invalid_cols]\n data = _unpack_frame(self.posterior.sample, columns, valid_cols)\n return dict_to_dataset(data, library=self.cmdstanpy, coords=self.coords, dims=self.dims)\n\n @requires(\"posterior\")\n def sample_stats_to_xarray(self):\n \"\"\"Extract sample_stats from fit.\"\"\"\n dtypes = {\"divergent__\": bool, \"n_leapfrog__\": np.int64, \"treedepth__\": np.int64}\n\n columns = self.posterior.column_names\n valid_cols = [col for col in columns if col.endswith(\"__\")]\n # copy dims and coords\n dims = deepcopy(self.dims) if self.dims is not None else {}\n coords = deepcopy(self.coords) if self.coords is not None else {}\n\n log_likelihood = self.log_likelihood\n if isinstance(log_likelihood, str):\n valid_cols.append(log_likelihood)\n\n log_likelihood_cols = [col for col in columns if log_likelihood == col.split(\".\")[0]]\n # change dims and coords for log_likelihood if defined\n if log_likelihood in dims:\n dims[\"log_likelihood\"] = dims.pop(log_likelihood)\n\n 
log_likelihood_dims = np.array(\n [list(map(int, col.split(\".\")[1:])) for col in log_likelihood_cols]\n )\n max_dims = log_likelihood_dims.max(0)\n max_dims = max_dims if hasattr(max_dims, \"__iter__\") else (max_dims,)\n default_dim_names, _ = generate_dims_coords(shape=max_dims, var_name=log_likelihood)\n log_likelihood_dim_names, _ = generate_dims_coords(\n shape=max_dims, var_name=\"log_likelihood\"\n )\n for default_dim_name, log_likelihood_dim_name in zip(\n default_dim_names, log_likelihood_dim_names\n ):\n if default_dim_name in coords:\n coords[log_likelihood_dim_name] = coords.pop(default_dim_name)\n\n data = _unpack_frame(self.posterior.sample, columns, valid_cols)\n if log_likelihood in data:\n data[\"log_likelihood\"] = data.pop(log_likelihood)\n for s_param in list(data.keys()):\n s_param_, *_ = s_param.split(\".\")\n name = re.sub(\"__$\", \"\", s_param_)\n name = \"diverging\" if name == \"divergent\" else name\n data[name] = data.pop(s_param).astype(dtypes.get(s_param, float))\n return dict_to_dataset(data, library=self.cmdstanpy, coords=coords, dims=dims)\n\n @requires(\"posterior\")\n @requires(\"posterior_predictive\")\n def posterior_predictive_to_xarray(self):\n \"\"\"Convert posterior_predictive samples to xarray.\"\"\"\n posterior_predictive = self.posterior_predictive\n columns = self.posterior.column_names\n\n if isinstance(posterior_predictive, str):\n posterior_predictive = [posterior_predictive]\n valid_cols = [col for col in columns if col.split(\".\")[0] in set(posterior_predictive)]\n data = _unpack_frame(self.posterior.sample, columns, valid_cols)\n return dict_to_dataset(data, library=self.cmdstanpy, coords=self.coords, dims=self.dims)\n\n @requires(\"prior\")\n def prior_to_xarray(self):\n \"\"\"Convert prior samples to xarray.\"\"\"\n # filter prior_predictive\n columns = self.prior.column_names\n\n # filter posterior_predictive and log_likelihood\n prior_predictive = self.prior_predictive\n if prior_predictive is None:\n prior_predictive = []\n elif isinstance(prior_predictive, str):\n prior_predictive = [col for col in columns if prior_predictive == col.split(\".\")[0]]\n else:\n prior_predictive = [\n col for col in columns if col.split(\".\")[0] in set(prior_predictive)\n ]\n\n valid_cols = [col for col in columns if col not in set(prior_predictive)]\n data = _unpack_frame(self.posterior.sample, columns, valid_cols)\n return dict_to_dataset(data, library=self.cmdstanpy, coords=self.coords, dims=self.dims)\n\n @requires(\"prior\")\n def sample_stats_prior_to_xarray(self):\n \"\"\"Extract sample_stats from fit.\"\"\"\n dtypes = {\"divergent__\": bool, \"n_leapfrog__\": np.int64, \"treedepth__\": np.int64}\n\n columns = self.prior.column_names\n valid_cols = [col for col in columns if col.endswith(\"__\")]\n # copy dims and coords\n dims = deepcopy(self.dims) if self.dims is not None else {}\n coords = deepcopy(self.coords) if self.coords is not None else {}\n\n data = _unpack_frame(self.prior.sample, columns, valid_cols)\n for s_param in list(data.keys()):\n s_param_, *_ = s_param.split(\".\")\n name = re.sub(\"__$\", \"\", s_param_)\n name = \"diverging\" if name == \"divergent\" else name\n data[name] = data.pop(s_param).astype(dtypes.get(s_param, float))\n return dict_to_dataset(data, library=self.cmdstanpy, coords=coords, dims=dims)\n\n @requires(\"prior\")\n @requires(\"prior_predictive\")\n def prior_predictive_to_xarray(self):\n \"\"\"Convert prior_predictive samples to xarray.\"\"\"\n prior_predictive = self.prior_predictive\n columns = 
self.prior.column_names\n\n if isinstance(prior_predictive, str):\n prior_predictive = [prior_predictive]\n valid_cols = [col for col in columns if col.split(\".\")[0] in set(prior_predictive)]\n data = _unpack_frame(self.prior.sample, columns, valid_cols)\n return dict_to_dataset(data, library=self.cmdstanpy, coords=self.coords, dims=self.dims)\n\n @requires(\"observed_data\")\n def observed_data_to_xarray(self):\n \"\"\"Convert observed data to xarray.\"\"\"\n observed_data = {}\n for key, vals in self.observed_data.items():\n vals = np.atleast_1d(vals)\n val_dims = self.dims.get(key) if self.dims is not None else None\n val_dims, coords = generate_dims_coords(\n vals.shape, key, dims=val_dims, coords=self.coords\n )\n observed_data[key] = xr.DataArray(vals, dims=val_dims, coords=coords)\n return xr.Dataset(data_vars=observed_data, attrs=make_attrs(library=self.cmdstanpy))\n\n def to_inference_data(self):\n \"\"\"Convert all available data to an InferenceData object.\n\n Note that if groups can not be created (i.e., there is no `output`, so\n the `posterior` and `sample_stats` can not be extracted), then the InferenceData\n will not have those groups.\n \"\"\"\n return InferenceData(\n **{\n \"posterior\": self.posterior_to_xarray(),\n \"sample_stats\": self.sample_stats_to_xarray(),\n \"posterior_predictive\": self.posterior_predictive_to_xarray(),\n \"prior\": self.prior_to_xarray(),\n \"sample_stats_prior\": self.sample_stats_prior_to_xarray(),\n \"prior_predictive\": self.prior_predictive_to_xarray(),\n \"observed_data\": self.observed_data_to_xarray(),\n }\n )\n\n\ndef _unpack_frame(data, columns, valid_cols):\n \"\"\"Transform a list of pandas.DataFrames to dictionary containing ndarrays.\n\n Parameters\n ----------\n data : np.ndarray\n columns : list\n valid_cols : list\n\n Returns\n -------\n dict\n key, values pairs. Values are formatted to shape = (chains, draws, *shape)\n \"\"\"\n draws, chains, *_ = data.shape\n\n column_groups = defaultdict(list)\n column_locs = defaultdict(list)\n # iterate flat column names\n for i, col in enumerate(columns):\n # parse parameter names e.g. 
X.1.2 --> X, (1,2)\n col_base, *col_tail = col.split(\".\")\n if len(col_tail):\n # gather nD array locations\n column_groups[col_base].append(tuple(map(int, col_tail)))\n # gather raw data locations for each parameter\n column_locs[col_base].append(i)\n dims = {}\n for colname, col_dims in column_groups.items():\n # gather parameter dimensions (assumes dense arrays)\n dims[colname] = tuple(np.array(col_dims).max(0))\n sample = {}\n valid_base_cols = []\n # get list of parameters for extraction (basename) X.1.2 --> X\n for col in valid_cols:\n base_col, *_ = col.split(\".\")\n if base_col not in valid_base_cols:\n valid_base_cols.append(base_col)\n\n # extract each wanted parameter to ndarray with correct shape\n for key in valid_base_cols:\n ndim = dims.get(key, None)\n shape_location = column_groups.get(key, None)\n if ndim is not None:\n sample[key] = np.full((chains, draws, *ndim), np.nan)\n if shape_location is None:\n # reorder draw, chain -> chain, draw\n i, = column_locs[key]\n sample[key] = np.swapaxes(data[..., i], 0, 1)\n else:\n for i, shape_loc in zip(column_locs[key], shape_location):\n # location to insert extracted array\n shape_loc = tuple([Ellipsis] + [j - 1 for j in shape_loc])\n # reorder draw, chain -> chain, draw and insert to ndarray\n sample[key][shape_loc] = np.swapaxes(data[..., i], 0, 1)\n return sample\n\n\ndef from_cmdstanpy(\n posterior=None,\n *,\n posterior_predictive=None,\n prior=None,\n prior_predictive=None,\n observed_data=None,\n log_likelihood=None,\n coords=None,\n dims=None\n):\n \"\"\"Convert CmdStan data into an InferenceData object.\n\n Parameters\n ----------\n posterior : PosteriorSample object\n PyCmdStan PosteriorSample\n posterior_predictive : str, list of str\n Posterior predictive samples for the fit.\n prior : PosteriorSample\n PyCmdStan PosteriorSample\n prior_predictive : str, list of str\n Prior predictive samples for the fit.\n observed_data : dict\n Observed data used in the sampling.\n log_likelihood : str\n Pointwise log_likelihood for the data.\n coords : dict of str or dict of iterable\n A dictionary containing the values that are used as index. The key\n is the name of the dimension, the values are the index values.\n dims : dict of str or list of str\n A mapping from variables to a list of coordinate names for the variable.\n\n Returns\n -------\n InferenceData object\n \"\"\"\n return CmdStanPyConverter(\n posterior=posterior,\n posterior_predictive=posterior_predictive,\n prior=prior,\n prior_predictive=prior_predictive,\n observed_data=observed_data,\n log_likelihood=log_likelihood,\n coords=coords,\n dims=dims,\n ).to_inference_data()\n", "path": "arviz/data/io_cmdstanpy.py"}]}
| 4,002 | 243 |
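The golden diff above makes two behavioural changes to the CmdStanPy converter: the raw log-likelihood column is re-keyed to the canonical `log_likelihood` name, and `observed_data_to_xarray` tolerates `dims=None`. The sketch below reproduces just those two behaviours with plain dictionaries instead of real CmdStanPy fit objects (which would require a compiled Stan model); the sample data and helper names are illustrative.

```python
# Minimal sketch of the two fixes in the patch above.
import numpy as np

def rename_log_likelihood(data, log_likelihood):
    # after the patch, the raw column name (e.g. "log_lik") is exposed to
    # ArviZ under the canonical "log_likelihood" key
    if log_likelihood in data:
        data["log_likelihood"] = data.pop(log_likelihood)
    return data

def observed_val_dims(dims, key):
    # after the patch, a missing `dims` mapping no longer raises
    # AttributeError when converting observed data
    return dims.get(key) if dims is not None else None

data = {"theta": np.zeros((4, 100)), "log_lik": np.zeros((4, 100, 8))}
print(sorted(rename_log_likelihood(data, "log_lik")))   # ['log_likelihood', 'theta']
print(observed_val_dims(None, "y"))                     # None instead of an error
```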
gh_patches_debug_5879
|
rasdani/github-patches
|
git_diff
|
inventree__InvenTree-1860
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Migration warns for phantom part changes
Here is the warning:
```
Your models in app(s): 'part' have changes that are not yet reflected in a migration, and so won't be applied.
Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.
```
Running `manage.py makemigrations` does **not** generate a new migration file...

Running `manage.py showmigrations part` shows all part migrations are complete.
I found this warning both with PostGreSQL and SQLite3, if that has anything to do with backend dependency.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `InvenTree/InvenTree/fields.py`
Content:
```
1 """ Custom fields used in InvenTree """
2
3 # -*- coding: utf-8 -*-
4 from __future__ import unicode_literals
5 import sys
6
7 from .validators import allowable_url_schemes
8
9 from django.utils.translation import ugettext_lazy as _
10
11 from django.forms.fields import URLField as FormURLField
12 from django.db import models as models
13 from django.core import validators
14 from django import forms
15
16 from decimal import Decimal
17
18 from djmoney.models.fields import MoneyField as ModelMoneyField
19 from djmoney.forms.fields import MoneyField
20 from djmoney.models.validators import MinMoneyValidator
21
22 import InvenTree.helpers
23
24
25 class InvenTreeURLFormField(FormURLField):
26 """ Custom URL form field with custom scheme validators """
27
28 default_validators = [validators.URLValidator(schemes=allowable_url_schemes())]
29
30
31 class InvenTreeURLField(models.URLField):
32 """ Custom URL field which has custom scheme validators """
33
34 default_validators = [validators.URLValidator(schemes=allowable_url_schemes())]
35
36 def formfield(self, **kwargs):
37 return super().formfield(**{
38 'form_class': InvenTreeURLFormField
39 })
40
41
42 def money_kwargs():
43 """ returns the database settings for MoneyFields """
44 from common.settings import currency_code_mappings, currency_code_default
45
46 kwargs = {}
47 kwargs['currency_choices'] = currency_code_mappings()
48 kwargs['default_currency'] = currency_code_default()
49 return kwargs
50
51
52 class InvenTreeModelMoneyField(ModelMoneyField):
53 """
54 Custom MoneyField for clean migrations while using dynamic currency settings
55 """
56
57 def __init__(self, **kwargs):
58 # detect if creating migration
59 if 'makemigrations' in sys.argv:
60 # remove currency information for a clean migration
61 kwargs['default_currency'] = ''
62 kwargs['currency_choices'] = []
63 else:
64 # set defaults
65 kwargs.update(money_kwargs())
66
67 # Set a minimum value validator
68 validators = kwargs.get('validators', [])
69
70 if len(validators) == 0:
71 validators.append(
72 MinMoneyValidator(0),
73 )
74
75 kwargs['validators'] = validators
76
77 super().__init__(**kwargs)
78
79 def formfield(self, **kwargs):
80 """ override form class to use own function """
81 kwargs['form_class'] = InvenTreeMoneyField
82 return super().formfield(**kwargs)
83
84
85 class InvenTreeMoneyField(MoneyField):
86 """ custom MoneyField for clean migrations while using dynamic currency settings """
87 def __init__(self, *args, **kwargs):
88 # override initial values with the real info from database
89 kwargs.update(money_kwargs())
90 super().__init__(*args, **kwargs)
91
92
93 class DatePickerFormField(forms.DateField):
94 """
95 Custom date-picker field
96 """
97
98 def __init__(self, **kwargs):
99
100 help_text = kwargs.get('help_text', _('Enter date'))
101 label = kwargs.get('label', None)
102 required = kwargs.get('required', False)
103 initial = kwargs.get('initial', None)
104
105 widget = forms.DateInput(
106 attrs={
107 'type': 'date',
108 }
109 )
110
111 forms.DateField.__init__(
112 self,
113 required=required,
114 initial=initial,
115 help_text=help_text,
116 widget=widget,
117 label=label
118 )
119
120
121 def round_decimal(value, places):
122 """
123 Round value to the specified number of places.
124 """
125
126 if value is not None:
127 # see https://docs.python.org/2/library/decimal.html#decimal.Decimal.quantize for options
128 return value.quantize(Decimal(10) ** -places)
129 return value
130
131
132 class RoundingDecimalFormField(forms.DecimalField):
133 def to_python(self, value):
134 value = super(RoundingDecimalFormField, self).to_python(value)
135 value = round_decimal(value, self.decimal_places)
136 return value
137
138 def prepare_value(self, value):
139 """
140 Override the 'prepare_value' method, to remove trailing zeros when displaying.
141 Why? It looks nice!
142 """
143
144 if type(value) == Decimal:
145 return InvenTree.helpers.normalize(value)
146 else:
147 return value
148
149
150 class RoundingDecimalField(models.DecimalField):
151 def to_python(self, value):
152 value = super(RoundingDecimalField, self).to_python(value)
153 return round_decimal(value, self.decimal_places)
154
155 def formfield(self, **kwargs):
156 defaults = {
157 'form_class': RoundingDecimalFormField
158 }
159
160 defaults.update(kwargs)
161
162 return super().formfield(**kwargs)
163
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/InvenTree/InvenTree/fields.py b/InvenTree/InvenTree/fields.py
--- a/InvenTree/InvenTree/fields.py
+++ b/InvenTree/InvenTree/fields.py
@@ -55,7 +55,7 @@
def __init__(self, **kwargs):
# detect if creating migration
- if 'makemigrations' in sys.argv:
+ if 'migrate' in sys.argv or 'makemigrations' in sys.argv:
# remove currency information for a clean migration
kwargs['default_currency'] = ''
kwargs['currency_choices'] = []
|
{"golden_diff": "diff --git a/InvenTree/InvenTree/fields.py b/InvenTree/InvenTree/fields.py\n--- a/InvenTree/InvenTree/fields.py\n+++ b/InvenTree/InvenTree/fields.py\n@@ -55,7 +55,7 @@\n \n def __init__(self, **kwargs):\n # detect if creating migration\n- if 'makemigrations' in sys.argv:\n+ if 'migrate' in sys.argv or 'makemigrations' in sys.argv:\n # remove currency information for a clean migration\n kwargs['default_currency'] = ''\n kwargs['currency_choices'] = []\n", "issue": "Migration warns for phantom part changes \nHere is the warning:\r\n\r\n```\r\nYour models in app(s): 'part' have changes that are not yet reflected in a migration, and so won't be applied.\r\nRun 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.\r\n```\r\n\r\nRunning `manage.py makemigrations` does **not** generate new migration file...\r\n\r\nRunning `manage.py showmigrations part` shows all part migrations are complete.\r\n\r\nI found this warning both with PostGreSQL and SQLite3, if that has anything to do with backend dependency.\n", "before_files": [{"content": "\"\"\" Custom fields used in InvenTree \"\"\"\n\n# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\nimport sys\n\nfrom .validators import allowable_url_schemes\n\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom django.forms.fields import URLField as FormURLField\nfrom django.db import models as models\nfrom django.core import validators\nfrom django import forms\n\nfrom decimal import Decimal\n\nfrom djmoney.models.fields import MoneyField as ModelMoneyField\nfrom djmoney.forms.fields import MoneyField\nfrom djmoney.models.validators import MinMoneyValidator\n\nimport InvenTree.helpers\n\n\nclass InvenTreeURLFormField(FormURLField):\n \"\"\" Custom URL form field with custom scheme validators \"\"\"\n\n default_validators = [validators.URLValidator(schemes=allowable_url_schemes())]\n\n\nclass InvenTreeURLField(models.URLField):\n \"\"\" Custom URL field which has custom scheme validators \"\"\"\n\n default_validators = [validators.URLValidator(schemes=allowable_url_schemes())]\n\n def formfield(self, **kwargs):\n return super().formfield(**{\n 'form_class': InvenTreeURLFormField\n })\n\n\ndef money_kwargs():\n \"\"\" returns the database settings for MoneyFields \"\"\"\n from common.settings import currency_code_mappings, currency_code_default\n\n kwargs = {}\n kwargs['currency_choices'] = currency_code_mappings()\n kwargs['default_currency'] = currency_code_default()\n return kwargs\n\n\nclass InvenTreeModelMoneyField(ModelMoneyField):\n \"\"\"\n Custom MoneyField for clean migrations while using dynamic currency settings\n \"\"\"\n \n def __init__(self, **kwargs):\n # detect if creating migration\n if 'makemigrations' in sys.argv:\n # remove currency information for a clean migration\n kwargs['default_currency'] = ''\n kwargs['currency_choices'] = []\n else:\n # set defaults\n kwargs.update(money_kwargs())\n\n # Set a minimum value validator\n validators = kwargs.get('validators', [])\n\n if len(validators) == 0:\n validators.append(\n MinMoneyValidator(0),\n )\n\n kwargs['validators'] = validators\n\n super().__init__(**kwargs)\n\n def formfield(self, **kwargs):\n \"\"\" override form class to use own function \"\"\"\n kwargs['form_class'] = InvenTreeMoneyField\n return super().formfield(**kwargs)\n\n\nclass InvenTreeMoneyField(MoneyField):\n \"\"\" custom MoneyField for clean migrations while using dynamic currency settings \"\"\"\n def __init__(self, *args, 
**kwargs):\n # override initial values with the real info from database\n kwargs.update(money_kwargs())\n super().__init__(*args, **kwargs)\n\n\nclass DatePickerFormField(forms.DateField):\n \"\"\"\n Custom date-picker field\n \"\"\"\n\n def __init__(self, **kwargs):\n\n help_text = kwargs.get('help_text', _('Enter date'))\n label = kwargs.get('label', None)\n required = kwargs.get('required', False)\n initial = kwargs.get('initial', None)\n\n widget = forms.DateInput(\n attrs={\n 'type': 'date',\n }\n )\n\n forms.DateField.__init__(\n self,\n required=required,\n initial=initial,\n help_text=help_text,\n widget=widget,\n label=label\n )\n\n\ndef round_decimal(value, places):\n \"\"\"\n Round value to the specified number of places.\n \"\"\"\n\n if value is not None:\n # see https://docs.python.org/2/library/decimal.html#decimal.Decimal.quantize for options\n return value.quantize(Decimal(10) ** -places)\n return value\n\n\nclass RoundingDecimalFormField(forms.DecimalField):\n def to_python(self, value):\n value = super(RoundingDecimalFormField, self).to_python(value)\n value = round_decimal(value, self.decimal_places)\n return value\n\n def prepare_value(self, value):\n \"\"\"\n Override the 'prepare_value' method, to remove trailing zeros when displaying.\n Why? It looks nice!\n \"\"\"\n\n if type(value) == Decimal:\n return InvenTree.helpers.normalize(value)\n else:\n return value\n\n\nclass RoundingDecimalField(models.DecimalField):\n def to_python(self, value):\n value = super(RoundingDecimalField, self).to_python(value)\n return round_decimal(value, self.decimal_places)\n\n def formfield(self, **kwargs):\n defaults = {\n 'form_class': RoundingDecimalFormField\n }\n\n defaults.update(kwargs)\n\n return super().formfield(**kwargs)\n", "path": "InvenTree/InvenTree/fields.py"}], "after_files": [{"content": "\"\"\" Custom fields used in InvenTree \"\"\"\n\n# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\nimport sys\n\nfrom .validators import allowable_url_schemes\n\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom django.forms.fields import URLField as FormURLField\nfrom django.db import models as models\nfrom django.core import validators\nfrom django import forms\n\nfrom decimal import Decimal\n\nfrom djmoney.models.fields import MoneyField as ModelMoneyField\nfrom djmoney.forms.fields import MoneyField\nfrom djmoney.models.validators import MinMoneyValidator\n\nimport InvenTree.helpers\nimport common.settings\n\n\nclass InvenTreeURLFormField(FormURLField):\n \"\"\" Custom URL form field with custom scheme validators \"\"\"\n\n default_validators = [validators.URLValidator(schemes=allowable_url_schemes())]\n\n\nclass InvenTreeURLField(models.URLField):\n \"\"\" Custom URL field which has custom scheme validators \"\"\"\n\n default_validators = [validators.URLValidator(schemes=allowable_url_schemes())]\n\n def formfield(self, **kwargs):\n return super().formfield(**{\n 'form_class': InvenTreeURLFormField\n })\n\n\ndef money_kwargs():\n \"\"\" returns the database settings for MoneyFields \"\"\"\n kwargs = {}\n kwargs['currency_choices'] = common.settings.currency_code_mappings()\n kwargs['default_currency'] = common.settings.currency_code_default\n return kwargs\n\n\nclass InvenTreeModelMoneyField(ModelMoneyField):\n \"\"\"\n Custom MoneyField for clean migrations while using dynamic currency settings\n \"\"\"\n \n def __init__(self, **kwargs):\n # detect if creating migration\n if 'migrate' in sys.argv or 'makemigrations' in sys.argv:\n # remove currency 
information for a clean migration\n kwargs['default_currency'] = ''\n kwargs['currency_choices'] = []\n else:\n # set defaults\n kwargs.update(money_kwargs())\n\n # Set a minimum value validator\n validators = kwargs.get('validators', [])\n\n if len(validators) == 0:\n validators.append(\n MinMoneyValidator(0),\n )\n\n kwargs['validators'] = validators\n\n super().__init__(**kwargs)\n\n def formfield(self, **kwargs):\n \"\"\" override form class to use own function \"\"\"\n kwargs['form_class'] = InvenTreeMoneyField\n return super().formfield(**kwargs)\n\n\nclass InvenTreeMoneyField(MoneyField):\n \"\"\" custom MoneyField for clean migrations while using dynamic currency settings \"\"\"\n def __init__(self, *args, **kwargs):\n # override initial values with the real info from database\n kwargs.update(money_kwargs())\n super().__init__(*args, **kwargs)\n\n\nclass DatePickerFormField(forms.DateField):\n \"\"\"\n Custom date-picker field\n \"\"\"\n\n def __init__(self, **kwargs):\n\n help_text = kwargs.get('help_text', _('Enter date'))\n label = kwargs.get('label', None)\n required = kwargs.get('required', False)\n initial = kwargs.get('initial', None)\n\n widget = forms.DateInput(\n attrs={\n 'type': 'date',\n }\n )\n\n forms.DateField.__init__(\n self,\n required=required,\n initial=initial,\n help_text=help_text,\n widget=widget,\n label=label\n )\n\n\ndef round_decimal(value, places):\n \"\"\"\n Round value to the specified number of places.\n \"\"\"\n\n if value is not None:\n # see https://docs.python.org/2/library/decimal.html#decimal.Decimal.quantize for options\n return value.quantize(Decimal(10) ** -places)\n return value\n\n\nclass RoundingDecimalFormField(forms.DecimalField):\n def to_python(self, value):\n value = super(RoundingDecimalFormField, self).to_python(value)\n value = round_decimal(value, self.decimal_places)\n return value\n\n def prepare_value(self, value):\n \"\"\"\n Override the 'prepare_value' method, to remove trailing zeros when displaying.\n Why? It looks nice!\n \"\"\"\n\n if type(value) == Decimal:\n return InvenTree.helpers.normalize(value)\n else:\n return value\n\n\nclass RoundingDecimalField(models.DecimalField):\n def to_python(self, value):\n value = super(RoundingDecimalField, self).to_python(value)\n return round_decimal(value, self.decimal_places)\n\n def formfield(self, **kwargs):\n defaults = {\n 'form_class': RoundingDecimalFormField\n }\n\n defaults.update(kwargs)\n\n return super().formfield(**kwargs)\n", "path": "InvenTree/InvenTree/fields.py"}]}
| 1,748 | 144 |
gh_patches_debug_14841
|
rasdani/github-patches
|
git_diff
|
koxudaxi__datamodel-code-generator-421
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
IPv4Address doesn't import from pydantic.validators
**Describe the bug**
When using `format: ipv4`, the following import is added to the output:
```py
from pydantic import IPv4Address
```
This isn't a valid import.
**To Reproduce**
Example schema:
```yaml
openapi: 3.0.0
info:
version: 0.0.1
title: Foo API
paths:
/foo:
get:
responses:
"200":
description: Success
components:
schemas:
Foo:
type: object
properties:
ip:
type: string
format: ipv4
```
Used commandline:
```
$ datamodel-codegen --input openapi.yaml
```
**Expected behavior**
When using `format: ipv4`, the following import is added to the output:
```py
from pydantic.validators import IPv4Address
```
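The patch shown further down resolves the import through the standard-library `ipaddress` module instead of `pydantic`. A minimal sketch of how such a model validates, assuming pydantic v1 and a hypothetical `Foo` model matching the schema above:
```py
from ipaddress import IPv4Address

from pydantic import BaseModel


class Foo(BaseModel):
    ip: IPv4Address  # pydantic v1 validates ipaddress types natively


Foo(ip="127.0.0.1")   # ok
Foo(ip="not-an-ip")   # raises a ValidationError
```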
**Version:**
- OS: MacOS
- Python version: `3.9.2`
- datamodel-code-generator version: `0.8.2`
**Additional context**
None
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `datamodel_code_generator/model/pydantic/imports.py`
Content:
```
1 from datamodel_code_generator.imports import Import
2
3 IMPORT_CONSTR = Import.from_full_path('pydantic.constr')
4 IMPORT_CONINT = Import.from_full_path('pydantic.conint')
5 IMPORT_CONFLOAT = Import.from_full_path('pydantic.confloat')
6 IMPORT_CONDECIMAL = Import.from_full_path('pydantic.condecimal')
7 IMPORT_CONBYTES = Import.from_full_path('pydantic.conbytes')
8 IMPORT_POSITIVE_INT = Import.from_full_path('pydantic.PositiveInt')
9 IMPORT_NEGATIVE_INT = Import.from_full_path('pydantic.NegativeInt')
10 IMPORT_POSITIVE_FLOAT = Import.from_full_path('pydantic.PositiveFloat')
11 IMPORT_NEGATIVE_FLOAT = Import.from_full_path('pydantic.NegativeFloat')
12 IMPORT_SECRET_STR = Import.from_full_path('pydantic.SecretStr')
13 IMPORT_EMAIL_STR = Import.from_full_path('pydantic.EmailStr')
14 IMPORT_UUID1 = Import.from_full_path('pydantic.UUID1')
15 IMPORT_UUID2 = Import.from_full_path('pydantic.UUID2')
16 IMPORT_UUID3 = Import.from_full_path('pydantic.UUID3')
17 IMPORT_UUID4 = Import.from_full_path('pydantic.UUID4')
18 IMPORT_UUID5 = Import.from_full_path('pydantic.UUID5')
19 IMPORT_ANYURL = Import.from_full_path('pydantic.AnyUrl')
20 IMPORT_IPV4ADDRESS = Import.from_full_path('pydantic.IPv4Address')
21 IMPORT_IPV6ADDRESS = Import.from_full_path('pydantic.IPv6Address')
22 IMPORT_EXTRA = Import.from_full_path('pydantic.Extra')
23 IMPORT_FIELD = Import.from_full_path('pydantic.Field')
24 IMPORT_STRICT_INT = Import.from_full_path('pydantic.StrictInt')
25 IMPORT_STRICT_FLOAT = Import.from_full_path('pydantic.StrictFloat')
26 IMPORT_STRICT_STR = Import.from_full_path('pydantic.StrictStr')
27 IMPORT_STRICT_BOOL = Import.from_full_path('pydantic.StrictBool')
28 IMPORT_STRICT_BYTES = Import.from_full_path('pydantic.StrictBytes')
29 IMPORT_DATACLASS = Import.from_full_path('pydantic.dataclasses.dataclass')
30
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/datamodel_code_generator/model/pydantic/imports.py b/datamodel_code_generator/model/pydantic/imports.py
--- a/datamodel_code_generator/model/pydantic/imports.py
+++ b/datamodel_code_generator/model/pydantic/imports.py
@@ -17,8 +17,8 @@
IMPORT_UUID4 = Import.from_full_path('pydantic.UUID4')
IMPORT_UUID5 = Import.from_full_path('pydantic.UUID5')
IMPORT_ANYURL = Import.from_full_path('pydantic.AnyUrl')
-IMPORT_IPV4ADDRESS = Import.from_full_path('pydantic.IPv4Address')
-IMPORT_IPV6ADDRESS = Import.from_full_path('pydantic.IPv6Address')
+IMPORT_IPV4ADDRESS = Import.from_full_path('ipaddress.IPv4Address')
+IMPORT_IPV6ADDRESS = Import.from_full_path('ipaddress.IPv6Address')
IMPORT_EXTRA = Import.from_full_path('pydantic.Extra')
IMPORT_FIELD = Import.from_full_path('pydantic.Field')
IMPORT_STRICT_INT = Import.from_full_path('pydantic.StrictInt')
|
{"golden_diff": "diff --git a/datamodel_code_generator/model/pydantic/imports.py b/datamodel_code_generator/model/pydantic/imports.py\n--- a/datamodel_code_generator/model/pydantic/imports.py\n+++ b/datamodel_code_generator/model/pydantic/imports.py\n@@ -17,8 +17,8 @@\n IMPORT_UUID4 = Import.from_full_path('pydantic.UUID4')\n IMPORT_UUID5 = Import.from_full_path('pydantic.UUID5')\n IMPORT_ANYURL = Import.from_full_path('pydantic.AnyUrl')\n-IMPORT_IPV4ADDRESS = Import.from_full_path('pydantic.IPv4Address')\n-IMPORT_IPV6ADDRESS = Import.from_full_path('pydantic.IPv6Address')\n+IMPORT_IPV4ADDRESS = Import.from_full_path('ipaddress.IPv4Address')\n+IMPORT_IPV6ADDRESS = Import.from_full_path('ipaddress.IPv6Address')\n IMPORT_EXTRA = Import.from_full_path('pydantic.Extra')\n IMPORT_FIELD = Import.from_full_path('pydantic.Field')\n IMPORT_STRICT_INT = Import.from_full_path('pydantic.StrictInt')\n", "issue": "IPv4Address doesn't import from pydantic.validators\n**Describe the bug**\r\n\r\nWhen using `format: ipv4`, the following import is added to the output:\r\n\r\n```py\r\nfrom pydantic import IPv4Address\r\n```\r\n\r\nThis isn't a valid import.\r\n\r\n**To Reproduce**\r\n\r\nExample schema:\r\n```yaml\r\nopenapi: 3.0.0\r\n\r\ninfo:\r\n version: 0.0.1\r\n title: Foo API\r\n\r\npaths:\r\n /foo:\r\n get:\r\n responses:\r\n \"200\":\r\n description: Success\r\n\r\ncomponents:\r\n schemas:\r\n Foo:\r\n type: object\r\n properties:\r\n ip:\r\n type: string\r\n format: ipv4\r\n```\r\n\r\nUsed commandline:\r\n```\r\n$ datamodel-codegen --input openapi.yaml\r\n```\r\n\r\n**Expected behavior**\r\n\r\nWhen using `format: ipv4`, the following import is added to the output:\r\n\r\n```py\r\nfrom pydantic.validators import IPv4Address\r\n```\r\n\r\n**Version:**\r\n - OS: MacOS\r\n - Python version: `3.9.2`\r\n - datamodel-code-generator version: `0.8.2`\r\n\r\n**Additional context**\r\nNone\r\n\n", "before_files": [{"content": "from datamodel_code_generator.imports import Import\n\nIMPORT_CONSTR = Import.from_full_path('pydantic.constr')\nIMPORT_CONINT = Import.from_full_path('pydantic.conint')\nIMPORT_CONFLOAT = Import.from_full_path('pydantic.confloat')\nIMPORT_CONDECIMAL = Import.from_full_path('pydantic.condecimal')\nIMPORT_CONBYTES = Import.from_full_path('pydantic.conbytes')\nIMPORT_POSITIVE_INT = Import.from_full_path('pydantic.PositiveInt')\nIMPORT_NEGATIVE_INT = Import.from_full_path('pydantic.NegativeInt')\nIMPORT_POSITIVE_FLOAT = Import.from_full_path('pydantic.PositiveFloat')\nIMPORT_NEGATIVE_FLOAT = Import.from_full_path('pydantic.NegativeFloat')\nIMPORT_SECRET_STR = Import.from_full_path('pydantic.SecretStr')\nIMPORT_EMAIL_STR = Import.from_full_path('pydantic.EmailStr')\nIMPORT_UUID1 = Import.from_full_path('pydantic.UUID1')\nIMPORT_UUID2 = Import.from_full_path('pydantic.UUID2')\nIMPORT_UUID3 = Import.from_full_path('pydantic.UUID3')\nIMPORT_UUID4 = Import.from_full_path('pydantic.UUID4')\nIMPORT_UUID5 = Import.from_full_path('pydantic.UUID5')\nIMPORT_ANYURL = Import.from_full_path('pydantic.AnyUrl')\nIMPORT_IPV4ADDRESS = Import.from_full_path('pydantic.IPv4Address')\nIMPORT_IPV6ADDRESS = Import.from_full_path('pydantic.IPv6Address')\nIMPORT_EXTRA = Import.from_full_path('pydantic.Extra')\nIMPORT_FIELD = Import.from_full_path('pydantic.Field')\nIMPORT_STRICT_INT = Import.from_full_path('pydantic.StrictInt')\nIMPORT_STRICT_FLOAT = Import.from_full_path('pydantic.StrictFloat')\nIMPORT_STRICT_STR = Import.from_full_path('pydantic.StrictStr')\nIMPORT_STRICT_BOOL = 
Import.from_full_path('pydantic.StrictBool')\nIMPORT_STRICT_BYTES = Import.from_full_path('pydantic.StrictBytes')\nIMPORT_DATACLASS = Import.from_full_path('pydantic.dataclasses.dataclass')\n", "path": "datamodel_code_generator/model/pydantic/imports.py"}], "after_files": [{"content": "from datamodel_code_generator.imports import Import\n\nIMPORT_CONSTR = Import.from_full_path('pydantic.constr')\nIMPORT_CONINT = Import.from_full_path('pydantic.conint')\nIMPORT_CONFLOAT = Import.from_full_path('pydantic.confloat')\nIMPORT_CONDECIMAL = Import.from_full_path('pydantic.condecimal')\nIMPORT_CONBYTES = Import.from_full_path('pydantic.conbytes')\nIMPORT_POSITIVE_INT = Import.from_full_path('pydantic.PositiveInt')\nIMPORT_NEGATIVE_INT = Import.from_full_path('pydantic.NegativeInt')\nIMPORT_POSITIVE_FLOAT = Import.from_full_path('pydantic.PositiveFloat')\nIMPORT_NEGATIVE_FLOAT = Import.from_full_path('pydantic.NegativeFloat')\nIMPORT_SECRET_STR = Import.from_full_path('pydantic.SecretStr')\nIMPORT_EMAIL_STR = Import.from_full_path('pydantic.EmailStr')\nIMPORT_UUID1 = Import.from_full_path('pydantic.UUID1')\nIMPORT_UUID2 = Import.from_full_path('pydantic.UUID2')\nIMPORT_UUID3 = Import.from_full_path('pydantic.UUID3')\nIMPORT_UUID4 = Import.from_full_path('pydantic.UUID4')\nIMPORT_UUID5 = Import.from_full_path('pydantic.UUID5')\nIMPORT_ANYURL = Import.from_full_path('pydantic.AnyUrl')\nIMPORT_IPV4ADDRESS = Import.from_full_path('ipaddress.IPv4Address')\nIMPORT_IPV6ADDRESS = Import.from_full_path('ipaddress.IPv6Address')\nIMPORT_EXTRA = Import.from_full_path('pydantic.Extra')\nIMPORT_FIELD = Import.from_full_path('pydantic.Field')\nIMPORT_STRICT_INT = Import.from_full_path('pydantic.StrictInt')\nIMPORT_STRICT_FLOAT = Import.from_full_path('pydantic.StrictFloat')\nIMPORT_STRICT_STR = Import.from_full_path('pydantic.StrictStr')\nIMPORT_STRICT_BOOL = Import.from_full_path('pydantic.StrictBool')\nIMPORT_STRICT_BYTES = Import.from_full_path('pydantic.StrictBytes')\nIMPORT_DATACLASS = Import.from_full_path('pydantic.dataclasses.dataclass')\n", "path": "datamodel_code_generator/model/pydantic/imports.py"}]}
| 999 | 230 |
gh_patches_debug_5490
|
rasdani/github-patches
|
git_diff
|
jazzband__pip-tools-1867
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve --allow-unsafe warning comment
Currently, if `pip-compile` suppresses a requirement because the `--allow-unsafe` flag wasn't passed, it includes a warning that reads something like this:
```
# WARNING: The following packages were not pinned, but pip requires them to be
# pinned when the requirements file includes hashes. Consider using the --allow-unsafe flag.
# pip
```
This wording is a little incorrect because it's not always the case that pip requires them to be pinned. Namely, if the requirement is already satisfied by a package that's already installed, then pip won't check any hashes (because it won't need to install anything for that requirement). Because of this, a user seeing the current message can get the mistaken idea that they don't have any options other than passing `--allow-unsafe`.
Thus, I'd like to suggest that the wording be modified slightly to read something like:
```
# WARNING: The following packages were not pinned, but pip requires them to be
# pinned when the requirements file includes hashes and the requirement isn't
# already satisfied by a package already installed. Consider using the
# --allow-unsafe flag.
# pip
```
(As a side note, there is a third option in addition to (1) commenting out the "unsafe" requirement, and (2) including the pinned requirement with hashes. Namely, the pinned requirement can be included without hashes. Doing that third option would enforce that the user has the correct version already installed by erroring out if they don't, while not risking doing an "unsafe" operation by uninstalling / installing the package in question. The reason is that, by not including the hashes, it prevents a new install from taking place.)
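For illustration only, the three possibilities described above would look roughly like this in a compiled requirements file (the version pin and hash are placeholders):
```
# (1) commented out, as pip-compile currently does without --allow-unsafe
# pip

# (2) pinned with hashes, as emitted with --allow-unsafe
pip==21.1.1 \
    --hash=sha256:<hash>

# (3) pinned without hashes, the third option described above
pip==21.1.1
```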
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `piptools/writer.py`
Content:
```
1 from __future__ import annotations
2
3 import io
4 import os
5 import re
6 import sys
7 from itertools import chain
8 from typing import BinaryIO, Iterable, Iterator, cast
9
10 from click import unstyle
11 from click.core import Context
12 from pip._internal.models.format_control import FormatControl
13 from pip._internal.req.req_install import InstallRequirement
14 from pip._vendor.packaging.markers import Marker
15 from pip._vendor.packaging.utils import canonicalize_name
16
17 from .logging import log
18 from .utils import (
19 comment,
20 dedup,
21 format_requirement,
22 get_compile_command,
23 key_from_ireq,
24 strip_extras,
25 )
26
27 MESSAGE_UNHASHED_PACKAGE = comment(
28 "# WARNING: pip install will require the following package to be hashed."
29 "\n# Consider using a hashable URL like "
30 "https://github.com/jazzband/pip-tools/archive/SOMECOMMIT.zip"
31 )
32
33 MESSAGE_UNSAFE_PACKAGES_UNPINNED = comment(
34 "# WARNING: The following packages were not pinned, but pip requires them to be"
35 "\n# pinned when the requirements file includes hashes. "
36 "Consider using the --allow-unsafe flag."
37 )
38
39 MESSAGE_UNSAFE_PACKAGES = comment(
40 "# The following packages are considered to be unsafe in a requirements file:"
41 )
42
43 MESSAGE_UNINSTALLABLE = (
44 "The generated requirements file may be rejected by pip install. "
45 "See # WARNING lines for details."
46 )
47
48
49 strip_comes_from_line_re = re.compile(r" \(line \d+\)$")
50
51
52 def _comes_from_as_string(comes_from: str | InstallRequirement) -> str:
53 if isinstance(comes_from, str):
54 return strip_comes_from_line_re.sub("", comes_from)
55 return cast(str, canonicalize_name(key_from_ireq(comes_from)))
56
57
58 def annotation_style_split(required_by: set[str]) -> str:
59 sorted_required_by = sorted(required_by)
60 if len(sorted_required_by) == 1:
61 source = sorted_required_by[0]
62 annotation = "# via " + source
63 else:
64 annotation_lines = ["# via"]
65 for source in sorted_required_by:
66 annotation_lines.append(" # " + source)
67 annotation = "\n".join(annotation_lines)
68 return annotation
69
70
71 def annotation_style_line(required_by: set[str]) -> str:
72 return f"# via {', '.join(sorted(required_by))}"
73
74
75 class OutputWriter:
76 def __init__(
77 self,
78 dst_file: BinaryIO,
79 click_ctx: Context,
80 dry_run: bool,
81 emit_header: bool,
82 emit_index_url: bool,
83 emit_trusted_host: bool,
84 annotate: bool,
85 annotation_style: str,
86 strip_extras: bool,
87 generate_hashes: bool,
88 default_index_url: str,
89 index_urls: Iterable[str],
90 trusted_hosts: Iterable[str],
91 format_control: FormatControl,
92 linesep: str,
93 allow_unsafe: bool,
94 find_links: list[str],
95 emit_find_links: bool,
96 emit_options: bool,
97 ) -> None:
98 self.dst_file = dst_file
99 self.click_ctx = click_ctx
100 self.dry_run = dry_run
101 self.emit_header = emit_header
102 self.emit_index_url = emit_index_url
103 self.emit_trusted_host = emit_trusted_host
104 self.annotate = annotate
105 self.annotation_style = annotation_style
106 self.strip_extras = strip_extras
107 self.generate_hashes = generate_hashes
108 self.default_index_url = default_index_url
109 self.index_urls = index_urls
110 self.trusted_hosts = trusted_hosts
111 self.format_control = format_control
112 self.linesep = linesep
113 self.allow_unsafe = allow_unsafe
114 self.find_links = find_links
115 self.emit_find_links = emit_find_links
116 self.emit_options = emit_options
117
118 def _sort_key(self, ireq: InstallRequirement) -> tuple[bool, str]:
119 return (not ireq.editable, key_from_ireq(ireq))
120
121 def write_header(self) -> Iterator[str]:
122 if self.emit_header:
123 yield comment("#")
124 yield comment(
125 "# This file is autogenerated by pip-compile with Python "
126 f"{sys.version_info.major}.{sys.version_info.minor}"
127 )
128 yield comment("# by the following command:")
129 yield comment("#")
130 compile_command = os.environ.get(
131 "CUSTOM_COMPILE_COMMAND"
132 ) or get_compile_command(self.click_ctx)
133 yield comment(f"# {compile_command}")
134 yield comment("#")
135
136 def write_index_options(self) -> Iterator[str]:
137 if self.emit_index_url:
138 for index, index_url in enumerate(dedup(self.index_urls)):
139 if index == 0 and index_url.rstrip("/") == self.default_index_url:
140 continue
141 flag = "--index-url" if index == 0 else "--extra-index-url"
142 yield f"{flag} {index_url}"
143
144 def write_trusted_hosts(self) -> Iterator[str]:
145 if self.emit_trusted_host:
146 for trusted_host in dedup(self.trusted_hosts):
147 yield f"--trusted-host {trusted_host}"
148
149 def write_format_controls(self) -> Iterator[str]:
150 for nb in dedup(sorted(self.format_control.no_binary)):
151 yield f"--no-binary {nb}"
152 for ob in dedup(sorted(self.format_control.only_binary)):
153 yield f"--only-binary {ob}"
154
155 def write_find_links(self) -> Iterator[str]:
156 if self.emit_find_links:
157 for find_link in dedup(self.find_links):
158 yield f"--find-links {find_link}"
159
160 def write_flags(self) -> Iterator[str]:
161 if not self.emit_options:
162 return
163 emitted = False
164 for line in chain(
165 self.write_index_options(),
166 self.write_find_links(),
167 self.write_trusted_hosts(),
168 self.write_format_controls(),
169 ):
170 emitted = True
171 yield line
172 if emitted:
173 yield ""
174
175 def _iter_lines(
176 self,
177 results: set[InstallRequirement],
178 unsafe_requirements: set[InstallRequirement],
179 unsafe_packages: set[str],
180 markers: dict[str, Marker],
181 hashes: dict[InstallRequirement, set[str]] | None = None,
182 ) -> Iterator[str]:
183 # default values
184 unsafe_packages = unsafe_packages if self.allow_unsafe else set()
185 hashes = hashes or {}
186
187 # Check for unhashed or unpinned packages if at least one package does have
188 # hashes, which will trigger pip install's --require-hashes mode.
189 warn_uninstallable = False
190 has_hashes = hashes and any(hash for hash in hashes.values())
191
192 yielded = False
193
194 for line in self.write_header():
195 yield line
196 yielded = True
197 for line in self.write_flags():
198 yield line
199 yielded = True
200
201 unsafe_requirements = unsafe_requirements or {
202 r for r in results if r.name in unsafe_packages
203 }
204 packages = {r for r in results if r.name not in unsafe_packages}
205
206 if packages:
207 for ireq in sorted(packages, key=self._sort_key):
208 if has_hashes and not hashes.get(ireq):
209 yield MESSAGE_UNHASHED_PACKAGE
210 warn_uninstallable = True
211 line = self._format_requirement(
212 ireq, markers.get(key_from_ireq(ireq)), hashes=hashes
213 )
214 yield line
215 yielded = True
216
217 if unsafe_requirements:
218 yield ""
219 yielded = True
220 if has_hashes and not self.allow_unsafe:
221 yield MESSAGE_UNSAFE_PACKAGES_UNPINNED
222 warn_uninstallable = True
223 else:
224 yield MESSAGE_UNSAFE_PACKAGES
225
226 for ireq in sorted(unsafe_requirements, key=self._sort_key):
227 ireq_key = key_from_ireq(ireq)
228 if not self.allow_unsafe:
229 yield comment(f"# {ireq_key}")
230 else:
231 line = self._format_requirement(
232 ireq, marker=markers.get(ireq_key), hashes=hashes
233 )
234 yield line
235
236 # Yield even when there's no real content, so that blank files are written
237 if not yielded:
238 yield ""
239
240 if warn_uninstallable:
241 log.warning(MESSAGE_UNINSTALLABLE)
242
243 def write(
244 self,
245 results: set[InstallRequirement],
246 unsafe_requirements: set[InstallRequirement],
247 unsafe_packages: set[str],
248 markers: dict[str, Marker],
249 hashes: dict[InstallRequirement, set[str]] | None,
250 ) -> None:
251 if not self.dry_run:
252 dst_file = io.TextIOWrapper(
253 self.dst_file,
254 encoding="utf8",
255 newline=self.linesep,
256 line_buffering=True,
257 )
258 try:
259 for line in self._iter_lines(
260 results, unsafe_requirements, unsafe_packages, markers, hashes
261 ):
262 if self.dry_run:
263 # Bypass the log level to always print this during a dry run
264 log.log(line)
265 else:
266 log.info(line)
267 dst_file.write(unstyle(line))
268 dst_file.write("\n")
269 finally:
270 if not self.dry_run:
271 dst_file.detach()
272
273 def _format_requirement(
274 self,
275 ireq: InstallRequirement,
276 marker: Marker | None = None,
277 hashes: dict[InstallRequirement, set[str]] | None = None,
278 ) -> str:
279 ireq_hashes = (hashes if hashes is not None else {}).get(ireq)
280
281 line = format_requirement(ireq, marker=marker, hashes=ireq_hashes)
282 if self.strip_extras:
283 line = strip_extras(line)
284
285 if not self.annotate:
286 return line
287
288 # Annotate what packages or reqs-ins this package is required by
289 required_by = set()
290 if hasattr(ireq, "_source_ireqs"):
291 required_by |= {
292 _comes_from_as_string(src_ireq.comes_from)
293 for src_ireq in ireq._source_ireqs
294 if src_ireq.comes_from
295 }
296
297 if ireq.comes_from:
298 required_by.add(_comes_from_as_string(ireq.comes_from))
299
300 required_by |= set(getattr(ireq, "_required_by", set()))
301
302 if required_by:
303 if self.annotation_style == "split":
304 annotation = annotation_style_split(required_by)
305 sep = "\n "
306 elif self.annotation_style == "line":
307 annotation = annotation_style_line(required_by)
308 sep = "\n " if ireq_hashes else " "
309 else: # pragma: no cover
310 raise ValueError("Invalid value for annotation style")
311 if self.strip_extras:
312 annotation = strip_extras(annotation)
313 # 24 is one reasonable column size to use here, that we've used in the past
314 lines = f"{line:24}{sep}{comment(annotation)}".splitlines()
315 line = "\n".join(ln.rstrip() for ln in lines)
316
317 return line
318
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/piptools/writer.py b/piptools/writer.py
--- a/piptools/writer.py
+++ b/piptools/writer.py
@@ -32,7 +32,8 @@
MESSAGE_UNSAFE_PACKAGES_UNPINNED = comment(
"# WARNING: The following packages were not pinned, but pip requires them to be"
- "\n# pinned when the requirements file includes hashes. "
+ "\n# pinned when the requirements file includes hashes and the requirement is not"
+ "\n# satisfied by a package already installed. "
"Consider using the --allow-unsafe flag."
)
|
{"golden_diff": "diff --git a/piptools/writer.py b/piptools/writer.py\n--- a/piptools/writer.py\n+++ b/piptools/writer.py\n@@ -32,7 +32,8 @@\n \n MESSAGE_UNSAFE_PACKAGES_UNPINNED = comment(\n \"# WARNING: The following packages were not pinned, but pip requires them to be\"\n- \"\\n# pinned when the requirements file includes hashes. \"\n+ \"\\n# pinned when the requirements file includes hashes and the requirement is not\"\n+ \"\\n# satisfied by a package already installed. \"\n \"Consider using the --allow-unsafe flag.\"\n )\n", "issue": "Improve --allow-unsafe warning comment\nCurrently, if `pip-compile` suppresses a requirement because the `--allow-unsafe` flag wasn't passed, it includes a warning that reads something like this:\r\n\r\n```\r\n# WARNING: The following packages were not pinned, but pip requires them to be\r\n# pinned when the requirements file includes hashes. Consider using the --allow-unsafe flag.\r\n# pip\r\n```\r\n\r\nThis wording is a little incorrect because it's not always the case that pip requires them to be pinned. Namely, if the requirement is already satisfied by a package that's already installed, then pip won't check any hashes (because it won't need to install anything for that requirement). Because of this, a user seeing the current message can get the mistaken idea that they don't have any options other than passing `--allow-unsafe`.\r\n\r\nThus, I'd like to suggest that the wording be modified slightly to read something like:\r\n\r\n```\r\n# WARNING: The following packages were not pinned, but pip requires them to be\r\n# pinned when the requirements file includes hashes and the requirement isn't\r\n# already satisfied by a package already installed. Consider using the\r\n# --allow-unsafe flag.\r\n# pip\r\n```\r\n\r\n(As a side note, there is a third option in addition to (1) commenting out the \"unsafe\" requirement, and (2) including the pinned requirement with hashes. Namely, the pinned requirement can be included without hashes. Doing that third option would enforce that the user has the correct version already installed by erroring out if they don't, while not risking doing an \"unsafe\" operation by uninstalling / installing the package in question. The reason is that, by not including the hashes, it prevents a new install from taking place.)\n", "before_files": [{"content": "from __future__ import annotations\n\nimport io\nimport os\nimport re\nimport sys\nfrom itertools import chain\nfrom typing import BinaryIO, Iterable, Iterator, cast\n\nfrom click import unstyle\nfrom click.core import Context\nfrom pip._internal.models.format_control import FormatControl\nfrom pip._internal.req.req_install import InstallRequirement\nfrom pip._vendor.packaging.markers import Marker\nfrom pip._vendor.packaging.utils import canonicalize_name\n\nfrom .logging import log\nfrom .utils import (\n comment,\n dedup,\n format_requirement,\n get_compile_command,\n key_from_ireq,\n strip_extras,\n)\n\nMESSAGE_UNHASHED_PACKAGE = comment(\n \"# WARNING: pip install will require the following package to be hashed.\"\n \"\\n# Consider using a hashable URL like \"\n \"https://github.com/jazzband/pip-tools/archive/SOMECOMMIT.zip\"\n)\n\nMESSAGE_UNSAFE_PACKAGES_UNPINNED = comment(\n \"# WARNING: The following packages were not pinned, but pip requires them to be\"\n \"\\n# pinned when the requirements file includes hashes. 
\"\n \"Consider using the --allow-unsafe flag.\"\n)\n\nMESSAGE_UNSAFE_PACKAGES = comment(\n \"# The following packages are considered to be unsafe in a requirements file:\"\n)\n\nMESSAGE_UNINSTALLABLE = (\n \"The generated requirements file may be rejected by pip install. \"\n \"See # WARNING lines for details.\"\n)\n\n\nstrip_comes_from_line_re = re.compile(r\" \\(line \\d+\\)$\")\n\n\ndef _comes_from_as_string(comes_from: str | InstallRequirement) -> str:\n if isinstance(comes_from, str):\n return strip_comes_from_line_re.sub(\"\", comes_from)\n return cast(str, canonicalize_name(key_from_ireq(comes_from)))\n\n\ndef annotation_style_split(required_by: set[str]) -> str:\n sorted_required_by = sorted(required_by)\n if len(sorted_required_by) == 1:\n source = sorted_required_by[0]\n annotation = \"# via \" + source\n else:\n annotation_lines = [\"# via\"]\n for source in sorted_required_by:\n annotation_lines.append(\" # \" + source)\n annotation = \"\\n\".join(annotation_lines)\n return annotation\n\n\ndef annotation_style_line(required_by: set[str]) -> str:\n return f\"# via {', '.join(sorted(required_by))}\"\n\n\nclass OutputWriter:\n def __init__(\n self,\n dst_file: BinaryIO,\n click_ctx: Context,\n dry_run: bool,\n emit_header: bool,\n emit_index_url: bool,\n emit_trusted_host: bool,\n annotate: bool,\n annotation_style: str,\n strip_extras: bool,\n generate_hashes: bool,\n default_index_url: str,\n index_urls: Iterable[str],\n trusted_hosts: Iterable[str],\n format_control: FormatControl,\n linesep: str,\n allow_unsafe: bool,\n find_links: list[str],\n emit_find_links: bool,\n emit_options: bool,\n ) -> None:\n self.dst_file = dst_file\n self.click_ctx = click_ctx\n self.dry_run = dry_run\n self.emit_header = emit_header\n self.emit_index_url = emit_index_url\n self.emit_trusted_host = emit_trusted_host\n self.annotate = annotate\n self.annotation_style = annotation_style\n self.strip_extras = strip_extras\n self.generate_hashes = generate_hashes\n self.default_index_url = default_index_url\n self.index_urls = index_urls\n self.trusted_hosts = trusted_hosts\n self.format_control = format_control\n self.linesep = linesep\n self.allow_unsafe = allow_unsafe\n self.find_links = find_links\n self.emit_find_links = emit_find_links\n self.emit_options = emit_options\n\n def _sort_key(self, ireq: InstallRequirement) -> tuple[bool, str]:\n return (not ireq.editable, key_from_ireq(ireq))\n\n def write_header(self) -> Iterator[str]:\n if self.emit_header:\n yield comment(\"#\")\n yield comment(\n \"# This file is autogenerated by pip-compile with Python \"\n f\"{sys.version_info.major}.{sys.version_info.minor}\"\n )\n yield comment(\"# by the following command:\")\n yield comment(\"#\")\n compile_command = os.environ.get(\n \"CUSTOM_COMPILE_COMMAND\"\n ) or get_compile_command(self.click_ctx)\n yield comment(f\"# {compile_command}\")\n yield comment(\"#\")\n\n def write_index_options(self) -> Iterator[str]:\n if self.emit_index_url:\n for index, index_url in enumerate(dedup(self.index_urls)):\n if index == 0 and index_url.rstrip(\"/\") == self.default_index_url:\n continue\n flag = \"--index-url\" if index == 0 else \"--extra-index-url\"\n yield f\"{flag} {index_url}\"\n\n def write_trusted_hosts(self) -> Iterator[str]:\n if self.emit_trusted_host:\n for trusted_host in dedup(self.trusted_hosts):\n yield f\"--trusted-host {trusted_host}\"\n\n def write_format_controls(self) -> Iterator[str]:\n for nb in dedup(sorted(self.format_control.no_binary)):\n yield f\"--no-binary {nb}\"\n for ob in 
dedup(sorted(self.format_control.only_binary)):\n yield f\"--only-binary {ob}\"\n\n def write_find_links(self) -> Iterator[str]:\n if self.emit_find_links:\n for find_link in dedup(self.find_links):\n yield f\"--find-links {find_link}\"\n\n def write_flags(self) -> Iterator[str]:\n if not self.emit_options:\n return\n emitted = False\n for line in chain(\n self.write_index_options(),\n self.write_find_links(),\n self.write_trusted_hosts(),\n self.write_format_controls(),\n ):\n emitted = True\n yield line\n if emitted:\n yield \"\"\n\n def _iter_lines(\n self,\n results: set[InstallRequirement],\n unsafe_requirements: set[InstallRequirement],\n unsafe_packages: set[str],\n markers: dict[str, Marker],\n hashes: dict[InstallRequirement, set[str]] | None = None,\n ) -> Iterator[str]:\n # default values\n unsafe_packages = unsafe_packages if self.allow_unsafe else set()\n hashes = hashes or {}\n\n # Check for unhashed or unpinned packages if at least one package does have\n # hashes, which will trigger pip install's --require-hashes mode.\n warn_uninstallable = False\n has_hashes = hashes and any(hash for hash in hashes.values())\n\n yielded = False\n\n for line in self.write_header():\n yield line\n yielded = True\n for line in self.write_flags():\n yield line\n yielded = True\n\n unsafe_requirements = unsafe_requirements or {\n r for r in results if r.name in unsafe_packages\n }\n packages = {r for r in results if r.name not in unsafe_packages}\n\n if packages:\n for ireq in sorted(packages, key=self._sort_key):\n if has_hashes and not hashes.get(ireq):\n yield MESSAGE_UNHASHED_PACKAGE\n warn_uninstallable = True\n line = self._format_requirement(\n ireq, markers.get(key_from_ireq(ireq)), hashes=hashes\n )\n yield line\n yielded = True\n\n if unsafe_requirements:\n yield \"\"\n yielded = True\n if has_hashes and not self.allow_unsafe:\n yield MESSAGE_UNSAFE_PACKAGES_UNPINNED\n warn_uninstallable = True\n else:\n yield MESSAGE_UNSAFE_PACKAGES\n\n for ireq in sorted(unsafe_requirements, key=self._sort_key):\n ireq_key = key_from_ireq(ireq)\n if not self.allow_unsafe:\n yield comment(f\"# {ireq_key}\")\n else:\n line = self._format_requirement(\n ireq, marker=markers.get(ireq_key), hashes=hashes\n )\n yield line\n\n # Yield even when there's no real content, so that blank files are written\n if not yielded:\n yield \"\"\n\n if warn_uninstallable:\n log.warning(MESSAGE_UNINSTALLABLE)\n\n def write(\n self,\n results: set[InstallRequirement],\n unsafe_requirements: set[InstallRequirement],\n unsafe_packages: set[str],\n markers: dict[str, Marker],\n hashes: dict[InstallRequirement, set[str]] | None,\n ) -> None:\n if not self.dry_run:\n dst_file = io.TextIOWrapper(\n self.dst_file,\n encoding=\"utf8\",\n newline=self.linesep,\n line_buffering=True,\n )\n try:\n for line in self._iter_lines(\n results, unsafe_requirements, unsafe_packages, markers, hashes\n ):\n if self.dry_run:\n # Bypass the log level to always print this during a dry run\n log.log(line)\n else:\n log.info(line)\n dst_file.write(unstyle(line))\n dst_file.write(\"\\n\")\n finally:\n if not self.dry_run:\n dst_file.detach()\n\n def _format_requirement(\n self,\n ireq: InstallRequirement,\n marker: Marker | None = None,\n hashes: dict[InstallRequirement, set[str]] | None = None,\n ) -> str:\n ireq_hashes = (hashes if hashes is not None else {}).get(ireq)\n\n line = format_requirement(ireq, marker=marker, hashes=ireq_hashes)\n if self.strip_extras:\n line = strip_extras(line)\n\n if not self.annotate:\n return line\n\n # Annotate 
what packages or reqs-ins this package is required by\n required_by = set()\n if hasattr(ireq, \"_source_ireqs\"):\n required_by |= {\n _comes_from_as_string(src_ireq.comes_from)\n for src_ireq in ireq._source_ireqs\n if src_ireq.comes_from\n }\n\n if ireq.comes_from:\n required_by.add(_comes_from_as_string(ireq.comes_from))\n\n required_by |= set(getattr(ireq, \"_required_by\", set()))\n\n if required_by:\n if self.annotation_style == \"split\":\n annotation = annotation_style_split(required_by)\n sep = \"\\n \"\n elif self.annotation_style == \"line\":\n annotation = annotation_style_line(required_by)\n sep = \"\\n \" if ireq_hashes else \" \"\n else: # pragma: no cover\n raise ValueError(\"Invalid value for annotation style\")\n if self.strip_extras:\n annotation = strip_extras(annotation)\n # 24 is one reasonable column size to use here, that we've used in the past\n lines = f\"{line:24}{sep}{comment(annotation)}\".splitlines()\n line = \"\\n\".join(ln.rstrip() for ln in lines)\n\n return line\n", "path": "piptools/writer.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport io\nimport os\nimport re\nimport sys\nfrom itertools import chain\nfrom typing import BinaryIO, Iterable, Iterator, cast\n\nfrom click import unstyle\nfrom click.core import Context\nfrom pip._internal.models.format_control import FormatControl\nfrom pip._internal.req.req_install import InstallRequirement\nfrom pip._vendor.packaging.markers import Marker\nfrom pip._vendor.packaging.utils import canonicalize_name\n\nfrom .logging import log\nfrom .utils import (\n comment,\n dedup,\n format_requirement,\n get_compile_command,\n key_from_ireq,\n strip_extras,\n)\n\nMESSAGE_UNHASHED_PACKAGE = comment(\n \"# WARNING: pip install will require the following package to be hashed.\"\n \"\\n# Consider using a hashable URL like \"\n \"https://github.com/jazzband/pip-tools/archive/SOMECOMMIT.zip\"\n)\n\nMESSAGE_UNSAFE_PACKAGES_UNPINNED = comment(\n \"# WARNING: The following packages were not pinned, but pip requires them to be\"\n \"\\n# pinned when the requirements file includes hashes and the requirement is not\"\n \"\\n# satisfied by a package already installed. \"\n \"Consider using the --allow-unsafe flag.\"\n)\n\nMESSAGE_UNSAFE_PACKAGES = comment(\n \"# The following packages are considered to be unsafe in a requirements file:\"\n)\n\nMESSAGE_UNINSTALLABLE = (\n \"The generated requirements file may be rejected by pip install. 
\"\n \"See # WARNING lines for details.\"\n)\n\n\nstrip_comes_from_line_re = re.compile(r\" \\(line \\d+\\)$\")\n\n\ndef _comes_from_as_string(comes_from: str | InstallRequirement) -> str:\n if isinstance(comes_from, str):\n return strip_comes_from_line_re.sub(\"\", comes_from)\n return cast(str, canonicalize_name(key_from_ireq(comes_from)))\n\n\ndef annotation_style_split(required_by: set[str]) -> str:\n sorted_required_by = sorted(required_by)\n if len(sorted_required_by) == 1:\n source = sorted_required_by[0]\n annotation = \"# via \" + source\n else:\n annotation_lines = [\"# via\"]\n for source in sorted_required_by:\n annotation_lines.append(\" # \" + source)\n annotation = \"\\n\".join(annotation_lines)\n return annotation\n\n\ndef annotation_style_line(required_by: set[str]) -> str:\n return f\"# via {', '.join(sorted(required_by))}\"\n\n\nclass OutputWriter:\n def __init__(\n self,\n dst_file: BinaryIO,\n click_ctx: Context,\n dry_run: bool,\n emit_header: bool,\n emit_index_url: bool,\n emit_trusted_host: bool,\n annotate: bool,\n annotation_style: str,\n strip_extras: bool,\n generate_hashes: bool,\n default_index_url: str,\n index_urls: Iterable[str],\n trusted_hosts: Iterable[str],\n format_control: FormatControl,\n linesep: str,\n allow_unsafe: bool,\n find_links: list[str],\n emit_find_links: bool,\n emit_options: bool,\n ) -> None:\n self.dst_file = dst_file\n self.click_ctx = click_ctx\n self.dry_run = dry_run\n self.emit_header = emit_header\n self.emit_index_url = emit_index_url\n self.emit_trusted_host = emit_trusted_host\n self.annotate = annotate\n self.annotation_style = annotation_style\n self.strip_extras = strip_extras\n self.generate_hashes = generate_hashes\n self.default_index_url = default_index_url\n self.index_urls = index_urls\n self.trusted_hosts = trusted_hosts\n self.format_control = format_control\n self.linesep = linesep\n self.allow_unsafe = allow_unsafe\n self.find_links = find_links\n self.emit_find_links = emit_find_links\n self.emit_options = emit_options\n\n def _sort_key(self, ireq: InstallRequirement) -> tuple[bool, str]:\n return (not ireq.editable, key_from_ireq(ireq))\n\n def write_header(self) -> Iterator[str]:\n if self.emit_header:\n yield comment(\"#\")\n yield comment(\n \"# This file is autogenerated by pip-compile with Python \"\n f\"{sys.version_info.major}.{sys.version_info.minor}\"\n )\n yield comment(\"# by the following command:\")\n yield comment(\"#\")\n compile_command = os.environ.get(\n \"CUSTOM_COMPILE_COMMAND\"\n ) or get_compile_command(self.click_ctx)\n yield comment(f\"# {compile_command}\")\n yield comment(\"#\")\n\n def write_index_options(self) -> Iterator[str]:\n if self.emit_index_url:\n for index, index_url in enumerate(dedup(self.index_urls)):\n if index == 0 and index_url.rstrip(\"/\") == self.default_index_url:\n continue\n flag = \"--index-url\" if index == 0 else \"--extra-index-url\"\n yield f\"{flag} {index_url}\"\n\n def write_trusted_hosts(self) -> Iterator[str]:\n if self.emit_trusted_host:\n for trusted_host in dedup(self.trusted_hosts):\n yield f\"--trusted-host {trusted_host}\"\n\n def write_format_controls(self) -> Iterator[str]:\n for nb in dedup(sorted(self.format_control.no_binary)):\n yield f\"--no-binary {nb}\"\n for ob in dedup(sorted(self.format_control.only_binary)):\n yield f\"--only-binary {ob}\"\n\n def write_find_links(self) -> Iterator[str]:\n if self.emit_find_links:\n for find_link in dedup(self.find_links):\n yield f\"--find-links {find_link}\"\n\n def write_flags(self) -> 
Iterator[str]:\n if not self.emit_options:\n return\n emitted = False\n for line in chain(\n self.write_index_options(),\n self.write_find_links(),\n self.write_trusted_hosts(),\n self.write_format_controls(),\n ):\n emitted = True\n yield line\n if emitted:\n yield \"\"\n\n def _iter_lines(\n self,\n results: set[InstallRequirement],\n unsafe_requirements: set[InstallRequirement],\n unsafe_packages: set[str],\n markers: dict[str, Marker],\n hashes: dict[InstallRequirement, set[str]] | None = None,\n ) -> Iterator[str]:\n # default values\n unsafe_packages = unsafe_packages if self.allow_unsafe else set()\n hashes = hashes or {}\n\n # Check for unhashed or unpinned packages if at least one package does have\n # hashes, which will trigger pip install's --require-hashes mode.\n warn_uninstallable = False\n has_hashes = hashes and any(hash for hash in hashes.values())\n\n yielded = False\n\n for line in self.write_header():\n yield line\n yielded = True\n for line in self.write_flags():\n yield line\n yielded = True\n\n unsafe_requirements = unsafe_requirements or {\n r for r in results if r.name in unsafe_packages\n }\n packages = {r for r in results if r.name not in unsafe_packages}\n\n if packages:\n for ireq in sorted(packages, key=self._sort_key):\n if has_hashes and not hashes.get(ireq):\n yield MESSAGE_UNHASHED_PACKAGE\n warn_uninstallable = True\n line = self._format_requirement(\n ireq, markers.get(key_from_ireq(ireq)), hashes=hashes\n )\n yield line\n yielded = True\n\n if unsafe_requirements:\n yield \"\"\n yielded = True\n if has_hashes and not self.allow_unsafe:\n yield MESSAGE_UNSAFE_PACKAGES_UNPINNED\n warn_uninstallable = True\n else:\n yield MESSAGE_UNSAFE_PACKAGES\n\n for ireq in sorted(unsafe_requirements, key=self._sort_key):\n ireq_key = key_from_ireq(ireq)\n if not self.allow_unsafe:\n yield comment(f\"# {ireq_key}\")\n else:\n line = self._format_requirement(\n ireq, marker=markers.get(ireq_key), hashes=hashes\n )\n yield line\n\n # Yield even when there's no real content, so that blank files are written\n if not yielded:\n yield \"\"\n\n if warn_uninstallable:\n log.warning(MESSAGE_UNINSTALLABLE)\n\n def write(\n self,\n results: set[InstallRequirement],\n unsafe_requirements: set[InstallRequirement],\n unsafe_packages: set[str],\n markers: dict[str, Marker],\n hashes: dict[InstallRequirement, set[str]] | None,\n ) -> None:\n if not self.dry_run:\n dst_file = io.TextIOWrapper(\n self.dst_file,\n encoding=\"utf8\",\n newline=self.linesep,\n line_buffering=True,\n )\n try:\n for line in self._iter_lines(\n results, unsafe_requirements, unsafe_packages, markers, hashes\n ):\n if self.dry_run:\n # Bypass the log level to always print this during a dry run\n log.log(line)\n else:\n log.info(line)\n dst_file.write(unstyle(line))\n dst_file.write(\"\\n\")\n finally:\n if not self.dry_run:\n dst_file.detach()\n\n def _format_requirement(\n self,\n ireq: InstallRequirement,\n marker: Marker | None = None,\n hashes: dict[InstallRequirement, set[str]] | None = None,\n ) -> str:\n ireq_hashes = (hashes if hashes is not None else {}).get(ireq)\n\n line = format_requirement(ireq, marker=marker, hashes=ireq_hashes)\n if self.strip_extras:\n line = strip_extras(line)\n\n if not self.annotate:\n return line\n\n # Annotate what packages or reqs-ins this package is required by\n required_by = set()\n if hasattr(ireq, \"_source_ireqs\"):\n required_by |= {\n _comes_from_as_string(src_ireq.comes_from)\n for src_ireq in ireq._source_ireqs\n if src_ireq.comes_from\n }\n\n if 
ireq.comes_from:\n required_by.add(_comes_from_as_string(ireq.comes_from))\n\n required_by |= set(getattr(ireq, \"_required_by\", set()))\n\n if required_by:\n if self.annotation_style == \"split\":\n annotation = annotation_style_split(required_by)\n sep = \"\\n \"\n elif self.annotation_style == \"line\":\n annotation = annotation_style_line(required_by)\n sep = \"\\n \" if ireq_hashes else \" \"\n else: # pragma: no cover\n raise ValueError(\"Invalid value for annotation style\")\n if self.strip_extras:\n annotation = strip_extras(annotation)\n # 24 is one reasonable column size to use here, that we've used in the past\n lines = f\"{line:24}{sep}{comment(annotation)}\".splitlines()\n line = \"\\n\".join(ln.rstrip() for ln in lines)\n\n return line\n", "path": "piptools/writer.py"}]}
| 3,832 | 136 |
gh_patches_debug_417
|
rasdani/github-patches
|
git_diff
|
python__python-docs-es-1712
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Translate 'library/base64.po'
This needs to reach 100% translated.
The rendered version of this file will be available at https://docs.python.org/es/3.10/library/base64.html once translated.
Meanwhile, the English version is shown.
Current stats for `library/base64.po`:
* Fuzzy: 4
* Percent translated: 90.9%
* Entries: 50 / 55
* Untranslated: 5
Please, comment here if you want this file to be assigned to you and a member will assign it to you as soon as possible, so you can start working on it.
Remember to follow the steps in our [Contributing Guide](https://python-docs-es.readthedocs.io/page/CONTRIBUTING.html).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/translate.py`
Content:
```
1 import os
2 import re
3 import sys
4 from typing import Dict, Tuple
5
6 import polib
7
8 VERBOSE = False
9 DEBUG = False
10 SKIP_TRANSLATED_ENTRIES = True
11
12 try:
13 from deep_translator import GoogleTranslator
14 except ImportError:
15 print("Error: This util script needs `deep_translator` to be installed")
16 sys.exit(1)
17
18 _patterns = [
19 ":c:func:`[^`]+`",
20 ":c:type:`[^`]+`",
21 ":c:macro:`[^`]+`",
22 ":c:member:`[^`]+`",
23 ":c:data:`[^`]+`",
24 ":py:data:`[^`]+`",
25 ":py:mod:`[^`]+`",
26 ":func:`[^`]+`",
27 ":mod:`[^`]+`",
28 ":ref:`[^`]+`",
29 ":class:`[^`]+`",
30 ":pep:`[^`]+`",
31 ":data:`[^`]+`",
32 ":exc:`[^`]+`",
33 ":term:`[^`]+`",
34 ":meth:`[^`]+`",
35 ":envvar:`[^`]+`",
36 ":file:`[^`]+`",
37 ":attr:`[^`]+`",
38 ":const:`[^`]+`",
39 ":issue:`[^`]+`",
40 ":opcode:`[^`]+`",
41 ":option:`[^`]+`",
42 ":program:`[^`]+`",
43 ":keyword:`[^`]+`",
44 ":RFC:`[^`]+`",
45 ":doc:`[^`]+`",
46 "``[^`]+``",
47 "`[^`]+`__",
48 "`[^`]+`_",
49 "\*\*.+\*\*", # bold text between **
50 "\*.+\*", # italic text between *
51 ]
52
53 _exps = [re.compile(e) for e in _patterns]
54
55 def protect_sphinx_directives(s: str) -> Tuple[dict, str]:
56 """
57 Parameters:
58 string containing the text to translate
59
60 Returns:
61 dictionary containing all the placeholder text as keys
62 and the correct value.
63 """
64
65 i = 0
66 d: Dict[str, str] = {}
67 for exp in _exps:
68 matches = exp.findall(s)
69 if DEBUG:
70 print(exp, matches)
71 for match in matches:
72 ph = f"XASDF{str(i).zfill(2)}"
73 s = s.replace(match, ph)
74 if ph in d and VERBOSE:
75 print(f"Error: {ph} is already in the dictionary")
76 print("new", match)
77 print("old", d[ph])
78 d[ph] = match
79 i += 1
80 return d, s
81
82
83 def undo_sphinx_directives_protection(placeholders: dict, translated_text: str) -> str:
84 for ph, value in placeholders.items():
85 translated_text = translated_text.replace(ph, value)
86 if DEBUG:
87 print(ph, value)
88 print(translated_text)
89 return translated_text
90
91
92 if __name__ == "__main__":
93 filename = sys.argv[1]
94 if not os.path.isfile(filename):
95 print(f"File not found: '{filename}'")
96 sys.exit(-1)
97
98 po = polib.pofile(filename)
99 translator = GoogleTranslator(source="en", target="es")
100
101 for entry in po:
102 # If the entry has already a translation, skip.
103 if SKIP_TRANSLATED_ENTRIES and entry.msgstr:
104 continue
105
106 print("\nEN|", entry.msgid)
107 placeholders, temp_text = protect_sphinx_directives(entry.msgid)
108 if VERBOSE:
109 print(temp_text)
110 print(placeholders)
111
112 # Translate the temporary text without sphinx statements
113 translated_text = translator.translate(temp_text)
114
115 # Recover sphinx statements
116 real_text = undo_sphinx_directives_protection(placeholders, translated_text)
117 print("ES|", real_text)
118
119 # Replace the po file translated entry
120 entry.msgstr = real_text
121
122 # Save the file after all the entries are translated
123 po.save()
124
```
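For context, a sketch of how this helper would be run against the file from the issue (it expects `deep_translator` and `polib` to be installed):
```
python scripts/translate.py library/base64.po
```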
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scripts/translate.py b/scripts/translate.py
--- a/scripts/translate.py
+++ b/scripts/translate.py
@@ -42,6 +42,7 @@
":program:`[^`]+`",
":keyword:`[^`]+`",
":RFC:`[^`]+`",
+ ":rfc:`[^`]+`",
":doc:`[^`]+`",
"``[^`]+``",
"`[^`]+`__",
|
{"golden_diff": "diff --git a/scripts/translate.py b/scripts/translate.py\n--- a/scripts/translate.py\n+++ b/scripts/translate.py\n@@ -42,6 +42,7 @@\n \":program:`[^`]+`\",\n \":keyword:`[^`]+`\",\n \":RFC:`[^`]+`\",\n+ \":rfc:`[^`]+`\",\n \":doc:`[^`]+`\",\n \"``[^`]+``\",\n \"`[^`]+`__\",\n", "issue": "Translate 'library/base64.po'\nThis needs to reach 100% translated.\n\nThe rendered version of this file will be available at https://docs.python.org/es/3.10/library/base64.html once translated.\nMeanwhile, the English version is shown.\n\nCurrent stats for `library/base64.po`:\n\n* Fuzzy: 4\n* Percent translated: 90.9%\n* Entries: 50 / 55\n* Untranslated: 5\n\nPlease, comment here if you want this file to be assigned to you and an member will assign it to you as soon as possible, so you can start working on it.\n\nRemember to follow the steps in our [Contributing Guide](https://python-docs-es.readthedocs.io/page/CONTRIBUTING.html).\n", "before_files": [{"content": "import os\nimport re\nimport sys\nfrom typing import Dict, Tuple\n\nimport polib\n\nVERBOSE = False\nDEBUG = False\nSKIP_TRANSLATED_ENTRIES = True\n\ntry:\n from deep_translator import GoogleTranslator\nexcept ImportError:\n print(\"Error: This util script needs `deep_translator` to be installed\")\n sys.exit(1)\n\n_patterns = [\n \":c:func:`[^`]+`\",\n \":c:type:`[^`]+`\",\n \":c:macro:`[^`]+`\",\n \":c:member:`[^`]+`\",\n \":c:data:`[^`]+`\",\n \":py:data:`[^`]+`\",\n \":py:mod:`[^`]+`\",\n \":func:`[^`]+`\",\n \":mod:`[^`]+`\",\n \":ref:`[^`]+`\",\n \":class:`[^`]+`\",\n \":pep:`[^`]+`\",\n \":data:`[^`]+`\",\n \":exc:`[^`]+`\",\n \":term:`[^`]+`\",\n \":meth:`[^`]+`\",\n \":envvar:`[^`]+`\",\n \":file:`[^`]+`\",\n \":attr:`[^`]+`\",\n \":const:`[^`]+`\",\n \":issue:`[^`]+`\",\n \":opcode:`[^`]+`\",\n \":option:`[^`]+`\",\n \":program:`[^`]+`\",\n \":keyword:`[^`]+`\",\n \":RFC:`[^`]+`\",\n \":doc:`[^`]+`\",\n \"``[^`]+``\",\n \"`[^`]+`__\",\n \"`[^`]+`_\",\n \"\\*\\*.+\\*\\*\", # bold text between **\n \"\\*.+\\*\", # italic text between *\n]\n\n_exps = [re.compile(e) for e in _patterns]\n\ndef protect_sphinx_directives(s: str) -> Tuple[dict, str]:\n \"\"\"\n Parameters:\n string containing the text to translate\n\n Returns:\n dictionary containing all the placeholder text as keys\n and the correct value.\n \"\"\"\n\n i = 0\n d: Dict[str, str] = {}\n for exp in _exps:\n matches = exp.findall(s)\n if DEBUG:\n print(exp, matches)\n for match in matches:\n ph = f\"XASDF{str(i).zfill(2)}\"\n s = s.replace(match, ph)\n if ph in d and VERBOSE:\n print(f\"Error: {ph} is already in the dictionary\")\n print(\"new\", match)\n print(\"old\", d[ph])\n d[ph] = match\n i += 1\n return d, s\n\n\ndef undo_sphinx_directives_protection(placeholders: dict, translated_text: str) -> str:\n for ph, value in placeholders.items():\n translated_text = translated_text.replace(ph, value)\n if DEBUG:\n print(ph, value)\n print(translated_text)\n return translated_text\n\n\nif __name__ == \"__main__\":\n filename = sys.argv[1]\n if not os.path.isfile(filename):\n print(f\"File not found: '{filename}'\")\n sys.exit(-1)\n\n po = polib.pofile(filename)\n translator = GoogleTranslator(source=\"en\", target=\"es\")\n\n for entry in po:\n # If the entry has already a translation, skip.\n if SKIP_TRANSLATED_ENTRIES and entry.msgstr:\n continue\n\n print(\"\\nEN|\", entry.msgid)\n placeholders, temp_text = protect_sphinx_directives(entry.msgid)\n if VERBOSE:\n print(temp_text)\n print(placeholders)\n\n # Translate the temporary text without sphinx 
statements\n translated_text = translator.translate(temp_text)\n\n # Recover sphinx statements\n real_text = undo_sphinx_directives_protection(placeholders, translated_text)\n print(\"ES|\", real_text)\n\n # Replace the po file translated entry\n entry.msgstr = real_text\n\n # Save the file after all the entries are translated\n po.save()\n", "path": "scripts/translate.py"}], "after_files": [{"content": "import os\nimport re\nimport sys\nfrom typing import Dict, Tuple\n\nimport polib\n\nVERBOSE = False\nDEBUG = False\nSKIP_TRANSLATED_ENTRIES = True\n\ntry:\n from deep_translator import GoogleTranslator\nexcept ImportError:\n print(\"Error: This util script needs `deep_translator` to be installed\")\n sys.exit(1)\n\n_patterns = [\n \":c:func:`[^`]+`\",\n \":c:type:`[^`]+`\",\n \":c:macro:`[^`]+`\",\n \":c:member:`[^`]+`\",\n \":c:data:`[^`]+`\",\n \":py:data:`[^`]+`\",\n \":py:mod:`[^`]+`\",\n \":func:`[^`]+`\",\n \":mod:`[^`]+`\",\n \":ref:`[^`]+`\",\n \":class:`[^`]+`\",\n \":pep:`[^`]+`\",\n \":data:`[^`]+`\",\n \":exc:`[^`]+`\",\n \":term:`[^`]+`\",\n \":meth:`[^`]+`\",\n \":envvar:`[^`]+`\",\n \":file:`[^`]+`\",\n \":attr:`[^`]+`\",\n \":const:`[^`]+`\",\n \":issue:`[^`]+`\",\n \":opcode:`[^`]+`\",\n \":option:`[^`]+`\",\n \":program:`[^`]+`\",\n \":keyword:`[^`]+`\",\n \":RFC:`[^`]+`\",\n \":rfc:`[^`]+`\",\n \":doc:`[^`]+`\",\n \"``[^`]+``\",\n \"`[^`]+`__\",\n \"`[^`]+`_\",\n \"\\*\\*.+\\*\\*\", # bold text between **\n \"\\*.+\\*\", # italic text between *\n]\n\n_exps = [re.compile(e) for e in _patterns]\n\ndef protect_sphinx_directives(s: str) -> Tuple[dict, str]:\n \"\"\"\n Parameters:\n string containing the text to translate\n\n Returns:\n dictionary containing all the placeholder text as keys\n and the correct value.\n \"\"\"\n\n i = 0\n d: Dict[str, str] = {}\n for exp in _exps:\n matches = exp.findall(s)\n if DEBUG:\n print(exp, matches)\n for match in matches:\n ph = f\"XASDF{str(i).zfill(2)}\"\n s = s.replace(match, ph)\n if ph in d and VERBOSE:\n print(f\"Error: {ph} is already in the dictionary\")\n print(\"new\", match)\n print(\"old\", d[ph])\n d[ph] = match\n i += 1\n return d, s\n\n\ndef undo_sphinx_directives_protection(placeholders: dict, translated_text: str) -> str:\n for ph, value in placeholders.items():\n translated_text = translated_text.replace(ph, value)\n if DEBUG:\n print(ph, value)\n print(translated_text)\n return translated_text\n\n\nif __name__ == \"__main__\":\n filename = sys.argv[1]\n if not os.path.isfile(filename):\n print(f\"File not found: '{filename}'\")\n sys.exit(-1)\n\n po = polib.pofile(filename)\n translator = GoogleTranslator(source=\"en\", target=\"es\")\n\n for entry in po:\n # If the entry has already a translation, skip.\n if SKIP_TRANSLATED_ENTRIES and entry.msgstr:\n continue\n\n print(\"\\nEN|\", entry.msgid)\n placeholders, temp_text = protect_sphinx_directives(entry.msgid)\n if VERBOSE:\n print(temp_text)\n print(placeholders)\n\n # Translate the temporary text without sphinx statements\n translated_text = translator.translate(temp_text)\n\n # Recover sphinx statements\n real_text = undo_sphinx_directives_protection(placeholders, translated_text)\n print(\"ES|\", real_text)\n\n # Replace the po file translated entry\n entry.msgstr = real_text\n\n # Save the file after all the entries are translated\n po.save()\n", "path": "scripts/translate.py"}]}
| 1,572 | 103 |
gh_patches_debug_3070
|
rasdani/github-patches
|
git_diff
|
pallets__werkzeug-1539
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ProfilerMiddleware's default filename_format causes ValueError when ProfilerMiddleware is used with profile_dir
## Environment
```
$ sw_vers
ProductName: Mac OS X
ProductVersion: 10.13.6
BuildVersion: 17G3025
$ python --version
Python 3.7.2
$ pip freeze
Click==7.0
Flask==1.0.2
itsdangerous==1.1.0
Jinja2==2.10.1
MarkupSafe==1.1.1
Werkzeug==0.15.2
```
Basically, the only Python dependency I installed was Flask because that's what I'm most familiar with. However, the error I'm describing looks to be contained within werkzeug.
## Observed Behavior
When using `ProfilerMiddleware` with its `profile_dir` argument, the following error gets raised after a request is sent to the server:
```
Error on request:
Traceback (most recent call last):
File "/dev/jlove-bazaarvoice/werkzeug-profiler-bug/.venv/lib/python3.7/site-packages/werkzeug/serving.py", line 302, in run_wsgi
execute(self.server.app)
File "/dev/jlove-bazaarvoice/werkzeug-profiler-bug/.venv/lib/python3.7/site-packages/werkzeug/serving.py", line 290, in execute
application_iter = app(environ, start_response)
File "/dev/jlove-bazaarvoice/werkzeug-profiler-bug/.venv/lib/python3.7/site-packages/flask/app.py", line 2309, in __call__
return self.wsgi_app(environ, start_response)
File "/dev/jlove-bazaarvoice/werkzeug-profiler-bug/.venv/lib/python3.7/site-packages/werkzeug/middleware/profiler.py", line 119, in __call__
time=time.time(),
ValueError: Unknown format code 'd' for object of type 'float'
```
## Expected Behavior
No `ValueError`.
## Steps to Reproduce
1. `pip install flask`
2. Save the following file as app.py.
```python
# app.py
from flask import Flask
from werkzeug.middleware.profiler import ProfilerMiddleware
app = Flask(__name__)
app.wsgi_app = ProfilerMiddleware(app.wsgi_app, profile_dir=".")
@app.route("/", methods=["GET"])
def get_index():
return "Hello, world!"
```
3. Start the server with `FLASK_APP=app.py flask run`.
4. Send a request to the server (e.g. http://127.0.0.1:5000/).
## Workaround/Solution
Slightly modify `ProfilerMiddleware`'s `filename_format`, replacing the `d` with `f`. For example:
```python
app.wsgi_app = ProfilerMiddleware(
app.wsgi_app, profile_dir=".", filename_format="{method}.{path}.{elapsed:06f}ms.{time:f}.prof"
)
```
Both instances of `d` need to be replaced because both `elapsed` and `time` are floating point numbers.
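The failure is plain `str.format` behaviour: the `d` format code only accepts integers, while float-friendly codes work fine. A minimal standalone sketch (no Werkzeug involved):

```python
# Minimal sketch of the underlying str.format behaviour; no Werkzeug involved.
elapsed = 12.345  # a float, like time.time() - start

try:
    "{elapsed:06d}ms".format(elapsed=elapsed)   # the default 'd' format code
except ValueError as exc:
    print(exc)  # Unknown format code 'd' for object of type 'float'

print("{elapsed:06f}ms".format(elapsed=elapsed))  # 12.345000ms
print("{elapsed:.0f}ms".format(elapsed=elapsed))  # 12ms
```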
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/werkzeug/middleware/profiler.py`
Content:
```
1 """
2 Application Profiler
3 ====================
4
5 This module provides a middleware that profiles each request with the
6 :mod:`cProfile` module. This can help identify bottlenecks in your code
7 that may be slowing down your application.
8
9 .. autoclass:: ProfilerMiddleware
10
11 :copyright: 2007 Pallets
12 :license: BSD-3-Clause
13 """
14 from __future__ import print_function
15
16 import os.path
17 import sys
18 import time
19 from pstats import Stats
20
21 try:
22 from cProfile import Profile
23 except ImportError:
24 from profile import Profile
25
26
27 class ProfilerMiddleware(object):
28 """Wrap a WSGI application and profile the execution of each
29 request. Responses are buffered so that timings are more exact.
30
31 If ``stream`` is given, :class:`pstats.Stats` are written to it
32 after each request. If ``profile_dir`` is given, :mod:`cProfile`
33 data files are saved to that directory, one file per request.
34
35 The filename can be customized by passing ``filename_format``. If
36 it is a string, it will be formatted using :meth:`str.format` with
37 the following fields available:
38
39 - ``{method}`` - The request method; GET, POST, etc.
40 - ``{path}`` - The request path or 'root' should one not exist.
41 - ``{elapsed}`` - The elapsed time of the request.
42 - ``{time}`` - The time of the request.
43
44 If it is a callable, it will be called with the WSGI ``environ``
45 dict and should return a filename.
46
47 :param app: The WSGI application to wrap.
48 :param stream: Write stats to this stream. Disable with ``None``.
49 :param sort_by: A tuple of columns to sort stats by. See
50 :meth:`pstats.Stats.sort_stats`.
51 :param restrictions: A tuple of restrictions to filter stats by. See
52 :meth:`pstats.Stats.print_stats`.
53 :param profile_dir: Save profile data files to this directory.
54 :param filename_format: Format string for profile data file names,
55 or a callable returning a name. See explanation above.
56
57 .. code-block:: python
58
59 from werkzeug.middleware.profiler import ProfilerMiddleware
60 app = ProfilerMiddleware(app)
61
62 .. versionchanged:: 0.15
63 Stats are written even if ``profile_dir`` is given, and can be
64 disable by passing ``stream=None``.
65
66 .. versionadded:: 0.15
67 Added ``filename_format``.
68
69 .. versionadded:: 0.9
70 Added ``restrictions`` and ``profile_dir``.
71 """
72
73 def __init__(
74 self,
75 app,
76 stream=sys.stdout,
77 sort_by=("time", "calls"),
78 restrictions=(),
79 profile_dir=None,
80 filename_format="{method}.{path}.{elapsed:06d}ms.{time:d}.prof",
81 ):
82 self._app = app
83 self._stream = stream
84 self._sort_by = sort_by
85 self._restrictions = restrictions
86 self._profile_dir = profile_dir
87 self._filename_format = filename_format
88
89 def __call__(self, environ, start_response):
90 response_body = []
91
92 def catching_start_response(status, headers, exc_info=None):
93 start_response(status, headers, exc_info)
94 return response_body.append
95
96 def runapp():
97 app_iter = self._app(environ, catching_start_response)
98 response_body.extend(app_iter)
99
100 if hasattr(app_iter, "close"):
101 app_iter.close()
102
103 profile = Profile()
104 start = time.time()
105 profile.runcall(runapp)
106 body = b"".join(response_body)
107 elapsed = time.time() - start
108
109 if self._profile_dir is not None:
110 if callable(self._filename_format):
111 filename = self._filename_format(environ)
112 else:
113 filename = self._filename_format.format(
114 method=environ["REQUEST_METHOD"],
115 path=(
116 environ.get("PATH_INFO").strip("/").replace("/", ".") or "root"
117 ),
118 elapsed=elapsed * 1000.0,
119 time=time.time(),
120 )
121 filename = os.path.join(self._profile_dir, filename)
122 profile.dump_stats(filename)
123
124 if self._stream is not None:
125 stats = Stats(profile, stream=self._stream)
126 stats.sort_stats(*self._sort_by)
127 print("-" * 80, file=self._stream)
128 print("PATH: {!r}".format(environ.get("PATH_INFO", "")), file=self._stream)
129 stats.print_stats(*self._restrictions)
130 print("-" * 80 + "\n", file=self._stream)
131
132 return [body]
133
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/werkzeug/middleware/profiler.py b/src/werkzeug/middleware/profiler.py
--- a/src/werkzeug/middleware/profiler.py
+++ b/src/werkzeug/middleware/profiler.py
@@ -77,7 +77,7 @@
sort_by=("time", "calls"),
restrictions=(),
profile_dir=None,
- filename_format="{method}.{path}.{elapsed:06d}ms.{time:d}.prof",
+ filename_format="{method}.{path}.{elapsed:.0f}ms.{time:.0f}.prof",
):
self._app = app
self._stream = stream
|
{"golden_diff": "diff --git a/src/werkzeug/middleware/profiler.py b/src/werkzeug/middleware/profiler.py\n--- a/src/werkzeug/middleware/profiler.py\n+++ b/src/werkzeug/middleware/profiler.py\n@@ -77,7 +77,7 @@\n sort_by=(\"time\", \"calls\"),\n restrictions=(),\n profile_dir=None,\n- filename_format=\"{method}.{path}.{elapsed:06d}ms.{time:d}.prof\",\n+ filename_format=\"{method}.{path}.{elapsed:.0f}ms.{time:.0f}.prof\",\n ):\n self._app = app\n self._stream = stream\n", "issue": "ProfilerMiddleware's default filename_format causes ValueError when ProfilerMiddleware is used with profile_dir\n## Environment\r\n\r\n```\r\n$ sw_vers \r\nProductName:\tMac OS X\r\nProductVersion:\t10.13.6\r\nBuildVersion:\t17G3025\r\n\r\n$ python --version\r\nPython 3.7.2\r\n\r\n$ pip freeze\r\nClick==7.0\r\nFlask==1.0.2\r\nitsdangerous==1.1.0\r\nJinja2==2.10.1\r\nMarkupSafe==1.1.1\r\nWerkzeug==0.15.2\r\n```\r\n\r\nBasically, the only Python dependency I installed was Flask because that's what I'm most familiar with. However, the error I'm describing looks to be contained within werkzeug.\r\n\r\n\r\n## Observed Behavior\r\n\r\nWhen using `ProfilerMiddleware` with its `profile_dir` argument, the following error gets raised after a request is sent to the server:\r\n\r\n```\r\nError on request:\r\nTraceback (most recent call last):\r\n File \"/dev/jlove-bazaarvoice/werkzeug-profiler-bug/.venv/lib/python3.7/site-packages/werkzeug/serving.py\", line 302, in run_wsgi\r\n execute(self.server.app)\r\n File \"/dev/jlove-bazaarvoice/werkzeug-profiler-bug/.venv/lib/python3.7/site-packages/werkzeug/serving.py\", line 290, in execute\r\n application_iter = app(environ, start_response)\r\n File \"/dev/jlove-bazaarvoice/werkzeug-profiler-bug/.venv/lib/python3.7/site-packages/flask/app.py\", line 2309, in __call__\r\n return self.wsgi_app(environ, start_response)\r\n File \"/dev/jlove-bazaarvoice/werkzeug-profiler-bug/.venv/lib/python3.7/site-packages/werkzeug/middleware/profiler.py\", line 119, in __call__\r\n time=time.time(),\r\nValueError: Unknown format code 'd' for object of type 'float'\r\n```\r\n\r\n## Expected Behavior\r\n\r\nNo `ValueError`.\r\n\r\n## Steps to Reproduce\r\n\r\n1. `pip install flask`\r\n2. Save the following file as app.py.\r\n```python\r\n# app.py\r\nfrom flask import Flask\r\nfrom werkzeug.middleware.profiler import ProfilerMiddleware\r\n\r\napp = Flask(__name__)\r\napp.wsgi_app = ProfilerMiddleware(app.wsgi_app, profile_dir=\".\")\r\n\r\n\r\[email protected](\"/\", methods=[\"GET\"])\r\ndef get_index():\r\n return \"Hello, world!\"\r\n```\r\n3. Start the server with `FLASK_APP=app.py flask run`.\r\n4. Send a request to the server (e.g. http://127.0.0.1:5000/).\r\n\r\n## Workaround/Solution\r\n\r\nSlightly modify `ProfilerMiddleware`'s `filename_format`, replacing the `d` with `f`. For example:\r\n```python\r\napp.wsgi_app = ProfilerMiddleware(\r\n app.wsgi_app, profile_dir=\".\", filename_format=\"{method}.{path}.{elapsed:06f}ms.{time:f}.prof\"\r\n)\r\n```\r\n\r\nBoth instances of `d` need to be replaced because both `elapsed` and `time` are floating point numbers.\n", "before_files": [{"content": "\"\"\"\nApplication Profiler\n====================\n\nThis module provides a middleware that profiles each request with the\n:mod:`cProfile` module. This can help identify bottlenecks in your code\nthat may be slowing down your application.\n\n.. 
autoclass:: ProfilerMiddleware\n\n:copyright: 2007 Pallets\n:license: BSD-3-Clause\n\"\"\"\nfrom __future__ import print_function\n\nimport os.path\nimport sys\nimport time\nfrom pstats import Stats\n\ntry:\n from cProfile import Profile\nexcept ImportError:\n from profile import Profile\n\n\nclass ProfilerMiddleware(object):\n \"\"\"Wrap a WSGI application and profile the execution of each\n request. Responses are buffered so that timings are more exact.\n\n If ``stream`` is given, :class:`pstats.Stats` are written to it\n after each request. If ``profile_dir`` is given, :mod:`cProfile`\n data files are saved to that directory, one file per request.\n\n The filename can be customized by passing ``filename_format``. If\n it is a string, it will be formatted using :meth:`str.format` with\n the following fields available:\n\n - ``{method}`` - The request method; GET, POST, etc.\n - ``{path}`` - The request path or 'root' should one not exist.\n - ``{elapsed}`` - The elapsed time of the request.\n - ``{time}`` - The time of the request.\n\n If it is a callable, it will be called with the WSGI ``environ``\n dict and should return a filename.\n\n :param app: The WSGI application to wrap.\n :param stream: Write stats to this stream. Disable with ``None``.\n :param sort_by: A tuple of columns to sort stats by. See\n :meth:`pstats.Stats.sort_stats`.\n :param restrictions: A tuple of restrictions to filter stats by. See\n :meth:`pstats.Stats.print_stats`.\n :param profile_dir: Save profile data files to this directory.\n :param filename_format: Format string for profile data file names,\n or a callable returning a name. See explanation above.\n\n .. code-block:: python\n\n from werkzeug.middleware.profiler import ProfilerMiddleware\n app = ProfilerMiddleware(app)\n\n .. versionchanged:: 0.15\n Stats are written even if ``profile_dir`` is given, and can be\n disable by passing ``stream=None``.\n\n .. versionadded:: 0.15\n Added ``filename_format``.\n\n .. 
versionadded:: 0.9\n Added ``restrictions`` and ``profile_dir``.\n \"\"\"\n\n def __init__(\n self,\n app,\n stream=sys.stdout,\n sort_by=(\"time\", \"calls\"),\n restrictions=(),\n profile_dir=None,\n filename_format=\"{method}.{path}.{elapsed:06d}ms.{time:d}.prof\",\n ):\n self._app = app\n self._stream = stream\n self._sort_by = sort_by\n self._restrictions = restrictions\n self._profile_dir = profile_dir\n self._filename_format = filename_format\n\n def __call__(self, environ, start_response):\n response_body = []\n\n def catching_start_response(status, headers, exc_info=None):\n start_response(status, headers, exc_info)\n return response_body.append\n\n def runapp():\n app_iter = self._app(environ, catching_start_response)\n response_body.extend(app_iter)\n\n if hasattr(app_iter, \"close\"):\n app_iter.close()\n\n profile = Profile()\n start = time.time()\n profile.runcall(runapp)\n body = b\"\".join(response_body)\n elapsed = time.time() - start\n\n if self._profile_dir is not None:\n if callable(self._filename_format):\n filename = self._filename_format(environ)\n else:\n filename = self._filename_format.format(\n method=environ[\"REQUEST_METHOD\"],\n path=(\n environ.get(\"PATH_INFO\").strip(\"/\").replace(\"/\", \".\") or \"root\"\n ),\n elapsed=elapsed * 1000.0,\n time=time.time(),\n )\n filename = os.path.join(self._profile_dir, filename)\n profile.dump_stats(filename)\n\n if self._stream is not None:\n stats = Stats(profile, stream=self._stream)\n stats.sort_stats(*self._sort_by)\n print(\"-\" * 80, file=self._stream)\n print(\"PATH: {!r}\".format(environ.get(\"PATH_INFO\", \"\")), file=self._stream)\n stats.print_stats(*self._restrictions)\n print(\"-\" * 80 + \"\\n\", file=self._stream)\n\n return [body]\n", "path": "src/werkzeug/middleware/profiler.py"}], "after_files": [{"content": "\"\"\"\nApplication Profiler\n====================\n\nThis module provides a middleware that profiles each request with the\n:mod:`cProfile` module. This can help identify bottlenecks in your code\nthat may be slowing down your application.\n\n.. autoclass:: ProfilerMiddleware\n\n:copyright: 2007 Pallets\n:license: BSD-3-Clause\n\"\"\"\nfrom __future__ import print_function\n\nimport os.path\nimport sys\nimport time\nfrom pstats import Stats\n\ntry:\n from cProfile import Profile\nexcept ImportError:\n from profile import Profile\n\n\nclass ProfilerMiddleware(object):\n \"\"\"Wrap a WSGI application and profile the execution of each\n request. Responses are buffered so that timings are more exact.\n\n If ``stream`` is given, :class:`pstats.Stats` are written to it\n after each request. If ``profile_dir`` is given, :mod:`cProfile`\n data files are saved to that directory, one file per request.\n\n The filename can be customized by passing ``filename_format``. If\n it is a string, it will be formatted using :meth:`str.format` with\n the following fields available:\n\n - ``{method}`` - The request method; GET, POST, etc.\n - ``{path}`` - The request path or 'root' should one not exist.\n - ``{elapsed}`` - The elapsed time of the request.\n - ``{time}`` - The time of the request.\n\n If it is a callable, it will be called with the WSGI ``environ``\n dict and should return a filename.\n\n :param app: The WSGI application to wrap.\n :param stream: Write stats to this stream. Disable with ``None``.\n :param sort_by: A tuple of columns to sort stats by. See\n :meth:`pstats.Stats.sort_stats`.\n :param restrictions: A tuple of restrictions to filter stats by. 
See\n :meth:`pstats.Stats.print_stats`.\n :param profile_dir: Save profile data files to this directory.\n :param filename_format: Format string for profile data file names,\n or a callable returning a name. See explanation above.\n\n .. code-block:: python\n\n from werkzeug.middleware.profiler import ProfilerMiddleware\n app = ProfilerMiddleware(app)\n\n .. versionchanged:: 0.15\n Stats are written even if ``profile_dir`` is given, and can be\n disable by passing ``stream=None``.\n\n .. versionadded:: 0.15\n Added ``filename_format``.\n\n .. versionadded:: 0.9\n Added ``restrictions`` and ``profile_dir``.\n \"\"\"\n\n def __init__(\n self,\n app,\n stream=sys.stdout,\n sort_by=(\"time\", \"calls\"),\n restrictions=(),\n profile_dir=None,\n filename_format=\"{method}.{path}.{elapsed:.0f}ms.{time:.0f}.prof\",\n ):\n self._app = app\n self._stream = stream\n self._sort_by = sort_by\n self._restrictions = restrictions\n self._profile_dir = profile_dir\n self._filename_format = filename_format\n\n def __call__(self, environ, start_response):\n response_body = []\n\n def catching_start_response(status, headers, exc_info=None):\n start_response(status, headers, exc_info)\n return response_body.append\n\n def runapp():\n app_iter = self._app(environ, catching_start_response)\n response_body.extend(app_iter)\n\n if hasattr(app_iter, \"close\"):\n app_iter.close()\n\n profile = Profile()\n start = time.time()\n profile.runcall(runapp)\n body = b\"\".join(response_body)\n elapsed = time.time() - start\n\n if self._profile_dir is not None:\n if callable(self._filename_format):\n filename = self._filename_format(environ)\n else:\n filename = self._filename_format.format(\n method=environ[\"REQUEST_METHOD\"],\n path=(\n environ.get(\"PATH_INFO\").strip(\"/\").replace(\"/\", \".\") or \"root\"\n ),\n elapsed=elapsed * 1000.0,\n time=time.time(),\n )\n filename = os.path.join(self._profile_dir, filename)\n profile.dump_stats(filename)\n\n if self._stream is not None:\n stats = Stats(profile, stream=self._stream)\n stats.sort_stats(*self._sort_by)\n print(\"-\" * 80, file=self._stream)\n print(\"PATH: {!r}\".format(environ.get(\"PATH_INFO\", \"\")), file=self._stream)\n stats.print_stats(*self._restrictions)\n print(\"-\" * 80 + \"\\n\", file=self._stream)\n\n return [body]\n", "path": "src/werkzeug/middleware/profiler.py"}]}
| 2,285 | 139 |
gh_patches_debug_6326
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-contrib-472
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
OpenTracing propagator does not use a TraceFlags object
I set up a client and server that propagated spans using the OpenTracing propagator. The server side reported this error:
```
[2021-04-26 16:41:13,377] ERROR in app: Exception on /ping [GET]
Traceback (most recent call last):
File "/home/ocelotl/virtual_environments/LS-22507/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/home/ocelotl/virtual_environments/LS-22507/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/ocelotl/virtual_environments/LS-22507/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/ocelotl/virtual_environments/LS-22507/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/ocelotl/virtual_environments/LS-22507/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/home/ocelotl/virtual_environments/LS-22507/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "server.py", line 53, in ping
with tracer.start_as_current_span(
File "/home/ocelotl/.pyenv/versions/3.8.3/lib/python3.8/contextlib.py", line 113, in __enter__
return next(self.gen)
File "/home/ocelotl/github/ocelotl/opentelemetry-python/opentelemetry-sdk/src/opentelemetry/sdk/trace/__init__.py", line 863, in start_as_current_span
span = self.start_span(
File "/home/ocelotl/github/ocelotl/opentelemetry-python/opentelemetry-sdk/src/opentelemetry/sdk/trace/__init__.py", line 917, in start_span
sampling_result = self.sampler.should_sample(
File "/home/ocelotl/github/ocelotl/opentelemetry-python/opentelemetry-sdk/src/opentelemetry/sdk/trace/sampling.py", line 326, in should_sample
if parent_span_context.trace_flags.sampled:
AttributeError: 'int' object has no attribute 'sampled'
```
This happens because when instantiating a context during propagation with the OpenTracing propagator, a `TraceFlags` object is not used for the trace flags.
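A minimal sketch of the difference (this assumes `TraceFlags.SAMPLED` is a plain `int` class attribute, which is what the `'int' object has no attribute 'sampled'` traceback suggests; only `opentelemetry-api` is needed):

```python
from opentelemetry.trace import TraceFlags

raw = TraceFlags.SAMPLED     # plain int (0x01): no .sampled property
flags = TraceFlags(raw)      # an actual TraceFlags instance

print(hasattr(raw, "sampled"))   # False -> AttributeError in the sampler
print(flags.sampled)             # True
```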
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `propagator/opentelemetry-propagator-ot-trace/src/opentelemetry/propagators/ot_trace/__init__.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from re import compile as re_compile
16 from typing import Any, Iterable, Optional
17
18 from opentelemetry.baggage import get_all, set_baggage
19 from opentelemetry.context import Context
20 from opentelemetry.propagators.textmap import (
21 CarrierT,
22 Getter,
23 Setter,
24 TextMapPropagator,
25 default_getter,
26 default_setter,
27 )
28 from opentelemetry.trace import (
29 INVALID_SPAN_ID,
30 INVALID_TRACE_ID,
31 NonRecordingSpan,
32 SpanContext,
33 TraceFlags,
34 get_current_span,
35 set_span_in_context,
36 )
37
38 OT_TRACE_ID_HEADER = "ot-tracer-traceid"
39 OT_SPAN_ID_HEADER = "ot-tracer-spanid"
40 OT_SAMPLED_HEADER = "ot-tracer-sampled"
41 OT_BAGGAGE_PREFIX = "ot-baggage-"
42
43 _valid_header_name = re_compile(r"[\w_^`!#$%&'*+.|~]+")
44 _valid_header_value = re_compile(r"[\t\x20-\x7e\x80-\xff]+")
45 _valid_extract_traceid = re_compile(r"[0-9a-f]{1,32}")
46 _valid_extract_spanid = re_compile(r"[0-9a-f]{1,16}")
47
48
49 class OTTracePropagator(TextMapPropagator):
50 """Propagator for the OTTrace HTTP header format"""
51
52 def extract(
53 self,
54 carrier: CarrierT,
55 context: Optional[Context] = None,
56 getter: Getter = default_getter,
57 ) -> Context:
58
59 traceid = _extract_first_element(
60 getter.get(carrier, OT_TRACE_ID_HEADER), INVALID_TRACE_ID
61 )
62
63 spanid = _extract_first_element(
64 getter.get(carrier, OT_SPAN_ID_HEADER), INVALID_SPAN_ID
65 )
66
67 sampled = _extract_first_element(
68 getter.get(carrier, OT_SAMPLED_HEADER)
69 )
70
71 if sampled == "true":
72 traceflags = TraceFlags.SAMPLED
73 else:
74 traceflags = TraceFlags.DEFAULT
75
76 if (
77 traceid != INVALID_TRACE_ID
78 and _valid_extract_traceid.fullmatch(traceid) is not None
79 and spanid != INVALID_SPAN_ID
80 and _valid_extract_spanid.fullmatch(spanid) is not None
81 ):
82 context = set_span_in_context(
83 NonRecordingSpan(
84 SpanContext(
85 trace_id=int(traceid, 16),
86 span_id=int(spanid, 16),
87 is_remote=True,
88 trace_flags=traceflags,
89 )
90 ),
91 context,
92 )
93
94 baggage = get_all(context) or {}
95
96 for key in getter.keys(carrier):
97
98 if not key.startswith(OT_BAGGAGE_PREFIX):
99 continue
100
101 baggage[
102 key[len(OT_BAGGAGE_PREFIX) :]
103 ] = _extract_first_element(getter.get(carrier, key))
104
105 for key, value in baggage.items():
106 context = set_baggage(key, value, context)
107
108 return context
109
110 def inject(
111 self,
112 carrier: CarrierT,
113 context: Optional[Context] = None,
114 setter: Setter = default_setter,
115 ) -> None:
116
117 span_context = get_current_span(context).get_span_context()
118
119 if span_context.trace_id == INVALID_TRACE_ID:
120 return
121
122 setter.set(
123 carrier, OT_TRACE_ID_HEADER, hex(span_context.trace_id)[2:][-16:]
124 )
125 setter.set(
126 carrier, OT_SPAN_ID_HEADER, hex(span_context.span_id)[2:][-16:],
127 )
128
129 if span_context.trace_flags == TraceFlags.SAMPLED:
130 traceflags = "true"
131 else:
132 traceflags = "false"
133
134 setter.set(carrier, OT_SAMPLED_HEADER, traceflags)
135
136 baggage = get_all(context)
137
138 if not baggage:
139 return
140
141 for header_name, header_value in baggage.items():
142
143 if (
144 _valid_header_name.fullmatch(header_name) is None
145 or _valid_header_value.fullmatch(header_value) is None
146 ):
147 continue
148
149 setter.set(
150 carrier,
151 "".join([OT_BAGGAGE_PREFIX, header_name]),
152 header_value,
153 )
154
155 @property
156 def fields(self):
157 """Returns a set with the fields set in `inject`.
158
159 See
160 `opentelemetry.propagators.textmap.TextMapPropagator.fields`
161 """
162 return {
163 OT_TRACE_ID_HEADER,
164 OT_SPAN_ID_HEADER,
165 OT_SAMPLED_HEADER,
166 }
167
168
169 def _extract_first_element(
170 items: Iterable[CarrierT], default: Any = None,
171 ) -> Optional[CarrierT]:
172 if items is None:
173 return default
174 return next(iter(items), None)
175
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/propagator/opentelemetry-propagator-ot-trace/src/opentelemetry/propagators/ot_trace/__init__.py b/propagator/opentelemetry-propagator-ot-trace/src/opentelemetry/propagators/ot_trace/__init__.py
--- a/propagator/opentelemetry-propagator-ot-trace/src/opentelemetry/propagators/ot_trace/__init__.py
+++ b/propagator/opentelemetry-propagator-ot-trace/src/opentelemetry/propagators/ot_trace/__init__.py
@@ -85,7 +85,7 @@
trace_id=int(traceid, 16),
span_id=int(spanid, 16),
is_remote=True,
- trace_flags=traceflags,
+ trace_flags=TraceFlags(traceflags),
)
),
context,
|
{"golden_diff": "diff --git a/propagator/opentelemetry-propagator-ot-trace/src/opentelemetry/propagators/ot_trace/__init__.py b/propagator/opentelemetry-propagator-ot-trace/src/opentelemetry/propagators/ot_trace/__init__.py\n--- a/propagator/opentelemetry-propagator-ot-trace/src/opentelemetry/propagators/ot_trace/__init__.py\n+++ b/propagator/opentelemetry-propagator-ot-trace/src/opentelemetry/propagators/ot_trace/__init__.py\n@@ -85,7 +85,7 @@\n trace_id=int(traceid, 16),\n span_id=int(spanid, 16),\n is_remote=True,\n- trace_flags=traceflags,\n+ trace_flags=TraceFlags(traceflags),\n )\n ),\n context,\n", "issue": "OpenTracing propagator does not use a TraceFlags object\nI set up a client and server that propagated spans using the OpenTracing propagator. The server side reported this error:\r\n\r\n```\r\n[2021-04-26 16:41:13,377] ERROR in app: Exception on /ping [GET]\r\nTraceback (most recent call last):\r\n File \"/home/ocelotl/virtual_environments/LS-22507/lib/python3.8/site-packages/flask/app.py\", line 2447, in wsgi_app\r\n response = self.full_dispatch_request()\r\n File \"/home/ocelotl/virtual_environments/LS-22507/lib/python3.8/site-packages/flask/app.py\", line 1952, in full_dispatch_request\r\n rv = self.handle_user_exception(e)\r\n File \"/home/ocelotl/virtual_environments/LS-22507/lib/python3.8/site-packages/flask/app.py\", line 1821, in handle_user_exception\r\n reraise(exc_type, exc_value, tb)\r\n File \"/home/ocelotl/virtual_environments/LS-22507/lib/python3.8/site-packages/flask/_compat.py\", line 39, in reraise\r\n raise value\r\n File \"/home/ocelotl/virtual_environments/LS-22507/lib/python3.8/site-packages/flask/app.py\", line 1950, in full_dispatch_request\r\n rv = self.dispatch_request()\r\n File \"/home/ocelotl/virtual_environments/LS-22507/lib/python3.8/site-packages/flask/app.py\", line 1936, in dispatch_request\r\n return self.view_functions[rule.endpoint](**req.view_args)\r\n File \"server.py\", line 53, in ping\r\n with tracer.start_as_current_span(\r\n File \"/home/ocelotl/.pyenv/versions/3.8.3/lib/python3.8/contextlib.py\", line 113, in __enter__\r\n return next(self.gen)\r\n File \"/home/ocelotl/github/ocelotl/opentelemetry-python/opentelemetry-sdk/src/opentelemetry/sdk/trace/__init__.py\", line 863, in start_as_current_span\r\n span = self.start_span(\r\n File \"/home/ocelotl/github/ocelotl/opentelemetry-python/opentelemetry-sdk/src/opentelemetry/sdk/trace/__init__.py\", line 917, in start_span\r\n sampling_result = self.sampler.should_sample(\r\n File \"/home/ocelotl/github/ocelotl/opentelemetry-python/opentelemetry-sdk/src/opentelemetry/sdk/trace/sampling.py\", line 326, in should_sample\r\n if parent_span_context.trace_flags.sampled:\r\nAttributeError: 'int' object has no attribute 'sampled'\r\n```\r\n\r\nThis happens because when instantiating a context during propagation with the OpenTracing propagator, a `TracFlags` object is not used for the trace flags.\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# 
limitations under the License.\n\nfrom re import compile as re_compile\nfrom typing import Any, Iterable, Optional\n\nfrom opentelemetry.baggage import get_all, set_baggage\nfrom opentelemetry.context import Context\nfrom opentelemetry.propagators.textmap import (\n CarrierT,\n Getter,\n Setter,\n TextMapPropagator,\n default_getter,\n default_setter,\n)\nfrom opentelemetry.trace import (\n INVALID_SPAN_ID,\n INVALID_TRACE_ID,\n NonRecordingSpan,\n SpanContext,\n TraceFlags,\n get_current_span,\n set_span_in_context,\n)\n\nOT_TRACE_ID_HEADER = \"ot-tracer-traceid\"\nOT_SPAN_ID_HEADER = \"ot-tracer-spanid\"\nOT_SAMPLED_HEADER = \"ot-tracer-sampled\"\nOT_BAGGAGE_PREFIX = \"ot-baggage-\"\n\n_valid_header_name = re_compile(r\"[\\w_^`!#$%&'*+.|~]+\")\n_valid_header_value = re_compile(r\"[\\t\\x20-\\x7e\\x80-\\xff]+\")\n_valid_extract_traceid = re_compile(r\"[0-9a-f]{1,32}\")\n_valid_extract_spanid = re_compile(r\"[0-9a-f]{1,16}\")\n\n\nclass OTTracePropagator(TextMapPropagator):\n \"\"\"Propagator for the OTTrace HTTP header format\"\"\"\n\n def extract(\n self,\n carrier: CarrierT,\n context: Optional[Context] = None,\n getter: Getter = default_getter,\n ) -> Context:\n\n traceid = _extract_first_element(\n getter.get(carrier, OT_TRACE_ID_HEADER), INVALID_TRACE_ID\n )\n\n spanid = _extract_first_element(\n getter.get(carrier, OT_SPAN_ID_HEADER), INVALID_SPAN_ID\n )\n\n sampled = _extract_first_element(\n getter.get(carrier, OT_SAMPLED_HEADER)\n )\n\n if sampled == \"true\":\n traceflags = TraceFlags.SAMPLED\n else:\n traceflags = TraceFlags.DEFAULT\n\n if (\n traceid != INVALID_TRACE_ID\n and _valid_extract_traceid.fullmatch(traceid) is not None\n and spanid != INVALID_SPAN_ID\n and _valid_extract_spanid.fullmatch(spanid) is not None\n ):\n context = set_span_in_context(\n NonRecordingSpan(\n SpanContext(\n trace_id=int(traceid, 16),\n span_id=int(spanid, 16),\n is_remote=True,\n trace_flags=traceflags,\n )\n ),\n context,\n )\n\n baggage = get_all(context) or {}\n\n for key in getter.keys(carrier):\n\n if not key.startswith(OT_BAGGAGE_PREFIX):\n continue\n\n baggage[\n key[len(OT_BAGGAGE_PREFIX) :]\n ] = _extract_first_element(getter.get(carrier, key))\n\n for key, value in baggage.items():\n context = set_baggage(key, value, context)\n\n return context\n\n def inject(\n self,\n carrier: CarrierT,\n context: Optional[Context] = None,\n setter: Setter = default_setter,\n ) -> None:\n\n span_context = get_current_span(context).get_span_context()\n\n if span_context.trace_id == INVALID_TRACE_ID:\n return\n\n setter.set(\n carrier, OT_TRACE_ID_HEADER, hex(span_context.trace_id)[2:][-16:]\n )\n setter.set(\n carrier, OT_SPAN_ID_HEADER, hex(span_context.span_id)[2:][-16:],\n )\n\n if span_context.trace_flags == TraceFlags.SAMPLED:\n traceflags = \"true\"\n else:\n traceflags = \"false\"\n\n setter.set(carrier, OT_SAMPLED_HEADER, traceflags)\n\n baggage = get_all(context)\n\n if not baggage:\n return\n\n for header_name, header_value in baggage.items():\n\n if (\n _valid_header_name.fullmatch(header_name) is None\n or _valid_header_value.fullmatch(header_value) is None\n ):\n continue\n\n setter.set(\n carrier,\n \"\".join([OT_BAGGAGE_PREFIX, header_name]),\n header_value,\n )\n\n @property\n def fields(self):\n \"\"\"Returns a set with the fields set in `inject`.\n\n See\n `opentelemetry.propagators.textmap.TextMapPropagator.fields`\n \"\"\"\n return {\n OT_TRACE_ID_HEADER,\n OT_SPAN_ID_HEADER,\n OT_SAMPLED_HEADER,\n }\n\n\ndef _extract_first_element(\n items: Iterable[CarrierT], default: Any = 
None,\n) -> Optional[CarrierT]:\n if items is None:\n return default\n return next(iter(items), None)\n", "path": "propagator/opentelemetry-propagator-ot-trace/src/opentelemetry/propagators/ot_trace/__init__.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom re import compile as re_compile\nfrom typing import Any, Iterable, Optional\n\nfrom opentelemetry.baggage import get_all, set_baggage\nfrom opentelemetry.context import Context\nfrom opentelemetry.propagators.textmap import (\n CarrierT,\n Getter,\n Setter,\n TextMapPropagator,\n default_getter,\n default_setter,\n)\nfrom opentelemetry.trace import (\n INVALID_SPAN_ID,\n INVALID_TRACE_ID,\n NonRecordingSpan,\n SpanContext,\n TraceFlags,\n get_current_span,\n set_span_in_context,\n)\n\nOT_TRACE_ID_HEADER = \"ot-tracer-traceid\"\nOT_SPAN_ID_HEADER = \"ot-tracer-spanid\"\nOT_SAMPLED_HEADER = \"ot-tracer-sampled\"\nOT_BAGGAGE_PREFIX = \"ot-baggage-\"\n\n_valid_header_name = re_compile(r\"[\\w_^`!#$%&'*+.|~]+\")\n_valid_header_value = re_compile(r\"[\\t\\x20-\\x7e\\x80-\\xff]+\")\n_valid_extract_traceid = re_compile(r\"[0-9a-f]{1,32}\")\n_valid_extract_spanid = re_compile(r\"[0-9a-f]{1,16}\")\n\n\nclass OTTracePropagator(TextMapPropagator):\n \"\"\"Propagator for the OTTrace HTTP header format\"\"\"\n\n def extract(\n self,\n carrier: CarrierT,\n context: Optional[Context] = None,\n getter: Getter = default_getter,\n ) -> Context:\n\n traceid = _extract_first_element(\n getter.get(carrier, OT_TRACE_ID_HEADER), INVALID_TRACE_ID\n )\n\n spanid = _extract_first_element(\n getter.get(carrier, OT_SPAN_ID_HEADER), INVALID_SPAN_ID\n )\n\n sampled = _extract_first_element(\n getter.get(carrier, OT_SAMPLED_HEADER)\n )\n\n if sampled == \"true\":\n traceflags = TraceFlags.SAMPLED\n else:\n traceflags = TraceFlags.DEFAULT\n\n if (\n traceid != INVALID_TRACE_ID\n and _valid_extract_traceid.fullmatch(traceid) is not None\n and spanid != INVALID_SPAN_ID\n and _valid_extract_spanid.fullmatch(spanid) is not None\n ):\n context = set_span_in_context(\n NonRecordingSpan(\n SpanContext(\n trace_id=int(traceid, 16),\n span_id=int(spanid, 16),\n is_remote=True,\n trace_flags=TraceFlags(traceflags),\n )\n ),\n context,\n )\n\n baggage = get_all(context) or {}\n\n for key in getter.keys(carrier):\n\n if not key.startswith(OT_BAGGAGE_PREFIX):\n continue\n\n baggage[\n key[len(OT_BAGGAGE_PREFIX) :]\n ] = _extract_first_element(getter.get(carrier, key))\n\n for key, value in baggage.items():\n context = set_baggage(key, value, context)\n\n return context\n\n def inject(\n self,\n carrier: CarrierT,\n context: Optional[Context] = None,\n setter: Setter = default_setter,\n ) -> None:\n\n span_context = get_current_span(context).get_span_context()\n\n if span_context.trace_id == INVALID_TRACE_ID:\n return\n\n setter.set(\n carrier, OT_TRACE_ID_HEADER, hex(span_context.trace_id)[2:][-16:]\n )\n setter.set(\n carrier, OT_SPAN_ID_HEADER, hex(span_context.span_id)[2:][-16:],\n )\n\n if 
span_context.trace_flags == TraceFlags.SAMPLED:\n traceflags = \"true\"\n else:\n traceflags = \"false\"\n\n setter.set(carrier, OT_SAMPLED_HEADER, traceflags)\n\n baggage = get_all(context)\n\n if not baggage:\n return\n\n for header_name, header_value in baggage.items():\n\n if (\n _valid_header_name.fullmatch(header_name) is None\n or _valid_header_value.fullmatch(header_value) is None\n ):\n continue\n\n setter.set(\n carrier,\n \"\".join([OT_BAGGAGE_PREFIX, header_name]),\n header_value,\n )\n\n @property\n def fields(self):\n \"\"\"Returns a set with the fields set in `inject`.\n\n See\n `opentelemetry.propagators.textmap.TextMapPropagator.fields`\n \"\"\"\n return {\n OT_TRACE_ID_HEADER,\n OT_SPAN_ID_HEADER,\n OT_SAMPLED_HEADER,\n }\n\n\ndef _extract_first_element(\n items: Iterable[CarrierT], default: Any = None,\n) -> Optional[CarrierT]:\n if items is None:\n return default\n return next(iter(items), None)\n", "path": "propagator/opentelemetry-propagator-ot-trace/src/opentelemetry/propagators/ot_trace/__init__.py"}]}
| 2,541 | 192 |
gh_patches_debug_5952
|
rasdani/github-patches
|
git_diff
|
Kinto__kinto-386
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Activate POST on collections
```
$ curl -H "Content-Type: application/json" \
-X POST -d '{"data": {"test": "some_data"}}' --user testuser:abc123 \
https://kinto.dev.mozaws.net/v1/buckets/test_bucket/collections
{"errno":115,"message":"Method not allowed on this endpoint.","code":405,"error":"Method Not Allowed"}
```
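For context, a sketch of the likely change, assuming cliquet's `resource.register()` maps `collection_methods` to the verbs served on the plural endpoint (the 405 above suggests POST is simply not listed). This abbreviates the class defined in `kinto/views/collections.py` below:

```python
# Sketch only: schema, permissions and delete() omitted for brevity.
from cliquet import resource


@resource.register(name='collection',
                   collection_methods=('GET', 'POST'),   # 'POST' added
                   collection_path='/buckets/{{bucket_id}}/collections',
                   record_path='/buckets/{{bucket_id}}/collections/{{id}}')
class Collection(resource.ProtectedResource):
    ...
```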
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kinto/views/collections.py`
Content:
```
1 import colander
2 import jsonschema
3 from cliquet import resource
4 from jsonschema import exceptions as jsonschema_exceptions
5
6 from kinto.views import NameGenerator
7
8
9 class JSONSchemaMapping(colander.SchemaNode):
10 def schema_type(self, **kw):
11 return colander.Mapping(unknown='preserve')
12
13 def deserialize(self, cstruct=colander.null):
14 # Start by deserializing a simple mapping.
15 validated = super(JSONSchemaMapping, self).deserialize(cstruct)
16
17 # In case it is optional in parent schema.
18 if not validated or validated in (colander.null, colander.drop):
19 return validated
20
21 try:
22 jsonschema.Draft4Validator.check_schema(validated)
23 except jsonschema_exceptions.SchemaError as e:
24 self.raise_invalid(e.path.pop() + e.message)
25 return validated
26
27
28 class CollectionSchema(resource.ResourceSchema):
29 schema = JSONSchemaMapping(missing=colander.drop)
30 cache_expires = colander.SchemaNode(colander.Int(), missing=colander.drop)
31
32 class Options:
33 preserve_unknown = True
34
35
36 @resource.register(name='collection',
37 collection_methods=('GET',),
38 collection_path='/buckets/{{bucket_id}}/collections',
39 record_path='/buckets/{{bucket_id}}/collections/{{id}}')
40 class Collection(resource.ProtectedResource):
41 mapping = CollectionSchema()
42 permissions = ('read', 'write', 'record:create')
43
44 def __init__(self, *args, **kwargs):
45 super(Collection, self).__init__(*args, **kwargs)
46 self.model.id_generator = NameGenerator()
47
48 def get_parent_id(self, request):
49 bucket_id = request.matchdict['bucket_id']
50 parent_id = '/buckets/%s' % bucket_id
51 return parent_id
52
53 def delete(self):
54 result = super(Collection, self).delete()
55
56 # Delete records.
57 storage = self.model.storage
58 parent_id = '%s/collections/%s' % (self.model.parent_id,
59 self.record_id)
60 storage.delete_all(collection_id='record',
61 parent_id=parent_id,
62 with_deleted=False)
63 storage.purge_deleted(collection_id='record', parent_id=parent_id)
64
65 return result
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kinto/views/collections.py b/kinto/views/collections.py
--- a/kinto/views/collections.py
+++ b/kinto/views/collections.py
@@ -34,7 +34,7 @@
@resource.register(name='collection',
- collection_methods=('GET',),
+ collection_methods=('GET', 'POST'),
collection_path='/buckets/{{bucket_id}}/collections',
record_path='/buckets/{{bucket_id}}/collections/{{id}}')
class Collection(resource.ProtectedResource):
|
{"golden_diff": "diff --git a/kinto/views/collections.py b/kinto/views/collections.py\n--- a/kinto/views/collections.py\n+++ b/kinto/views/collections.py\n@@ -34,7 +34,7 @@\n \n \n @resource.register(name='collection',\n- collection_methods=('GET',),\n+ collection_methods=('GET', 'POST'),\n collection_path='/buckets/{{bucket_id}}/collections',\n record_path='/buckets/{{bucket_id}}/collections/{{id}}')\n class Collection(resource.ProtectedResource):\n", "issue": "Activate POST on collections\n```\n$ curl -H \"Content-Type: application/json\" \\\n -X POST -d '{\"data\": {\"test\": \"some_data\"}}' --user testuser:abc123 \\\n https://kinto.dev.mozaws.net/v1/buckets/test_bucket/collections\n\n{\"errno\":115,\"message\":\"Method not allowed on this endpoint.\",\"code\":405,\"error\":\"Method Not Allowed\"}\n```\n\n", "before_files": [{"content": "import colander\nimport jsonschema\nfrom cliquet import resource\nfrom jsonschema import exceptions as jsonschema_exceptions\n\nfrom kinto.views import NameGenerator\n\n\nclass JSONSchemaMapping(colander.SchemaNode):\n def schema_type(self, **kw):\n return colander.Mapping(unknown='preserve')\n\n def deserialize(self, cstruct=colander.null):\n # Start by deserializing a simple mapping.\n validated = super(JSONSchemaMapping, self).deserialize(cstruct)\n\n # In case it is optional in parent schema.\n if not validated or validated in (colander.null, colander.drop):\n return validated\n\n try:\n jsonschema.Draft4Validator.check_schema(validated)\n except jsonschema_exceptions.SchemaError as e:\n self.raise_invalid(e.path.pop() + e.message)\n return validated\n\n\nclass CollectionSchema(resource.ResourceSchema):\n schema = JSONSchemaMapping(missing=colander.drop)\n cache_expires = colander.SchemaNode(colander.Int(), missing=colander.drop)\n\n class Options:\n preserve_unknown = True\n\n\[email protected](name='collection',\n collection_methods=('GET',),\n collection_path='/buckets/{{bucket_id}}/collections',\n record_path='/buckets/{{bucket_id}}/collections/{{id}}')\nclass Collection(resource.ProtectedResource):\n mapping = CollectionSchema()\n permissions = ('read', 'write', 'record:create')\n\n def __init__(self, *args, **kwargs):\n super(Collection, self).__init__(*args, **kwargs)\n self.model.id_generator = NameGenerator()\n\n def get_parent_id(self, request):\n bucket_id = request.matchdict['bucket_id']\n parent_id = '/buckets/%s' % bucket_id\n return parent_id\n\n def delete(self):\n result = super(Collection, self).delete()\n\n # Delete records.\n storage = self.model.storage\n parent_id = '%s/collections/%s' % (self.model.parent_id,\n self.record_id)\n storage.delete_all(collection_id='record',\n parent_id=parent_id,\n with_deleted=False)\n storage.purge_deleted(collection_id='record', parent_id=parent_id)\n\n return result\n", "path": "kinto/views/collections.py"}], "after_files": [{"content": "import colander\nimport jsonschema\nfrom cliquet import resource\nfrom jsonschema import exceptions as jsonschema_exceptions\n\nfrom kinto.views import NameGenerator\n\n\nclass JSONSchemaMapping(colander.SchemaNode):\n def schema_type(self, **kw):\n return colander.Mapping(unknown='preserve')\n\n def deserialize(self, cstruct=colander.null):\n # Start by deserializing a simple mapping.\n validated = super(JSONSchemaMapping, self).deserialize(cstruct)\n\n # In case it is optional in parent schema.\n if not validated or validated in (colander.null, colander.drop):\n return validated\n\n try:\n jsonschema.Draft4Validator.check_schema(validated)\n except 
jsonschema_exceptions.SchemaError as e:\n self.raise_invalid(e.path.pop() + e.message)\n return validated\n\n\nclass CollectionSchema(resource.ResourceSchema):\n schema = JSONSchemaMapping(missing=colander.drop)\n cache_expires = colander.SchemaNode(colander.Int(), missing=colander.drop)\n\n class Options:\n preserve_unknown = True\n\n\[email protected](name='collection',\n collection_methods=('GET', 'POST'),\n collection_path='/buckets/{{bucket_id}}/collections',\n record_path='/buckets/{{bucket_id}}/collections/{{id}}')\nclass Collection(resource.ProtectedResource):\n mapping = CollectionSchema()\n permissions = ('read', 'write', 'record:create')\n\n def __init__(self, *args, **kwargs):\n super(Collection, self).__init__(*args, **kwargs)\n self.model.id_generator = NameGenerator()\n\n def get_parent_id(self, request):\n bucket_id = request.matchdict['bucket_id']\n parent_id = '/buckets/%s' % bucket_id\n return parent_id\n\n def delete(self):\n result = super(Collection, self).delete()\n\n # Delete records.\n storage = self.model.storage\n parent_id = '%s/collections/%s' % (self.model.parent_id,\n self.record_id)\n storage.delete_all(collection_id='record',\n parent_id=parent_id,\n with_deleted=False)\n storage.purge_deleted(collection_id='record', parent_id=parent_id)\n\n return result\n", "path": "kinto/views/collections.py"}]}
| 943 | 108 |
gh_patches_debug_5335
|
rasdani/github-patches
|
git_diff
|
Nitrate__Nitrate-415
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
import xml says wrong xml_version
Importing XML is not working; it says wrong xml_version 1.1.
I export a test case, generate the XML, and try to import that same file, but it does not work.
Thanks in advance.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/tcms/settings/product.py`
Content:
```
1 # Django settings for product env.
2
3 from tcms.settings.common import * # noqa
4
5 # Debug settings
6 DEBUG = False
7 TEMPLATE_DEBUG = DEBUG
8
9 # Database settings
10 DATABASES = {
11 'default': {
12 'ENGINE': SUPPORTED_DB_ENGINES[DB_ENGINE],
13 'NAME': env.get('NITRATE_DB_NAME', 'nitrate'),
14 'USER': env.get('NITRATE_DB_USER', 'nitrate'),
15 'PASSWORD': env.get('NITRATE_DB_PASSWORD', 'nitrate'),
16 'HOST': env.get('NITRATE_DB_HOST', ''),
17 'PORT': env.get('NITRATE_DB_PORT', ''),
18 },
19 }
20
21 # For Kerberos authentication, uncomment out RemoteUserMiddleware.
22 # MIDDLEWARE += (
23 # 'django.contrib.auth.middleware.RemoteUserMiddleware',
24 # )
25
26 # Remote kerberos authentication backends
27 # AUTHENTICATION_BACKENDS = (
28 # 'tcms.auth.backends.ModAuthKerbBackend',
29 # )
30
31 # To enable database routers for read/write separation.
32 # DATABASE_ROUTERS = ['tcms.core.utils.tcms_router.RWRouter']
33
34 # Kerberos realm
35 # KRB5_REALM = 'EXAMPLE.COM'
36
37 # User authentication by Bugzilla settings
38 # BUGZILLA_XMLRPC_URL = 'https://bugzilla.example.com/xmlrpc.cgi'
39
40
41 TEMPLATES[0].update({
42 'DIRS': ['/usr/share/nitrate/templates'],
43 })
44
45 # Set the default send mail address
46 EMAIL_HOST = 'smtp.example.com'
47 EMAIL_FROM = '[email protected]'
48
49 # Site-specific messages
50
51 # First run - to determine if it needs to prompt user or not.
52 FIRST_RUN = False
53
54 # You can add a help link on the footer of home page as following format:
55 # ('http://foo.com', 'foo')
56 FOOTER_LINKS = (
57 ('https://nitrate.readthedocs.io/en/latest/api/xmlrpc.html', 'XML-RPC Service'),
58 ('https://nitrate.readthedocs.io/en/latest/guide.html', 'User Guide'),
59 )
60
61 # added for nitrate3.4 compatibility
62 DEFAULT_GROUPS = ['default']
63 TESTOPIA_XML_VERSION = '1.0'
64
65 # admin settings
66 ADMINS = (
67 # ('Your Name', '[email protected]'),
68 )
69
70 DEFAULT_PAGE_SIZE = 100
71
```
Path: `docker/released/product.py`
Content:
```
1 # Django settings for product env.
2
3 from tcms.settings.common import * # noqa
4
5 # Debug settings
6 DEBUG = False
7 TEMPLATE_DEBUG = DEBUG
8
9 # Database settings
10 DATABASES = {
11 'default': {
12 'ENGINE': SUPPORTED_DB_ENGINES[DB_ENGINE],
13 'NAME': env.get('NITRATE_DB_NAME', 'nitrate'),
14 'USER': env.get('NITRATE_DB_USER', 'nitrate'),
15 'PASSWORD': env.get('NITRATE_DB_PASSWORD', ''),
16 'HOST': env.get('NITRATE_DB_HOST', ''),
17 'PORT': env.get('NITRATE_DB_PORT', ''),
18 },
19 }
20
21 AUTHENTICATION_BACKENDS = (
22 'django.contrib.auth.backends.ModelBackend',
23 )
24
25 TEMPLATES[0].update({
26 'DIRS': ['/usr/share/nitrate/templates'],
27 })
28
29 # Set the default send mail address
30 EMAIL_HOST = 'smtp.example.com'
31 EMAIL_FROM = '[email protected]'
32
33 # Site-specific messages
34
35 # First run - to determine if it needs to prompt user or not.
36 FIRST_RUN = False
37
38 # You can add a help link on the footer of home page as following format:
39 # ('http://foo.com', 'foo')
40 FOOTER_LINKS = (
41 ('https://nitrate.readthedocs.io/en/latest/api/xmlrpc.html', 'XML-RPC Service'),
42 ('https://nitrate.readthedocs.io/en/latest/guide.html', 'User Guide'),
43 )
44
45 # added for nitrate3.4 compatibility
46 DEFAULT_GROUPS = ['default']
47 TESTOPIA_XML_VERSION = '1.0'
48
49 ADMINS = (
50 )
51
52 DEFAULT_PAGE_SIZE = 100
53
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docker/released/product.py b/docker/released/product.py
--- a/docker/released/product.py
+++ b/docker/released/product.py
@@ -44,7 +44,6 @@
# added for nitrate3.4 compatibility
DEFAULT_GROUPS = ['default']
-TESTOPIA_XML_VERSION = '1.0'
ADMINS = (
)
diff --git a/src/tcms/settings/product.py b/src/tcms/settings/product.py
--- a/src/tcms/settings/product.py
+++ b/src/tcms/settings/product.py
@@ -60,7 +60,6 @@
# added for nitrate3.4 compatibility
DEFAULT_GROUPS = ['default']
-TESTOPIA_XML_VERSION = '1.0'
# admin settings
ADMINS = (
|
{"golden_diff": "diff --git a/docker/released/product.py b/docker/released/product.py\n--- a/docker/released/product.py\n+++ b/docker/released/product.py\n@@ -44,7 +44,6 @@\n \n # added for nitrate3.4 compatibility\n DEFAULT_GROUPS = ['default']\n-TESTOPIA_XML_VERSION = '1.0'\n \n ADMINS = (\n )\ndiff --git a/src/tcms/settings/product.py b/src/tcms/settings/product.py\n--- a/src/tcms/settings/product.py\n+++ b/src/tcms/settings/product.py\n@@ -60,7 +60,6 @@\n \n # added for nitrate3.4 compatibility\n DEFAULT_GROUPS = ['default']\n-TESTOPIA_XML_VERSION = '1.0'\n \n # admin settings\n ADMINS = (\n", "issue": "import xml says Worng xml_version\nimport xml in not working says worng xml_version 1.1\r\n\r\ni export the test case and generate xml and try to import same not work\r\n\r\nthanks in advance\n", "before_files": [{"content": "# Django settings for product env.\n\nfrom tcms.settings.common import * # noqa\n\n# Debug settings\nDEBUG = False\nTEMPLATE_DEBUG = DEBUG\n\n# Database settings\nDATABASES = {\n 'default': {\n 'ENGINE': SUPPORTED_DB_ENGINES[DB_ENGINE],\n 'NAME': env.get('NITRATE_DB_NAME', 'nitrate'),\n 'USER': env.get('NITRATE_DB_USER', 'nitrate'),\n 'PASSWORD': env.get('NITRATE_DB_PASSWORD', 'nitrate'),\n 'HOST': env.get('NITRATE_DB_HOST', ''),\n 'PORT': env.get('NITRATE_DB_PORT', ''),\n },\n}\n\n# For Kerberos authentication, uncomment out RemoteUserMiddleware.\n# MIDDLEWARE += (\n# 'django.contrib.auth.middleware.RemoteUserMiddleware',\n# )\n\n# Remote kerberos authentication backends\n# AUTHENTICATION_BACKENDS = (\n# 'tcms.auth.backends.ModAuthKerbBackend',\n# )\n\n# To enable database routers for read/write separation.\n# DATABASE_ROUTERS = ['tcms.core.utils.tcms_router.RWRouter']\n\n# Kerberos realm\n# KRB5_REALM = 'EXAMPLE.COM'\n\n# User authentication by Bugzilla settings\n# BUGZILLA_XMLRPC_URL = 'https://bugzilla.example.com/xmlrpc.cgi'\n\n\nTEMPLATES[0].update({\n 'DIRS': ['/usr/share/nitrate/templates'],\n})\n\n# Set the default send mail address\nEMAIL_HOST = 'smtp.example.com'\nEMAIL_FROM = '[email protected]'\n\n# Site-specific messages\n\n# First run - to determine if it needs to prompt user or not.\nFIRST_RUN = False\n\n# You can add a help link on the footer of home page as following format:\n# ('http://foo.com', 'foo')\nFOOTER_LINKS = (\n ('https://nitrate.readthedocs.io/en/latest/api/xmlrpc.html', 'XML-RPC Service'),\n ('https://nitrate.readthedocs.io/en/latest/guide.html', 'User Guide'),\n)\n\n# added for nitrate3.4 compatibility\nDEFAULT_GROUPS = ['default']\nTESTOPIA_XML_VERSION = '1.0'\n\n# admin settings\nADMINS = (\n # ('Your Name', '[email protected]'),\n)\n\nDEFAULT_PAGE_SIZE = 100\n", "path": "src/tcms/settings/product.py"}, {"content": "# Django settings for product env.\n\nfrom tcms.settings.common import * # noqa\n\n# Debug settings\nDEBUG = False\nTEMPLATE_DEBUG = DEBUG\n\n# Database settings\nDATABASES = {\n 'default': {\n 'ENGINE': SUPPORTED_DB_ENGINES[DB_ENGINE],\n 'NAME': env.get('NITRATE_DB_NAME', 'nitrate'),\n 'USER': env.get('NITRATE_DB_USER', 'nitrate'),\n 'PASSWORD': env.get('NITRATE_DB_PASSWORD', ''),\n 'HOST': env.get('NITRATE_DB_HOST', ''),\n 'PORT': env.get('NITRATE_DB_PORT', ''),\n },\n}\n\nAUTHENTICATION_BACKENDS = (\n 'django.contrib.auth.backends.ModelBackend',\n)\n\nTEMPLATES[0].update({\n 'DIRS': ['/usr/share/nitrate/templates'],\n})\n\n# Set the default send mail address\nEMAIL_HOST = 'smtp.example.com'\nEMAIL_FROM = '[email protected]'\n\n# Site-specific messages\n\n# First run - to determine if it needs to prompt user or 
not.\nFIRST_RUN = False\n\n# You can add a help link on the footer of home page as following format:\n# ('http://foo.com', 'foo')\nFOOTER_LINKS = (\n ('https://nitrate.readthedocs.io/en/latest/api/xmlrpc.html', 'XML-RPC Service'),\n ('https://nitrate.readthedocs.io/en/latest/guide.html', 'User Guide'),\n)\n\n# added for nitrate3.4 compatibility\nDEFAULT_GROUPS = ['default']\nTESTOPIA_XML_VERSION = '1.0'\n\nADMINS = (\n)\n\nDEFAULT_PAGE_SIZE = 100\n", "path": "docker/released/product.py"}], "after_files": [{"content": "# Django settings for product env.\n\nfrom tcms.settings.common import * # noqa\n\n# Debug settings\nDEBUG = False\nTEMPLATE_DEBUG = DEBUG\n\n# Database settings\nDATABASES = {\n 'default': {\n 'ENGINE': SUPPORTED_DB_ENGINES[DB_ENGINE],\n 'NAME': env.get('NITRATE_DB_NAME', 'nitrate'),\n 'USER': env.get('NITRATE_DB_USER', 'nitrate'),\n 'PASSWORD': env.get('NITRATE_DB_PASSWORD', 'nitrate'),\n 'HOST': env.get('NITRATE_DB_HOST', ''),\n 'PORT': env.get('NITRATE_DB_PORT', ''),\n },\n}\n\n# For Kerberos authentication, uncomment out RemoteUserMiddleware.\n# MIDDLEWARE += (\n# 'django.contrib.auth.middleware.RemoteUserMiddleware',\n# )\n\n# Remote kerberos authentication backends\n# AUTHENTICATION_BACKENDS = (\n# 'tcms.auth.backends.ModAuthKerbBackend',\n# )\n\n# To enable database routers for read/write separation.\n# DATABASE_ROUTERS = ['tcms.core.utils.tcms_router.RWRouter']\n\n# Kerberos realm\n# KRB5_REALM = 'EXAMPLE.COM'\n\n# User authentication by Bugzilla settings\n# BUGZILLA_XMLRPC_URL = 'https://bugzilla.example.com/xmlrpc.cgi'\n\n\nTEMPLATES[0].update({\n 'DIRS': ['/usr/share/nitrate/templates'],\n})\n\n# Set the default send mail address\nEMAIL_HOST = 'smtp.example.com'\nEMAIL_FROM = '[email protected]'\n\n# Site-specific messages\n\n# First run - to determine if it needs to prompt user or not.\nFIRST_RUN = False\n\n# You can add a help link on the footer of home page as following format:\n# ('http://foo.com', 'foo')\nFOOTER_LINKS = (\n ('https://nitrate.readthedocs.io/en/latest/api/xmlrpc.html', 'XML-RPC Service'),\n ('https://nitrate.readthedocs.io/en/latest/guide.html', 'User Guide'),\n)\n\n# added for nitrate3.4 compatibility\nDEFAULT_GROUPS = ['default']\n\n# admin settings\nADMINS = (\n # ('Your Name', '[email protected]'),\n)\n\nDEFAULT_PAGE_SIZE = 100\n", "path": "src/tcms/settings/product.py"}, {"content": "# Django settings for product env.\n\nfrom tcms.settings.common import * # noqa\n\n# Debug settings\nDEBUG = False\nTEMPLATE_DEBUG = DEBUG\n\n# Database settings\nDATABASES = {\n 'default': {\n 'ENGINE': SUPPORTED_DB_ENGINES[DB_ENGINE],\n 'NAME': env.get('NITRATE_DB_NAME', 'nitrate'),\n 'USER': env.get('NITRATE_DB_USER', 'nitrate'),\n 'PASSWORD': env.get('NITRATE_DB_PASSWORD', ''),\n 'HOST': env.get('NITRATE_DB_HOST', ''),\n 'PORT': env.get('NITRATE_DB_PORT', ''),\n },\n}\n\nAUTHENTICATION_BACKENDS = (\n 'django.contrib.auth.backends.ModelBackend',\n)\n\nTEMPLATES[0].update({\n 'DIRS': ['/usr/share/nitrate/templates'],\n})\n\n# Set the default send mail address\nEMAIL_HOST = 'smtp.example.com'\nEMAIL_FROM = '[email protected]'\n\n# Site-specific messages\n\n# First run - to determine if it needs to prompt user or not.\nFIRST_RUN = False\n\n# You can add a help link on the footer of home page as following format:\n# ('http://foo.com', 'foo')\nFOOTER_LINKS = (\n ('https://nitrate.readthedocs.io/en/latest/api/xmlrpc.html', 'XML-RPC Service'),\n ('https://nitrate.readthedocs.io/en/latest/guide.html', 'User Guide'),\n)\n\n# added for nitrate3.4 
compatibility\nDEFAULT_GROUPS = ['default']\n\nADMINS = (\n)\n\nDEFAULT_PAGE_SIZE = 100\n", "path": "docker/released/product.py"}]}
num_tokens: 1,401 | num_tokens_diff: 167
problem_id: gh_patches_debug_25743 | source: rasdani/github-patches | task_type: git_diff | in_source_id: getsentry__sentry-python-3099
prompt:
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`sentry_sdk.init` breaks `import exceptiongroup` in virtualenv activated with `activate_this.py`
### How do you use Sentry?
Sentry Saas (sentry.io)
### Version
2.2.1
### Steps to Reproduce
```console
$ docker run --rm -it ubuntu:22.04
root@e264f830878b:/# apt update
root@e264f830878b:/# apt install -y python3-apport virtualenv
root@e264f830878b:/# virtualenv venv
root@e264f830878b:/# venv/bin/pip install exceptiongroup sentry-sdk
…
Successfully installed certifi-2024.2.2 exceptiongroup-1.2.1 sentry-sdk-2.2.1 urllib3-2.2.1
root@e264f830878b:/# cat > test.py <<EOF
exec(open("venv/bin/activate_this.py").read(), {"__file__": "venv/bin/activate_this.py"})
import sentry_sdk
sentry_sdk.init(dsn="https://[email protected]/1234")
import exceptiongroup
EOF
root@e264f830878b:/# python3 test.py
```
### Expected Result
No error.
### Actual Result
```pytb
Traceback (most recent call last):
File "//test.py", line 4, in <module>
import exceptiongroup
File "/venv/lib/python3.10/site-packages/exceptiongroup/__init__.py", line 20, in <module>
from ._formatting import (
File "/venv/lib/python3.10/site-packages/exceptiongroup/_formatting.py", line 394, in <module>
assert sys.excepthook is apport_python_hook.apport_excepthook
AssertionError
Sentry is attempting to send 2 pending events
Waiting up to 2 seconds
Press Ctrl-C to quit
```
The [relevant code within `exceptiongroup`](https://github.com/agronholm/exceptiongroup/blob/1.2.1/src/exceptiongroup/_formatting.py#L374-L400) is
```python
if getattr(sys.excepthook, "__name__", None) in (
"apport_excepthook",
# on ubuntu 22.10 the hook was renamed to partial_apport_excepthook
"partial_apport_excepthook",
):
…
import apport_python_hook
assert sys.excepthook is apport_python_hook.apport_excepthook
```
which fails because Sentry has patched `sys.excepthook` but retained the same `__name__`, due to `functools.wraps` within the `ensure_integration_enabled` decorator, used by `_make_excepthook` as of
- #2906
(cc @sentrivana)
This is arguably poor code within `exceptiongroup` (I opened agronholm/exceptiongroup#123), but Sentry should avoid breaking it since it’s a popular library; for example, it’s a dependency of IPython.
--- END ISSUE ---
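The name collision at the heart of this report is easy to reproduce outside Sentry. The sketch below is a standalone illustration (none of it is Sentry or apport code; `apport_excepthook` here is a stand-in function): `functools.wraps` copies the wrapped hook's `__name__` onto the wrapper, so a name-based check like exceptiongroup's matches even though the identity assert that follows it cannot.

```python
# Standalone sketch of the __name__ collision; "apport_excepthook" below is a
# stand-in for apport's real hook, not the actual apport_python_hook module.
import functools
import sys


def apport_excepthook(exc_type, exc, tb):
    # Pretend this is apport's hook; just defer to the default one.
    sys.__excepthook__(exc_type, exc, tb)


def make_wrapped_hook(old_hook):
    @functools.wraps(old_hook)  # copies old_hook.__name__ (and more) onto the wrapper
    def wrapper(exc_type, exc, tb):
        return old_hook(exc_type, exc, tb)
    return wrapper


sys.excepthook = make_wrapped_hook(apport_excepthook)

print(sys.excepthook.__name__)              # "apport_excepthook" -- name check matches
print(sys.excepthook is apport_excepthook)  # False -- identity assert would fail
```

Dropping the wraps-based decorator, as the golden diff below does, keeps the wrapper's own name, so exceptiongroup's apport detection no longer fires.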
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sentry_sdk/integrations/excepthook.py`
Content:
```
1 import sys
2
3 import sentry_sdk
4 from sentry_sdk.utils import (
5 capture_internal_exceptions,
6 ensure_integration_enabled,
7 event_from_exception,
8 )
9 from sentry_sdk.integrations import Integration
10
11 from sentry_sdk._types import TYPE_CHECKING
12
13 if TYPE_CHECKING:
14 from typing import Callable
15 from typing import Any
16 from typing import Type
17 from typing import Optional
18
19 from types import TracebackType
20
21 Excepthook = Callable[
22 [Type[BaseException], BaseException, Optional[TracebackType]],
23 Any,
24 ]
25
26
27 class ExcepthookIntegration(Integration):
28 identifier = "excepthook"
29
30 always_run = False
31
32 def __init__(self, always_run=False):
33 # type: (bool) -> None
34
35 if not isinstance(always_run, bool):
36 raise ValueError(
37 "Invalid value for always_run: %s (must be type boolean)"
38 % (always_run,)
39 )
40 self.always_run = always_run
41
42 @staticmethod
43 def setup_once():
44 # type: () -> None
45 sys.excepthook = _make_excepthook(sys.excepthook)
46
47
48 def _make_excepthook(old_excepthook):
49 # type: (Excepthook) -> Excepthook
50 @ensure_integration_enabled(ExcepthookIntegration, old_excepthook)
51 def sentry_sdk_excepthook(type_, value, traceback):
52 # type: (Type[BaseException], BaseException, Optional[TracebackType]) -> None
53 integration = sentry_sdk.get_client().get_integration(ExcepthookIntegration)
54
55 if _should_send(integration.always_run):
56 with capture_internal_exceptions():
57 event, hint = event_from_exception(
58 (type_, value, traceback),
59 client_options=sentry_sdk.get_client().options,
60 mechanism={"type": "excepthook", "handled": False},
61 )
62 sentry_sdk.capture_event(event, hint=hint)
63
64 return old_excepthook(type_, value, traceback)
65
66 return sentry_sdk_excepthook
67
68
69 def _should_send(always_run=False):
70 # type: (bool) -> bool
71 if always_run:
72 return True
73
74 if hasattr(sys, "ps1"):
75 # Disable the excepthook for interactive Python shells, otherwise
76 # every typo gets sent to Sentry.
77 return False
78
79 return True
80
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
golden_diff:
diff --git a/sentry_sdk/integrations/excepthook.py b/sentry_sdk/integrations/excepthook.py
--- a/sentry_sdk/integrations/excepthook.py
+++ b/sentry_sdk/integrations/excepthook.py
@@ -3,7 +3,6 @@
import sentry_sdk
from sentry_sdk.utils import (
capture_internal_exceptions,
- ensure_integration_enabled,
event_from_exception,
)
from sentry_sdk.integrations import Integration
@@ -47,11 +46,16 @@
def _make_excepthook(old_excepthook):
# type: (Excepthook) -> Excepthook
- @ensure_integration_enabled(ExcepthookIntegration, old_excepthook)
def sentry_sdk_excepthook(type_, value, traceback):
# type: (Type[BaseException], BaseException, Optional[TracebackType]) -> None
integration = sentry_sdk.get_client().get_integration(ExcepthookIntegration)
+ # Note: If we replace this with ensure_integration_enabled then
+ # we break the exceptiongroup backport;
+ # See: https://github.com/getsentry/sentry-python/issues/3097
+ if integration is None:
+ return old_excepthook(type_, value, traceback)
+
if _should_send(integration.always_run):
with capture_internal_exceptions():
event, hint = event_from_exception(
verification_info:
{"golden_diff": "diff --git a/sentry_sdk/integrations/excepthook.py b/sentry_sdk/integrations/excepthook.py\n--- a/sentry_sdk/integrations/excepthook.py\n+++ b/sentry_sdk/integrations/excepthook.py\n@@ -3,7 +3,6 @@\n import sentry_sdk\n from sentry_sdk.utils import (\n capture_internal_exceptions,\n- ensure_integration_enabled,\n event_from_exception,\n )\n from sentry_sdk.integrations import Integration\n@@ -47,11 +46,16 @@\n \n def _make_excepthook(old_excepthook):\n # type: (Excepthook) -> Excepthook\n- @ensure_integration_enabled(ExcepthookIntegration, old_excepthook)\n def sentry_sdk_excepthook(type_, value, traceback):\n # type: (Type[BaseException], BaseException, Optional[TracebackType]) -> None\n integration = sentry_sdk.get_client().get_integration(ExcepthookIntegration)\n \n+ # Note: If we replace this with ensure_integration_enabled then\n+ # we break the exceptiongroup backport;\n+ # See: https://github.com/getsentry/sentry-python/issues/3097\n+ if integration is None:\n+ return old_excepthook(type_, value, traceback)\n+\n if _should_send(integration.always_run):\n with capture_internal_exceptions():\n event, hint = event_from_exception(\n", "issue": "`sentry_sdk.init` breaks `import exceptiongroup` in virtualenv activated with `activate_this.py`\n### How do you use Sentry?\r\n\r\nSentry Saas (sentry.io)\r\n\r\n### Version\r\n\r\n2.2.1\r\n\r\n### Steps to Reproduce\r\n\r\n```console\r\n$ docker run --rm -it ubuntu:22.04\r\nroot@e264f830878b:/# apt update\r\nroot@e264f830878b:/# apt install -y python3-apport virtualenv\r\nroot@e264f830878b:/# virtualenv venv\r\nroot@e264f830878b:/# venv/bin/pip install exceptiongroup sentry-sdk\r\n\u2026\r\nSuccessfully installed certifi-2024.2.2 exceptiongroup-1.2.1 sentry-sdk-2.2.1 urllib3-2.2.1\r\nroot@e264f830878b:/# cat > test.py <<EOF\r\nexec(open(\"venv/bin/activate_this.py\").read(), {\"__file__\": \"venv/bin/activate_this.py\"})\r\nimport sentry_sdk\r\nsentry_sdk.init(dsn=\"https://[email protected]/1234\")\r\nimport exceptiongroup\r\nEOF\r\nroot@e264f830878b:/# python3 test.py\r\n```\r\n\r\n### Expected Result\r\n\r\nNo error.\r\n\r\n### Actual Result\r\n\r\n```pytb\r\nTraceback (most recent call last):\r\n File \"//test.py\", line 4, in <module>\r\n import exceptiongroup\r\n File \"/venv/lib/python3.10/site-packages/exceptiongroup/__init__.py\", line 20, in <module>\r\n from ._formatting import (\r\n File \"/venv/lib/python3.10/site-packages/exceptiongroup/_formatting.py\", line 394, in <module>\r\n assert sys.excepthook is apport_python_hook.apport_excepthook\r\nAssertionError\r\nSentry is attempting to send 2 pending events\r\nWaiting up to 2 seconds\r\nPress Ctrl-C to quit\r\n```\r\n\r\nThe [relevant code within `exceptiongroup`](https://github.com/agronholm/exceptiongroup/blob/1.2.1/src/exceptiongroup/_formatting.py#L374-L400) is\r\n\r\n```python\r\nif getattr(sys.excepthook, \"__name__\", None) in (\r\n \"apport_excepthook\",\r\n # on ubuntu 22.10 the hook was renamed to partial_apport_excepthook\r\n \"partial_apport_excepthook\",\r\n):\r\n \u2026\r\n import apport_python_hook\r\n\r\n assert sys.excepthook is apport_python_hook.apport_excepthook\r\n```\r\n\r\nwhich fails because Sentry has patched `sys.excepthook` but retained the same `__name__`, due to `functools.wraps` within the `ensure_integration_enabled` decorator, used by `_make_excepthook` as of\r\n\r\n- #2906\r\n\r\n(cc @sentrivana)\r\n\r\nThis is arguably poor code within `exceptiongroup` (I opened agronholm/exceptiongroup#123), but Sentry should avoid breaking it 
since it\u2019s a popular library; for example, it\u2019s a dependency of IPython.\n", "before_files": [{"content": "import sys\n\nimport sentry_sdk\nfrom sentry_sdk.utils import (\n capture_internal_exceptions,\n ensure_integration_enabled,\n event_from_exception,\n)\nfrom sentry_sdk.integrations import Integration\n\nfrom sentry_sdk._types import TYPE_CHECKING\n\nif TYPE_CHECKING:\n from typing import Callable\n from typing import Any\n from typing import Type\n from typing import Optional\n\n from types import TracebackType\n\n Excepthook = Callable[\n [Type[BaseException], BaseException, Optional[TracebackType]],\n Any,\n ]\n\n\nclass ExcepthookIntegration(Integration):\n identifier = \"excepthook\"\n\n always_run = False\n\n def __init__(self, always_run=False):\n # type: (bool) -> None\n\n if not isinstance(always_run, bool):\n raise ValueError(\n \"Invalid value for always_run: %s (must be type boolean)\"\n % (always_run,)\n )\n self.always_run = always_run\n\n @staticmethod\n def setup_once():\n # type: () -> None\n sys.excepthook = _make_excepthook(sys.excepthook)\n\n\ndef _make_excepthook(old_excepthook):\n # type: (Excepthook) -> Excepthook\n @ensure_integration_enabled(ExcepthookIntegration, old_excepthook)\n def sentry_sdk_excepthook(type_, value, traceback):\n # type: (Type[BaseException], BaseException, Optional[TracebackType]) -> None\n integration = sentry_sdk.get_client().get_integration(ExcepthookIntegration)\n\n if _should_send(integration.always_run):\n with capture_internal_exceptions():\n event, hint = event_from_exception(\n (type_, value, traceback),\n client_options=sentry_sdk.get_client().options,\n mechanism={\"type\": \"excepthook\", \"handled\": False},\n )\n sentry_sdk.capture_event(event, hint=hint)\n\n return old_excepthook(type_, value, traceback)\n\n return sentry_sdk_excepthook\n\n\ndef _should_send(always_run=False):\n # type: (bool) -> bool\n if always_run:\n return True\n\n if hasattr(sys, \"ps1\"):\n # Disable the excepthook for interactive Python shells, otherwise\n # every typo gets sent to Sentry.\n return False\n\n return True\n", "path": "sentry_sdk/integrations/excepthook.py"}], "after_files": [{"content": "import sys\n\nimport sentry_sdk\nfrom sentry_sdk.utils import (\n capture_internal_exceptions,\n event_from_exception,\n)\nfrom sentry_sdk.integrations import Integration\n\nfrom sentry_sdk._types import TYPE_CHECKING\n\nif TYPE_CHECKING:\n from typing import Callable\n from typing import Any\n from typing import Type\n from typing import Optional\n\n from types import TracebackType\n\n Excepthook = Callable[\n [Type[BaseException], BaseException, Optional[TracebackType]],\n Any,\n ]\n\n\nclass ExcepthookIntegration(Integration):\n identifier = \"excepthook\"\n\n always_run = False\n\n def __init__(self, always_run=False):\n # type: (bool) -> None\n\n if not isinstance(always_run, bool):\n raise ValueError(\n \"Invalid value for always_run: %s (must be type boolean)\"\n % (always_run,)\n )\n self.always_run = always_run\n\n @staticmethod\n def setup_once():\n # type: () -> None\n sys.excepthook = _make_excepthook(sys.excepthook)\n\n\ndef _make_excepthook(old_excepthook):\n # type: (Excepthook) -> Excepthook\n def sentry_sdk_excepthook(type_, value, traceback):\n # type: (Type[BaseException], BaseException, Optional[TracebackType]) -> None\n integration = sentry_sdk.get_client().get_integration(ExcepthookIntegration)\n\n # Note: If we replace this with ensure_integration_enabled then\n # we break the exceptiongroup backport;\n # See: 
https://github.com/getsentry/sentry-python/issues/3097\n if integration is None:\n return old_excepthook(type_, value, traceback)\n\n if _should_send(integration.always_run):\n with capture_internal_exceptions():\n event, hint = event_from_exception(\n (type_, value, traceback),\n client_options=sentry_sdk.get_client().options,\n mechanism={\"type\": \"excepthook\", \"handled\": False},\n )\n sentry_sdk.capture_event(event, hint=hint)\n\n return old_excepthook(type_, value, traceback)\n\n return sentry_sdk_excepthook\n\n\ndef _should_send(always_run=False):\n # type: (bool) -> bool\n if always_run:\n return True\n\n if hasattr(sys, \"ps1\"):\n # Disable the excepthook for interactive Python shells, otherwise\n # every typo gets sent to Sentry.\n return False\n\n return True\n", "path": "sentry_sdk/integrations/excepthook.py"}]}
num_tokens: 1,675 | num_tokens_diff: 321
problem_id: gh_patches_debug_30470 | source: rasdani/github-patches | task_type: git_diff | in_source_id: vega__altair-2643
prompt:
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
x-axis tick labels in Natural Disasters case study need clean up
See:

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `altair/examples/natural_disasters.py`
Content:
```
1 """
2 Natural Disasters
3 -----------------
4 This example shows a visualization of global deaths from natural disasters.
5 """
6 # category: case studies
7 import altair as alt
8 from vega_datasets import data
9
10 source = data.disasters.url
11
12 alt.Chart(source).mark_circle(
13 opacity=0.8,
14 stroke='black',
15 strokeWidth=1
16 ).encode(
17 alt.X('Year:O', axis=alt.Axis(labelAngle=0)),
18 alt.Y('Entity:N'),
19 alt.Size('Deaths:Q',
20 scale=alt.Scale(range=[0, 4000]),
21 legend=alt.Legend(title='Annual Global Deaths')
22 ),
23 alt.Color('Entity:N', legend=None)
24 ).properties(
25 width=450,
26 height=320
27 ).transform_filter(
28 alt.datum.Entity != 'All natural disasters'
29 )
30
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
golden_diff:
diff --git a/altair/examples/natural_disasters.py b/altair/examples/natural_disasters.py
--- a/altair/examples/natural_disasters.py
+++ b/altair/examples/natural_disasters.py
@@ -1,7 +1,7 @@
"""
-Natural Disasters
------------------
-This example shows a visualization of global deaths from natural disasters.
+Global Deaths from Natural Disasters
+------------------------------------
+This example shows a proportional symbols visualization of deaths from natural disasters by year and type.
"""
# category: case studies
import altair as alt
@@ -9,21 +9,44 @@
source = data.disasters.url
-alt.Chart(source).mark_circle(
+alt.Chart(source).transform_filter(
+ alt.datum.Entity != 'All natural disasters'
+).mark_circle(
opacity=0.8,
stroke='black',
- strokeWidth=1
+ strokeWidth=1,
+ strokeOpacity=0.4
).encode(
- alt.X('Year:O', axis=alt.Axis(labelAngle=0)),
- alt.Y('Entity:N'),
- alt.Size('Deaths:Q',
- scale=alt.Scale(range=[0, 4000]),
- legend=alt.Legend(title='Annual Global Deaths')
+ x=alt.X('Year:T', title=None, scale=alt.Scale(domain=['1899','2018'])),
+ y=alt.Y(
+ 'Entity:N',
+ sort=alt.EncodingSortField(field="Deaths", op="sum", order='descending'),
+ title=None
+ ),
+ size=alt.Size('Deaths:Q',
+ scale=alt.Scale(range=[0, 2500]),
+ legend=alt.Legend(title='Deaths', clipHeight=30, format='s')
),
- alt.Color('Entity:N', legend=None)
+ color=alt.Color('Entity:N', legend=None),
+ tooltip=[
+ "Entity:N",
+ alt.Tooltip("Year:T", format='%Y'),
+ alt.Tooltip("Deaths:Q", format='~s')
+ ],
).properties(
width=450,
- height=320
-).transform_filter(
- alt.datum.Entity != 'All natural disasters'
+ height=320,
+ title=alt.TitleParams(
+ text="Global Deaths from Natural Disasters (1900-2017)",
+ subtitle="The size of the bubble represents the total death count per year, by type of disaster",
+ anchor='start'
+ )
+).configure_axisY(
+ domain=False,
+ ticks=False,
+ offset=10
+).configure_axisX(
+ grid=False,
+).configure_view(
+ stroke=None
)
verification_info:
{"golden_diff": "diff --git a/altair/examples/natural_disasters.py b/altair/examples/natural_disasters.py\n--- a/altair/examples/natural_disasters.py\n+++ b/altair/examples/natural_disasters.py\n@@ -1,7 +1,7 @@\n \"\"\"\n-Natural Disasters\n------------------\n-This example shows a visualization of global deaths from natural disasters.\n+Global Deaths from Natural Disasters\n+------------------------------------\n+This example shows a proportional symbols visualization of deaths from natural disasters by year and type.\n \"\"\"\n # category: case studies\n import altair as alt\n@@ -9,21 +9,44 @@\n \n source = data.disasters.url\n \n-alt.Chart(source).mark_circle(\n+alt.Chart(source).transform_filter(\n+ alt.datum.Entity != 'All natural disasters'\n+).mark_circle(\n opacity=0.8,\n stroke='black',\n- strokeWidth=1\n+ strokeWidth=1,\n+ strokeOpacity=0.4\n ).encode(\n- alt.X('Year:O', axis=alt.Axis(labelAngle=0)),\n- alt.Y('Entity:N'),\n- alt.Size('Deaths:Q',\n- scale=alt.Scale(range=[0, 4000]),\n- legend=alt.Legend(title='Annual Global Deaths')\n+ x=alt.X('Year:T', title=None, scale=alt.Scale(domain=['1899','2018'])),\n+ y=alt.Y(\n+ 'Entity:N',\n+ sort=alt.EncodingSortField(field=\"Deaths\", op=\"sum\", order='descending'),\n+ title=None\n+ ),\n+ size=alt.Size('Deaths:Q',\n+ scale=alt.Scale(range=[0, 2500]),\n+ legend=alt.Legend(title='Deaths', clipHeight=30, format='s')\n ),\n- alt.Color('Entity:N', legend=None)\n+ color=alt.Color('Entity:N', legend=None),\n+ tooltip=[\n+ \"Entity:N\", \n+ alt.Tooltip(\"Year:T\", format='%Y'), \n+ alt.Tooltip(\"Deaths:Q\", format='~s')\n+ ],\n ).properties(\n width=450,\n- height=320\n-).transform_filter(\n- alt.datum.Entity != 'All natural disasters'\n+ height=320,\n+ title=alt.TitleParams(\n+ text=\"Global Deaths from Natural Disasters (1900-2017)\",\n+ subtitle=\"The size of the bubble represents the total death count per year, by type of disaster\",\n+ anchor='start'\n+ )\n+).configure_axisY(\n+ domain=False,\n+ ticks=False,\n+ offset=10\n+).configure_axisX(\n+ grid=False,\n+).configure_view(\n+ stroke=None\n )\n", "issue": "x-axis tick labels in Natural Disasters case study need clean up\nSee:\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nNatural Disasters\n-----------------\nThis example shows a visualization of global deaths from natural disasters.\n\"\"\"\n# category: case studies\nimport altair as alt\nfrom vega_datasets import data\n\nsource = data.disasters.url\n\nalt.Chart(source).mark_circle(\n opacity=0.8,\n stroke='black',\n strokeWidth=1\n).encode(\n alt.X('Year:O', axis=alt.Axis(labelAngle=0)),\n alt.Y('Entity:N'),\n alt.Size('Deaths:Q',\n scale=alt.Scale(range=[0, 4000]),\n legend=alt.Legend(title='Annual Global Deaths')\n ),\n alt.Color('Entity:N', legend=None)\n).properties(\n width=450,\n height=320\n).transform_filter(\n alt.datum.Entity != 'All natural disasters'\n)\n", "path": "altair/examples/natural_disasters.py"}], "after_files": [{"content": "\"\"\"\nGlobal Deaths from Natural Disasters\n------------------------------------\nThis example shows a proportional symbols visualization of deaths from natural disasters by year and type.\n\"\"\"\n# category: case studies\nimport altair as alt\nfrom vega_datasets import data\n\nsource = data.disasters.url\n\nalt.Chart(source).transform_filter(\n alt.datum.Entity != 'All natural disasters'\n).mark_circle(\n opacity=0.8,\n stroke='black',\n strokeWidth=1,\n strokeOpacity=0.4\n).encode(\n x=alt.X('Year:T', title=None, scale=alt.Scale(domain=['1899','2018'])),\n y=alt.Y(\n 
'Entity:N',\n sort=alt.EncodingSortField(field=\"Deaths\", op=\"sum\", order='descending'),\n title=None\n ),\n size=alt.Size('Deaths:Q',\n scale=alt.Scale(range=[0, 2500]),\n legend=alt.Legend(title='Deaths', clipHeight=30, format='s')\n ),\n color=alt.Color('Entity:N', legend=None),\n tooltip=[\n \"Entity:N\", \n alt.Tooltip(\"Year:T\", format='%Y'), \n alt.Tooltip(\"Deaths:Q\", format='~s')\n ],\n).properties(\n width=450,\n height=320,\n title=alt.TitleParams(\n text=\"Global Deaths from Natural Disasters (1900-2017)\",\n subtitle=\"The size of the bubble represents the total death count per year, by type of disaster\",\n anchor='start'\n )\n).configure_axisY(\n domain=False,\n ticks=False,\n offset=10\n).configure_axisX(\n grid=False,\n).configure_view(\n stroke=None\n)\n", "path": "altair/examples/natural_disasters.py"}]}
num_tokens: 590 | num_tokens_diff: 609
problem_id: gh_patches_debug_33544 | source: rasdani/github-patches | task_type: git_diff | in_source_id: rasterio__rasterio-822
prompt:
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
rio warp like silently ignores res override
The obvious starting place is `rio warp --like` but this doesn't allow you to override the resolution. It silently ignores the `--res` option which could be considered a bug.
```
$ rio warp --like b.tif --res 5 a.tif c.tif
$ rio info --res c.tif
1.0 1.0
```
In this case, warp should either a) override the resolution of the like raster or b) raise an exception saying that it's not supported
--- END ISSUE ---
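For option (b), the usual Click pattern is to validate the option combination up front and raise `click.BadParameter`, which Click turns into a usage error instead of a silent no-op. The sketch below is a minimal, hypothetical command (not rasterio's actual CLI); the option names mirror the report, but nothing else is taken from the real `rio warp` implementation.

```python
# Minimal, hypothetical Click command illustrating option (b): reject the
# unsupported --like/--res combination loudly instead of ignoring --res.
import click


@click.command()
@click.option("--like", type=click.Path(exists=True), default=None,
              help="Template raster to copy transform/resolution/CRS from.")
@click.option("--res", type=float, default=None,
              help="Output resolution.")
def warp(like, res):
    if like is not None and res is not None:
        raise click.BadParameter(
            "--res cannot be combined with --like", param_hint="res")
    click.echo("would warp with like=%r res=%r" % (like, res))


if __name__ == "__main__":
    warp()
```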
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rasterio/rio/warp.py`
Content:
```
1 import logging
2 from math import ceil, floor, log
3 import warnings
4
5 import click
6 from cligj import files_inout_arg, format_opt
7
8 from .helpers import resolve_inout
9 from . import options
10 import rasterio
11 from rasterio.crs import CRS
12 from rasterio.errors import CRSError
13 from rasterio.transform import Affine
14 from rasterio.warp import (
15 reproject, Resampling, calculate_default_transform, transform_bounds)
16
17
18 # Improper usage of rio-warp can lead to accidental creation of
19 # extremely large datasets. We'll put a hard limit on the size of
20 # datasets and raise a usage error if the limits are exceeded.
21 MAX_OUTPUT_WIDTH = 100000
22 MAX_OUTPUT_HEIGHT = 100000
23
24
25 @click.command(short_help='Warp a raster dataset.')
26 @files_inout_arg
27 @options.output_opt
28 @format_opt
29 @click.option(
30 '--like',
31 type=click.Path(exists=True),
32 help='Raster dataset to use as a template for obtaining affine '
33 'transform (bounds and resolution), and crs.')
34 @click.option('--dst-crs', default=None,
35 help='Target coordinate reference system.')
36 @options.dimensions_opt
37 @click.option(
38 '--src-bounds',
39 nargs=4, type=float, default=None,
40 help="Determine output extent from source bounds: left bottom right top "
41 ". Cannot be used with destination --bounds")
42 @click.option(
43 '--bounds', '--dst-bounds', nargs=4, type=float, default=None,
44 help="Determine output extent from destination bounds: left bottom right top")
45 @options.resolution_opt
46 @click.option('--resampling', type=click.Choice([r.name for r in Resampling]),
47 default='nearest', help="Resampling method.",
48 show_default=True)
49 @click.option('--src-nodata', default=None, show_default=True,
50 type=float, help="Manually override source nodata")
51 @click.option('--dst-nodata', default=None, show_default=True,
52 type=float, help="Manually override destination nodata")
53 @click.option('--threads', type=int, default=1,
54 help='Number of processing threads.')
55 @click.option('--check-invert-proj', is_flag=True, default=True,
56 help='Constrain output to valid coordinate region in dst-crs')
57 @options.force_overwrite_opt
58 @options.creation_options
59 @click.pass_context
60 def warp(ctx, files, output, driver, like, dst_crs, dimensions, src_bounds,
61 dst_bounds, res, resampling, src_nodata, dst_nodata, threads,
62 check_invert_proj, force_overwrite, creation_options):
63 """
64 Warp a raster dataset.
65
66 If a template raster is provided using the --like option, the
67 coordinate reference system, affine transform, and dimensions of
68 that raster will be used for the output. In this case --dst-crs,
69 --bounds, --res, and --dimensions options are ignored.
70
71 \b
72 $ rio warp input.tif output.tif --like template.tif
73
74 The output coordinate reference system may be either a PROJ.4 or
75 EPSG:nnnn string,
76
77 \b
78 --dst-crs EPSG:4326
79 --dst-crs '+proj=longlat +ellps=WGS84 +datum=WGS84'
80
81 or a JSON text-encoded PROJ.4 object.
82
83 \b
84 --dst-crs '{"proj": "utm", "zone": 18, ...}'
85
86 If --dimensions are provided, --res and --bounds are ignored.
87 Resolution is calculated based on the relationship between the
88 raster bounds in the target coordinate system and the dimensions,
89 and may produce rectangular rather than square pixels.
90
91 \b
92 $ rio warp input.tif output.tif --dimensions 100 200 \\
93 > --dst-crs EPSG:4326
94
95 If --bounds are provided, --res is required if --dst-crs is provided
96 (defaults to source raster resolution otherwise).
97
98 \b
99 $ rio warp input.tif output.tif \\
100 > --bounds -78 22 -76 24 --res 0.1 --dst-crs EPSG:4326
101
102 """
103 verbosity = (ctx.obj and ctx.obj.get('verbosity')) or 1
104
105 output, files = resolve_inout(
106 files=files, output=output, force_overwrite=force_overwrite)
107
108 resampling = Resampling[resampling] # get integer code for method
109
110 if not len(res):
111 # Click sets this as an empty tuple if not provided
112 res = None
113 else:
114 # Expand one value to two if needed
115 res = (res[0], res[0]) if len(res) == 1 else res
116
117 with rasterio.Env(CPL_DEBUG=verbosity > 2,
118 CHECK_WITH_INVERT_PROJ=check_invert_proj):
119 with rasterio.open(files[0]) as src:
120 l, b, r, t = src.bounds
121 out_kwargs = src.profile.copy()
122 out_kwargs['driver'] = driver
123
124 # Sort out the bounds options.
125 if src_bounds and dst_bounds:
126 raise click.BadParameter(
127 "--src-bounds and destination --bounds may not be specified "
128 "simultaneously.")
129
130 if like:
131 with rasterio.open(like) as template_ds:
132 dst_crs = template_ds.crs
133 dst_transform = template_ds.affine
134 dst_height = template_ds.height
135 dst_width = template_ds.width
136
137 elif dst_crs is not None:
138 try:
139 dst_crs = CRS.from_string(dst_crs)
140 except ValueError as err:
141 raise click.BadParameter(
142 str(err), param='dst_crs', param_hint='dst_crs')
143
144 if dimensions:
145 # Calculate resolution appropriate for dimensions
146 # in target.
147 dst_width, dst_height = dimensions
148 try:
149 xmin, ymin, xmax, ymax = transform_bounds(
150 src.crs, dst_crs, *src.bounds)
151 except CRSError as err:
152 raise click.BadParameter(
153 str(err), param='dst_crs', param_hint='dst_crs')
154 dst_transform = Affine(
155 (xmax - xmin) / float(dst_width),
156 0, xmin, 0,
157 (ymin - ymax) / float(dst_height),
158 ymax
159 )
160
161 elif src_bounds or dst_bounds:
162 if not res:
163 raise click.BadParameter(
164 "Required when using --bounds.",
165 param='res', param_hint='res')
166
167 if src_bounds:
168 try:
169 xmin, ymin, xmax, ymax = transform_bounds(
170 src.crs, dst_crs, *src_bounds)
171 except CRSError as err:
172 raise click.BadParameter(
173 str(err), param='dst_crs',
174 param_hint='dst_crs')
175 else:
176 xmin, ymin, xmax, ymax = dst_bounds
177
178 dst_transform = Affine(res[0], 0, xmin, 0, -res[1], ymax)
179 dst_width = max(int(ceil((xmax - xmin) / res[0])), 1)
180 dst_height = max(int(ceil((ymax - ymin) / res[1])), 1)
181
182 else:
183 try:
184 dst_transform, dst_width, dst_height = calculate_default_transform(
185 src.crs, dst_crs, src.width, src.height,
186 *src.bounds, resolution=res)
187 except CRSError as err:
188 raise click.BadParameter(
189 str(err), param='dst_crs', param_hint='dst_crs')
190 elif dimensions:
191 # Same projection, different dimensions, calculate resolution.
192 dst_crs = src.crs
193 dst_width, dst_height = dimensions
194 dst_transform = Affine(
195 (r - l) / float(dst_width),
196 0, l, 0,
197 (b - t) / float(dst_height),
198 t
199 )
200
201 elif src_bounds or dst_bounds:
202 # Same projection, different dimensions and possibly
203 # different resolution.
204 if not res:
205 res = (src.affine.a, -src.affine.e)
206
207 dst_crs = src.crs
208 xmin, ymin, xmax, ymax = (src_bounds or dst_bounds)
209 dst_transform = Affine(res[0], 0, xmin, 0, -res[1], ymax)
210 dst_width = max(int(ceil((xmax - xmin) / res[0])), 1)
211 dst_height = max(int(ceil((ymax - ymin) / res[1])), 1)
212
213 elif res:
214 # Same projection, different resolution.
215 dst_crs = src.crs
216 dst_transform = Affine(res[0], 0, l, 0, -res[1], t)
217 dst_width = max(int(ceil((r - l) / res[0])), 1)
218 dst_height = max(int(ceil((t - b) / res[1])), 1)
219
220 else:
221 dst_crs = src.crs
222 dst_transform = src.affine
223 dst_width = src.width
224 dst_height = src.height
225
226 # If src_nodata is not None, update the dst metadata NODATA
227 # value to src_nodata (will be overridden by dst_nodata if it is not None
228 if src_nodata is not None:
229 # Update the dst nodata value
230 out_kwargs.update({
231 'nodata': src_nodata
232 })
233
234 # Validate a manually set destination NODATA value
235 # against the input datatype.
236 if dst_nodata is not None:
237 if src_nodata is None and src.meta['nodata'] is None:
238 raise click.BadParameter(
239 "--src-nodata must be provided because dst-nodata is not None")
240 else:
241 # Update the dst nodata value
242 out_kwargs.update({'nodata': dst_nodata})
243
244 # When the bounds option is misused, extreme values of
245 # destination width and height may result.
246 if (dst_width < 0 or dst_height < 0 or
247 dst_width > MAX_OUTPUT_WIDTH or
248 dst_height > MAX_OUTPUT_HEIGHT):
249 raise click.BadParameter(
250 "Invalid output dimensions: {0}.".format(
251 (dst_width, dst_height)))
252
253 out_kwargs.update({
254 'crs': dst_crs,
255 'transform': dst_transform,
256 'affine': dst_transform,
257 'width': dst_width,
258 'height': dst_height
259 })
260
261 # Adjust block size if necessary.
262 if ('blockxsize' in out_kwargs and
263 dst_width < out_kwargs['blockxsize']):
264 del out_kwargs['blockxsize']
265 if ('blockysize' in out_kwargs and
266 dst_height < out_kwargs['blockysize']):
267 del out_kwargs['blockysize']
268
269 out_kwargs.update(**creation_options)
270
271 with rasterio.open(output, 'w', **out_kwargs) as dst:
272 reproject(
273 source=rasterio.band(src, list(range(1, src.count + 1))),
274 destination=rasterio.band(
275 dst, list(range(1, src.count + 1))),
276 src_transform=src.affine,
277 src_crs=src.crs,
278 src_nodata=src_nodata,
279 dst_transform=out_kwargs['transform'],
280 dst_crs=out_kwargs['crs'],
281 dst_nodata=dst_nodata,
282 resampling=resampling,
283 num_threads=threads)
284
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
golden_diff:
diff --git a/rasterio/rio/warp.py b/rasterio/rio/warp.py
--- a/rasterio/rio/warp.py
+++ b/rasterio/rio/warp.py
@@ -66,7 +66,8 @@
If a template raster is provided using the --like option, the
coordinate reference system, affine transform, and dimensions of
that raster will be used for the output. In this case --dst-crs,
- --bounds, --res, and --dimensions options are ignored.
+ --bounds, --res, and --dimensions options are not applicable and
+ an exception will be raised.
\b
$ rio warp input.tif output.tif --like template.tif
@@ -83,7 +84,8 @@
\b
--dst-crs '{"proj": "utm", "zone": 18, ...}'
- If --dimensions are provided, --res and --bounds are ignored.
+ If --dimensions are provided, --res and --bounds are not applicable and an
+ exception will be raised.
Resolution is calculated based on the relationship between the
raster bounds in the target coordinate system and the dimensions,
and may produce rectangular rather than square pixels.
@@ -114,6 +116,20 @@
# Expand one value to two if needed
res = (res[0], res[0]) if len(res) == 1 else res
+ # Check invalid parameter combinations
+ if like:
+ invalid_combos = (dimensions, dst_bounds, dst_crs, res)
+ if any(p for p in invalid_combos if p is not None):
+ raise click.BadParameter(
+ "--like cannot be used with any of --dimensions, --bounds, "
+ "--dst-crs, or --res")
+
+ elif dimensions:
+ invalid_combos = (dst_bounds, res)
+ if any(p for p in invalid_combos if p is not None):
+ raise click.BadParameter(
+ "--dimensions cannot be used with --bounds or --res")
+
with rasterio.Env(CPL_DEBUG=verbosity > 2,
CHECK_WITH_INVERT_PROJ=check_invert_proj):
with rasterio.open(files[0]) as src:
verification_info:
{"golden_diff": "diff --git a/rasterio/rio/warp.py b/rasterio/rio/warp.py\n--- a/rasterio/rio/warp.py\n+++ b/rasterio/rio/warp.py\n@@ -66,7 +66,8 @@\n If a template raster is provided using the --like option, the\n coordinate reference system, affine transform, and dimensions of\n that raster will be used for the output. In this case --dst-crs,\n- --bounds, --res, and --dimensions options are ignored.\n+ --bounds, --res, and --dimensions options are not applicable and\n+ an exception will be raised.\n \n \\b\n $ rio warp input.tif output.tif --like template.tif\n@@ -83,7 +84,8 @@\n \\b\n --dst-crs '{\"proj\": \"utm\", \"zone\": 18, ...}'\n \n- If --dimensions are provided, --res and --bounds are ignored.\n+ If --dimensions are provided, --res and --bounds are not applicable and an\n+ exception will be raised.\n Resolution is calculated based on the relationship between the\n raster bounds in the target coordinate system and the dimensions,\n and may produce rectangular rather than square pixels.\n@@ -114,6 +116,20 @@\n # Expand one value to two if needed\n res = (res[0], res[0]) if len(res) == 1 else res\n \n+ # Check invalid parameter combinations\n+ if like:\n+ invalid_combos = (dimensions, dst_bounds, dst_crs, res)\n+ if any(p for p in invalid_combos if p is not None):\n+ raise click.BadParameter(\n+ \"--like cannot be used with any of --dimensions, --bounds, \"\n+ \"--dst-crs, or --res\")\n+\n+ elif dimensions:\n+ invalid_combos = (dst_bounds, res)\n+ if any(p for p in invalid_combos if p is not None):\n+ raise click.BadParameter(\n+ \"--dimensions cannot be used with --bounds or --res\")\n+\n with rasterio.Env(CPL_DEBUG=verbosity > 2,\n CHECK_WITH_INVERT_PROJ=check_invert_proj):\n with rasterio.open(files[0]) as src:\n", "issue": "rio warp like silently ignores res override\nThe obvious starting place is `rio warp --like` but this doesn't allow you to override the resolution. It silently ignores the `--res` option which could be considered a bug. \n\n```\n$ rio warp --like b.tif --res 5 a.tif c.tif\n$ rio info --res c.tif\n1.0 1.0\n```\n\nIn this case, warp should either a) override the resolution of the like raster or b) raise an exception saying that it's not supported\n\n", "before_files": [{"content": "import logging\nfrom math import ceil, floor, log\nimport warnings\n\nimport click\nfrom cligj import files_inout_arg, format_opt\n\nfrom .helpers import resolve_inout\nfrom . import options\nimport rasterio\nfrom rasterio.crs import CRS\nfrom rasterio.errors import CRSError\nfrom rasterio.transform import Affine\nfrom rasterio.warp import (\n reproject, Resampling, calculate_default_transform, transform_bounds)\n\n\n# Improper usage of rio-warp can lead to accidental creation of\n# extremely large datasets. We'll put a hard limit on the size of\n# datasets and raise a usage error if the limits are exceeded.\nMAX_OUTPUT_WIDTH = 100000\nMAX_OUTPUT_HEIGHT = 100000\n\n\[email protected](short_help='Warp a raster dataset.')\n@files_inout_arg\[email protected]_opt\n@format_opt\[email protected](\n '--like',\n type=click.Path(exists=True),\n help='Raster dataset to use as a template for obtaining affine '\n 'transform (bounds and resolution), and crs.')\[email protected]('--dst-crs', default=None,\n help='Target coordinate reference system.')\[email protected]_opt\[email protected](\n '--src-bounds',\n nargs=4, type=float, default=None,\n help=\"Determine output extent from source bounds: left bottom right top \"\n \". 
Cannot be used with destination --bounds\")\[email protected](\n '--bounds', '--dst-bounds', nargs=4, type=float, default=None,\n help=\"Determine output extent from destination bounds: left bottom right top\")\[email protected]_opt\[email protected]('--resampling', type=click.Choice([r.name for r in Resampling]),\n default='nearest', help=\"Resampling method.\",\n show_default=True)\[email protected]('--src-nodata', default=None, show_default=True,\n type=float, help=\"Manually override source nodata\")\[email protected]('--dst-nodata', default=None, show_default=True,\n type=float, help=\"Manually override destination nodata\")\[email protected]('--threads', type=int, default=1,\n help='Number of processing threads.')\[email protected]('--check-invert-proj', is_flag=True, default=True,\n help='Constrain output to valid coordinate region in dst-crs')\[email protected]_overwrite_opt\[email protected]_options\[email protected]_context\ndef warp(ctx, files, output, driver, like, dst_crs, dimensions, src_bounds,\n dst_bounds, res, resampling, src_nodata, dst_nodata, threads,\n check_invert_proj, force_overwrite, creation_options):\n \"\"\"\n Warp a raster dataset.\n\n If a template raster is provided using the --like option, the\n coordinate reference system, affine transform, and dimensions of\n that raster will be used for the output. In this case --dst-crs,\n --bounds, --res, and --dimensions options are ignored.\n\n \\b\n $ rio warp input.tif output.tif --like template.tif\n\n The output coordinate reference system may be either a PROJ.4 or\n EPSG:nnnn string,\n\n \\b\n --dst-crs EPSG:4326\n --dst-crs '+proj=longlat +ellps=WGS84 +datum=WGS84'\n\n or a JSON text-encoded PROJ.4 object.\n\n \\b\n --dst-crs '{\"proj\": \"utm\", \"zone\": 18, ...}'\n\n If --dimensions are provided, --res and --bounds are ignored.\n Resolution is calculated based on the relationship between the\n raster bounds in the target coordinate system and the dimensions,\n and may produce rectangular rather than square pixels.\n\n \\b\n $ rio warp input.tif output.tif --dimensions 100 200 \\\\\n > --dst-crs EPSG:4326\n\n If --bounds are provided, --res is required if --dst-crs is provided\n (defaults to source raster resolution otherwise).\n\n \\b\n $ rio warp input.tif output.tif \\\\\n > --bounds -78 22 -76 24 --res 0.1 --dst-crs EPSG:4326\n\n \"\"\"\n verbosity = (ctx.obj and ctx.obj.get('verbosity')) or 1\n\n output, files = resolve_inout(\n files=files, output=output, force_overwrite=force_overwrite)\n\n resampling = Resampling[resampling] # get integer code for method\n\n if not len(res):\n # Click sets this as an empty tuple if not provided\n res = None\n else:\n # Expand one value to two if needed\n res = (res[0], res[0]) if len(res) == 1 else res\n\n with rasterio.Env(CPL_DEBUG=verbosity > 2,\n CHECK_WITH_INVERT_PROJ=check_invert_proj):\n with rasterio.open(files[0]) as src:\n l, b, r, t = src.bounds\n out_kwargs = src.profile.copy()\n out_kwargs['driver'] = driver\n\n # Sort out the bounds options.\n if src_bounds and dst_bounds:\n raise click.BadParameter(\n \"--src-bounds and destination --bounds may not be specified \"\n \"simultaneously.\")\n\n if like:\n with rasterio.open(like) as template_ds:\n dst_crs = template_ds.crs\n dst_transform = template_ds.affine\n dst_height = template_ds.height\n dst_width = template_ds.width\n\n elif dst_crs is not None:\n try:\n dst_crs = CRS.from_string(dst_crs)\n except ValueError as err:\n raise click.BadParameter(\n str(err), param='dst_crs', param_hint='dst_crs')\n\n 
if dimensions:\n # Calculate resolution appropriate for dimensions\n # in target.\n dst_width, dst_height = dimensions\n try:\n xmin, ymin, xmax, ymax = transform_bounds(\n src.crs, dst_crs, *src.bounds)\n except CRSError as err:\n raise click.BadParameter(\n str(err), param='dst_crs', param_hint='dst_crs')\n dst_transform = Affine(\n (xmax - xmin) / float(dst_width),\n 0, xmin, 0,\n (ymin - ymax) / float(dst_height),\n ymax\n )\n\n elif src_bounds or dst_bounds:\n if not res:\n raise click.BadParameter(\n \"Required when using --bounds.\",\n param='res', param_hint='res')\n\n if src_bounds:\n try:\n xmin, ymin, xmax, ymax = transform_bounds(\n src.crs, dst_crs, *src_bounds)\n except CRSError as err:\n raise click.BadParameter(\n str(err), param='dst_crs',\n param_hint='dst_crs')\n else:\n xmin, ymin, xmax, ymax = dst_bounds\n\n dst_transform = Affine(res[0], 0, xmin, 0, -res[1], ymax)\n dst_width = max(int(ceil((xmax - xmin) / res[0])), 1)\n dst_height = max(int(ceil((ymax - ymin) / res[1])), 1)\n\n else:\n try:\n dst_transform, dst_width, dst_height = calculate_default_transform(\n src.crs, dst_crs, src.width, src.height,\n *src.bounds, resolution=res)\n except CRSError as err:\n raise click.BadParameter(\n str(err), param='dst_crs', param_hint='dst_crs')\n elif dimensions:\n # Same projection, different dimensions, calculate resolution.\n dst_crs = src.crs\n dst_width, dst_height = dimensions\n dst_transform = Affine(\n (r - l) / float(dst_width),\n 0, l, 0,\n (b - t) / float(dst_height),\n t\n )\n\n elif src_bounds or dst_bounds:\n # Same projection, different dimensions and possibly\n # different resolution.\n if not res:\n res = (src.affine.a, -src.affine.e)\n\n dst_crs = src.crs\n xmin, ymin, xmax, ymax = (src_bounds or dst_bounds)\n dst_transform = Affine(res[0], 0, xmin, 0, -res[1], ymax)\n dst_width = max(int(ceil((xmax - xmin) / res[0])), 1)\n dst_height = max(int(ceil((ymax - ymin) / res[1])), 1)\n\n elif res:\n # Same projection, different resolution.\n dst_crs = src.crs\n dst_transform = Affine(res[0], 0, l, 0, -res[1], t)\n dst_width = max(int(ceil((r - l) / res[0])), 1)\n dst_height = max(int(ceil((t - b) / res[1])), 1)\n\n else:\n dst_crs = src.crs\n dst_transform = src.affine\n dst_width = src.width\n dst_height = src.height\n\n # If src_nodata is not None, update the dst metadata NODATA\n # value to src_nodata (will be overridden by dst_nodata if it is not None\n if src_nodata is not None:\n # Update the dst nodata value\n out_kwargs.update({\n 'nodata': src_nodata\n })\n\n # Validate a manually set destination NODATA value\n # against the input datatype.\n if dst_nodata is not None:\n if src_nodata is None and src.meta['nodata'] is None:\n raise click.BadParameter(\n \"--src-nodata must be provided because dst-nodata is not None\")\n else:\n # Update the dst nodata value\n out_kwargs.update({'nodata': dst_nodata})\n\n # When the bounds option is misused, extreme values of\n # destination width and height may result.\n if (dst_width < 0 or dst_height < 0 or\n dst_width > MAX_OUTPUT_WIDTH or\n dst_height > MAX_OUTPUT_HEIGHT):\n raise click.BadParameter(\n \"Invalid output dimensions: {0}.\".format(\n (dst_width, dst_height)))\n\n out_kwargs.update({\n 'crs': dst_crs,\n 'transform': dst_transform,\n 'affine': dst_transform,\n 'width': dst_width,\n 'height': dst_height\n })\n\n # Adjust block size if necessary.\n if ('blockxsize' in out_kwargs and\n dst_width < out_kwargs['blockxsize']):\n del out_kwargs['blockxsize']\n if ('blockysize' in out_kwargs and\n dst_height < 
out_kwargs['blockysize']):\n del out_kwargs['blockysize']\n\n out_kwargs.update(**creation_options)\n\n with rasterio.open(output, 'w', **out_kwargs) as dst:\n reproject(\n source=rasterio.band(src, list(range(1, src.count + 1))),\n destination=rasterio.band(\n dst, list(range(1, src.count + 1))),\n src_transform=src.affine,\n src_crs=src.crs,\n src_nodata=src_nodata,\n dst_transform=out_kwargs['transform'],\n dst_crs=out_kwargs['crs'],\n dst_nodata=dst_nodata,\n resampling=resampling,\n num_threads=threads)\n", "path": "rasterio/rio/warp.py"}], "after_files": [{"content": "import logging\nfrom math import ceil, floor, log\nimport warnings\n\nimport click\nfrom cligj import files_inout_arg, format_opt\n\nfrom .helpers import resolve_inout\nfrom . import options\nimport rasterio\nfrom rasterio.crs import CRS\nfrom rasterio.errors import CRSError\nfrom rasterio.transform import Affine\nfrom rasterio.warp import (\n reproject, Resampling, calculate_default_transform, transform_bounds)\n\n\n# Improper usage of rio-warp can lead to accidental creation of\n# extremely large datasets. We'll put a hard limit on the size of\n# datasets and raise a usage error if the limits are exceeded.\nMAX_OUTPUT_WIDTH = 100000\nMAX_OUTPUT_HEIGHT = 100000\n\n\[email protected](short_help='Warp a raster dataset.')\n@files_inout_arg\[email protected]_opt\n@format_opt\[email protected](\n '--like',\n type=click.Path(exists=True),\n help='Raster dataset to use as a template for obtaining affine '\n 'transform (bounds and resolution), and crs.')\[email protected]('--dst-crs', default=None,\n help='Target coordinate reference system.')\[email protected]_opt\[email protected](\n '--src-bounds',\n nargs=4, type=float, default=None,\n help=\"Determine output extent from source bounds: left bottom right top \"\n \". Cannot be used with destination --bounds\")\[email protected](\n '--bounds', '--dst-bounds', nargs=4, type=float, default=None,\n help=\"Determine output extent from destination bounds: left bottom right top\")\[email protected]_opt\[email protected]('--resampling', type=click.Choice([r.name for r in Resampling]),\n default='nearest', help=\"Resampling method.\",\n show_default=True)\[email protected]('--src-nodata', default=None, show_default=True,\n type=float, help=\"Manually override source nodata\")\[email protected]('--dst-nodata', default=None, show_default=True,\n type=float, help=\"Manually override destination nodata\")\[email protected]('--threads', type=int, default=1,\n help='Number of processing threads.')\[email protected]('--check-invert-proj', is_flag=True, default=True,\n help='Constrain output to valid coordinate region in dst-crs')\[email protected]_overwrite_opt\[email protected]_options\[email protected]_context\ndef warp(ctx, files, output, driver, like, dst_crs, dimensions, src_bounds,\n dst_bounds, res, resampling, src_nodata, dst_nodata, threads,\n check_invert_proj, force_overwrite, creation_options):\n \"\"\"\n Warp a raster dataset.\n\n If a template raster is provided using the --like option, the\n coordinate reference system, affine transform, and dimensions of\n that raster will be used for the output. 
In this case --dst-crs,\n --bounds, --res, and --dimensions options are not applicable and\n an exception will be raised.\n\n \\b\n $ rio warp input.tif output.tif --like template.tif\n\n The output coordinate reference system may be either a PROJ.4 or\n EPSG:nnnn string,\n\n \\b\n --dst-crs EPSG:4326\n --dst-crs '+proj=longlat +ellps=WGS84 +datum=WGS84'\n\n or a JSON text-encoded PROJ.4 object.\n\n \\b\n --dst-crs '{\"proj\": \"utm\", \"zone\": 18, ...}'\n\n If --dimensions are provided, --res and --bounds are not applicable and an\n exception will be raised.\n Resolution is calculated based on the relationship between the\n raster bounds in the target coordinate system and the dimensions,\n and may produce rectangular rather than square pixels.\n\n \\b\n $ rio warp input.tif output.tif --dimensions 100 200 \\\\\n > --dst-crs EPSG:4326\n\n If --bounds are provided, --res is required if --dst-crs is provided\n (defaults to source raster resolution otherwise).\n\n \\b\n $ rio warp input.tif output.tif \\\\\n > --bounds -78 22 -76 24 --res 0.1 --dst-crs EPSG:4326\n\n \"\"\"\n verbosity = (ctx.obj and ctx.obj.get('verbosity')) or 1\n\n output, files = resolve_inout(\n files=files, output=output, force_overwrite=force_overwrite)\n\n resampling = Resampling[resampling] # get integer code for method\n\n if not len(res):\n # Click sets this as an empty tuple if not provided\n res = None\n else:\n # Expand one value to two if needed\n res = (res[0], res[0]) if len(res) == 1 else res\n\n # Check invalid parameter combinations\n if like:\n invalid_combos = (dimensions, dst_bounds, dst_crs, res)\n if any(p for p in invalid_combos if p is not None):\n raise click.BadParameter(\n \"--like cannot be used with any of --dimensions, --bounds, \"\n \"--dst-crs, or --res\")\n\n elif dimensions:\n invalid_combos = (dst_bounds, res)\n if any(p for p in invalid_combos if p is not None):\n raise click.BadParameter(\n \"--dimensions cannot be used with --bounds or --res\")\n\n with rasterio.Env(CPL_DEBUG=verbosity > 2,\n CHECK_WITH_INVERT_PROJ=check_invert_proj):\n with rasterio.open(files[0]) as src:\n l, b, r, t = src.bounds\n out_kwargs = src.profile.copy()\n out_kwargs['driver'] = driver\n\n # Sort out the bounds options.\n if src_bounds and dst_bounds:\n raise click.BadParameter(\n \"--src-bounds and destination --bounds may not be specified \"\n \"simultaneously.\")\n\n if like:\n with rasterio.open(like) as template_ds:\n dst_crs = template_ds.crs\n dst_transform = template_ds.affine\n dst_height = template_ds.height\n dst_width = template_ds.width\n\n elif dst_crs is not None:\n try:\n dst_crs = CRS.from_string(dst_crs)\n except ValueError as err:\n raise click.BadParameter(\n str(err), param='dst_crs', param_hint='dst_crs')\n\n if dimensions:\n # Calculate resolution appropriate for dimensions\n # in target.\n dst_width, dst_height = dimensions\n try:\n xmin, ymin, xmax, ymax = transform_bounds(\n src.crs, dst_crs, *src.bounds)\n except CRSError as err:\n raise click.BadParameter(\n str(err), param='dst_crs', param_hint='dst_crs')\n dst_transform = Affine(\n (xmax - xmin) / float(dst_width),\n 0, xmin, 0,\n (ymin - ymax) / float(dst_height),\n ymax\n )\n\n elif src_bounds or dst_bounds:\n if not res:\n raise click.BadParameter(\n \"Required when using --bounds.\",\n param='res', param_hint='res')\n\n if src_bounds:\n try:\n xmin, ymin, xmax, ymax = transform_bounds(\n src.crs, dst_crs, *src_bounds)\n except CRSError as err:\n raise click.BadParameter(\n str(err), param='dst_crs',\n 
param_hint='dst_crs')\n else:\n xmin, ymin, xmax, ymax = dst_bounds\n\n dst_transform = Affine(res[0], 0, xmin, 0, -res[1], ymax)\n dst_width = max(int(ceil((xmax - xmin) / res[0])), 1)\n dst_height = max(int(ceil((ymax - ymin) / res[1])), 1)\n\n else:\n try:\n dst_transform, dst_width, dst_height = calculate_default_transform(\n src.crs, dst_crs, src.width, src.height,\n *src.bounds, resolution=res)\n except CRSError as err:\n raise click.BadParameter(\n str(err), param='dst_crs', param_hint='dst_crs')\n elif dimensions:\n # Same projection, different dimensions, calculate resolution.\n dst_crs = src.crs\n dst_width, dst_height = dimensions\n dst_transform = Affine(\n (r - l) / float(dst_width),\n 0, l, 0,\n (b - t) / float(dst_height),\n t\n )\n\n elif src_bounds or dst_bounds:\n # Same projection, different dimensions and possibly\n # different resolution.\n if not res:\n res = (src.affine.a, -src.affine.e)\n\n dst_crs = src.crs\n xmin, ymin, xmax, ymax = (src_bounds or dst_bounds)\n dst_transform = Affine(res[0], 0, xmin, 0, -res[1], ymax)\n dst_width = max(int(ceil((xmax - xmin) / res[0])), 1)\n dst_height = max(int(ceil((ymax - ymin) / res[1])), 1)\n\n elif res:\n # Same projection, different resolution.\n dst_crs = src.crs\n dst_transform = Affine(res[0], 0, l, 0, -res[1], t)\n dst_width = max(int(ceil((r - l) / res[0])), 1)\n dst_height = max(int(ceil((t - b) / res[1])), 1)\n\n else:\n dst_crs = src.crs\n dst_transform = src.affine\n dst_width = src.width\n dst_height = src.height\n\n # If src_nodata is not None, update the dst metadata NODATA\n # value to src_nodata (will be overridden by dst_nodata if it is not None\n if src_nodata is not None:\n # Update the dst nodata value\n out_kwargs.update({\n 'nodata': src_nodata\n })\n\n # Validate a manually set destination NODATA value\n # against the input datatype.\n if dst_nodata is not None:\n if src_nodata is None and src.meta['nodata'] is None:\n raise click.BadParameter(\n \"--src-nodata must be provided because dst-nodata is not None\")\n else:\n # Update the dst nodata value\n out_kwargs.update({'nodata': dst_nodata})\n\n # When the bounds option is misused, extreme values of\n # destination width and height may result.\n if (dst_width < 0 or dst_height < 0 or\n dst_width > MAX_OUTPUT_WIDTH or\n dst_height > MAX_OUTPUT_HEIGHT):\n raise click.BadParameter(\n \"Invalid output dimensions: {0}.\".format(\n (dst_width, dst_height)))\n\n out_kwargs.update({\n 'crs': dst_crs,\n 'transform': dst_transform,\n 'affine': dst_transform,\n 'width': dst_width,\n 'height': dst_height\n })\n\n # Adjust block size if necessary.\n if ('blockxsize' in out_kwargs and\n dst_width < out_kwargs['blockxsize']):\n del out_kwargs['blockxsize']\n if ('blockysize' in out_kwargs and\n dst_height < out_kwargs['blockysize']):\n del out_kwargs['blockysize']\n\n out_kwargs.update(**creation_options)\n\n with rasterio.open(output, 'w', **out_kwargs) as dst:\n reproject(\n source=rasterio.band(src, list(range(1, src.count + 1))),\n destination=rasterio.band(\n dst, list(range(1, src.count + 1))),\n src_transform=src.affine,\n src_crs=src.crs,\n src_nodata=src_nodata,\n dst_transform=out_kwargs['transform'],\n dst_crs=out_kwargs['crs'],\n dst_nodata=dst_nodata,\n resampling=resampling,\n num_threads=threads)\n", "path": "rasterio/rio/warp.py"}]}
num_tokens: 3,617 | num_tokens_diff: 498
problem_id: gh_patches_debug_15350 | source: rasdani/github-patches | task_type: git_diff | in_source_id: mkdocs__mkdocs-418
prompt:
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
mkdocs build cleaning removes .git when site_dir points to a parent directory
`mkdocs build --clean` wipes out the project when attempting to split doc development and mkdocs' build output across two git branches (gh-pages, gh-pages-dev) with the following layout:
```
<branch: gh-pages-dev>
$PROJ_ROOT/
|- dev
` |- doc/
`- mkdoc.yml  # NOTE: site_dir=../
<branch: gh-pages>
$PROJ_ROOT/
`- ...  # build output
```
This is so I can both keep everything in the same project and also track the dev/ directory on the dev branch and have the output where it should be on the gh-pages branch. It seems obvious now that this would wipe out everything including the .git/ for the project (glad this was a test). Possibly, it could recursively check `site_dir` for a .git, warn and exit. Or just a disclaimer [here](http://www.mkdocs.org/user-guide/configuration/#build-directories) of this possibility would be enough, so no one else tries this and wipes out their project (and maybe you could recommend to new users how to maintain site development and build output in the same repo).
Thanks,
Kris
--- END ISSUE ---
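To see why this layout is destructive, here is a small self-contained sketch (run against a throw-away temp directory, not a real project) of what the unpatched `clean_directory` from the listing below does when `site_dir` resolves to a directory that also holds `.git`:

```python
import os
import shutil
import tempfile


def clean_directory(directory):
    # Unpatched logic from mkdocs/utils.py: removes *every* entry, dotfiles included.
    if os.path.exists(directory):
        for entry in os.listdir(directory):
            path = os.path.join(directory, entry)
            if os.path.isdir(path):
                shutil.rmtree(path, True)
            else:
                os.unlink(path)


project = tempfile.mkdtemp()          # stands in for $PROJ_ROOT (site_dir=../)
os.makedirs(os.path.join(project, ".git"))
os.makedirs(os.path.join(project, "dev", "doc"))

clean_directory(project)              # what `mkdocs build --clean` effectively runs
print(os.listdir(project))            # [] -- .git and dev/ are both gone
```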
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mkdocs/utils.py`
Content:
```
1 # coding: utf-8
2
3 """
4 Standalone file utils.
5
6 Nothing in this module should have an knowledge of config or the layout
7 and structure of the site and pages in the site.
8 """
9
10 import os
11 import shutil
12
13 from mkdocs.compat import urlparse
14
15
16 def copy_file(source_path, output_path):
17 """
18 Copy source_path to output_path, making sure any parent directories exist.
19 """
20 output_dir = os.path.dirname(output_path)
21 if not os.path.exists(output_dir):
22 os.makedirs(output_dir)
23 shutil.copy(source_path, output_path)
24
25
26 def write_file(content, output_path):
27 """
28 Write content to output_path, making sure any parent directories exist.
29 """
30 output_dir = os.path.dirname(output_path)
31 if not os.path.exists(output_dir):
32 os.makedirs(output_dir)
33 open(output_path, 'wb').write(content)
34
35
36 def clean_directory(directory):
37 """
38 Remove the content of a directory recursively but not the directory itself.
39 """
40 if os.path.exists(directory):
41 for entry in os.listdir(directory):
42 path = os.path.join(directory, entry)
43 if os.path.isdir(path):
44 shutil.rmtree(path, True)
45 else:
46 os.unlink(path)
47
48
49 def copy_media_files(from_dir, to_dir):
50 """
51 Recursively copy all files except markdown and HTML into another directory.
52 """
53 for (source_dir, dirnames, filenames) in os.walk(from_dir):
54 relative_path = os.path.relpath(source_dir, from_dir)
55 output_dir = os.path.normpath(os.path.join(to_dir, relative_path))
56
57 # Filter filenames starting with a '.'
58 filenames = [f for f in filenames if not f.startswith('.')]
59
60 # Filter the dirnames that start with a '.' and update the list in
61 # place to prevent us walking these.
62 dirnames[:] = [d for d in dirnames if not d.startswith('.')]
63
64 for filename in filenames:
65 if not is_markdown_file(filename) and not is_html_file(filename):
66 source_path = os.path.join(source_dir, filename)
67 output_path = os.path.join(output_dir, filename)
68 copy_file(source_path, output_path)
69
70
71 def get_html_path(path):
72 """
73 Map a source file path to an output html path.
74
75 Paths like 'index.md' will be converted to 'index.html'
76 Paths like 'about.md' will be converted to 'about/index.html'
77 Paths like 'api-guide/core.md' will be converted to 'api-guide/core/index.html'
78 """
79 path = os.path.splitext(path)[0]
80 if os.path.basename(path) == 'index':
81 return path + '.html'
82 return "/".join((path, 'index.html'))
83
84
85 def get_url_path(path, use_directory_urls=True):
86 """
87 Map a source file path to an output html path.
88
89 Paths like 'index.md' will be converted to '/'
90 Paths like 'about.md' will be converted to '/about/'
91 Paths like 'api-guide/core.md' will be converted to '/api-guide/core/'
92
93 If `use_directory_urls` is `False`, returned URLs will include the a trailing
94 `index.html` rather than just returning the directory path.
95 """
96 path = get_html_path(path)
97 url = '/' + path.replace(os.path.sep, '/')
98 if use_directory_urls:
99 return url[:-len('index.html')]
100 return url
101
102
103 def is_homepage(path):
104 return os.path.splitext(path)[0] == 'index'
105
106
107 def is_markdown_file(path):
108 """
109 Return True if the given file path is a Markdown file.
110
111 http://superuser.com/questions/249436/file-extension-for-markdown-files
112 """
113 ext = os.path.splitext(path)[1].lower()
114 return ext in [
115 '.markdown',
116 '.mdown',
117 '.mkdn',
118 '.mkd',
119 '.md',
120 ]
121
122
123 def is_css_file(path):
124 """
125 Return True if the given file path is a CSS file.
126 """
127 ext = os.path.splitext(path)[1].lower()
128 return ext in [
129 '.css',
130 ]
131
132
133 def is_javascript_file(path):
134 """
135 Return True if the given file path is a Javascript file.
136 """
137 ext = os.path.splitext(path)[1].lower()
138 return ext in [
139 '.js',
140 '.javascript'
141 ]
142
143
144 def is_html_file(path):
145 """
146 Return True if the given file path is an HTML file.
147 """
148 ext = os.path.splitext(path)[1].lower()
149 return ext in [
150 '.html',
151 '.htm',
152 ]
153
154
155 def create_media_urls(nav, url_list):
156 """
157 Return a list of URLs that have been processed correctly for inclusion in a page.
158 """
159 final_urls = []
160 for url in url_list:
161 # Allow links to fully qualified URL's
162 parsed = urlparse(url)
163 if parsed.netloc:
164 final_urls.append(url)
165 else:
166 relative_url = '%s/%s' % (nav.url_context.make_relative('/'), url)
167 final_urls.append(relative_url)
168 return final_urls
169
170
171 def create_relative_media_url(nav, url):
172 """
173 For a current page, create a relative url based on the given URL.
174
175 On index.md (which becomes /index.html):
176 image.png -> ./image.png
177 /image.png -> ./image.png
178
179 on sub/page.md (which becomes /sub/page/index.html):
180 image.png -> ../image.png
181 /image.png -> ../../image.png
182
183 """
184
185 # Allow links to fully qualified URL's
186 parsed = urlparse(url)
187 if parsed.netloc:
188 return url
189
190 # If the URL we are looking at starts with a /, then it should be
191 # considered as absolute and will be 'relative' to the root.
192 if url.startswith('/'):
193 base = '/'
194 url = url[1:]
195 else:
196 base = nav.url_context.base_path
197
198 relative_url = '%s/%s' % (nav.url_context.make_relative(base), url)
199
200 # TODO: Fix this, this is a hack. Relative urls are not being calculated
201 # correctly for images in the same directory as the markdown. I think this
202 # is due to us moving it into a directory with index.html, but I'm not sure
203 if nav.url_context.base_path is not '/' and relative_url.startswith("./"):
204 relative_url = ".%s" % relative_url
205
206 return relative_url
207
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mkdocs/utils.py b/mkdocs/utils.py
--- a/mkdocs/utils.py
+++ b/mkdocs/utils.py
@@ -37,13 +37,21 @@
"""
Remove the content of a directory recursively but not the directory itself.
"""
- if os.path.exists(directory):
- for entry in os.listdir(directory):
- path = os.path.join(directory, entry)
- if os.path.isdir(path):
- shutil.rmtree(path, True)
- else:
- os.unlink(path)
+ if not os.path.exists(directory):
+ return
+
+ for entry in os.listdir(directory):
+
+ # Don't remove hidden files from the directory. We never copy files
+ # that are hidden, so we shouldn't delete them either.
+ if entry.startswith('.'):
+ continue
+
+ path = os.path.join(directory, entry)
+ if os.path.isdir(path):
+ shutil.rmtree(path, True)
+ else:
+ os.unlink(path)
def copy_media_files(from_dir, to_dir):
|
{"golden_diff": "diff --git a/mkdocs/utils.py b/mkdocs/utils.py\n--- a/mkdocs/utils.py\n+++ b/mkdocs/utils.py\n@@ -37,13 +37,21 @@\n \"\"\"\n Remove the content of a directory recursively but not the directory itself.\n \"\"\"\n- if os.path.exists(directory):\n- for entry in os.listdir(directory):\n- path = os.path.join(directory, entry)\n- if os.path.isdir(path):\n- shutil.rmtree(path, True)\n- else:\n- os.unlink(path)\n+ if not os.path.exists(directory):\n+ return\n+\n+ for entry in os.listdir(directory):\n+\n+ # Don't remove hidden files from the directory. We never copy files\n+ # that are hidden, so we shouldn't delete them either.\n+ if entry.startswith('.'):\n+ continue\n+\n+ path = os.path.join(directory, entry)\n+ if os.path.isdir(path):\n+ shutil.rmtree(path, True)\n+ else:\n+ os.unlink(path)\n \n \n def copy_media_files(from_dir, to_dir):\n", "issue": "mkdocs build cleaning removes .git when site_dir points to a parent directory\n`mkdocs build --clean` wipes out the project when attempting to split doc development and mkdocs' build output across two git branches (gh-pages, gh-pages-dev) with the following layout:\n\n```\n<branch: gh-pages-dev>\n$PROJ_ROOT/\n|- dev\n` |- doc/\n `- mkdoc.yml \\# NOTE: site_dir=../\n\n<branch: gh-pages>\n$PROJ_ROOT/\n`- ... \\# build output\n```\n\nThis is so I can both keep everything in the same project and also track the dev/ directory on the dev branch and have the output where it should be on the gh-pages branch. It seems obvious now that this would wipe out everything including the .git/ for the project (glad this was a test). Possibly, it could recursively check `site_dir` for a .git, warn and exit. Or just a disclaimer [here](http://www.mkdocs.org/user-guide/configuration/#build-directories) of this possibility would be enough, so no one else tries this and wipes out their project (and maybe you could recommend to new users how to maintain site development and build output in the same repo).\n\nThanks,\nKris\n\n", "before_files": [{"content": "# coding: utf-8\n\n\"\"\"\nStandalone file utils.\n\nNothing in this module should have an knowledge of config or the layout\nand structure of the site and pages in the site.\n\"\"\"\n\nimport os\nimport shutil\n\nfrom mkdocs.compat import urlparse\n\n\ndef copy_file(source_path, output_path):\n \"\"\"\n Copy source_path to output_path, making sure any parent directories exist.\n \"\"\"\n output_dir = os.path.dirname(output_path)\n if not os.path.exists(output_dir):\n os.makedirs(output_dir)\n shutil.copy(source_path, output_path)\n\n\ndef write_file(content, output_path):\n \"\"\"\n Write content to output_path, making sure any parent directories exist.\n \"\"\"\n output_dir = os.path.dirname(output_path)\n if not os.path.exists(output_dir):\n os.makedirs(output_dir)\n open(output_path, 'wb').write(content)\n\n\ndef clean_directory(directory):\n \"\"\"\n Remove the content of a directory recursively but not the directory itself.\n \"\"\"\n if os.path.exists(directory):\n for entry in os.listdir(directory):\n path = os.path.join(directory, entry)\n if os.path.isdir(path):\n shutil.rmtree(path, True)\n else:\n os.unlink(path)\n\n\ndef copy_media_files(from_dir, to_dir):\n \"\"\"\n Recursively copy all files except markdown and HTML into another directory.\n \"\"\"\n for (source_dir, dirnames, filenames) in os.walk(from_dir):\n relative_path = os.path.relpath(source_dir, from_dir)\n output_dir = os.path.normpath(os.path.join(to_dir, relative_path))\n\n # Filter filenames starting with a '.'\n filenames 
= [f for f in filenames if not f.startswith('.')]\n\n # Filter the dirnames that start with a '.' and update the list in\n # place to prevent us walking these.\n dirnames[:] = [d for d in dirnames if not d.startswith('.')]\n\n for filename in filenames:\n if not is_markdown_file(filename) and not is_html_file(filename):\n source_path = os.path.join(source_dir, filename)\n output_path = os.path.join(output_dir, filename)\n copy_file(source_path, output_path)\n\n\ndef get_html_path(path):\n \"\"\"\n Map a source file path to an output html path.\n\n Paths like 'index.md' will be converted to 'index.html'\n Paths like 'about.md' will be converted to 'about/index.html'\n Paths like 'api-guide/core.md' will be converted to 'api-guide/core/index.html'\n \"\"\"\n path = os.path.splitext(path)[0]\n if os.path.basename(path) == 'index':\n return path + '.html'\n return \"/\".join((path, 'index.html'))\n\n\ndef get_url_path(path, use_directory_urls=True):\n \"\"\"\n Map a source file path to an output html path.\n\n Paths like 'index.md' will be converted to '/'\n Paths like 'about.md' will be converted to '/about/'\n Paths like 'api-guide/core.md' will be converted to '/api-guide/core/'\n\n If `use_directory_urls` is `False`, returned URLs will include the a trailing\n `index.html` rather than just returning the directory path.\n \"\"\"\n path = get_html_path(path)\n url = '/' + path.replace(os.path.sep, '/')\n if use_directory_urls:\n return url[:-len('index.html')]\n return url\n\n\ndef is_homepage(path):\n return os.path.splitext(path)[0] == 'index'\n\n\ndef is_markdown_file(path):\n \"\"\"\n Return True if the given file path is a Markdown file.\n\n http://superuser.com/questions/249436/file-extension-for-markdown-files\n \"\"\"\n ext = os.path.splitext(path)[1].lower()\n return ext in [\n '.markdown',\n '.mdown',\n '.mkdn',\n '.mkd',\n '.md',\n ]\n\n\ndef is_css_file(path):\n \"\"\"\n Return True if the given file path is a CSS file.\n \"\"\"\n ext = os.path.splitext(path)[1].lower()\n return ext in [\n '.css',\n ]\n\n\ndef is_javascript_file(path):\n \"\"\"\n Return True if the given file path is a Javascript file.\n \"\"\"\n ext = os.path.splitext(path)[1].lower()\n return ext in [\n '.js',\n '.javascript'\n ]\n\n\ndef is_html_file(path):\n \"\"\"\n Return True if the given file path is an HTML file.\n \"\"\"\n ext = os.path.splitext(path)[1].lower()\n return ext in [\n '.html',\n '.htm',\n ]\n\n\ndef create_media_urls(nav, url_list):\n \"\"\"\n Return a list of URLs that have been processed correctly for inclusion in a page.\n \"\"\"\n final_urls = []\n for url in url_list:\n # Allow links to fully qualified URL's\n parsed = urlparse(url)\n if parsed.netloc:\n final_urls.append(url)\n else:\n relative_url = '%s/%s' % (nav.url_context.make_relative('/'), url)\n final_urls.append(relative_url)\n return final_urls\n\n\ndef create_relative_media_url(nav, url):\n \"\"\"\n For a current page, create a relative url based on the given URL.\n\n On index.md (which becomes /index.html):\n image.png -> ./image.png\n /image.png -> ./image.png\n\n on sub/page.md (which becomes /sub/page/index.html):\n image.png -> ../image.png\n /image.png -> ../../image.png\n\n \"\"\"\n\n # Allow links to fully qualified URL's\n parsed = urlparse(url)\n if parsed.netloc:\n return url\n\n # If the URL we are looking at starts with a /, then it should be\n # considered as absolute and will be 'relative' to the root.\n if url.startswith('/'):\n base = '/'\n url = url[1:]\n else:\n base = nav.url_context.base_path\n\n 
relative_url = '%s/%s' % (nav.url_context.make_relative(base), url)\n\n # TODO: Fix this, this is a hack. Relative urls are not being calculated\n # correctly for images in the same directory as the markdown. I think this\n # is due to us moving it into a directory with index.html, but I'm not sure\n if nav.url_context.base_path is not '/' and relative_url.startswith(\"./\"):\n relative_url = \".%s\" % relative_url\n\n return relative_url\n", "path": "mkdocs/utils.py"}], "after_files": [{"content": "# coding: utf-8\n\n\"\"\"\nStandalone file utils.\n\nNothing in this module should have an knowledge of config or the layout\nand structure of the site and pages in the site.\n\"\"\"\n\nimport os\nimport shutil\n\nfrom mkdocs.compat import urlparse\n\n\ndef copy_file(source_path, output_path):\n \"\"\"\n Copy source_path to output_path, making sure any parent directories exist.\n \"\"\"\n output_dir = os.path.dirname(output_path)\n if not os.path.exists(output_dir):\n os.makedirs(output_dir)\n shutil.copy(source_path, output_path)\n\n\ndef write_file(content, output_path):\n \"\"\"\n Write content to output_path, making sure any parent directories exist.\n \"\"\"\n output_dir = os.path.dirname(output_path)\n if not os.path.exists(output_dir):\n os.makedirs(output_dir)\n open(output_path, 'wb').write(content)\n\n\ndef clean_directory(directory):\n \"\"\"\n Remove the content of a directory recursively but not the directory itself.\n \"\"\"\n if not os.path.exists(directory):\n return\n\n for entry in os.listdir(directory):\n\n # Don't remove hidden files from the directory. We never copy files\n # that are hidden, so we shouldn't delete them either.\n if entry.startswith('.'):\n continue\n\n path = os.path.join(directory, entry)\n if os.path.isdir(path):\n shutil.rmtree(path, True)\n else:\n os.unlink(path)\n\n\ndef copy_media_files(from_dir, to_dir):\n \"\"\"\n Recursively copy all files except markdown and HTML into another directory.\n \"\"\"\n for (source_dir, dirnames, filenames) in os.walk(from_dir):\n relative_path = os.path.relpath(source_dir, from_dir)\n output_dir = os.path.normpath(os.path.join(to_dir, relative_path))\n\n # Filter filenames starting with a '.'\n filenames = [f for f in filenames if not f.startswith('.')]\n\n # Filter the dirnames that start with a '.' 
and update the list in\n # place to prevent us walking these.\n dirnames[:] = [d for d in dirnames if not d.startswith('.')]\n\n for filename in filenames:\n if not is_markdown_file(filename) and not is_html_file(filename):\n source_path = os.path.join(source_dir, filename)\n output_path = os.path.join(output_dir, filename)\n copy_file(source_path, output_path)\n\n\ndef get_html_path(path):\n \"\"\"\n Map a source file path to an output html path.\n\n Paths like 'index.md' will be converted to 'index.html'\n Paths like 'about.md' will be converted to 'about/index.html'\n Paths like 'api-guide/core.md' will be converted to 'api-guide/core/index.html'\n \"\"\"\n path = os.path.splitext(path)[0]\n if os.path.basename(path) == 'index':\n return path + '.html'\n return \"/\".join((path, 'index.html'))\n\n\ndef get_url_path(path, use_directory_urls=True):\n \"\"\"\n Map a source file path to an output html path.\n\n Paths like 'index.md' will be converted to '/'\n Paths like 'about.md' will be converted to '/about/'\n Paths like 'api-guide/core.md' will be converted to '/api-guide/core/'\n\n If `use_directory_urls` is `False`, returned URLs will include the a trailing\n `index.html` rather than just returning the directory path.\n \"\"\"\n path = get_html_path(path)\n url = '/' + path.replace(os.path.sep, '/')\n if use_directory_urls:\n return url[:-len('index.html')]\n return url\n\n\ndef is_homepage(path):\n return os.path.splitext(path)[0] == 'index'\n\n\ndef is_markdown_file(path):\n \"\"\"\n Return True if the given file path is a Markdown file.\n\n http://superuser.com/questions/249436/file-extension-for-markdown-files\n \"\"\"\n ext = os.path.splitext(path)[1].lower()\n return ext in [\n '.markdown',\n '.mdown',\n '.mkdn',\n '.mkd',\n '.md',\n ]\n\n\ndef is_css_file(path):\n \"\"\"\n Return True if the given file path is a CSS file.\n \"\"\"\n ext = os.path.splitext(path)[1].lower()\n return ext in [\n '.css',\n ]\n\n\ndef is_javascript_file(path):\n \"\"\"\n Return True if the given file path is a Javascript file.\n \"\"\"\n ext = os.path.splitext(path)[1].lower()\n return ext in [\n '.js',\n '.javascript'\n ]\n\n\ndef is_html_file(path):\n \"\"\"\n Return True if the given file path is an HTML file.\n \"\"\"\n ext = os.path.splitext(path)[1].lower()\n return ext in [\n '.html',\n '.htm',\n ]\n\n\ndef create_media_urls(nav, url_list):\n \"\"\"\n Return a list of URLs that have been processed correctly for inclusion in a page.\n \"\"\"\n final_urls = []\n for url in url_list:\n # Allow links to fully qualified URL's\n parsed = urlparse(url)\n if parsed.netloc:\n final_urls.append(url)\n else:\n relative_url = '%s/%s' % (nav.url_context.make_relative('/'), url)\n final_urls.append(relative_url)\n return final_urls\n\n\ndef create_relative_media_url(nav, url):\n \"\"\"\n For a current page, create a relative url based on the given URL.\n\n On index.md (which becomes /index.html):\n image.png -> ./image.png\n /image.png -> ./image.png\n\n on sub/page.md (which becomes /sub/page/index.html):\n image.png -> ../image.png\n /image.png -> ../../image.png\n\n \"\"\"\n\n # Allow links to fully qualified URL's\n parsed = urlparse(url)\n if parsed.netloc:\n return url\n\n # If the URL we are looking at starts with a /, then it should be\n # considered as absolute and will be 'relative' to the root.\n if url.startswith('/'):\n base = '/'\n url = url[1:]\n else:\n base = nav.url_context.base_path\n\n relative_url = '%s/%s' % (nav.url_context.make_relative(base), url)\n\n # TODO: Fix this, this is a 
hack. Relative urls are not being calculated\n # correctly for images in the same directory as the markdown. I think this\n # is due to us moving it into a directory with index.html, but I'm not sure\n if nav.url_context.base_path is not '/' and relative_url.startswith(\"./\"):\n relative_url = \".%s\" % relative_url\n\n return relative_url\n", "path": "mkdocs/utils.py"}]}
| 2,423 | 234 |
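A quick check of the patched behaviour from the golden diff above (the function is re-implemented here verbatim and exercised in a throw-away temp directory):

```python
import os
import shutil
import tempfile


def clean_directory(directory):
    # Patched logic: hidden entries such as .git are left untouched.
    if not os.path.exists(directory):
        return

    for entry in os.listdir(directory):
        if entry.startswith('.'):
            continue

        path = os.path.join(directory, entry)
        if os.path.isdir(path):
            shutil.rmtree(path, True)
        else:
            os.unlink(path)


project = tempfile.mkdtemp()
os.makedirs(os.path.join(project, ".git"))
os.makedirs(os.path.join(project, "site"))

clean_directory(project)
print(sorted(os.listdir(project)))  # ['.git'] -- only the visible build output was removed
```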
gh_patches_debug_32903
|
rasdani/github-patches
|
git_diff
|
netbox-community__netbox-11331
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Initial NetBox 3.4.x migration fails when a plugin using SearchIndex is installed
### NetBox version
3.4.1
### Python version
3.8
### Steps to Reproduce
1. Install NetBox 3.4.1
2. Install any plugin that uses SearchIndex functionality (e.g. netbox-dns) and configure it in PLUGINS
3. Run the initial migration
### Expected Behavior
The migration succeeds
### Observed Behavior
The migration fails with ProgrammingError exception:
```
Operations to perform:
Apply all migrations: admin, auth, circuits, contenttypes, dcim, django_rq, extras, ipam, netbox_dns, sessions, social_django, taggit, tenancy, users, virtualization, wireless
Running migrations:
Applying extras.0083_search...Reindexing 67 models.
Clearing cached values... 0 entries deleted.
Indexing models
netbox_dns.nameserver... Traceback (most recent call last):
File "/opt/netbox/lib/python3.8/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
psycopg2.errors.UndefinedTable: relation "netbox_dns_nameserver" does not exist
LINE 1: ...name", "netbox_dns_nameserver"."description" FROM "netbox_dn...
^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/netbox/netbox/manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/opt/netbox/lib/python3.8/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line
utility.execute()
File "/opt/netbox/lib/python3.8/site-packages/django/core/management/__init__.py", line 440, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/opt/netbox/lib/python3.8/site-packages/django/core/management/base.py", line 402, in run_from_argv
self.execute(*args, **cmd_options)
File "/opt/netbox/lib/python3.8/site-packages/django/core/management/base.py", line 448, in execute
output = self.handle(*args, **options)
File "/opt/netbox/lib/python3.8/site-packages/django/core/management/base.py", line 96, in wrapped
res = handle_func(*args, **kwargs)
File "/opt/netbox/lib/python3.8/site-packages/django/core/management/commands/migrate.py", line 349, in handle
post_migrate_state = executor.migrate(
File "/opt/netbox/lib/python3.8/site-packages/django/db/migrations/executor.py", line 135, in migrate
state = self._migrate_all_forwards(
File "/opt/netbox/lib/python3.8/site-packages/django/db/migrations/executor.py", line 167, in _migrate_all_forwards
state = self.apply_migration(
File "/opt/netbox/lib/python3.8/site-packages/django/db/migrations/executor.py", line 252, in apply_migration
state = migration.apply(state, schema_editor)
File "/opt/netbox/lib/python3.8/site-packages/django/db/migrations/migration.py", line 130, in apply
operation.database_forwards(
File "/opt/netbox/lib/python3.8/site-packages/django/db/migrations/operations/special.py", line 193, in database_forwards
self.code(from_state.apps, schema_editor)
File "/opt/netbox/netbox/extras/migrations/0083_search.py", line 13, in reindex
management.call_command('reindex')
File "/opt/netbox/lib/python3.8/site-packages/django/core/management/__init__.py", line 198, in call_command
return command.execute(*args, **defaults)
File "/opt/netbox/lib/python3.8/site-packages/django/core/management/base.py", line 448, in execute
output = self.handle(*args, **options)
File "/opt/netbox/netbox/extras/management/commands/reindex.py", line 68, in handle
i = search_backend.cache(model.objects.iterator(), remove_existing=False)
File "/opt/netbox/netbox/netbox/search/backends.py", line 148, in cache
for instance in instances:
File "/opt/netbox/lib/python3.8/site-packages/django/db/models/query.py", line 512, in _iterator
yield from iterable
File "/opt/netbox/lib/python3.8/site-packages/django/db/models/query.py", line 87, in __iter__
results = compiler.execute_sql(
File "/opt/netbox/lib/python3.8/site-packages/django/db/models/sql/compiler.py", line 1398, in execute_sql
cursor.execute(sql, params)
File "/opt/netbox/lib/python3.8/site-packages/django/db/backends/utils.py", line 103, in execute
return super().execute(sql, params)
File "/opt/netbox/lib/python3.8/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(
File "/opt/netbox/lib/python3.8/site-packages/django/db/backends/utils.py", line 80, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/opt/netbox/lib/python3.8/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
File "/opt/netbox/lib/python3.8/site-packages/django/db/utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/opt/netbox/lib/python3.8/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
django.db.utils.ProgrammingError: relation "netbox_dns_nameserver" does not exist
LINE 1: ...name", "netbox_dns_nameserver"."description" FROM "netbox_dn...
```
The `netbox_dns_nameserver` relation for the `netbox_dns.models.NameServer` model uses `SearchIndex`:
```
@register_search
class NameServerIndex(SearchIndex):
model = NameServer
fields = (
("name", 100),
("description", 500),
)
```
That results in the NetBox migration `extras.0083_search` trying to reindex the plugin's relation, which does not exist yet at that point.
The obvious workaround is to disable the plugin, run the migration, re-enable the plugin and then run the migration again.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `netbox/extras/management/commands/reindex.py`
Content:
```
1 from django.contrib.contenttypes.models import ContentType
2 from django.core.management.base import BaseCommand, CommandError
3
4 from netbox.registry import registry
5 from netbox.search.backends import search_backend
6
7
8 class Command(BaseCommand):
9 help = 'Reindex objects for search'
10
11 def add_arguments(self, parser):
12 parser.add_argument(
13 'args',
14 metavar='app_label[.ModelName]',
15 nargs='*',
16 help='One or more apps or models to reindex',
17 )
18
19 def _get_indexers(self, *model_names):
20 indexers = {}
21
22 # No models specified; pull in all registered indexers
23 if not model_names:
24 for idx in registry['search'].values():
25 indexers[idx.model] = idx
26
27 # Return only indexers for the specified models
28 else:
29 for label in model_names:
30 try:
31 app_label, model_name = label.lower().split('.')
32 except ValueError:
33 raise CommandError(
34 f"Invalid model: {label}. Model names must be in the format <app_label>.<model_name>."
35 )
36 try:
37 idx = registry['search'][f'{app_label}.{model_name}']
38 indexers[idx.model] = idx
39 except KeyError:
40 raise CommandError(f"No indexer registered for {label}")
41
42 return indexers
43
44 def handle(self, *model_labels, **kwargs):
45
46 # Determine which models to reindex
47 indexers = self._get_indexers(*model_labels)
48 if not indexers:
49 raise CommandError("No indexers found!")
50 self.stdout.write(f'Reindexing {len(indexers)} models.')
51
52 # Clear all cached values for the specified models
53 self.stdout.write('Clearing cached values... ', ending='')
54 self.stdout.flush()
55 content_types = [
56 ContentType.objects.get_for_model(model) for model in indexers.keys()
57 ]
58 deleted_count = search_backend.clear(content_types)
59 self.stdout.write(f'{deleted_count} entries deleted.')
60
61 # Index models
62 self.stdout.write('Indexing models')
63 for model, idx in indexers.items():
64 app_label = model._meta.app_label
65 model_name = model._meta.model_name
66 self.stdout.write(f' {app_label}.{model_name}... ', ending='')
67 self.stdout.flush()
68 i = search_backend.cache(model.objects.iterator(), remove_existing=False)
69 if i:
70 self.stdout.write(f'{i} entries cached.')
71 else:
72 self.stdout.write(f'None found.')
73
74 msg = f'Completed.'
75 if total_count := search_backend.size:
76 msg += f' Total entries: {total_count}'
77 self.stdout.write(msg, self.style.SUCCESS)
78
```
Path: `netbox/extras/migrations/0083_search.py`
Content:
```
1 import sys
2 import uuid
3
4 import django.db.models.deletion
5 import django.db.models.lookups
6 from django.core import management
7 from django.db import migrations, models
8
9
10 def reindex(apps, schema_editor):
11 # Build the search index (except during tests)
12 if 'test' not in sys.argv:
13 management.call_command('reindex')
14
15
16 class Migration(migrations.Migration):
17
18 dependencies = [
19 ('circuits', '0041_standardize_description_comments'),
20 ('contenttypes', '0002_remove_content_type_name'),
21 ('dcim', '0166_virtualdevicecontext'),
22 ('extras', '0082_savedfilter'),
23 ('ipam', '0063_standardize_description_comments'),
24 ('tenancy', '0009_standardize_description_comments'),
25 ('virtualization', '0034_standardize_description_comments'),
26 ('wireless', '0008_wirelesslan_status'),
27 ]
28
29 operations = [
30 migrations.AddField(
31 model_name='customfield',
32 name='search_weight',
33 field=models.PositiveSmallIntegerField(default=1000),
34 ),
35 migrations.CreateModel(
36 name='CachedValue',
37 fields=[
38 ('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
39 ('timestamp', models.DateTimeField(auto_now_add=True)),
40 ('object_id', models.PositiveBigIntegerField()),
41 ('field', models.CharField(max_length=200)),
42 ('type', models.CharField(max_length=30)),
43 ('value', models.TextField()),
44 ('weight', models.PositiveSmallIntegerField(default=1000)),
45 ('object_type', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to='contenttypes.contenttype')),
46 ],
47 options={
48 'ordering': ('weight', 'object_type', 'object_id'),
49 },
50 ),
51 migrations.RunPython(
52 code=reindex,
53 reverse_code=migrations.RunPython.noop
54 ),
55 ]
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/netbox/extras/management/commands/reindex.py b/netbox/extras/management/commands/reindex.py
--- a/netbox/extras/management/commands/reindex.py
+++ b/netbox/extras/management/commands/reindex.py
@@ -27,17 +27,28 @@
# Return only indexers for the specified models
else:
for label in model_names:
- try:
- app_label, model_name = label.lower().split('.')
- except ValueError:
+ labels = label.lower().split('.')
+
+ # Label specifies an exact model
+ if len(labels) == 2:
+ app_label, model_name = labels
+ try:
+ idx = registry['search'][f'{app_label}.{model_name}']
+ indexers[idx.model] = idx
+ except KeyError:
+ raise CommandError(f"No indexer registered for {label}")
+
+ # Label specifies all the models of an app
+ elif len(labels) == 1:
+ app_label = labels[0] + '.'
+ for indexer_label, idx in registry['search'].items():
+ if indexer_label.startswith(app_label):
+ indexers[idx.model] = idx
+
+ else:
raise CommandError(
- f"Invalid model: {label}. Model names must be in the format <app_label>.<model_name>."
+ f"Invalid model: {label}. Model names must be in the format <app_label> or <app_label>.<model_name>."
)
- try:
- idx = registry['search'][f'{app_label}.{model_name}']
- indexers[idx.model] = idx
- except KeyError:
- raise CommandError(f"No indexer registered for {label}")
return indexers
diff --git a/netbox/extras/migrations/0083_search.py b/netbox/extras/migrations/0083_search.py
--- a/netbox/extras/migrations/0083_search.py
+++ b/netbox/extras/migrations/0083_search.py
@@ -10,7 +10,16 @@
def reindex(apps, schema_editor):
# Build the search index (except during tests)
if 'test' not in sys.argv:
- management.call_command('reindex')
+ management.call_command(
+ 'reindex',
+ 'circuits',
+ 'dcim',
+ 'extras',
+ 'ipam',
+ 'tenancy',
+ 'virtualization',
+ 'wireless',
+ )
class Migration(migrations.Migration):
|
{"golden_diff": "diff --git a/netbox/extras/management/commands/reindex.py b/netbox/extras/management/commands/reindex.py\n--- a/netbox/extras/management/commands/reindex.py\n+++ b/netbox/extras/management/commands/reindex.py\n@@ -27,17 +27,28 @@\n # Return only indexers for the specified models\n else:\n for label in model_names:\n- try:\n- app_label, model_name = label.lower().split('.')\n- except ValueError:\n+ labels = label.lower().split('.')\n+\n+ # Label specifies an exact model\n+ if len(labels) == 2:\n+ app_label, model_name = labels\n+ try:\n+ idx = registry['search'][f'{app_label}.{model_name}']\n+ indexers[idx.model] = idx\n+ except KeyError:\n+ raise CommandError(f\"No indexer registered for {label}\")\n+\n+ # Label specifies all the models of an app\n+ elif len(labels) == 1:\n+ app_label = labels[0] + '.'\n+ for indexer_label, idx in registry['search'].items():\n+ if indexer_label.startswith(app_label):\n+ indexers[idx.model] = idx\n+\n+ else:\n raise CommandError(\n- f\"Invalid model: {label}. Model names must be in the format <app_label>.<model_name>.\"\n+ f\"Invalid model: {label}. Model names must be in the format <app_label> or <app_label>.<model_name>.\"\n )\n- try:\n- idx = registry['search'][f'{app_label}.{model_name}']\n- indexers[idx.model] = idx\n- except KeyError:\n- raise CommandError(f\"No indexer registered for {label}\")\n \n return indexers\n \ndiff --git a/netbox/extras/migrations/0083_search.py b/netbox/extras/migrations/0083_search.py\n--- a/netbox/extras/migrations/0083_search.py\n+++ b/netbox/extras/migrations/0083_search.py\n@@ -10,7 +10,16 @@\n def reindex(apps, schema_editor):\n # Build the search index (except during tests)\n if 'test' not in sys.argv:\n- management.call_command('reindex')\n+ management.call_command(\n+ 'reindex',\n+ 'circuits',\n+ 'dcim',\n+ 'extras',\n+ 'ipam',\n+ 'tenancy',\n+ 'virtualization',\n+ 'wireless',\n+ )\n \n \n class Migration(migrations.Migration):\n", "issue": "Initial NetBox 3.4.x migration fails when a plugin using SearchIndex is installed\n### NetBox version\n\n3.4.1\n\n### Python version\n\n3.8\n\n### Steps to Reproduce\n\n1. Install NetBox 3.4.1\r\n2. Install any plugin that uses SearchIndex functionality (e.g. netbox-dns) and configure it in PLUGINS\r\n3. Run the initial migration\n\n### Expected Behavior\n\nThe migration succeeds\n\n### Observed Behavior\n\nThe migration fails with ProgrammingError exception:\r\n```\r\nOperations to perform:\r\n Apply all migrations: admin, auth, circuits, contenttypes, dcim, django_rq, extras, ipam, netbox_dns, sessions, social_django, taggit, tenancy, users, virtualization, wireless\r\nRunning migrations:\r\n Applying extras.0083_search...Reindexing 67 models.\r\nClearing cached values... 0 entries deleted.\r\nIndexing models\r\n netbox_dns.nameserver... 
Traceback (most recent call last):\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/db/backends/utils.py\", line 89, in _execute\r\n return self.cursor.execute(sql, params)\r\npsycopg2.errors.UndefinedTable: relation \"netbox_dns_nameserver\" does not exist\r\nLINE 1: ...name\", \"netbox_dns_nameserver\".\"description\" FROM \"netbox_dn...\r\n ^\r\n\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/opt/netbox/netbox/manage.py\", line 10, in <module>\r\n execute_from_command_line(sys.argv)\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/core/management/__init__.py\", line 446, in execute_from_command_line\r\n utility.execute()\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/core/management/__init__.py\", line 440, in execute\r\n self.fetch_command(subcommand).run_from_argv(self.argv)\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/core/management/base.py\", line 402, in run_from_argv\r\n self.execute(*args, **cmd_options)\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/core/management/base.py\", line 448, in execute\r\n output = self.handle(*args, **options)\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/core/management/base.py\", line 96, in wrapped\r\n res = handle_func(*args, **kwargs)\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/core/management/commands/migrate.py\", line 349, in handle\r\n post_migrate_state = executor.migrate(\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/db/migrations/executor.py\", line 135, in migrate\r\n state = self._migrate_all_forwards(\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/db/migrations/executor.py\", line 167, in _migrate_all_forwards\r\n state = self.apply_migration(\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/db/migrations/executor.py\", line 252, in apply_migration\r\n state = migration.apply(state, schema_editor)\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/db/migrations/migration.py\", line 130, in apply\r\n operation.database_forwards(\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/db/migrations/operations/special.py\", line 193, in database_forwards\r\n self.code(from_state.apps, schema_editor)\r\n File \"/opt/netbox/netbox/extras/migrations/0083_search.py\", line 13, in reindex\r\n management.call_command('reindex')\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/core/management/__init__.py\", line 198, in call_command\r\n return command.execute(*args, **defaults)\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/core/management/base.py\", line 448, in execute\r\n output = self.handle(*args, **options)\r\n File \"/opt/netbox/netbox/extras/management/commands/reindex.py\", line 68, in handle\r\n i = search_backend.cache(model.objects.iterator(), remove_existing=False)\r\n File \"/opt/netbox/netbox/netbox/search/backends.py\", line 148, in cache\r\n for instance in instances:\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/db/models/query.py\", line 512, in _iterator\r\n yield from iterable\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/db/models/query.py\", line 87, in __iter__\r\n results = compiler.execute_sql(\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/db/models/sql/compiler.py\", line 1398, in execute_sql\r\n cursor.execute(sql, params)\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/db/backends/utils.py\", line 103, in execute\r\n return super().execute(sql, 
params)\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/db/backends/utils.py\", line 67, in execute\r\n return self._execute_with_wrappers(\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/db/backends/utils.py\", line 80, in _execute_with_wrappers\r\n return executor(sql, params, many, context)\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/db/backends/utils.py\", line 89, in _execute\r\n return self.cursor.execute(sql, params)\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/db/utils.py\", line 91, in __exit__\r\n raise dj_exc_value.with_traceback(traceback) from exc_value\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/db/backends/utils.py\", line 89, in _execute\r\n return self.cursor.execute(sql, params)\r\ndjango.db.utils.ProgrammingError: relation \"netbox_dns_nameserver\" does not exist\r\nLINE 1: ...name\", \"netbox_dns_nameserver\".\"description\" FROM \"netbox_dn...\r\n```\r\n\r\nThe `netbox_dns_nameserver` relation for the `netbox_dns.models.NameServer` model uses `SearchIndex`:\r\n```\r\n@register_search\r\nclass NameServerIndex(SearchIndex):\r\n model = NameServer\r\n fields = (\r\n (\"name\", 100),\r\n (\"description\", 500),\r\n )\r\n```\r\n\r\nThat results in the NetBox migration `extras.0083_search` trying to reindex the plugin's relation, which does not exist yet at that point.\r\n\r\nThe obvious workaround is to disable the plugin, run the migration, re-enable the plugin and then run the migration again. \n", "before_files": [{"content": "from django.contrib.contenttypes.models import ContentType\nfrom django.core.management.base import BaseCommand, CommandError\n\nfrom netbox.registry import registry\nfrom netbox.search.backends import search_backend\n\n\nclass Command(BaseCommand):\n help = 'Reindex objects for search'\n\n def add_arguments(self, parser):\n parser.add_argument(\n 'args',\n metavar='app_label[.ModelName]',\n nargs='*',\n help='One or more apps or models to reindex',\n )\n\n def _get_indexers(self, *model_names):\n indexers = {}\n\n # No models specified; pull in all registered indexers\n if not model_names:\n for idx in registry['search'].values():\n indexers[idx.model] = idx\n\n # Return only indexers for the specified models\n else:\n for label in model_names:\n try:\n app_label, model_name = label.lower().split('.')\n except ValueError:\n raise CommandError(\n f\"Invalid model: {label}. Model names must be in the format <app_label>.<model_name>.\"\n )\n try:\n idx = registry['search'][f'{app_label}.{model_name}']\n indexers[idx.model] = idx\n except KeyError:\n raise CommandError(f\"No indexer registered for {label}\")\n\n return indexers\n\n def handle(self, *model_labels, **kwargs):\n\n # Determine which models to reindex\n indexers = self._get_indexers(*model_labels)\n if not indexers:\n raise CommandError(\"No indexers found!\")\n self.stdout.write(f'Reindexing {len(indexers)} models.')\n\n # Clear all cached values for the specified models\n self.stdout.write('Clearing cached values... ', ending='')\n self.stdout.flush()\n content_types = [\n ContentType.objects.get_for_model(model) for model in indexers.keys()\n ]\n deleted_count = search_backend.clear(content_types)\n self.stdout.write(f'{deleted_count} entries deleted.')\n\n # Index models\n self.stdout.write('Indexing models')\n for model, idx in indexers.items():\n app_label = model._meta.app_label\n model_name = model._meta.model_name\n self.stdout.write(f' {app_label}.{model_name}... 
', ending='')\n self.stdout.flush()\n i = search_backend.cache(model.objects.iterator(), remove_existing=False)\n if i:\n self.stdout.write(f'{i} entries cached.')\n else:\n self.stdout.write(f'None found.')\n\n msg = f'Completed.'\n if total_count := search_backend.size:\n msg += f' Total entries: {total_count}'\n self.stdout.write(msg, self.style.SUCCESS)\n", "path": "netbox/extras/management/commands/reindex.py"}, {"content": "import sys\nimport uuid\n\nimport django.db.models.deletion\nimport django.db.models.lookups\nfrom django.core import management\nfrom django.db import migrations, models\n\n\ndef reindex(apps, schema_editor):\n # Build the search index (except during tests)\n if 'test' not in sys.argv:\n management.call_command('reindex')\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('circuits', '0041_standardize_description_comments'),\n ('contenttypes', '0002_remove_content_type_name'),\n ('dcim', '0166_virtualdevicecontext'),\n ('extras', '0082_savedfilter'),\n ('ipam', '0063_standardize_description_comments'),\n ('tenancy', '0009_standardize_description_comments'),\n ('virtualization', '0034_standardize_description_comments'),\n ('wireless', '0008_wirelesslan_status'),\n ]\n\n operations = [\n migrations.AddField(\n model_name='customfield',\n name='search_weight',\n field=models.PositiveSmallIntegerField(default=1000),\n ),\n migrations.CreateModel(\n name='CachedValue',\n fields=[\n ('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),\n ('timestamp', models.DateTimeField(auto_now_add=True)),\n ('object_id', models.PositiveBigIntegerField()),\n ('field', models.CharField(max_length=200)),\n ('type', models.CharField(max_length=30)),\n ('value', models.TextField()),\n ('weight', models.PositiveSmallIntegerField(default=1000)),\n ('object_type', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to='contenttypes.contenttype')),\n ],\n options={\n 'ordering': ('weight', 'object_type', 'object_id'),\n },\n ),\n migrations.RunPython(\n code=reindex,\n reverse_code=migrations.RunPython.noop\n ),\n ]\n", "path": "netbox/extras/migrations/0083_search.py"}], "after_files": [{"content": "from django.contrib.contenttypes.models import ContentType\nfrom django.core.management.base import BaseCommand, CommandError\n\nfrom netbox.registry import registry\nfrom netbox.search.backends import search_backend\n\n\nclass Command(BaseCommand):\n help = 'Reindex objects for search'\n\n def add_arguments(self, parser):\n parser.add_argument(\n 'args',\n metavar='app_label[.ModelName]',\n nargs='*',\n help='One or more apps or models to reindex',\n )\n\n def _get_indexers(self, *model_names):\n indexers = {}\n\n # No models specified; pull in all registered indexers\n if not model_names:\n for idx in registry['search'].values():\n indexers[idx.model] = idx\n\n # Return only indexers for the specified models\n else:\n for label in model_names:\n labels = label.lower().split('.')\n\n # Label specifies an exact model\n if len(labels) == 2:\n app_label, model_name = labels\n try:\n idx = registry['search'][f'{app_label}.{model_name}']\n indexers[idx.model] = idx\n except KeyError:\n raise CommandError(f\"No indexer registered for {label}\")\n\n # Label specifies all the models of an app\n elif len(labels) == 1:\n app_label = labels[0] + '.'\n for indexer_label, idx in registry['search'].items():\n if indexer_label.startswith(app_label):\n indexers[idx.model] = idx\n\n else:\n raise CommandError(\n f\"Invalid 
model: {label}. Model names must be in the format <app_label> or <app_label>.<model_name>.\"\n )\n\n return indexers\n\n def handle(self, *model_labels, **kwargs):\n\n # Determine which models to reindex\n indexers = self._get_indexers(*model_labels)\n if not indexers:\n raise CommandError(\"No indexers found!\")\n self.stdout.write(f'Reindexing {len(indexers)} models.')\n\n # Clear all cached values for the specified models\n self.stdout.write('Clearing cached values... ', ending='')\n self.stdout.flush()\n content_types = [\n ContentType.objects.get_for_model(model) for model in indexers.keys()\n ]\n deleted_count = search_backend.clear(content_types)\n self.stdout.write(f'{deleted_count} entries deleted.')\n\n # Index models\n self.stdout.write('Indexing models')\n for model, idx in indexers.items():\n app_label = model._meta.app_label\n model_name = model._meta.model_name\n self.stdout.write(f' {app_label}.{model_name}... ', ending='')\n self.stdout.flush()\n i = search_backend.cache(model.objects.iterator(), remove_existing=False)\n if i:\n self.stdout.write(f'{i} entries cached.')\n else:\n self.stdout.write(f'None found.')\n\n msg = f'Completed.'\n if total_count := search_backend.size:\n msg += f' Total entries: {total_count}'\n self.stdout.write(msg, self.style.SUCCESS)\n", "path": "netbox/extras/management/commands/reindex.py"}, {"content": "import sys\nimport uuid\n\nimport django.db.models.deletion\nimport django.db.models.lookups\nfrom django.core import management\nfrom django.db import migrations, models\n\n\ndef reindex(apps, schema_editor):\n # Build the search index (except during tests)\n if 'test' not in sys.argv:\n management.call_command(\n 'reindex',\n 'circuits',\n 'dcim',\n 'extras',\n 'ipam',\n 'tenancy',\n 'virtualization',\n 'wireless',\n )\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('circuits', '0041_standardize_description_comments'),\n ('contenttypes', '0002_remove_content_type_name'),\n ('dcim', '0166_virtualdevicecontext'),\n ('extras', '0082_savedfilter'),\n ('ipam', '0063_standardize_description_comments'),\n ('tenancy', '0009_standardize_description_comments'),\n ('virtualization', '0034_standardize_description_comments'),\n ('wireless', '0008_wirelesslan_status'),\n ]\n\n operations = [\n migrations.AddField(\n model_name='customfield',\n name='search_weight',\n field=models.PositiveSmallIntegerField(default=1000),\n ),\n migrations.CreateModel(\n name='CachedValue',\n fields=[\n ('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),\n ('timestamp', models.DateTimeField(auto_now_add=True)),\n ('object_id', models.PositiveBigIntegerField()),\n ('field', models.CharField(max_length=200)),\n ('type', models.CharField(max_length=30)),\n ('value', models.TextField()),\n ('weight', models.PositiveSmallIntegerField(default=1000)),\n ('object_type', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to='contenttypes.contenttype')),\n ],\n options={\n 'ordering': ('weight', 'object_type', 'object_id'),\n },\n ),\n migrations.RunPython(\n code=reindex,\n reverse_code=migrations.RunPython.noop\n ),\n ]\n", "path": "netbox/extras/migrations/0083_search.py"}]}
| 3,057 | 573 |
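The core of the fix above is the label parsing in `_get_indexers`: a bare `app_label` now selects every registered indexer of that app, while `app_label.model_name` still selects exactly one. A standalone sketch of that dispatch, using a toy dict in place of NetBox's real `registry['search']`:

```python
# Toy stand-in for registry['search']: maps "app_label.model_name" to an indexer object.
TOY_REGISTRY = {
    "dcim.device": "DeviceIndex",
    "dcim.site": "SiteIndex",
    "ipam.prefix": "PrefixIndex",
}


def resolve_indexers(labels):
    indexers = {}
    for label in labels:
        parts = label.lower().split(".")

        if len(parts) == 2:  # exact model, e.g. "dcim.device"
            key = f"{parts[0]}.{parts[1]}"
            if key not in TOY_REGISTRY:
                raise ValueError(f"No indexer registered for {label}")
            indexers[key] = TOY_REGISTRY[key]

        elif len(parts) == 1:  # whole app, e.g. "dcim"
            prefix = parts[0] + "."
            for key, idx in TOY_REGISTRY.items():
                if key.startswith(prefix):
                    indexers[key] = idx

        else:
            raise ValueError(
                f"Invalid model: {label}. Use <app_label> or <app_label>.<model_name>."
            )
    return indexers


print(resolve_indexers(["dcim"]))         # both dcim indexers
print(resolve_indexers(["ipam.prefix"]))  # just the one model
```

This is what lets the `0083_search` migration call `reindex` with core app labels only, so plugin tables that do not exist yet are never touched.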
gh_patches_debug_34492
|
rasdani/github-patches
|
git_diff
|
WeblateOrg__weblate-9861
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Consider using ahocorasick-rs instead of pyahocorasick
### Describe the problem
https://pypi.org/project/ahocorasick-rs/ seems faster alternative to pyahocorasick.
### Describe the solution you'd like
It would be useful to benchmark it in Weblate use-case and switch to it in case it outperforms pyahocorasick.
### Describe alternatives you've considered
_No response_
### Screenshots
_No response_
### Additional context
> That being said, I've seen ahocorasick_rs run 1.5× to 7× as fast as pyahocorasick, depending on the options used.
--- END ISSUE ---
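A rough way to benchmark the two libraries for this use-case is sketched below. It assumes pyahocorasick's `Automaton`/`add_word`/`make_automaton`/`iter` API (the one used in the listing that follows) and the `AhoCorasick(patterns)` / `find_matches_as_indexes()` API described on the ahocorasick-rs PyPI page; the term list and haystack are placeholders, not real Weblate glossary data.

```python
import timeit

import ahocorasick       # pyahocorasick
import ahocorasick_rs    # ahocorasick-rs

terms = [f"term{i}" for i in range(5000)]
haystack = " ".join(terms[::7]) * 20

# pyahocorasick: build an Automaton, then iterate matches over the haystack.
automaton = ahocorasick.Automaton()
for term in terms:
    automaton.add_word(term, term)
automaton.make_automaton()

# ahocorasick-rs: the constructor takes the pattern list directly.
rust_ac = ahocorasick_rs.AhoCorasick(terms)

print("pyahocorasick :", timeit.timeit(lambda: list(automaton.iter(haystack)), number=50))
print("ahocorasick-rs:", timeit.timeit(lambda: rust_ac.find_matches_as_indexes(haystack), number=50))
```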
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `weblate/glossary/models.py`
Content:
```
1 # Copyright © Michal Čihař <[email protected]>
2 #
3 # SPDX-License-Identifier: GPL-3.0-or-later
4
5 import re
6 from itertools import chain
7
8 import ahocorasick
9 import sentry_sdk
10 from django.db.models import Q
11 from django.db.models.functions import Lower
12
13 from weblate.trans.models.unit import Unit
14 from weblate.trans.util import PLURAL_SEPARATOR
15 from weblate.utils.db import re_escape, using_postgresql
16 from weblate.utils.state import STATE_TRANSLATED
17
18 SPLIT_RE = re.compile(r"[\s,.:!?]+", re.UNICODE)
19 NON_WORD_RE = re.compile(r"\W", re.UNICODE)
20
21
22 def get_glossary_sources(component):
23 # Fetch list of terms defined in a translation
24 return list(
25 set(
26 component.source_translation.unit_set.filter(
27 state__gte=STATE_TRANSLATED
28 ).values_list(Lower("source"), flat=True)
29 )
30 )
31
32
33 def get_glossary_automaton(project):
34 with sentry_sdk.start_span(op="glossary.automaton", description=project.slug):
35 # Chain terms
36 terms = set(
37 chain.from_iterable(
38 glossary.glossary_sources for glossary in project.glossaries
39 )
40 )
41 # Build automaton for efficient Aho-Corasick search
42 automaton = ahocorasick.Automaton()
43 for term in terms:
44 automaton.add_word(term, term)
45 automaton.make_automaton()
46 return automaton
47
48
49 def get_glossary_terms(unit):
50 """Return list of term pairs for an unit."""
51 if unit.glossary_terms is not None:
52 return unit.glossary_terms
53 translation = unit.translation
54 language = translation.language
55 component = translation.component
56 project = component.project
57 source_language = component.source_language
58
59 units = (
60 Unit.objects.prefetch()
61 .select_related("source_unit")
62 .order_by("translation__component__priority", Lower("source"))
63 )
64 if language == source_language:
65 return units.none()
66
67 # Build complete source for matching
68 parts = []
69 for text in unit.get_source_plurals():
70 text = text.lower().strip()
71 if text:
72 parts.append(text)
73 source = PLURAL_SEPARATOR.join(parts)
74
75 uses_ngram = source_language.uses_ngram()
76
77 terms = set()
78 automaton = project.glossary_automaton
79 if automaton.kind == ahocorasick.AHOCORASICK:
80 # Extract terms present in the source
81 with sentry_sdk.start_span(op="glossary.match", description=project.slug):
82 for end, term in automaton.iter(source):
83 if uses_ngram or (
84 (end + 1 == len(term) or NON_WORD_RE.match(source[end - len(term)]))
85 and (end + 1 == len(source) or NON_WORD_RE.match(source[end + 1]))
86 ):
87 terms.add(term)
88
89 if using_postgresql():
90 match = r"^({})$".format("|".join(re_escape(term) for term in terms))
91 # Use regex as that is utilizing pg_trgm index
92 query = Q(source__iregex=match) | Q(variant__unit__source__iregex=match)
93 else:
94 # With MySQL we utilize it does case insensitive lookup
95 query = Q(source__in=terms) | Q(variant__unit__source__in=terms)
96
97 units = units.filter(
98 query,
99 translation__component__in=project.glossaries,
100 translation__component__source_language=source_language,
101 translation__language=language,
102 ).distinct()
103
104 # Store in a unit cache
105 unit.glossary_terms = units
106
107 return units
108
```
Path: `weblate/utils/requirements.py`
Content:
```
1 # Copyright © Michal Čihař <[email protected]>
2 #
3 # SPDX-License-Identifier: GPL-3.0-or-later
4
5 import sys
6 from importlib.metadata import PackageNotFoundError, metadata
7
8 from django.conf import settings
9 from django.core.cache import cache
10 from django.core.exceptions import ImproperlyConfigured
11 from django.db import connection
12
13 import weblate.utils.version
14 from weblate.utils.db import using_postgresql
15 from weblate.utils.errors import report_error
16 from weblate.vcs.git import GitRepository, GitWithGerritRepository, SubversionRepository
17 from weblate.vcs.mercurial import HgRepository
18
19 REQUIRES = [
20 "Django",
21 "siphashc",
22 "translate-toolkit",
23 "lxml",
24 "Pillow",
25 "nh3",
26 "python-dateutil",
27 "social-auth-core",
28 "social-auth-app-django",
29 "django-crispy-forms",
30 "oauthlib",
31 "django-compressor",
32 "djangorestframework",
33 "django-filter",
34 "django-appconf",
35 "user-agents",
36 "filelock",
37 "rapidfuzz",
38 "openpyxl",
39 "celery",
40 "django-celery-beat",
41 "kombu",
42 "translation-finder",
43 "weblate-language-data",
44 "html2text",
45 "pycairo",
46 "pygobject",
47 "diff-match-patch",
48 "requests",
49 "django-redis",
50 "hiredis",
51 "sentry_sdk",
52 "Cython",
53 "misaka",
54 "GitPython",
55 "borgbackup",
56 "pyparsing",
57 "pyahocorasick",
58 "python-redis-lock",
59 "charset-normalizer",
60 ]
61
62 OPTIONAL = [
63 "psycopg2",
64 "psycopg2-binary",
65 "phply",
66 "ruamel.yaml",
67 "tesserocr",
68 "akismet",
69 "boto3",
70 "zeep",
71 "aeidon",
72 "iniparse",
73 "mysqlclient",
74 ]
75
76
77 def get_version_module(name, optional=False):
78 """
79 Return module object.
80
81 On error raises verbose exception with name and URL.
82 """
83 try:
84 package = metadata(name)
85 except PackageNotFoundError as exc:
86 if optional:
87 return None
88 raise ImproperlyConfigured(
89 f"Missing dependency {name}, please install using: pip install {name}"
90 ) from exc
91 url = package.get("Home-page")
92 if url is None:
93 for project_url in package.get_all("Project-URL"):
94 name, current_url = project_url.split(",", 1)
95 if name.lower().strip() == "homepage":
96 url = current_url.strip()
97 break
98 if url is None:
99 url = f"https://pypi.org/project/{name}/"
100 return (
101 package.get("Name"),
102 url,
103 package.get("Version"),
104 )
105
106
107 def get_optional_versions():
108 """Return versions of optional modules."""
109 result = []
110
111 for name in OPTIONAL:
112 module = get_version_module(name, True)
113 if module is not None:
114 result.append(module)
115
116 if HgRepository.is_supported():
117 result.append(
118 ("Mercurial", "https://www.mercurial-scm.org/", HgRepository.get_version())
119 )
120
121 if SubversionRepository.is_supported():
122 result.append(
123 (
124 "git-svn",
125 "https://git-scm.com/docs/git-svn",
126 SubversionRepository.get_version(),
127 )
128 )
129
130 if GitWithGerritRepository.is_supported():
131 result.append(
132 (
133 "git-review",
134 "https://pypi.org/project/git-review/",
135 GitWithGerritRepository.get_version(),
136 )
137 )
138
139 return result
140
141
142 def get_versions():
143 """Return list of used versions."""
144 result = [get_version_module(name) for name in REQUIRES]
145
146 result.append(("Python", "https://www.python.org/", sys.version.split()[0]))
147
148 try:
149 result.append(("Git", "https://git-scm.com/", GitRepository.get_version()))
150 except OSError as exc:
151 raise ImproperlyConfigured("Could not run git, please install it.") from exc
152
153 return result
154
155
156 def get_db_version():
157 if using_postgresql():
158 try:
159 with connection.cursor() as cursor:
160 cursor.execute("SHOW server_version")
161 version = cursor.fetchone()
162 except RuntimeError:
163 report_error(cause="PostgreSQL version check")
164 return None
165
166 return (
167 "PostgreSQL server",
168 "https://www.postgresql.org/",
169 version[0].split(" ")[0],
170 )
171 try:
172 with connection.cursor() as cursor:
173 version = cursor.connection.get_server_info()
174 except RuntimeError:
175 report_error(cause="MySQL version check")
176 return None
177 return (
178 f"{connection.display_name} sever",
179 "https://mariadb.org/"
180 if connection.mysql_is_mariadb
181 else "https://www.mysql.com/",
182 version.split("-", 1)[0],
183 )
184
185
186 def get_cache_version():
187 if settings.CACHES["default"]["BACKEND"] == "django_redis.cache.RedisCache":
188 try:
189 version = cache.client.get_client().info()["redis_version"]
190 except RuntimeError:
191 report_error(cause="Redis version check")
192 return None
193
194 return ("Redis server", "https://redis.io/", version)
195
196 return None
197
198
199 def get_db_cache_version():
200 """Returns the list of all the Database and Cache version."""
201 result = []
202 cache_version = get_cache_version()
203 if cache_version:
204 result.append(cache_version)
205 db_version = get_db_version()
206 if db_version:
207 result.append(db_version)
208 return result
209
210
211 def get_versions_list():
212 """Return list with version information summary."""
213 return [
214 ("Weblate", "https://weblate.org/", weblate.utils.version.GIT_VERSION),
215 *get_versions(),
216 *get_optional_versions(),
217 *get_db_cache_version(),
218 ]
219
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/weblate/glossary/models.py b/weblate/glossary/models.py
--- a/weblate/glossary/models.py
+++ b/weblate/glossary/models.py
@@ -5,7 +5,7 @@
import re
from itertools import chain
-import ahocorasick
+import ahocorasick_rs
import sentry_sdk
from django.db.models import Q
from django.db.models.functions import Lower
@@ -39,11 +39,11 @@
)
)
# Build automaton for efficient Aho-Corasick search
- automaton = ahocorasick.Automaton()
- for term in terms:
- automaton.add_word(term, term)
- automaton.make_automaton()
- return automaton
+ return ahocorasick_rs.AhoCorasick(
+ terms,
+ implementation=ahocorasick_rs.Implementation.ContiguousNFA,
+ store_patterns=False,
+ )
def get_glossary_terms(unit):
@@ -76,15 +76,16 @@
terms = set()
automaton = project.glossary_automaton
- if automaton.kind == ahocorasick.AHOCORASICK:
- # Extract terms present in the source
- with sentry_sdk.start_span(op="glossary.match", description=project.slug):
- for end, term in automaton.iter(source):
- if uses_ngram or (
- (end + 1 == len(term) or NON_WORD_RE.match(source[end - len(term)]))
- and (end + 1 == len(source) or NON_WORD_RE.match(source[end + 1]))
- ):
- terms.add(term)
+ # Extract terms present in the source
+ with sentry_sdk.start_span(op="glossary.match", description=project.slug):
+ for _termno, start, end in automaton.find_matches_as_indexes(
+ source, overlapping=True
+ ):
+ if uses_ngram or (
+ (start == 0 or NON_WORD_RE.match(source[start - 1]))
+ and (end >= len(source) or NON_WORD_RE.match(source[end]))
+ ):
+ terms.add(source[start:end])
if using_postgresql():
match = r"^({})$".format("|".join(re_escape(term) for term in terms))
diff --git a/weblate/utils/requirements.py b/weblate/utils/requirements.py
--- a/weblate/utils/requirements.py
+++ b/weblate/utils/requirements.py
@@ -54,7 +54,7 @@
"GitPython",
"borgbackup",
"pyparsing",
- "pyahocorasick",
+ "ahocorasick_rs",
"python-redis-lock",
"charset-normalizer",
]
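
For reference, a minimal standalone sketch of the `ahocorasick_rs` API that the diff above switches to. The term list and haystack here are invented examples, not Weblate data; only the constructor options and the `find_matches_as_indexes` call mirror the patch.

```python
import ahocorasick_rs

# Build the automaton the same way the patched get_glossary_automaton() does.
terms = ["glossary", "term"]
automaton = ahocorasick_rs.AhoCorasick(
    terms,
    implementation=ahocorasick_rs.Implementation.ContiguousNFA,
    store_patterns=False,
)

source = "every glossary term is located by index"
# find_matches_as_indexes() yields (pattern_index, start, end) tuples; with
# store_patterns=False the matched text is recovered by slicing the haystack.
for _termno, start, end in automaton.find_matches_as_indexes(source, overlapping=True):
    print(source[start:end])
```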
|
{"golden_diff": "diff --git a/weblate/glossary/models.py b/weblate/glossary/models.py\n--- a/weblate/glossary/models.py\n+++ b/weblate/glossary/models.py\n@@ -5,7 +5,7 @@\n import re\n from itertools import chain\n \n-import ahocorasick\n+import ahocorasick_rs\n import sentry_sdk\n from django.db.models import Q\n from django.db.models.functions import Lower\n@@ -39,11 +39,11 @@\n )\n )\n # Build automaton for efficient Aho-Corasick search\n- automaton = ahocorasick.Automaton()\n- for term in terms:\n- automaton.add_word(term, term)\n- automaton.make_automaton()\n- return automaton\n+ return ahocorasick_rs.AhoCorasick(\n+ terms,\n+ implementation=ahocorasick_rs.Implementation.ContiguousNFA,\n+ store_patterns=False,\n+ )\n \n \n def get_glossary_terms(unit):\n@@ -76,15 +76,16 @@\n \n terms = set()\n automaton = project.glossary_automaton\n- if automaton.kind == ahocorasick.AHOCORASICK:\n- # Extract terms present in the source\n- with sentry_sdk.start_span(op=\"glossary.match\", description=project.slug):\n- for end, term in automaton.iter(source):\n- if uses_ngram or (\n- (end + 1 == len(term) or NON_WORD_RE.match(source[end - len(term)]))\n- and (end + 1 == len(source) or NON_WORD_RE.match(source[end + 1]))\n- ):\n- terms.add(term)\n+ # Extract terms present in the source\n+ with sentry_sdk.start_span(op=\"glossary.match\", description=project.slug):\n+ for _termno, start, end in automaton.find_matches_as_indexes(\n+ source, overlapping=True\n+ ):\n+ if uses_ngram or (\n+ (start == 0 or NON_WORD_RE.match(source[start - 1]))\n+ and (end >= len(source) or NON_WORD_RE.match(source[end]))\n+ ):\n+ terms.add(source[start:end])\n \n if using_postgresql():\n match = r\"^({})$\".format(\"|\".join(re_escape(term) for term in terms))\ndiff --git a/weblate/utils/requirements.py b/weblate/utils/requirements.py\n--- a/weblate/utils/requirements.py\n+++ b/weblate/utils/requirements.py\n@@ -54,7 +54,7 @@\n \"GitPython\",\n \"borgbackup\",\n \"pyparsing\",\n- \"pyahocorasick\",\n+ \"ahocorasick_rs\",\n \"python-redis-lock\",\n \"charset-normalizer\",\n ]\n", "issue": "Consider using ahocorasick-rs instead of pyahocorasick\n### Describe the problem\n\nhttps://pypi.org/project/ahocorasick-rs/ seems faster alternative to pyahocorasick.\n\n### Describe the solution you'd like\n\nIt would be useful to benchmark it in Weblate use-case and switch to it in case it outperforms pyahocorasick.\n\n### Describe alternatives you've considered\n\n_No response_\n\n### Screenshots\n\n_No response_\n\n### Additional context\n\n> That being said, I've seen ahocorasick_rs run 1.5\u00d7 to 7\u00d7 as fast as pyahocorasick, depending on the options used.\n", "before_files": [{"content": "# Copyright \u00a9 Michal \u010ciha\u0159 <[email protected]>\n#\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nimport re\nfrom itertools import chain\n\nimport ahocorasick\nimport sentry_sdk\nfrom django.db.models import Q\nfrom django.db.models.functions import Lower\n\nfrom weblate.trans.models.unit import Unit\nfrom weblate.trans.util import PLURAL_SEPARATOR\nfrom weblate.utils.db import re_escape, using_postgresql\nfrom weblate.utils.state import STATE_TRANSLATED\n\nSPLIT_RE = re.compile(r\"[\\s,.:!?]+\", re.UNICODE)\nNON_WORD_RE = re.compile(r\"\\W\", re.UNICODE)\n\n\ndef get_glossary_sources(component):\n # Fetch list of terms defined in a translation\n return list(\n set(\n component.source_translation.unit_set.filter(\n state__gte=STATE_TRANSLATED\n ).values_list(Lower(\"source\"), flat=True)\n )\n )\n\n\ndef 
get_glossary_automaton(project):\n with sentry_sdk.start_span(op=\"glossary.automaton\", description=project.slug):\n # Chain terms\n terms = set(\n chain.from_iterable(\n glossary.glossary_sources for glossary in project.glossaries\n )\n )\n # Build automaton for efficient Aho-Corasick search\n automaton = ahocorasick.Automaton()\n for term in terms:\n automaton.add_word(term, term)\n automaton.make_automaton()\n return automaton\n\n\ndef get_glossary_terms(unit):\n \"\"\"Return list of term pairs for an unit.\"\"\"\n if unit.glossary_terms is not None:\n return unit.glossary_terms\n translation = unit.translation\n language = translation.language\n component = translation.component\n project = component.project\n source_language = component.source_language\n\n units = (\n Unit.objects.prefetch()\n .select_related(\"source_unit\")\n .order_by(\"translation__component__priority\", Lower(\"source\"))\n )\n if language == source_language:\n return units.none()\n\n # Build complete source for matching\n parts = []\n for text in unit.get_source_plurals():\n text = text.lower().strip()\n if text:\n parts.append(text)\n source = PLURAL_SEPARATOR.join(parts)\n\n uses_ngram = source_language.uses_ngram()\n\n terms = set()\n automaton = project.glossary_automaton\n if automaton.kind == ahocorasick.AHOCORASICK:\n # Extract terms present in the source\n with sentry_sdk.start_span(op=\"glossary.match\", description=project.slug):\n for end, term in automaton.iter(source):\n if uses_ngram or (\n (end + 1 == len(term) or NON_WORD_RE.match(source[end - len(term)]))\n and (end + 1 == len(source) or NON_WORD_RE.match(source[end + 1]))\n ):\n terms.add(term)\n\n if using_postgresql():\n match = r\"^({})$\".format(\"|\".join(re_escape(term) for term in terms))\n # Use regex as that is utilizing pg_trgm index\n query = Q(source__iregex=match) | Q(variant__unit__source__iregex=match)\n else:\n # With MySQL we utilize it does case insensitive lookup\n query = Q(source__in=terms) | Q(variant__unit__source__in=terms)\n\n units = units.filter(\n query,\n translation__component__in=project.glossaries,\n translation__component__source_language=source_language,\n translation__language=language,\n ).distinct()\n\n # Store in a unit cache\n unit.glossary_terms = units\n\n return units\n", "path": "weblate/glossary/models.py"}, {"content": "# Copyright \u00a9 Michal \u010ciha\u0159 <[email protected]>\n#\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nimport sys\nfrom importlib.metadata import PackageNotFoundError, metadata\n\nfrom django.conf import settings\nfrom django.core.cache import cache\nfrom django.core.exceptions import ImproperlyConfigured\nfrom django.db import connection\n\nimport weblate.utils.version\nfrom weblate.utils.db import using_postgresql\nfrom weblate.utils.errors import report_error\nfrom weblate.vcs.git import GitRepository, GitWithGerritRepository, SubversionRepository\nfrom weblate.vcs.mercurial import HgRepository\n\nREQUIRES = [\n \"Django\",\n \"siphashc\",\n \"translate-toolkit\",\n \"lxml\",\n \"Pillow\",\n \"nh3\",\n \"python-dateutil\",\n \"social-auth-core\",\n \"social-auth-app-django\",\n \"django-crispy-forms\",\n \"oauthlib\",\n \"django-compressor\",\n \"djangorestframework\",\n \"django-filter\",\n \"django-appconf\",\n \"user-agents\",\n \"filelock\",\n \"rapidfuzz\",\n \"openpyxl\",\n \"celery\",\n \"django-celery-beat\",\n \"kombu\",\n \"translation-finder\",\n \"weblate-language-data\",\n \"html2text\",\n \"pycairo\",\n \"pygobject\",\n \"diff-match-patch\",\n 
\"requests\",\n \"django-redis\",\n \"hiredis\",\n \"sentry_sdk\",\n \"Cython\",\n \"misaka\",\n \"GitPython\",\n \"borgbackup\",\n \"pyparsing\",\n \"pyahocorasick\",\n \"python-redis-lock\",\n \"charset-normalizer\",\n]\n\nOPTIONAL = [\n \"psycopg2\",\n \"psycopg2-binary\",\n \"phply\",\n \"ruamel.yaml\",\n \"tesserocr\",\n \"akismet\",\n \"boto3\",\n \"zeep\",\n \"aeidon\",\n \"iniparse\",\n \"mysqlclient\",\n]\n\n\ndef get_version_module(name, optional=False):\n \"\"\"\n Return module object.\n\n On error raises verbose exception with name and URL.\n \"\"\"\n try:\n package = metadata(name)\n except PackageNotFoundError as exc:\n if optional:\n return None\n raise ImproperlyConfigured(\n f\"Missing dependency {name}, please install using: pip install {name}\"\n ) from exc\n url = package.get(\"Home-page\")\n if url is None:\n for project_url in package.get_all(\"Project-URL\"):\n name, current_url = project_url.split(\",\", 1)\n if name.lower().strip() == \"homepage\":\n url = current_url.strip()\n break\n if url is None:\n url = f\"https://pypi.org/project/{name}/\"\n return (\n package.get(\"Name\"),\n url,\n package.get(\"Version\"),\n )\n\n\ndef get_optional_versions():\n \"\"\"Return versions of optional modules.\"\"\"\n result = []\n\n for name in OPTIONAL:\n module = get_version_module(name, True)\n if module is not None:\n result.append(module)\n\n if HgRepository.is_supported():\n result.append(\n (\"Mercurial\", \"https://www.mercurial-scm.org/\", HgRepository.get_version())\n )\n\n if SubversionRepository.is_supported():\n result.append(\n (\n \"git-svn\",\n \"https://git-scm.com/docs/git-svn\",\n SubversionRepository.get_version(),\n )\n )\n\n if GitWithGerritRepository.is_supported():\n result.append(\n (\n \"git-review\",\n \"https://pypi.org/project/git-review/\",\n GitWithGerritRepository.get_version(),\n )\n )\n\n return result\n\n\ndef get_versions():\n \"\"\"Return list of used versions.\"\"\"\n result = [get_version_module(name) for name in REQUIRES]\n\n result.append((\"Python\", \"https://www.python.org/\", sys.version.split()[0]))\n\n try:\n result.append((\"Git\", \"https://git-scm.com/\", GitRepository.get_version()))\n except OSError as exc:\n raise ImproperlyConfigured(\"Could not run git, please install it.\") from exc\n\n return result\n\n\ndef get_db_version():\n if using_postgresql():\n try:\n with connection.cursor() as cursor:\n cursor.execute(\"SHOW server_version\")\n version = cursor.fetchone()\n except RuntimeError:\n report_error(cause=\"PostgreSQL version check\")\n return None\n\n return (\n \"PostgreSQL server\",\n \"https://www.postgresql.org/\",\n version[0].split(\" \")[0],\n )\n try:\n with connection.cursor() as cursor:\n version = cursor.connection.get_server_info()\n except RuntimeError:\n report_error(cause=\"MySQL version check\")\n return None\n return (\n f\"{connection.display_name} sever\",\n \"https://mariadb.org/\"\n if connection.mysql_is_mariadb\n else \"https://www.mysql.com/\",\n version.split(\"-\", 1)[0],\n )\n\n\ndef get_cache_version():\n if settings.CACHES[\"default\"][\"BACKEND\"] == \"django_redis.cache.RedisCache\":\n try:\n version = cache.client.get_client().info()[\"redis_version\"]\n except RuntimeError:\n report_error(cause=\"Redis version check\")\n return None\n\n return (\"Redis server\", \"https://redis.io/\", version)\n\n return None\n\n\ndef get_db_cache_version():\n \"\"\"Returns the list of all the Database and Cache version.\"\"\"\n result = []\n cache_version = get_cache_version()\n if cache_version:\n 
result.append(cache_version)\n db_version = get_db_version()\n if db_version:\n result.append(db_version)\n return result\n\n\ndef get_versions_list():\n \"\"\"Return list with version information summary.\"\"\"\n return [\n (\"Weblate\", \"https://weblate.org/\", weblate.utils.version.GIT_VERSION),\n *get_versions(),\n *get_optional_versions(),\n *get_db_cache_version(),\n ]\n", "path": "weblate/utils/requirements.py"}], "after_files": [{"content": "# Copyright \u00a9 Michal \u010ciha\u0159 <[email protected]>\n#\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nimport re\nfrom itertools import chain\n\nimport ahocorasick_rs\nimport sentry_sdk\nfrom django.db.models import Q\nfrom django.db.models.functions import Lower\n\nfrom weblate.trans.models.unit import Unit\nfrom weblate.trans.util import PLURAL_SEPARATOR\nfrom weblate.utils.db import re_escape, using_postgresql\nfrom weblate.utils.state import STATE_TRANSLATED\n\nSPLIT_RE = re.compile(r\"[\\s,.:!?]+\", re.UNICODE)\nNON_WORD_RE = re.compile(r\"\\W\", re.UNICODE)\n\n\ndef get_glossary_sources(component):\n # Fetch list of terms defined in a translation\n return list(\n set(\n component.source_translation.unit_set.filter(\n state__gte=STATE_TRANSLATED\n ).values_list(Lower(\"source\"), flat=True)\n )\n )\n\n\ndef get_glossary_automaton(project):\n with sentry_sdk.start_span(op=\"glossary.automaton\", description=project.slug):\n # Chain terms\n terms = set(\n chain.from_iterable(\n glossary.glossary_sources for glossary in project.glossaries\n )\n )\n # Build automaton for efficient Aho-Corasick search\n return ahocorasick_rs.AhoCorasick(\n terms,\n implementation=ahocorasick_rs.Implementation.ContiguousNFA,\n store_patterns=False,\n )\n\n\ndef get_glossary_terms(unit):\n \"\"\"Return list of term pairs for an unit.\"\"\"\n if unit.glossary_terms is not None:\n return unit.glossary_terms\n translation = unit.translation\n language = translation.language\n component = translation.component\n project = component.project\n source_language = component.source_language\n\n units = (\n Unit.objects.prefetch()\n .select_related(\"source_unit\")\n .order_by(\"translation__component__priority\", Lower(\"source\"))\n )\n if language == source_language:\n return units.none()\n\n # Build complete source for matching\n parts = []\n for text in unit.get_source_plurals():\n text = text.lower().strip()\n if text:\n parts.append(text)\n source = PLURAL_SEPARATOR.join(parts)\n\n uses_ngram = source_language.uses_ngram()\n\n terms = set()\n automaton = project.glossary_automaton\n # Extract terms present in the source\n with sentry_sdk.start_span(op=\"glossary.match\", description=project.slug):\n for _termno, start, end in automaton.find_matches_as_indexes(\n source, overlapping=True\n ):\n if uses_ngram or (\n (start == 0 or NON_WORD_RE.match(source[start - 1]))\n and (end >= len(source) or NON_WORD_RE.match(source[end]))\n ):\n terms.add(source[start:end])\n\n if using_postgresql():\n match = r\"^({})$\".format(\"|\".join(re_escape(term) for term in terms))\n # Use regex as that is utilizing pg_trgm index\n query = Q(source__iregex=match) | Q(variant__unit__source__iregex=match)\n else:\n # With MySQL we utilize it does case insensitive lookup\n query = Q(source__in=terms) | Q(variant__unit__source__in=terms)\n\n units = units.filter(\n query,\n translation__component__in=project.glossaries,\n translation__component__source_language=source_language,\n translation__language=language,\n ).distinct()\n\n # Store in a unit cache\n unit.glossary_terms = 
units\n\n return units\n", "path": "weblate/glossary/models.py"}, {"content": "# Copyright \u00a9 Michal \u010ciha\u0159 <[email protected]>\n#\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nimport sys\nfrom importlib.metadata import PackageNotFoundError, metadata\n\nfrom django.conf import settings\nfrom django.core.cache import cache\nfrom django.core.exceptions import ImproperlyConfigured\nfrom django.db import connection\n\nimport weblate.utils.version\nfrom weblate.utils.db import using_postgresql\nfrom weblate.utils.errors import report_error\nfrom weblate.vcs.git import GitRepository, GitWithGerritRepository, SubversionRepository\nfrom weblate.vcs.mercurial import HgRepository\n\nREQUIRES = [\n \"Django\",\n \"siphashc\",\n \"translate-toolkit\",\n \"lxml\",\n \"Pillow\",\n \"nh3\",\n \"python-dateutil\",\n \"social-auth-core\",\n \"social-auth-app-django\",\n \"django-crispy-forms\",\n \"oauthlib\",\n \"django-compressor\",\n \"djangorestframework\",\n \"django-filter\",\n \"django-appconf\",\n \"user-agents\",\n \"filelock\",\n \"rapidfuzz\",\n \"openpyxl\",\n \"celery\",\n \"django-celery-beat\",\n \"kombu\",\n \"translation-finder\",\n \"weblate-language-data\",\n \"html2text\",\n \"pycairo\",\n \"pygobject\",\n \"diff-match-patch\",\n \"requests\",\n \"django-redis\",\n \"hiredis\",\n \"sentry_sdk\",\n \"Cython\",\n \"misaka\",\n \"GitPython\",\n \"borgbackup\",\n \"pyparsing\",\n \"ahocorasick_rs\",\n \"python-redis-lock\",\n \"charset-normalizer\",\n]\n\nOPTIONAL = [\n \"psycopg2\",\n \"psycopg2-binary\",\n \"phply\",\n \"ruamel.yaml\",\n \"tesserocr\",\n \"akismet\",\n \"boto3\",\n \"zeep\",\n \"aeidon\",\n \"iniparse\",\n \"mysqlclient\",\n]\n\n\ndef get_version_module(name, optional=False):\n \"\"\"\n Return module object.\n\n On error raises verbose exception with name and URL.\n \"\"\"\n try:\n package = metadata(name)\n except PackageNotFoundError as exc:\n if optional:\n return None\n raise ImproperlyConfigured(\n f\"Missing dependency {name}, please install using: pip install {name}\"\n ) from exc\n url = package.get(\"Home-page\")\n if url is None:\n for project_url in package.get_all(\"Project-URL\"):\n name, current_url = project_url.split(\",\", 1)\n if name.lower().strip() == \"homepage\":\n url = current_url.strip()\n break\n if url is None:\n url = f\"https://pypi.org/project/{name}/\"\n return (\n package.get(\"Name\"),\n url,\n package.get(\"Version\"),\n )\n\n\ndef get_optional_versions():\n \"\"\"Return versions of optional modules.\"\"\"\n result = []\n\n for name in OPTIONAL:\n module = get_version_module(name, True)\n if module is not None:\n result.append(module)\n\n if HgRepository.is_supported():\n result.append(\n (\"Mercurial\", \"https://www.mercurial-scm.org/\", HgRepository.get_version())\n )\n\n if SubversionRepository.is_supported():\n result.append(\n (\n \"git-svn\",\n \"https://git-scm.com/docs/git-svn\",\n SubversionRepository.get_version(),\n )\n )\n\n if GitWithGerritRepository.is_supported():\n result.append(\n (\n \"git-review\",\n \"https://pypi.org/project/git-review/\",\n GitWithGerritRepository.get_version(),\n )\n )\n\n return result\n\n\ndef get_versions():\n \"\"\"Return list of used versions.\"\"\"\n result = [get_version_module(name) for name in REQUIRES]\n\n result.append((\"Python\", \"https://www.python.org/\", sys.version.split()[0]))\n\n try:\n result.append((\"Git\", \"https://git-scm.com/\", GitRepository.get_version()))\n except OSError as exc:\n raise ImproperlyConfigured(\"Could not run git, please install it.\") 
from exc\n\n return result\n\n\ndef get_db_version():\n if using_postgresql():\n try:\n with connection.cursor() as cursor:\n cursor.execute(\"SHOW server_version\")\n version = cursor.fetchone()\n except RuntimeError:\n report_error(cause=\"PostgreSQL version check\")\n return None\n\n return (\n \"PostgreSQL server\",\n \"https://www.postgresql.org/\",\n version[0].split(\" \")[0],\n )\n try:\n with connection.cursor() as cursor:\n version = cursor.connection.get_server_info()\n except RuntimeError:\n report_error(cause=\"MySQL version check\")\n return None\n return (\n f\"{connection.display_name} sever\",\n \"https://mariadb.org/\"\n if connection.mysql_is_mariadb\n else \"https://www.mysql.com/\",\n version.split(\"-\", 1)[0],\n )\n\n\ndef get_cache_version():\n if settings.CACHES[\"default\"][\"BACKEND\"] == \"django_redis.cache.RedisCache\":\n try:\n version = cache.client.get_client().info()[\"redis_version\"]\n except RuntimeError:\n report_error(cause=\"Redis version check\")\n return None\n\n return (\"Redis server\", \"https://redis.io/\", version)\n\n return None\n\n\ndef get_db_cache_version():\n \"\"\"Returns the list of all the Database and Cache version.\"\"\"\n result = []\n cache_version = get_cache_version()\n if cache_version:\n result.append(cache_version)\n db_version = get_db_version()\n if db_version:\n result.append(db_version)\n return result\n\n\ndef get_versions_list():\n \"\"\"Return list with version information summary.\"\"\"\n return [\n (\"Weblate\", \"https://weblate.org/\", weblate.utils.version.GIT_VERSION),\n *get_versions(),\n *get_optional_versions(),\n *get_db_cache_version(),\n ]\n", "path": "weblate/utils/requirements.py"}]}
| 3,332 | 621 |
gh_patches_debug_31552
|
rasdani/github-patches
|
git_diff
|
CTFd__CTFd-1516
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Change Configs detail API GET/PATCH for a more structured response
The API endpoints for GET, PATCH /api/v1/configs/{config_key} return badly structured data. They should return better-structured data.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `CTFd/api/v1/config.py`
Content:
```
1 from typing import List
2
3 from flask import request
4 from flask_restx import Namespace, Resource
5
6 from CTFd.api.v1.helpers.models import build_model_filters
7 from CTFd.api.v1.helpers.request import validate_args
8 from CTFd.api.v1.helpers.schemas import sqlalchemy_to_pydantic
9 from CTFd.api.v1.schemas import APIDetailedSuccessResponse, APIListSuccessResponse
10 from CTFd.cache import clear_config, clear_standings
11 from CTFd.constants import RawEnum
12 from CTFd.models import Configs, db
13 from CTFd.schemas.config import ConfigSchema
14 from CTFd.utils import get_config, set_config
15 from CTFd.utils.decorators import admins_only
16
17 configs_namespace = Namespace("configs", description="Endpoint to retrieve Configs")
18
19 ConfigModel = sqlalchemy_to_pydantic(Configs)
20
21
22 class ConfigDetailedSuccessResponse(APIDetailedSuccessResponse):
23 data: ConfigModel
24
25
26 class ConfigListSuccessResponse(APIListSuccessResponse):
27 data: List[ConfigModel]
28
29
30 configs_namespace.schema_model(
31 "ConfigDetailedSuccessResponse", ConfigDetailedSuccessResponse.apidoc()
32 )
33
34 configs_namespace.schema_model(
35 "ConfigListSuccessResponse", ConfigListSuccessResponse.apidoc()
36 )
37
38
39 @configs_namespace.route("")
40 class ConfigList(Resource):
41 @admins_only
42 @configs_namespace.doc(
43 description="Endpoint to get Config objects in bulk",
44 responses={
45 200: ("Success", "ConfigListSuccessResponse"),
46 400: (
47 "An error occured processing the provided or stored data",
48 "APISimpleErrorResponse",
49 ),
50 },
51 )
52 @validate_args(
53 {
54 "key": (str, None),
55 "value": (str, None),
56 "q": (str, None),
57 "field": (RawEnum("ConfigFields", {"key": "key", "value": "value"}), None),
58 },
59 location="query",
60 )
61 def get(self, query_args):
62 q = query_args.pop("q", None)
63 field = str(query_args.pop("field", None))
64 filters = build_model_filters(model=Configs, query=q, field=field)
65
66 configs = Configs.query.filter_by(**query_args).filter(*filters).all()
67 schema = ConfigSchema(many=True)
68 response = schema.dump(configs)
69 if response.errors:
70 return {"success": False, "errors": response.errors}, 400
71
72 return {"success": True, "data": response.data}
73
74 @admins_only
75 @configs_namespace.doc(
76 description="Endpoint to get create a Config object",
77 responses={
78 200: ("Success", "ConfigDetailedSuccessResponse"),
79 400: (
80 "An error occured processing the provided or stored data",
81 "APISimpleErrorResponse",
82 ),
83 },
84 )
85 def post(self):
86 req = request.get_json()
87 schema = ConfigSchema()
88 response = schema.load(req)
89
90 if response.errors:
91 return {"success": False, "errors": response.errors}, 400
92
93 db.session.add(response.data)
94 db.session.commit()
95
96 response = schema.dump(response.data)
97 db.session.close()
98
99 clear_config()
100 clear_standings()
101
102 return {"success": True, "data": response.data}
103
104 @admins_only
105 @configs_namespace.doc(
106 description="Endpoint to get patch Config objects in bulk",
107 responses={200: ("Success", "APISimpleSuccessResponse")},
108 )
109 def patch(self):
110 req = request.get_json()
111
112 for key, value in req.items():
113 set_config(key=key, value=value)
114
115 clear_config()
116 clear_standings()
117
118 return {"success": True}
119
120
121 @configs_namespace.route("/<config_key>")
122 class Config(Resource):
123 @admins_only
124 # TODO: This returns weirdly structured data. It should more closely match ConfigDetailedSuccessResponse #1506
125 def get(self, config_key):
126
127 return {"success": True, "data": get_config(config_key)}
128
129 @admins_only
130 # TODO: This returns weirdly structured data. It should more closely match ConfigDetailedSuccessResponse #1506
131 def patch(self, config_key):
132 config = Configs.query.filter_by(key=config_key).first()
133 data = request.get_json()
134 if config:
135 schema = ConfigSchema(instance=config, partial=True)
136 response = schema.load(data)
137 else:
138 schema = ConfigSchema()
139 data["key"] = config_key
140 response = schema.load(data)
141 db.session.add(response.data)
142
143 if response.errors:
144 return response.errors, 400
145
146 db.session.commit()
147
148 response = schema.dump(response.data)
149 db.session.close()
150
151 clear_config()
152 clear_standings()
153
154 return {"success": True, "data": response.data}
155
156 @admins_only
157 @configs_namespace.doc(
158 description="Endpoint to delete a Config object",
159 responses={200: ("Success", "APISimpleSuccessResponse")},
160 )
161 def delete(self, config_key):
162 config = Configs.query.filter_by(key=config_key).first_or_404()
163
164 db.session.delete(config)
165 db.session.commit()
166 db.session.close()
167
168 clear_config()
169 clear_standings()
170
171 return {"success": True}
172
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/CTFd/api/v1/config.py b/CTFd/api/v1/config.py
--- a/CTFd/api/v1/config.py
+++ b/CTFd/api/v1/config.py
@@ -11,7 +11,7 @@
from CTFd.constants import RawEnum
from CTFd.models import Configs, db
from CTFd.schemas.config import ConfigSchema
-from CTFd.utils import get_config, set_config
+from CTFd.utils import set_config
from CTFd.utils.decorators import admins_only
configs_namespace = Namespace("configs", description="Endpoint to retrieve Configs")
@@ -121,13 +121,33 @@
@configs_namespace.route("/<config_key>")
class Config(Resource):
@admins_only
- # TODO: This returns weirdly structured data. It should more closely match ConfigDetailedSuccessResponse #1506
+ @configs_namespace.doc(
+ description="Endpoint to get a specific Config object",
+ responses={
+ 200: ("Success", "ConfigDetailedSuccessResponse"),
+ 400: (
+ "An error occured processing the provided or stored data",
+ "APISimpleErrorResponse",
+ ),
+ },
+ )
def get(self, config_key):
-
- return {"success": True, "data": get_config(config_key)}
+ config = Configs.query.filter_by(key=config_key).first_or_404()
+ schema = ConfigSchema()
+ response = schema.dump(config)
+ return {"success": True, "data": response.data}
@admins_only
- # TODO: This returns weirdly structured data. It should more closely match ConfigDetailedSuccessResponse #1506
+ @configs_namespace.doc(
+ description="Endpoint to edit a specific Config object",
+ responses={
+ 200: ("Success", "ConfigDetailedSuccessResponse"),
+ 400: (
+ "An error occured processing the provided or stored data",
+ "APISimpleErrorResponse",
+ ),
+ },
+ )
def patch(self, config_key):
config = Configs.query.filter_by(key=config_key).first()
data = request.get_json()
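
To make the intended change concrete, here is a small sketch of the response shape before and after the patch; the config key and values are invented, and the exact field set is assumed from the `Configs` model. The patched handler serializes the whole row through `ConfigSchema` instead of returning the bare value from `get_config()`.

```python
# Invented example values; only the overall shape is the point.
old_style = {"success": True, "data": "My CTF"}  # bare config value
new_style = {
    "success": True,
    "data": {"id": 1, "key": "ctf_name", "value": "My CTF"},  # serialized Configs row
}
assert isinstance(new_style["data"], dict)
```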
|
{"golden_diff": "diff --git a/CTFd/api/v1/config.py b/CTFd/api/v1/config.py\n--- a/CTFd/api/v1/config.py\n+++ b/CTFd/api/v1/config.py\n@@ -11,7 +11,7 @@\n from CTFd.constants import RawEnum\n from CTFd.models import Configs, db\n from CTFd.schemas.config import ConfigSchema\n-from CTFd.utils import get_config, set_config\n+from CTFd.utils import set_config\n from CTFd.utils.decorators import admins_only\n \n configs_namespace = Namespace(\"configs\", description=\"Endpoint to retrieve Configs\")\n@@ -121,13 +121,33 @@\n @configs_namespace.route(\"/<config_key>\")\n class Config(Resource):\n @admins_only\n- # TODO: This returns weirdly structured data. It should more closely match ConfigDetailedSuccessResponse #1506\n+ @configs_namespace.doc(\n+ description=\"Endpoint to get a specific Config object\",\n+ responses={\n+ 200: (\"Success\", \"ConfigDetailedSuccessResponse\"),\n+ 400: (\n+ \"An error occured processing the provided or stored data\",\n+ \"APISimpleErrorResponse\",\n+ ),\n+ },\n+ )\n def get(self, config_key):\n-\n- return {\"success\": True, \"data\": get_config(config_key)}\n+ config = Configs.query.filter_by(key=config_key).first_or_404()\n+ schema = ConfigSchema()\n+ response = schema.dump(config)\n+ return {\"success\": True, \"data\": response.data}\n \n @admins_only\n- # TODO: This returns weirdly structured data. It should more closely match ConfigDetailedSuccessResponse #1506\n+ @configs_namespace.doc(\n+ description=\"Endpoint to edit a specific Config object\",\n+ responses={\n+ 200: (\"Success\", \"ConfigDetailedSuccessResponse\"),\n+ 400: (\n+ \"An error occured processing the provided or stored data\",\n+ \"APISimpleErrorResponse\",\n+ ),\n+ },\n+ )\n def patch(self, config_key):\n config = Configs.query.filter_by(key=config_key).first()\n data = request.get_json()\n", "issue": "Change Configs detail API GET/PATCH for a more structured response\nThe API endpoints for GET, PATCH /api/v1/configs/{config_key} return badly structured data. This should return better structured data. 
\n", "before_files": [{"content": "from typing import List\n\nfrom flask import request\nfrom flask_restx import Namespace, Resource\n\nfrom CTFd.api.v1.helpers.models import build_model_filters\nfrom CTFd.api.v1.helpers.request import validate_args\nfrom CTFd.api.v1.helpers.schemas import sqlalchemy_to_pydantic\nfrom CTFd.api.v1.schemas import APIDetailedSuccessResponse, APIListSuccessResponse\nfrom CTFd.cache import clear_config, clear_standings\nfrom CTFd.constants import RawEnum\nfrom CTFd.models import Configs, db\nfrom CTFd.schemas.config import ConfigSchema\nfrom CTFd.utils import get_config, set_config\nfrom CTFd.utils.decorators import admins_only\n\nconfigs_namespace = Namespace(\"configs\", description=\"Endpoint to retrieve Configs\")\n\nConfigModel = sqlalchemy_to_pydantic(Configs)\n\n\nclass ConfigDetailedSuccessResponse(APIDetailedSuccessResponse):\n data: ConfigModel\n\n\nclass ConfigListSuccessResponse(APIListSuccessResponse):\n data: List[ConfigModel]\n\n\nconfigs_namespace.schema_model(\n \"ConfigDetailedSuccessResponse\", ConfigDetailedSuccessResponse.apidoc()\n)\n\nconfigs_namespace.schema_model(\n \"ConfigListSuccessResponse\", ConfigListSuccessResponse.apidoc()\n)\n\n\n@configs_namespace.route(\"\")\nclass ConfigList(Resource):\n @admins_only\n @configs_namespace.doc(\n description=\"Endpoint to get Config objects in bulk\",\n responses={\n 200: (\"Success\", \"ConfigListSuccessResponse\"),\n 400: (\n \"An error occured processing the provided or stored data\",\n \"APISimpleErrorResponse\",\n ),\n },\n )\n @validate_args(\n {\n \"key\": (str, None),\n \"value\": (str, None),\n \"q\": (str, None),\n \"field\": (RawEnum(\"ConfigFields\", {\"key\": \"key\", \"value\": \"value\"}), None),\n },\n location=\"query\",\n )\n def get(self, query_args):\n q = query_args.pop(\"q\", None)\n field = str(query_args.pop(\"field\", None))\n filters = build_model_filters(model=Configs, query=q, field=field)\n\n configs = Configs.query.filter_by(**query_args).filter(*filters).all()\n schema = ConfigSchema(many=True)\n response = schema.dump(configs)\n if response.errors:\n return {\"success\": False, \"errors\": response.errors}, 400\n\n return {\"success\": True, \"data\": response.data}\n\n @admins_only\n @configs_namespace.doc(\n description=\"Endpoint to get create a Config object\",\n responses={\n 200: (\"Success\", \"ConfigDetailedSuccessResponse\"),\n 400: (\n \"An error occured processing the provided or stored data\",\n \"APISimpleErrorResponse\",\n ),\n },\n )\n def post(self):\n req = request.get_json()\n schema = ConfigSchema()\n response = schema.load(req)\n\n if response.errors:\n return {\"success\": False, \"errors\": response.errors}, 400\n\n db.session.add(response.data)\n db.session.commit()\n\n response = schema.dump(response.data)\n db.session.close()\n\n clear_config()\n clear_standings()\n\n return {\"success\": True, \"data\": response.data}\n\n @admins_only\n @configs_namespace.doc(\n description=\"Endpoint to get patch Config objects in bulk\",\n responses={200: (\"Success\", \"APISimpleSuccessResponse\")},\n )\n def patch(self):\n req = request.get_json()\n\n for key, value in req.items():\n set_config(key=key, value=value)\n\n clear_config()\n clear_standings()\n\n return {\"success\": True}\n\n\n@configs_namespace.route(\"/<config_key>\")\nclass Config(Resource):\n @admins_only\n # TODO: This returns weirdly structured data. 
It should more closely match ConfigDetailedSuccessResponse #1506\n def get(self, config_key):\n\n return {\"success\": True, \"data\": get_config(config_key)}\n\n @admins_only\n # TODO: This returns weirdly structured data. It should more closely match ConfigDetailedSuccessResponse #1506\n def patch(self, config_key):\n config = Configs.query.filter_by(key=config_key).first()\n data = request.get_json()\n if config:\n schema = ConfigSchema(instance=config, partial=True)\n response = schema.load(data)\n else:\n schema = ConfigSchema()\n data[\"key\"] = config_key\n response = schema.load(data)\n db.session.add(response.data)\n\n if response.errors:\n return response.errors, 400\n\n db.session.commit()\n\n response = schema.dump(response.data)\n db.session.close()\n\n clear_config()\n clear_standings()\n\n return {\"success\": True, \"data\": response.data}\n\n @admins_only\n @configs_namespace.doc(\n description=\"Endpoint to delete a Config object\",\n responses={200: (\"Success\", \"APISimpleSuccessResponse\")},\n )\n def delete(self, config_key):\n config = Configs.query.filter_by(key=config_key).first_or_404()\n\n db.session.delete(config)\n db.session.commit()\n db.session.close()\n\n clear_config()\n clear_standings()\n\n return {\"success\": True}\n", "path": "CTFd/api/v1/config.py"}], "after_files": [{"content": "from typing import List\n\nfrom flask import request\nfrom flask_restx import Namespace, Resource\n\nfrom CTFd.api.v1.helpers.models import build_model_filters\nfrom CTFd.api.v1.helpers.request import validate_args\nfrom CTFd.api.v1.helpers.schemas import sqlalchemy_to_pydantic\nfrom CTFd.api.v1.schemas import APIDetailedSuccessResponse, APIListSuccessResponse\nfrom CTFd.cache import clear_config, clear_standings\nfrom CTFd.constants import RawEnum\nfrom CTFd.models import Configs, db\nfrom CTFd.schemas.config import ConfigSchema\nfrom CTFd.utils import set_config\nfrom CTFd.utils.decorators import admins_only\n\nconfigs_namespace = Namespace(\"configs\", description=\"Endpoint to retrieve Configs\")\n\nConfigModel = sqlalchemy_to_pydantic(Configs)\n\n\nclass ConfigDetailedSuccessResponse(APIDetailedSuccessResponse):\n data: ConfigModel\n\n\nclass ConfigListSuccessResponse(APIListSuccessResponse):\n data: List[ConfigModel]\n\n\nconfigs_namespace.schema_model(\n \"ConfigDetailedSuccessResponse\", ConfigDetailedSuccessResponse.apidoc()\n)\n\nconfigs_namespace.schema_model(\n \"ConfigListSuccessResponse\", ConfigListSuccessResponse.apidoc()\n)\n\n\n@configs_namespace.route(\"\")\nclass ConfigList(Resource):\n @admins_only\n @configs_namespace.doc(\n description=\"Endpoint to get Config objects in bulk\",\n responses={\n 200: (\"Success\", \"ConfigListSuccessResponse\"),\n 400: (\n \"An error occured processing the provided or stored data\",\n \"APISimpleErrorResponse\",\n ),\n },\n )\n @validate_args(\n {\n \"key\": (str, None),\n \"value\": (str, None),\n \"q\": (str, None),\n \"field\": (RawEnum(\"ConfigFields\", {\"key\": \"key\", \"value\": \"value\"}), None),\n },\n location=\"query\",\n )\n def get(self, query_args):\n q = query_args.pop(\"q\", None)\n field = str(query_args.pop(\"field\", None))\n filters = build_model_filters(model=Configs, query=q, field=field)\n\n configs = Configs.query.filter_by(**query_args).filter(*filters).all()\n schema = ConfigSchema(many=True)\n response = schema.dump(configs)\n if response.errors:\n return {\"success\": False, \"errors\": response.errors}, 400\n\n return {\"success\": True, \"data\": response.data}\n\n @admins_only\n 
@configs_namespace.doc(\n description=\"Endpoint to get create a Config object\",\n responses={\n 200: (\"Success\", \"ConfigDetailedSuccessResponse\"),\n 400: (\n \"An error occured processing the provided or stored data\",\n \"APISimpleErrorResponse\",\n ),\n },\n )\n def post(self):\n req = request.get_json()\n schema = ConfigSchema()\n response = schema.load(req)\n\n if response.errors:\n return {\"success\": False, \"errors\": response.errors}, 400\n\n db.session.add(response.data)\n db.session.commit()\n\n response = schema.dump(response.data)\n db.session.close()\n\n clear_config()\n clear_standings()\n\n return {\"success\": True, \"data\": response.data}\n\n @admins_only\n @configs_namespace.doc(\n description=\"Endpoint to get patch Config objects in bulk\",\n responses={200: (\"Success\", \"APISimpleSuccessResponse\")},\n )\n def patch(self):\n req = request.get_json()\n\n for key, value in req.items():\n set_config(key=key, value=value)\n\n clear_config()\n clear_standings()\n\n return {\"success\": True}\n\n\n@configs_namespace.route(\"/<config_key>\")\nclass Config(Resource):\n @admins_only\n @configs_namespace.doc(\n description=\"Endpoint to get a specific Config object\",\n responses={\n 200: (\"Success\", \"ConfigDetailedSuccessResponse\"),\n 400: (\n \"An error occured processing the provided or stored data\",\n \"APISimpleErrorResponse\",\n ),\n },\n )\n def get(self, config_key):\n config = Configs.query.filter_by(key=config_key).first_or_404()\n schema = ConfigSchema()\n response = schema.dump(config)\n return {\"success\": True, \"data\": response.data}\n\n @admins_only\n @configs_namespace.doc(\n description=\"Endpoint to edit a specific Config object\",\n responses={\n 200: (\"Success\", \"ConfigDetailedSuccessResponse\"),\n 400: (\n \"An error occured processing the provided or stored data\",\n \"APISimpleErrorResponse\",\n ),\n },\n )\n def patch(self, config_key):\n config = Configs.query.filter_by(key=config_key).first()\n data = request.get_json()\n if config:\n schema = ConfigSchema(instance=config, partial=True)\n response = schema.load(data)\n else:\n schema = ConfigSchema()\n data[\"key\"] = config_key\n response = schema.load(data)\n db.session.add(response.data)\n\n if response.errors:\n return response.errors, 400\n\n db.session.commit()\n\n response = schema.dump(response.data)\n db.session.close()\n\n clear_config()\n clear_standings()\n\n return {\"success\": True, \"data\": response.data}\n\n @admins_only\n @configs_namespace.doc(\n description=\"Endpoint to delete a Config object\",\n responses={200: (\"Success\", \"APISimpleSuccessResponse\")},\n )\n def delete(self, config_key):\n config = Configs.query.filter_by(key=config_key).first_or_404()\n\n db.session.delete(config)\n db.session.commit()\n db.session.close()\n\n clear_config()\n clear_standings()\n\n return {\"success\": True}\n", "path": "CTFd/api/v1/config.py"}]}
| 1,864 | 485 |
gh_patches_debug_15874
|
rasdani/github-patches
|
git_diff
|
kubeflow__pipelines-4104
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
allow output artifact store configuration (vs hard coded)
It seems like the output artifacts are always stored in a specific minio service, port, namespace, bucket, secrets, etc. (`minio-service.kubeflow:9000`).
see: https://github.com/kubeflow/pipelines/blob/f40a22a3f4a8e06d20cf3e3f425b5058d5c87e0b/sdk/python/kfp/compiler/_op_to_template.py#L148
It would be great to make this flexible, e.g. allow using S3, or changing the namespace or bucket names.
I suggest making it configurable; I can open such a PR if we agree it's needed.
flexible pipeline service (host) path in client SDK
When creating an SDK `Client()`, the path to the `ml-pipeline` API service is loaded from a hard-coded value (`ml-pipeline.kubeflow.svc.cluster.local:8888`) which indicates a specific k8s namespace. It could be valuable to load that default value from an env variable, i.e. changing the line in `_client.py` from:
`config.host = host if host else Client.IN_CLUSTER_DNS_NAME`
to:
`config.host = host or os.environ.get('ML_PIPELINE_DNS_NAME',Client.IN_CLUSTER_DNS_NAME)`
Also note that when a user provides the `host` parameter, the IPython output points to the API server and not to the UI service (see the logic in `_get_url_prefix()`), which seems like a potential bug.
If it's acceptable, I can submit a PR for the line change above.
--- END ISSUE ---
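
As a quick illustration of the env-var fallback proposed above, here is a minimal sketch; the variable name `ML_PIPELINE_DNS_NAME` comes from the issue text, not from the released SDK, and `resolve_host` is a made-up helper for demonstration.

```python
import os

# Hard-coded in-cluster default quoted in the issue text.
IN_CLUSTER_DNS_NAME = "ml-pipeline.kubeflow.svc.cluster.local:8888"

def resolve_host(host=None):
    # Explicit argument wins, then the environment variable, then the default.
    return host or os.environ.get("ML_PIPELINE_DNS_NAME", IN_CLUSTER_DNS_NAME)

print(resolve_host())                     # in-cluster default
print(resolve_host("my-pipelines:8888"))  # caller-supplied host wins
```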
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sdk/python/kfp/dsl/_component_bridge.py`
Content:
```
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import copy
16 from typing import Any, Mapping
17 from ..components.structures import ComponentSpec, ComponentReference
18 from ..components._components import _default_component_name, _resolve_command_line_and_paths
19 from ..components._naming import _sanitize_python_function_name, generate_unique_name_conversion_table
20 from .. import dsl
21
22
23 def _create_container_op_from_component_and_arguments(
24 component_spec: ComponentSpec,
25 arguments: Mapping[str, Any],
26 component_ref: ComponentReference = None,
27 ) -> 'dsl.ContainerOp':
28 # Check types of the reference arguments and serialize PipelineParams
29 arguments = arguments.copy()
30 for input_name, argument_value in arguments.items():
31 if isinstance(argument_value, dsl.PipelineParam):
32 input_type = component_spec._inputs_dict[input_name].type
33 reference_type = argument_value.param_type
34 dsl.types.verify_type_compatibility(reference_type, input_type, 'Incompatible argument passed to the input "{}" of component "{}": '.format(input_name, component_spec.name))
35
36 arguments[input_name] = str(argument_value)
37
38 resolved_cmd = _resolve_command_line_and_paths(
39 component_spec=component_spec,
40 arguments=arguments,
41 )
42
43 container_spec = component_spec.implementation.container
44
45 task = dsl.ContainerOp(
46 name=component_spec.name or _default_component_name,
47 image=container_spec.image,
48 command=resolved_cmd.command,
49 arguments=resolved_cmd.args,
50 file_outputs=resolved_cmd.output_paths,
51 artifact_argument_paths=[
52 dsl.InputArgumentPath(
53 argument=arguments[input_name],
54 input=input_name,
55 path=path,
56 )
57 for input_name, path in resolved_cmd.input_paths.items()
58 ],
59 )
60
61 component_meta = copy.copy(component_spec)
62 task._set_metadata(component_meta)
63 component_ref_without_spec = copy.copy(component_ref)
64 component_ref_without_spec.spec = None
65 task._component_ref = component_ref_without_spec
66
67 # Previously, ContainerOp had strict requirements for the output names, so we had to
68 # convert all the names before passing them to the ContainerOp constructor.
69 # Outputs with non-pythonic names could not be accessed using their original names.
70 # Now ContainerOp supports any output names, so we're now using the original output names.
71 # However to support legacy pipelines, we're also adding output references with pythonic names.
72 # TODO: Add warning when people use the legacy output names.
73 output_names = [output_spec.name for output_spec in component_spec.outputs or []] # Stabilizing the ordering
74 output_name_to_python = generate_unique_name_conversion_table(output_names, _sanitize_python_function_name)
75 for output_name in output_names:
76 pythonic_output_name = output_name_to_python[output_name]
77 # Note: Some component outputs are currently missing from task.outputs (e.g. MLPipeline UI Metadata)
78 if pythonic_output_name not in task.outputs and output_name in task.outputs:
79 task.outputs[pythonic_output_name] = task.outputs[output_name]
80
81 if container_spec.env:
82 from kubernetes import client as k8s_client
83 for name, value in container_spec.env.items():
84 task.container.add_env_variable(k8s_client.V1EnvVar(name=name, value=value))
85
86 if component_spec.metadata:
87 for key, value in (component_spec.metadata.annotations or {}).items():
88 task.add_pod_annotation(key, value)
89 for key, value in (component_spec.metadata.labels or {}).items():
90 task.add_pod_label(key, value)
91
92 return task
93
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sdk/python/kfp/dsl/_component_bridge.py b/sdk/python/kfp/dsl/_component_bridge.py
--- a/sdk/python/kfp/dsl/_component_bridge.py
+++ b/sdk/python/kfp/dsl/_component_bridge.py
@@ -84,9 +84,13 @@
task.container.add_env_variable(k8s_client.V1EnvVar(name=name, value=value))
if component_spec.metadata:
- for key, value in (component_spec.metadata.annotations or {}).items():
+ annotations = component_spec.metadata.annotations or {}
+ for key, value in annotations.items():
task.add_pod_annotation(key, value)
for key, value in (component_spec.metadata.labels or {}).items():
task.add_pod_label(key, value)
+ # Disabling the caching for the volatile components by default
+ if annotations.get('volatile_component', 'false') == 'true':
+ task.execution_options.caching_strategy.max_cache_staleness = 'P0D'
return task
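
For context on the `volatile_component` annotation handled above, a hypothetical component definition that would opt out of caching; the YAML is an invented example assuming the v1 component spec, and only the `metadata.annotations` key is what the patched bridge inspects.

```python
from kfp.components import load_component_from_text

# Invented component spec; a task created from it inside a pipeline would get
# execution_options.caching_strategy.max_cache_staleness = 'P0D' via the bridge above.
volatile_op = load_component_from_text("""
name: Get latest data
metadata:
  annotations:
    volatile_component: "true"
implementation:
  container:
    image: alpine
    command:
    - sh
    - "-c"
    - date
""")
```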
|
{"golden_diff": "diff --git a/sdk/python/kfp/dsl/_component_bridge.py b/sdk/python/kfp/dsl/_component_bridge.py\n--- a/sdk/python/kfp/dsl/_component_bridge.py\n+++ b/sdk/python/kfp/dsl/_component_bridge.py\n@@ -84,9 +84,13 @@\n task.container.add_env_variable(k8s_client.V1EnvVar(name=name, value=value))\n \n if component_spec.metadata:\n- for key, value in (component_spec.metadata.annotations or {}).items():\n+ annotations = component_spec.metadata.annotations or {}\n+ for key, value in annotations.items():\n task.add_pod_annotation(key, value)\n for key, value in (component_spec.metadata.labels or {}).items():\n task.add_pod_label(key, value)\n+ # Disabling the caching for the volatile components by default\n+ if annotations.get('volatile_component', 'false') == 'true':\n+ task.execution_options.caching_strategy.max_cache_staleness = 'P0D'\n \n return task\n", "issue": "allow output artifact store configuration (vs hard coded)\nit seems like the output artifacts are always stored in a specific minio service, port, namespace, bucket, secrets, etc (`minio-service.kubeflow:9000`). \r\n\r\nsee: https://github.com/kubeflow/pipelines/blob/f40a22a3f4a8e06d20cf3e3f425b5058d5c87e0b/sdk/python/kfp/compiler/_op_to_template.py#L148\r\n\r\nit would be great to make it flexible, e.g. allow using S3, or change namespace or bucket names.\r\ni suggest making it configurable, i can do such PR if we agree its needed. \nflexible pipeline service (host) path in client SDK \nwhen creating an SDK `Client()` the path to `ml-pipeline` API service is loaded from a hard coded value (`ml-pipeline.kubeflow.svc.cluster.local:8888`) which indicate a specific k8s namespace. it can be valuable to load that default value from an env variable, i.e. changing the line in `_client.py` from:\r\n\r\n`config.host = host if host else Client.IN_CLUSTER_DNS_NAME`\r\n\r\nto:\r\n\r\n`config.host = host or os.environ.get('ML_PIPELINE_DNS_NAME',Client.IN_CLUSTER_DNS_NAME)`\r\n\r\nalso note that when a user provide the `host` parameter, the ipython output points to the API server and not to the UI service (see the logic in `_get_url_prefix()`), it seems like a potential bug\r\n\r\nif its acceptable i can submit a PR for the line change above\r\n \n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport copy\nfrom typing import Any, Mapping\nfrom ..components.structures import ComponentSpec, ComponentReference\nfrom ..components._components import _default_component_name, _resolve_command_line_and_paths\nfrom ..components._naming import _sanitize_python_function_name, generate_unique_name_conversion_table\nfrom .. 
import dsl\n\n\ndef _create_container_op_from_component_and_arguments(\n component_spec: ComponentSpec,\n arguments: Mapping[str, Any],\n component_ref: ComponentReference = None,\n) -> 'dsl.ContainerOp':\n # Check types of the reference arguments and serialize PipelineParams\n arguments = arguments.copy()\n for input_name, argument_value in arguments.items():\n if isinstance(argument_value, dsl.PipelineParam):\n input_type = component_spec._inputs_dict[input_name].type\n reference_type = argument_value.param_type\n dsl.types.verify_type_compatibility(reference_type, input_type, 'Incompatible argument passed to the input \"{}\" of component \"{}\": '.format(input_name, component_spec.name))\n\n arguments[input_name] = str(argument_value)\n\n resolved_cmd = _resolve_command_line_and_paths(\n component_spec=component_spec,\n arguments=arguments,\n )\n\n container_spec = component_spec.implementation.container\n\n task = dsl.ContainerOp(\n name=component_spec.name or _default_component_name,\n image=container_spec.image,\n command=resolved_cmd.command,\n arguments=resolved_cmd.args,\n file_outputs=resolved_cmd.output_paths,\n artifact_argument_paths=[\n dsl.InputArgumentPath(\n argument=arguments[input_name],\n input=input_name,\n path=path,\n )\n for input_name, path in resolved_cmd.input_paths.items()\n ],\n )\n\n component_meta = copy.copy(component_spec)\n task._set_metadata(component_meta)\n component_ref_without_spec = copy.copy(component_ref)\n component_ref_without_spec.spec = None\n task._component_ref = component_ref_without_spec\n\n # Previously, ContainerOp had strict requirements for the output names, so we had to\n # convert all the names before passing them to the ContainerOp constructor.\n # Outputs with non-pythonic names could not be accessed using their original names.\n # Now ContainerOp supports any output names, so we're now using the original output names.\n # However to support legacy pipelines, we're also adding output references with pythonic names.\n # TODO: Add warning when people use the legacy output names.\n output_names = [output_spec.name for output_spec in component_spec.outputs or []] # Stabilizing the ordering\n output_name_to_python = generate_unique_name_conversion_table(output_names, _sanitize_python_function_name)\n for output_name in output_names:\n pythonic_output_name = output_name_to_python[output_name]\n # Note: Some component outputs are currently missing from task.outputs (e.g. 
MLPipeline UI Metadata)\n if pythonic_output_name not in task.outputs and output_name in task.outputs:\n task.outputs[pythonic_output_name] = task.outputs[output_name]\n\n if container_spec.env:\n from kubernetes import client as k8s_client\n for name, value in container_spec.env.items():\n task.container.add_env_variable(k8s_client.V1EnvVar(name=name, value=value))\n\n if component_spec.metadata:\n for key, value in (component_spec.metadata.annotations or {}).items():\n task.add_pod_annotation(key, value)\n for key, value in (component_spec.metadata.labels or {}).items():\n task.add_pod_label(key, value)\n\n return task\n", "path": "sdk/python/kfp/dsl/_component_bridge.py"}], "after_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport copy\nfrom typing import Any, Mapping\nfrom ..components.structures import ComponentSpec, ComponentReference\nfrom ..components._components import _default_component_name, _resolve_command_line_and_paths\nfrom ..components._naming import _sanitize_python_function_name, generate_unique_name_conversion_table\nfrom .. import dsl\n\n\ndef _create_container_op_from_component_and_arguments(\n component_spec: ComponentSpec,\n arguments: Mapping[str, Any],\n component_ref: ComponentReference = None,\n) -> 'dsl.ContainerOp':\n # Check types of the reference arguments and serialize PipelineParams\n arguments = arguments.copy()\n for input_name, argument_value in arguments.items():\n if isinstance(argument_value, dsl.PipelineParam):\n input_type = component_spec._inputs_dict[input_name].type\n reference_type = argument_value.param_type\n dsl.types.verify_type_compatibility(reference_type, input_type, 'Incompatible argument passed to the input \"{}\" of component \"{}\": '.format(input_name, component_spec.name))\n\n arguments[input_name] = str(argument_value)\n\n resolved_cmd = _resolve_command_line_and_paths(\n component_spec=component_spec,\n arguments=arguments,\n )\n\n container_spec = component_spec.implementation.container\n\n task = dsl.ContainerOp(\n name=component_spec.name or _default_component_name,\n image=container_spec.image,\n command=resolved_cmd.command,\n arguments=resolved_cmd.args,\n file_outputs=resolved_cmd.output_paths,\n artifact_argument_paths=[\n dsl.InputArgumentPath(\n argument=arguments[input_name],\n input=input_name,\n path=path,\n )\n for input_name, path in resolved_cmd.input_paths.items()\n ],\n )\n\n component_meta = copy.copy(component_spec)\n task._set_metadata(component_meta)\n component_ref_without_spec = copy.copy(component_ref)\n component_ref_without_spec.spec = None\n task._component_ref = component_ref_without_spec\n\n # Previously, ContainerOp had strict requirements for the output names, so we had to\n # convert all the names before passing them to the ContainerOp constructor.\n # Outputs with non-pythonic names could not be accessed using their original names.\n # Now ContainerOp supports any output names, so we're now using the original output names.\n # However to 
support legacy pipelines, we're also adding output references with pythonic names.\n # TODO: Add warning when people use the legacy output names.\n output_names = [output_spec.name for output_spec in component_spec.outputs or []] # Stabilizing the ordering\n output_name_to_python = generate_unique_name_conversion_table(output_names, _sanitize_python_function_name)\n for output_name in output_names:\n pythonic_output_name = output_name_to_python[output_name]\n # Note: Some component outputs are currently missing from task.outputs (e.g. MLPipeline UI Metadata)\n if pythonic_output_name not in task.outputs and output_name in task.outputs:\n task.outputs[pythonic_output_name] = task.outputs[output_name]\n\n if container_spec.env:\n from kubernetes import client as k8s_client\n for name, value in container_spec.env.items():\n task.container.add_env_variable(k8s_client.V1EnvVar(name=name, value=value))\n\n if component_spec.metadata:\n annotations = component_spec.metadata.annotations or {}\n for key, value in annotations.items():\n task.add_pod_annotation(key, value)\n for key, value in (component_spec.metadata.labels or {}).items():\n task.add_pod_label(key, value)\n # Disabling the caching for the volatile components by default\n if annotations.get('volatile_component', 'false') == 'true':\n task.execution_options.caching_strategy.max_cache_staleness = 'P0D'\n\n return task\n", "path": "sdk/python/kfp/dsl/_component_bridge.py"}]}
| 1,667 | 215 |
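The caching opt-out added in the record above reduces to a single lookup on the component's annotations. A minimal sketch of that check, using plain dicts as stand-ins for kfp's ComponentSpec metadata and ContainerOp task (the names and structure below are illustrative assumptions, not the kfp API):

```python
# Plain-dict sketch of the volatile-component check; "task" here is a stand-in
# for a ContainerOp, not a real kfp object.
def apply_annotations(task, annotations):
    annotations = annotations or {}
    for key, value in annotations.items():
        task["pod_annotations"][key] = value
    # 'P0D' (a zero duration) means cached results are always considered stale.
    if annotations.get("volatile_component", "false") == "true":
        task["max_cache_staleness"] = "P0D"
    return task


task = {"pod_annotations": {}, "max_cache_staleness": None}
print(apply_annotations(task, {"volatile_component": "true"}))
# {'pod_annotations': {'volatile_component': 'true'}, 'max_cache_staleness': 'P0D'}
```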
gh_patches_debug_3647 | rasdani/github-patches | git_diff | wagtail__wagtail-1272 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Getting an item after slicing ElasticSearchResults object gives wrong result
For example, let's say we have a list of results with the items A, B, C and D
If you run results[0]. You get A
If you run results[2:]. You get [C, D]
But if you run results[2:][0]. You will get A (you should get C)
Fix coming shortly
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/wagtailsearch/backends/base.py`
Content:
```
1 from six import text_type
2
3 from django.db.models.query import QuerySet
4 from django.db.models.lookups import Lookup
5 from django.db.models.sql.where import SubqueryConstraint, WhereNode
6
7 from wagtail.wagtailsearch.index import class_is_indexed
8
9
10 class FilterError(Exception):
11 pass
12
13
14 class FieldError(Exception):
15 pass
16
17
18 class BaseSearchQuery(object):
19 def __init__(self, queryset, query_string, fields=None):
20 self.queryset = queryset
21 self.query_string = query_string
22 self.fields = fields
23
24 def _get_searchable_field(self, field_attname):
25 # Get field
26 field = dict(
27 (field.get_attname(self.queryset.model), field)
28 for field in self.queryset.model.get_searchable_search_fields()
29 ).get(field_attname, None)
30
31 return field
32
33 def _get_filterable_field(self, field_attname):
34 # Get field
35 field = dict(
36 (field.get_attname(self.queryset.model), field)
37 for field in self.queryset.model.get_filterable_search_fields()
38 ).get(field_attname, None)
39
40 return field
41
42 def _process_lookup(self, field, lookup, value):
43 raise NotImplementedError
44
45 def _connect_filters(self, filters, connector, negated):
46 raise NotImplementedError
47
48 def _process_filter(self, field_attname, lookup, value):
49 # Get the field
50 field = self._get_filterable_field(field_attname)
51
52 if field is None:
53 raise FieldError('Cannot filter search results with field "' + field_attname + '". Please add index.FilterField(\'' + field_attname + '\') to ' + self.queryset.model.__name__ + '.search_fields.')
54
55 # Process the lookup
56 result = self._process_lookup(field, lookup, value)
57
58 if result is None:
59 raise FilterError('Could not apply filter on search results: "' + field_attname + '__' + lookup + ' = ' + text_type(value) + '". Lookup "' + lookup + '"" not recognosed.')
60
61 return result
62
63 def _get_filters_from_where_node(self, where_node):
64 # Check if this is a leaf node
65 if isinstance(where_node, Lookup):
66 field_attname = where_node.lhs.target.attname
67 lookup = where_node.lookup_name
68 value = where_node.rhs
69
70 # Process the filter
71 return self._process_filter(field_attname, lookup, value)
72
73 elif isinstance(where_node, SubqueryConstraint):
74 raise FilterError('Could not apply filter on search results: Subqueries are not allowed.')
75
76 elif isinstance(where_node, WhereNode):
77 # Get child filters
78 connector = where_node.connector
79 child_filters = [self._get_filters_from_where_node(child) for child in where_node.children]
80 child_filters = [child_filter for child_filter in child_filters if child_filter]
81
82 return self._connect_filters(child_filters, connector, where_node.negated)
83
84 else:
85 raise FilterError('Could not apply filter on search results: Unknown where node: ' + str(type(where_node)))
86
87 def _get_filters_from_queryset(self):
88 return self._get_filters_from_where_node(self.queryset.query.where)
89
90
91 class BaseSearchResults(object):
92 def __init__(self, backend, query, prefetch_related=None):
93 self.backend = backend
94 self.query = query
95 self.prefetch_related = prefetch_related
96 self.start = 0
97 self.stop = None
98 self._results_cache = None
99 self._count_cache = None
100
101 def _set_limits(self, start=None, stop=None):
102 if stop is not None:
103 if self.stop is not None:
104 self.stop = min(self.stop, self.start + stop)
105 else:
106 self.stop = self.start + stop
107
108 if start is not None:
109 if self.stop is not None:
110 self.start = min(self.stop, self.start + start)
111 else:
112 self.start = self.start + start
113
114 def _clone(self):
115 klass = self.__class__
116 new = klass(self.backend, self.query, prefetch_related=self.prefetch_related)
117 new.start = self.start
118 new.stop = self.stop
119 return new
120
121 def _do_search(self):
122 raise NotImplementedError
123
124 def _do_count(self):
125 raise NotImplementedError
126
127 def results(self):
128 if self._results_cache is None:
129 self._results_cache = self._do_search()
130 return self._results_cache
131
132 def count(self):
133 if self._count_cache is None:
134 if self._results_cache is not None:
135 self._count_cache = len(self._results_cache)
136 else:
137 self._count_cache = self._do_count()
138 return self._count_cache
139
140 def __getitem__(self, key):
141 new = self._clone()
142
143 if isinstance(key, slice):
144 # Set limits
145 start = int(key.start) if key.start else None
146 stop = int(key.stop) if key.stop else None
147 new._set_limits(start, stop)
148
149 # Copy results cache
150 if self._results_cache is not None:
151 new._results_cache = self._results_cache[key]
152
153 return new
154 else:
155 if self._results_cache is not None:
156 return self._results_cache[key]
157
158 new.start = key
159 new.stop = key + 1
160 return list(new)[0]
161
162 def __iter__(self):
163 return iter(self.results())
164
165 def __len__(self):
166 return len(self.results())
167
168 def __repr__(self):
169 data = list(self[:21])
170 if len(data) > 20:
171 data[-1] = "...(remaining elements truncated)..."
172 return repr(data)
173
174
175 class BaseSearch(object):
176 def __init__(self, params):
177 pass
178
179 def reset_index(self):
180 raise NotImplementedError
181
182 def add_type(self, model):
183 raise NotImplementedError
184
185 def refresh_index(self):
186 raise NotImplementedError
187
188 def add(self, obj):
189 raise NotImplementedError
190
191 def add_bulk(self, model, obj_list):
192 raise NotImplementedError
193
194 def delete(self, obj):
195 raise NotImplementedError
196
197 def _search(self, queryset, query_string, fields=None):
198 raise NotImplementedError
199
200 def search(self, query_string, model_or_queryset, fields=None, filters=None, prefetch_related=None):
201 # Find model/queryset
202 if isinstance(model_or_queryset, QuerySet):
203 model = model_or_queryset.model
204 queryset = model_or_queryset
205 else:
206 model = model_or_queryset
207 queryset = model_or_queryset.objects.all()
208
209 # Model must be a class that is in the index
210 if not class_is_indexed(model):
211 return []
212
213 # Check that theres still a query string after the clean up
214 if query_string == "":
215 return []
216
217 # Apply filters to queryset
218 if filters:
219 queryset = queryset.filter(**filters)
220
221 # Prefetch related
222 if prefetch_related:
223 for prefetch in prefetch_related:
224 queryset = queryset.prefetch_related(prefetch)
225
226 # Search
227 return self._search(queryset, query_string, fields=fields)
228
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
```diff
diff --git a/wagtail/wagtailsearch/backends/base.py b/wagtail/wagtailsearch/backends/base.py
--- a/wagtail/wagtailsearch/backends/base.py
+++ b/wagtail/wagtailsearch/backends/base.py
@@ -155,8 +155,8 @@
             if self._results_cache is not None:
                 return self._results_cache[key]
 
-            new.start = key
-            new.stop = key + 1
+            new.start = self.start + key
+            new.stop = self.start + key + 1
             return list(new)[0]
 
     def __iter__(self):
```
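The offset behaviour described in the issue can be reproduced outside Wagtail. A stripped-down sketch with a hypothetical stand-in class, showing why the stored slice offset has to be added back when indexing:

```python
# Hypothetical stand-in for BaseSearchResults, reduced to the slicing logic.
class SlicedResults:
    def __init__(self, data, start=0):
        self.data = data
        self.start = start

    def __getitem__(self, key):
        if isinstance(key, slice):
            # Slicing returns a new object that remembers its offset.
            return SlicedResults(self.data, self.start + (key.start or 0))
        # Buggy version: self.data[key] ignores the stored offset.
        # Patched version: add self.start, as in the diff above.
        return self.data[self.start + key]


results = SlicedResults(["A", "B", "C", "D"])
print(results[2:][0])  # "C" with the offset applied; "A" without it
```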
|
{"golden_diff": "diff --git a/wagtail/wagtailsearch/backends/base.py b/wagtail/wagtailsearch/backends/base.py\n--- a/wagtail/wagtailsearch/backends/base.py\n+++ b/wagtail/wagtailsearch/backends/base.py\n@@ -155,8 +155,8 @@\n if self._results_cache is not None:\n return self._results_cache[key]\n \n- new.start = key\n- new.stop = key + 1\n+ new.start = self.start + key\n+ new.stop = self.start + key + 1\n return list(new)[0]\n \n def __iter__(self):\n", "issue": "Getting an item after slicing ElasticSearchResults object gives wrong result\nFor example, let's say we have a list of results with the items A, B, C and D\n\nIf you run results[0]. You get A\nIf you run results[2:]. You get [C, D]\nBut if you run results[2:][0]. You will get A (you should get C)\n\nFix coming shortly\n\n", "before_files": [{"content": "from six import text_type\n\nfrom django.db.models.query import QuerySet\nfrom django.db.models.lookups import Lookup\nfrom django.db.models.sql.where import SubqueryConstraint, WhereNode\n\nfrom wagtail.wagtailsearch.index import class_is_indexed\n\n\nclass FilterError(Exception):\n pass\n\n\nclass FieldError(Exception):\n pass\n\n\nclass BaseSearchQuery(object):\n def __init__(self, queryset, query_string, fields=None):\n self.queryset = queryset\n self.query_string = query_string\n self.fields = fields\n\n def _get_searchable_field(self, field_attname):\n # Get field\n field = dict(\n (field.get_attname(self.queryset.model), field)\n for field in self.queryset.model.get_searchable_search_fields()\n ).get(field_attname, None)\n\n return field\n\n def _get_filterable_field(self, field_attname):\n # Get field\n field = dict(\n (field.get_attname(self.queryset.model), field)\n for field in self.queryset.model.get_filterable_search_fields()\n ).get(field_attname, None)\n\n return field\n\n def _process_lookup(self, field, lookup, value):\n raise NotImplementedError\n\n def _connect_filters(self, filters, connector, negated):\n raise NotImplementedError\n\n def _process_filter(self, field_attname, lookup, value):\n # Get the field\n field = self._get_filterable_field(field_attname)\n\n if field is None:\n raise FieldError('Cannot filter search results with field \"' + field_attname + '\". Please add index.FilterField(\\'' + field_attname + '\\') to ' + self.queryset.model.__name__ + '.search_fields.')\n\n # Process the lookup\n result = self._process_lookup(field, lookup, value)\n\n if result is None:\n raise FilterError('Could not apply filter on search results: \"' + field_attname + '__' + lookup + ' = ' + text_type(value) + '\". 
Lookup \"' + lookup + '\"\" not recognosed.')\n\n return result\n\n def _get_filters_from_where_node(self, where_node):\n # Check if this is a leaf node\n if isinstance(where_node, Lookup):\n field_attname = where_node.lhs.target.attname\n lookup = where_node.lookup_name\n value = where_node.rhs\n\n # Process the filter\n return self._process_filter(field_attname, lookup, value)\n\n elif isinstance(where_node, SubqueryConstraint):\n raise FilterError('Could not apply filter on search results: Subqueries are not allowed.')\n\n elif isinstance(where_node, WhereNode):\n # Get child filters\n connector = where_node.connector\n child_filters = [self._get_filters_from_where_node(child) for child in where_node.children]\n child_filters = [child_filter for child_filter in child_filters if child_filter]\n\n return self._connect_filters(child_filters, connector, where_node.negated)\n\n else:\n raise FilterError('Could not apply filter on search results: Unknown where node: ' + str(type(where_node)))\n\n def _get_filters_from_queryset(self):\n return self._get_filters_from_where_node(self.queryset.query.where)\n\n\nclass BaseSearchResults(object):\n def __init__(self, backend, query, prefetch_related=None):\n self.backend = backend\n self.query = query\n self.prefetch_related = prefetch_related\n self.start = 0\n self.stop = None\n self._results_cache = None\n self._count_cache = None\n\n def _set_limits(self, start=None, stop=None):\n if stop is not None:\n if self.stop is not None:\n self.stop = min(self.stop, self.start + stop)\n else:\n self.stop = self.start + stop\n\n if start is not None:\n if self.stop is not None:\n self.start = min(self.stop, self.start + start)\n else:\n self.start = self.start + start\n\n def _clone(self):\n klass = self.__class__\n new = klass(self.backend, self.query, prefetch_related=self.prefetch_related)\n new.start = self.start\n new.stop = self.stop\n return new\n\n def _do_search(self):\n raise NotImplementedError\n\n def _do_count(self):\n raise NotImplementedError\n\n def results(self):\n if self._results_cache is None:\n self._results_cache = self._do_search()\n return self._results_cache\n\n def count(self):\n if self._count_cache is None:\n if self._results_cache is not None:\n self._count_cache = len(self._results_cache)\n else:\n self._count_cache = self._do_count()\n return self._count_cache\n\n def __getitem__(self, key):\n new = self._clone()\n\n if isinstance(key, slice):\n # Set limits\n start = int(key.start) if key.start else None\n stop = int(key.stop) if key.stop else None\n new._set_limits(start, stop)\n\n # Copy results cache\n if self._results_cache is not None:\n new._results_cache = self._results_cache[key]\n\n return new\n else:\n if self._results_cache is not None:\n return self._results_cache[key]\n\n new.start = key\n new.stop = key + 1\n return list(new)[0]\n\n def __iter__(self):\n return iter(self.results())\n\n def __len__(self):\n return len(self.results())\n\n def __repr__(self):\n data = list(self[:21])\n if len(data) > 20:\n data[-1] = \"...(remaining elements truncated)...\"\n return repr(data)\n\n\nclass BaseSearch(object):\n def __init__(self, params):\n pass\n\n def reset_index(self):\n raise NotImplementedError\n\n def add_type(self, model):\n raise NotImplementedError\n\n def refresh_index(self):\n raise NotImplementedError\n\n def add(self, obj):\n raise NotImplementedError\n\n def add_bulk(self, model, obj_list):\n raise NotImplementedError\n\n def delete(self, obj):\n raise NotImplementedError\n\n def _search(self, 
queryset, query_string, fields=None):\n raise NotImplementedError\n\n def search(self, query_string, model_or_queryset, fields=None, filters=None, prefetch_related=None):\n # Find model/queryset\n if isinstance(model_or_queryset, QuerySet):\n model = model_or_queryset.model\n queryset = model_or_queryset\n else:\n model = model_or_queryset\n queryset = model_or_queryset.objects.all()\n\n # Model must be a class that is in the index\n if not class_is_indexed(model):\n return []\n\n # Check that theres still a query string after the clean up\n if query_string == \"\":\n return []\n\n # Apply filters to queryset\n if filters:\n queryset = queryset.filter(**filters)\n\n # Prefetch related\n if prefetch_related:\n for prefetch in prefetch_related:\n queryset = queryset.prefetch_related(prefetch)\n\n # Search\n return self._search(queryset, query_string, fields=fields)\n", "path": "wagtail/wagtailsearch/backends/base.py"}], "after_files": [{"content": "from six import text_type\n\nfrom django.db.models.query import QuerySet\nfrom django.db.models.lookups import Lookup\nfrom django.db.models.sql.where import SubqueryConstraint, WhereNode\n\nfrom wagtail.wagtailsearch.index import class_is_indexed\n\n\nclass FilterError(Exception):\n pass\n\n\nclass FieldError(Exception):\n pass\n\n\nclass BaseSearchQuery(object):\n def __init__(self, queryset, query_string, fields=None):\n self.queryset = queryset\n self.query_string = query_string\n self.fields = fields\n\n def _get_searchable_field(self, field_attname):\n # Get field\n field = dict(\n (field.get_attname(self.queryset.model), field)\n for field in self.queryset.model.get_searchable_search_fields()\n ).get(field_attname, None)\n\n return field\n\n def _get_filterable_field(self, field_attname):\n # Get field\n field = dict(\n (field.get_attname(self.queryset.model), field)\n for field in self.queryset.model.get_filterable_search_fields()\n ).get(field_attname, None)\n\n return field\n\n def _process_lookup(self, field, lookup, value):\n raise NotImplementedError\n\n def _connect_filters(self, filters, connector, negated):\n raise NotImplementedError\n\n def _process_filter(self, field_attname, lookup, value):\n # Get the field\n field = self._get_filterable_field(field_attname)\n\n if field is None:\n raise FieldError('Cannot filter search results with field \"' + field_attname + '\". Please add index.FilterField(\\'' + field_attname + '\\') to ' + self.queryset.model.__name__ + '.search_fields.')\n\n # Process the lookup\n result = self._process_lookup(field, lookup, value)\n\n if result is None:\n raise FilterError('Could not apply filter on search results: \"' + field_attname + '__' + lookup + ' = ' + text_type(value) + '\". 
Lookup \"' + lookup + '\"\" not recognosed.')\n\n return result\n\n def _get_filters_from_where_node(self, where_node):\n # Check if this is a leaf node\n if isinstance(where_node, Lookup):\n field_attname = where_node.lhs.target.attname\n lookup = where_node.lookup_name\n value = where_node.rhs\n\n # Process the filter\n return self._process_filter(field_attname, lookup, value)\n\n elif isinstance(where_node, SubqueryConstraint):\n raise FilterError('Could not apply filter on search results: Subqueries are not allowed.')\n\n elif isinstance(where_node, WhereNode):\n # Get child filters\n connector = where_node.connector\n child_filters = [self._get_filters_from_where_node(child) for child in where_node.children]\n child_filters = [child_filter for child_filter in child_filters if child_filter]\n\n return self._connect_filters(child_filters, connector, where_node.negated)\n\n else:\n raise FilterError('Could not apply filter on search results: Unknown where node: ' + str(type(where_node)))\n\n def _get_filters_from_queryset(self):\n return self._get_filters_from_where_node(self.queryset.query.where)\n\n\nclass BaseSearchResults(object):\n def __init__(self, backend, query, prefetch_related=None):\n self.backend = backend\n self.query = query\n self.prefetch_related = prefetch_related\n self.start = 0\n self.stop = None\n self._results_cache = None\n self._count_cache = None\n\n def _set_limits(self, start=None, stop=None):\n if stop is not None:\n if self.stop is not None:\n self.stop = min(self.stop, self.start + stop)\n else:\n self.stop = self.start + stop\n\n if start is not None:\n if self.stop is not None:\n self.start = min(self.stop, self.start + start)\n else:\n self.start = self.start + start\n\n def _clone(self):\n klass = self.__class__\n new = klass(self.backend, self.query, prefetch_related=self.prefetch_related)\n new.start = self.start\n new.stop = self.stop\n return new\n\n def _do_search(self):\n raise NotImplementedError\n\n def _do_count(self):\n raise NotImplementedError\n\n def results(self):\n if self._results_cache is None:\n self._results_cache = self._do_search()\n return self._results_cache\n\n def count(self):\n if self._count_cache is None:\n if self._results_cache is not None:\n self._count_cache = len(self._results_cache)\n else:\n self._count_cache = self._do_count()\n return self._count_cache\n\n def __getitem__(self, key):\n new = self._clone()\n\n if isinstance(key, slice):\n # Set limits\n start = int(key.start) if key.start else None\n stop = int(key.stop) if key.stop else None\n new._set_limits(start, stop)\n\n # Copy results cache\n if self._results_cache is not None:\n new._results_cache = self._results_cache[key]\n\n return new\n else:\n if self._results_cache is not None:\n return self._results_cache[key]\n\n new.start = self.start + key\n new.stop = self.start + key + 1\n return list(new)[0]\n\n def __iter__(self):\n return iter(self.results())\n\n def __len__(self):\n return len(self.results())\n\n def __repr__(self):\n data = list(self[:21])\n if len(data) > 20:\n data[-1] = \"...(remaining elements truncated)...\"\n return repr(data)\n\n\nclass BaseSearch(object):\n def __init__(self, params):\n pass\n\n def reset_index(self):\n raise NotImplementedError\n\n def add_type(self, model):\n raise NotImplementedError\n\n def refresh_index(self):\n raise NotImplementedError\n\n def add(self, obj):\n raise NotImplementedError\n\n def add_bulk(self, model, obj_list):\n raise NotImplementedError\n\n def delete(self, obj):\n raise NotImplementedError\n\n 
def _search(self, queryset, query_string, fields=None):\n raise NotImplementedError\n\n def search(self, query_string, model_or_queryset, fields=None, filters=None, prefetch_related=None):\n # Find model/queryset\n if isinstance(model_or_queryset, QuerySet):\n model = model_or_queryset.model\n queryset = model_or_queryset\n else:\n model = model_or_queryset\n queryset = model_or_queryset.objects.all()\n\n # Model must be a class that is in the index\n if not class_is_indexed(model):\n return []\n\n # Check that theres still a query string after the clean up\n if query_string == \"\":\n return []\n\n # Apply filters to queryset\n if filters:\n queryset = queryset.filter(**filters)\n\n # Prefetch related\n if prefetch_related:\n for prefetch in prefetch_related:\n queryset = queryset.prefetch_related(prefetch)\n\n # Search\n return self._search(queryset, query_string, fields=fields)\n", "path": "wagtail/wagtailsearch/backends/base.py"}]}
| 2,475 | 144 |
gh_patches_debug_5825 | rasdani/github-patches | git_diff | Kinto__kinto-500 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
POST with If-None-Match: * and provided id in body always return 412
Detected using kinto-client v0.4.0 https://github.com/Kinto/kinto-client/blob/v0.4.0/src/requests.js#L188-L205
See https://github.com/mozilla-services/cliquet/issues/673
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import codecs
2 import os
3 import sys
4 from setuptools import setup, find_packages
5
6 here = os.path.abspath(os.path.dirname(__file__))
7
8
9 def read_file(filename):
10 """Open a related file and return its content."""
11 with codecs.open(os.path.join(here, filename), encoding='utf-8') as f:
12 content = f.read()
13 return content
14
15 README = read_file('README.rst')
16 CHANGELOG = read_file('CHANGELOG.rst')
17 CONTRIBUTORS = read_file('CONTRIBUTORS.rst')
18
19 REQUIREMENTS = [
20 'waitress',
21 'cliquet>=3,<4',
22 'jsonschema',
23 ]
24
25 POSTGRESQL_REQUIREMENTS = REQUIREMENTS + [
26 'cliquet[postgresql]>=3,<4'
27 ]
28
29 MONITORING_REQUIREMENTS = REQUIREMENTS + [
30 'cliquet[monitoring]>=3,<4'
31 ]
32
33 FXA_REQUIREMENTS = REQUIREMENTS + [
34 'cliquet-fxa<2'
35 ]
36
37 ENTRY_POINTS = {
38 'paste.app_factory': [
39 'main = kinto:main',
40 ],
41 'console_scripts': [
42 'kinto = kinto.__main__:main'
43 ],
44 }
45
46 DEPENDENCY_LINKS = [
47 ]
48
49 setup(name='kinto',
50 version='1.12.0.dev0',
51 description='Kinto Web Service - Store, Sync, Share, and Self-Host.',
52 long_description=README + "\n\n" + CHANGELOG + "\n\n" + CONTRIBUTORS,
53 license='Apache License (2.0)',
54 classifiers=[
55 "Programming Language :: Python",
56 "Programming Language :: Python :: 2",
57 "Programming Language :: Python :: 2.7",
58 "Programming Language :: Python :: 3",
59 "Programming Language :: Python :: 3.4",
60 "Programming Language :: Python :: 3.5",
61 "Programming Language :: Python :: Implementation :: CPython",
62 "Programming Language :: Python :: Implementation :: PyPy",
63 "Topic :: Internet :: WWW/HTTP",
64 "Topic :: Internet :: WWW/HTTP :: WSGI :: Application",
65 "License :: OSI Approved :: Apache Software License"
66 ],
67 keywords="web sync json storage",
68 author='Mozilla Services',
69 author_email='[email protected]',
70 url='https://github.com/Kinto/kinto',
71 packages=find_packages(),
72 include_package_data=True,
73 zip_safe=False,
74 install_requires=REQUIREMENTS,
75 extras_require={
76 'postgresql': POSTGRESQL_REQUIREMENTS,
77 'monitoring': MONITORING_REQUIREMENTS,
78 'fxa': FXA_REQUIREMENTS,
79 ":python_version=='2.7'": ["functools32"],
80 },
81 test_suite="kinto.tests",
82 entry_points=ENTRY_POINTS,
83 dependency_links=DEPENDENCY_LINKS)
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
```diff
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -18,16 +18,16 @@
 
 REQUIREMENTS = [
     'waitress',
-    'cliquet>=3,<4',
+    'cliquet>=3.1,<4',
     'jsonschema',
 ]
 
 POSTGRESQL_REQUIREMENTS = REQUIREMENTS + [
-    'cliquet[postgresql]>=3,<4'
+    'cliquet[postgresql]>=3.1,<4'
 ]
 
 MONITORING_REQUIREMENTS = REQUIREMENTS + [
-    'cliquet[monitoring]>=3,<4'
+    'cliquet[monitoring]>=3.1,<4'
 ]
 
 FXA_REQUIREMENTS = REQUIREMENTS + [
```
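For reference, a hedged reproduction of the request shape from the issue: the 412 came from the upstream cliquet handling of conditional creates, which is why the fix is only a version-pin bump. The URL, credentials and record id below are placeholders, not a real deployment:

```python
# Placeholder reproduction: POST with a client-supplied id in the body plus
# If-None-Match: * ("create only if it does not already exist").
import requests

resp = requests.post(
    "https://kinto.example.com/v1/buckets/default/collections/tasks/records",
    json={"data": {"id": "9b1d6a10-0000-4000-8000-000000000000",
                   "title": "example record"}},
    headers={"If-None-Match": "*"},
    auth=("token", "secret"),
)
# Reported behaviour: 412 Precondition Failed even though the record is new;
# expected: 201 Created when no record with that id exists yet.
print(resp.status_code)
```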
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -18,16 +18,16 @@\n \n REQUIREMENTS = [\n 'waitress',\n- 'cliquet>=3,<4',\n+ 'cliquet>=3.1,<4',\n 'jsonschema',\n ]\n \n POSTGRESQL_REQUIREMENTS = REQUIREMENTS + [\n- 'cliquet[postgresql]>=3,<4'\n+ 'cliquet[postgresql]>=3.1,<4'\n ]\n \n MONITORING_REQUIREMENTS = REQUIREMENTS + [\n- 'cliquet[monitoring]>=3,<4'\n+ 'cliquet[monitoring]>=3.1,<4'\n ]\n \n FXA_REQUIREMENTS = REQUIREMENTS + [\n", "issue": "POST with If-None-Match: * and provided id in body always return 412\nDetected using kinto-client v0.4.0 https://github.com/Kinto/kinto-client/blob/v0.4.0/src/requests.js#L188-L205\n\nSee https://github.com/mozilla-services/cliquet/issues/673\n\n", "before_files": [{"content": "import codecs\nimport os\nimport sys\nfrom setuptools import setup, find_packages\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read_file(filename):\n \"\"\"Open a related file and return its content.\"\"\"\n with codecs.open(os.path.join(here, filename), encoding='utf-8') as f:\n content = f.read()\n return content\n\nREADME = read_file('README.rst')\nCHANGELOG = read_file('CHANGELOG.rst')\nCONTRIBUTORS = read_file('CONTRIBUTORS.rst')\n\nREQUIREMENTS = [\n 'waitress',\n 'cliquet>=3,<4',\n 'jsonschema',\n]\n\nPOSTGRESQL_REQUIREMENTS = REQUIREMENTS + [\n 'cliquet[postgresql]>=3,<4'\n]\n\nMONITORING_REQUIREMENTS = REQUIREMENTS + [\n 'cliquet[monitoring]>=3,<4'\n]\n\nFXA_REQUIREMENTS = REQUIREMENTS + [\n 'cliquet-fxa<2'\n]\n\nENTRY_POINTS = {\n 'paste.app_factory': [\n 'main = kinto:main',\n ],\n 'console_scripts': [\n 'kinto = kinto.__main__:main'\n ],\n}\n\nDEPENDENCY_LINKS = [\n]\n\nsetup(name='kinto',\n version='1.12.0.dev0',\n description='Kinto Web Service - Store, Sync, Share, and Self-Host.',\n long_description=README + \"\\n\\n\" + CHANGELOG + \"\\n\\n\" + CONTRIBUTORS,\n license='Apache License (2.0)',\n classifiers=[\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI :: Application\",\n \"License :: OSI Approved :: Apache Software License\"\n ],\n keywords=\"web sync json storage\",\n author='Mozilla Services',\n author_email='[email protected]',\n url='https://github.com/Kinto/kinto',\n packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n install_requires=REQUIREMENTS,\n extras_require={\n 'postgresql': POSTGRESQL_REQUIREMENTS,\n 'monitoring': MONITORING_REQUIREMENTS,\n 'fxa': FXA_REQUIREMENTS,\n \":python_version=='2.7'\": [\"functools32\"],\n },\n test_suite=\"kinto.tests\",\n entry_points=ENTRY_POINTS,\n dependency_links=DEPENDENCY_LINKS)\n", "path": "setup.py"}], "after_files": [{"content": "import codecs\nimport os\nimport sys\nfrom setuptools import setup, find_packages\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read_file(filename):\n \"\"\"Open a related file and return its content.\"\"\"\n with codecs.open(os.path.join(here, filename), encoding='utf-8') as f:\n content = f.read()\n return content\n\nREADME = read_file('README.rst')\nCHANGELOG = read_file('CHANGELOG.rst')\nCONTRIBUTORS = read_file('CONTRIBUTORS.rst')\n\nREQUIREMENTS = [\n 'waitress',\n 
'cliquet>=3.1,<4',\n 'jsonschema',\n]\n\nPOSTGRESQL_REQUIREMENTS = REQUIREMENTS + [\n 'cliquet[postgresql]>=3.1,<4'\n]\n\nMONITORING_REQUIREMENTS = REQUIREMENTS + [\n 'cliquet[monitoring]>=3.1,<4'\n]\n\nFXA_REQUIREMENTS = REQUIREMENTS + [\n 'cliquet-fxa<2'\n]\n\nENTRY_POINTS = {\n 'paste.app_factory': [\n 'main = kinto:main',\n ],\n 'console_scripts': [\n 'kinto = kinto.__main__:main'\n ],\n}\n\nDEPENDENCY_LINKS = [\n]\n\nsetup(name='kinto',\n version='1.12.0.dev0',\n description='Kinto Web Service - Store, Sync, Share, and Self-Host.',\n long_description=README + \"\\n\\n\" + CHANGELOG + \"\\n\\n\" + CONTRIBUTORS,\n license='Apache License (2.0)',\n classifiers=[\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI :: Application\",\n \"License :: OSI Approved :: Apache Software License\"\n ],\n keywords=\"web sync json storage\",\n author='Mozilla Services',\n author_email='[email protected]',\n url='https://github.com/Kinto/kinto',\n packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n install_requires=REQUIREMENTS,\n extras_require={\n 'postgresql': POSTGRESQL_REQUIREMENTS,\n 'monitoring': MONITORING_REQUIREMENTS,\n 'fxa': FXA_REQUIREMENTS,\n \":python_version=='2.7'\": [\"functools32\"],\n },\n test_suite=\"kinto.tests\",\n entry_points=ENTRY_POINTS,\n dependency_links=DEPENDENCY_LINKS)\n", "path": "setup.py"}]}
| 1,091 | 166 |
gh_patches_debug_19521 | rasdani/github-patches | git_diff | streamlink__streamlink-453 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Less violent way of closing player when stream ends
Currently streamlink uses SIGKILL to close the player when a stream ends. This prevents the player from doing its own cleanup. For example, mpv leaves DPMS/screensaver disabled because of this. I know there is --player-no-close option, but that has an unwanted side-effect of not closing the player immediately in some situations.
I suggest fixing it by using SIGTERM instead:
```diff
diff -bur streamlink-0.1.0-orig/src/streamlink_cli/output.py streamlink-0.1.0/src/streamlink_cli/output.py
--- streamlink-0.1.0-orig/src/streamlink_cli/output.py 2016-11-21 21:56:29.000000000 +0200
+++ streamlink-0.1.0/src/streamlink_cli/output.py 2016-12-08 22:08:23.000000000 +0200
@@ -161,7 +161,7 @@
if self.kill:
with ignored(Exception):
- self.player.kill()
+ self.player.terminate()
self.player.wait()
def _write(self, data):
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink_cli/output.py`
Content:
```
1 import os
2 import shlex
3 import subprocess
4 import sys
5
6 from time import sleep
7
8 import re
9
10 from .compat import is_win32, stdout
11 from .constants import DEFAULT_PLAYER_ARGUMENTS
12 from .utils import ignored
13
14 if is_win32:
15 import msvcrt
16
17
18 class Output(object):
19 def __init__(self):
20 self.opened = False
21
22 def open(self):
23 self._open()
24 self.opened = True
25
26 def close(self):
27 if self.opened:
28 self._close()
29
30 self.opened = False
31
32 def write(self, data):
33 if not self.opened:
34 raise IOError("Output is not opened")
35
36 return self._write(data)
37
38 def _open(self):
39 pass
40
41 def _close(self):
42 pass
43
44 def _write(self, data):
45 pass
46
47
48 class FileOutput(Output):
49 def __init__(self, filename=None, fd=None):
50 super(FileOutput, self).__init__()
51 self.filename = filename
52 self.fd = fd
53
54 def _open(self):
55 if self.filename:
56 self.fd = open(self.filename, "wb")
57
58 if is_win32:
59 msvcrt.setmode(self.fd.fileno(), os.O_BINARY)
60
61 def _close(self):
62 if self.fd is not stdout:
63 self.fd.close()
64
65 def _write(self, data):
66 self.fd.write(data)
67
68
69 class PlayerOutput(Output):
70 def __init__(self, cmd, args=DEFAULT_PLAYER_ARGUMENTS, filename=None, quiet=True, kill=True, call=False, http=False,
71 namedpipe=None):
72 super(PlayerOutput, self).__init__()
73 self.cmd = cmd
74 self.args = args
75 self.kill = kill
76 self.call = call
77 self.quiet = quiet
78
79 self.filename = filename
80 self.namedpipe = namedpipe
81 self.http = http
82
83 if self.namedpipe or self.filename or self.http:
84 self.stdin = sys.stdin
85 else:
86 self.stdin = subprocess.PIPE
87
88 if self.quiet:
89 self.stdout = open(os.devnull, "w")
90 self.stderr = open(os.devnull, "w")
91 else:
92 self.stdout = sys.stdout
93 self.stderr = sys.stderr
94
95 @property
96 def running(self):
97 sleep(0.5)
98 self.player.poll()
99 return self.player.returncode is None
100
101 def _create_arguments(self):
102 if self.namedpipe:
103 filename = self.namedpipe.path
104 elif self.filename:
105 filename = self.filename
106 elif self.http:
107 filename = self.http.url
108 else:
109 filename = "-"
110
111 args = self.args.format(filename=filename)
112 cmd = self.cmd
113 if is_win32:
114 return cmd + " " + args
115
116 return shlex.split(cmd) + shlex.split(args)
117
118 def _open(self):
119 try:
120 if self.call and self.filename:
121 self._open_call()
122 else:
123 self._open_subprocess()
124 finally:
125 if self.quiet:
126 # Output streams no longer needed in parent process
127 self.stdout.close()
128 self.stderr.close()
129
130 def _open_call(self):
131 subprocess.call(self._create_arguments(),
132 stdout=self.stdout,
133 stderr=self.stderr)
134
135 def _open_subprocess(self):
136 # Force bufsize=0 on all Python versions to avoid writing the
137 # unflushed buffer when closing a broken input pipe
138 self.player = subprocess.Popen(self._create_arguments(),
139 stdin=self.stdin, bufsize=0,
140 stdout=self.stdout,
141 stderr=self.stderr)
142 # Wait 0.5 seconds to see if program exited prematurely
143 if not self.running:
144 raise OSError("Process exited prematurely")
145
146 if self.namedpipe:
147 self.namedpipe.open("wb")
148 elif self.http:
149 self.http.open()
150
151 def _close(self):
152 # Close input to the player first to signal the end of the
153 # stream and allow the player to terminate of its own accord
154 if self.namedpipe:
155 self.namedpipe.close()
156 elif self.http:
157 self.http.close()
158 elif not self.filename:
159 self.player.stdin.close()
160
161 if self.kill:
162 with ignored(Exception):
163 self.player.kill()
164 self.player.wait()
165
166 def _write(self, data):
167 if self.namedpipe:
168 self.namedpipe.write(data)
169 elif self.http:
170 self.http.write(data)
171 else:
172 self.player.stdin.write(data)
173
174
175 __all__ = ["PlayerOutput", "FileOutput"]
176
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
```diff
diff --git a/src/streamlink_cli/output.py b/src/streamlink_cli/output.py
--- a/src/streamlink_cli/output.py
+++ b/src/streamlink_cli/output.py
@@ -67,6 +67,8 @@
 
 
 class PlayerOutput(Output):
+    PLAYER_TERMINATE_TIMEOUT = 10.0
+
     def __init__(self, cmd, args=DEFAULT_PLAYER_ARGUMENTS, filename=None, quiet=True, kill=True, call=False, http=False,
                  namedpipe=None):
         super(PlayerOutput, self).__init__()
@@ -160,7 +162,15 @@
 
         if self.kill:
             with ignored(Exception):
-                self.player.kill()
+                self.player.terminate()
+                if not is_win32:
+                    t, timeout = 0.0, self.PLAYER_TERMINATE_TIMEOUT
+                    while not self.player.poll() and t < timeout:
+                        sleep(0.5)
+                        t += 0.5
+
+                    if not self.player.returncode:
+                        self.player.kill()
         self.player.wait()
 
     def _write(self, data):
```
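The shutdown pattern the patch adopts, SIGTERM first and SIGKILL only after a grace period, can be sketched independently of the PlayerOutput class. This is a simplified POSIX-only illustration, not the exact patched code:

```python
# Terminate-then-kill shutdown for a subprocess.Popen handle (POSIX example).
import subprocess
from time import sleep


def close_process(proc, timeout=10.0):
    proc.terminate()                     # SIGTERM: let the player clean up
    waited = 0.0
    while proc.poll() is None and waited < timeout:
        sleep(0.5)
        waited += 0.5
    if proc.poll() is None:              # still alive after the grace period
        proc.kill()                      # SIGKILL as a last resort
    proc.wait()


player = subprocess.Popen(["sleep", "60"])
close_process(player, timeout=2.0)
print(player.returncode)                 # e.g. -15 (terminated by SIGTERM)
```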
|
{"golden_diff": "diff --git a/src/streamlink_cli/output.py b/src/streamlink_cli/output.py\n--- a/src/streamlink_cli/output.py\n+++ b/src/streamlink_cli/output.py\n@@ -67,6 +67,8 @@\n \n \n class PlayerOutput(Output):\n+ PLAYER_TERMINATE_TIMEOUT = 10.0\n+\n def __init__(self, cmd, args=DEFAULT_PLAYER_ARGUMENTS, filename=None, quiet=True, kill=True, call=False, http=False,\n namedpipe=None):\n super(PlayerOutput, self).__init__()\n@@ -160,7 +162,15 @@\n \n if self.kill:\n with ignored(Exception):\n- self.player.kill()\n+ self.player.terminate()\n+ if not is_win32:\n+ t, timeout = 0.0, self.PLAYER_TERMINATE_TIMEOUT\n+ while not self.player.poll() and t < timeout:\n+ sleep(0.5)\n+ t += 0.5\n+\n+ if not self.player.returncode:\n+ self.player.kill()\n self.player.wait()\n \n def _write(self, data):\n", "issue": "Less violent way of closing player when stream ends\nCurrently streamlink uses SIGKILL to close the player when a stream ends. This prevents the player from doing its own cleanup. For example, mpv leaves DPMS/screensaver disabled because of this. I know there is --player-no-close option, but that has an unwanted side-effect of not closing the player immediately in some situations.\r\n\r\nI suggest fixing it by using SIGTERM instead:\r\n```diff\r\ndiff -bur streamlink-0.1.0-orig/src/streamlink_cli/output.py streamlink-0.1.0/src/streamlink_cli/output.py\r\n--- streamlink-0.1.0-orig/src/streamlink_cli/output.py 2016-11-21 21:56:29.000000000 +0200\r\n+++ streamlink-0.1.0/src/streamlink_cli/output.py 2016-12-08 22:08:23.000000000 +0200\r\n@@ -161,7 +161,7 @@\r\n \r\n if self.kill:\r\n with ignored(Exception):\r\n- self.player.kill()\r\n+ self.player.terminate()\r\n self.player.wait()\r\n \r\n def _write(self, data):\r\n```\n", "before_files": [{"content": "import os\nimport shlex\nimport subprocess\nimport sys\n\nfrom time import sleep\n\nimport re\n\nfrom .compat import is_win32, stdout\nfrom .constants import DEFAULT_PLAYER_ARGUMENTS\nfrom .utils import ignored\n\nif is_win32:\n import msvcrt\n\n\nclass Output(object):\n def __init__(self):\n self.opened = False\n\n def open(self):\n self._open()\n self.opened = True\n\n def close(self):\n if self.opened:\n self._close()\n\n self.opened = False\n\n def write(self, data):\n if not self.opened:\n raise IOError(\"Output is not opened\")\n\n return self._write(data)\n\n def _open(self):\n pass\n\n def _close(self):\n pass\n\n def _write(self, data):\n pass\n\n\nclass FileOutput(Output):\n def __init__(self, filename=None, fd=None):\n super(FileOutput, self).__init__()\n self.filename = filename\n self.fd = fd\n\n def _open(self):\n if self.filename:\n self.fd = open(self.filename, \"wb\")\n\n if is_win32:\n msvcrt.setmode(self.fd.fileno(), os.O_BINARY)\n\n def _close(self):\n if self.fd is not stdout:\n self.fd.close()\n\n def _write(self, data):\n self.fd.write(data)\n\n\nclass PlayerOutput(Output):\n def __init__(self, cmd, args=DEFAULT_PLAYER_ARGUMENTS, filename=None, quiet=True, kill=True, call=False, http=False,\n namedpipe=None):\n super(PlayerOutput, self).__init__()\n self.cmd = cmd\n self.args = args\n self.kill = kill\n self.call = call\n self.quiet = quiet\n\n self.filename = filename\n self.namedpipe = namedpipe\n self.http = http\n\n if self.namedpipe or self.filename or self.http:\n self.stdin = sys.stdin\n else:\n self.stdin = subprocess.PIPE\n\n if self.quiet:\n self.stdout = open(os.devnull, \"w\")\n self.stderr = open(os.devnull, \"w\")\n else:\n self.stdout = sys.stdout\n self.stderr = sys.stderr\n\n @property\n def 
running(self):\n sleep(0.5)\n self.player.poll()\n return self.player.returncode is None\n\n def _create_arguments(self):\n if self.namedpipe:\n filename = self.namedpipe.path\n elif self.filename:\n filename = self.filename\n elif self.http:\n filename = self.http.url\n else:\n filename = \"-\"\n\n args = self.args.format(filename=filename)\n cmd = self.cmd\n if is_win32:\n return cmd + \" \" + args\n\n return shlex.split(cmd) + shlex.split(args)\n\n def _open(self):\n try:\n if self.call and self.filename:\n self._open_call()\n else:\n self._open_subprocess()\n finally:\n if self.quiet:\n # Output streams no longer needed in parent process\n self.stdout.close()\n self.stderr.close()\n\n def _open_call(self):\n subprocess.call(self._create_arguments(),\n stdout=self.stdout,\n stderr=self.stderr)\n\n def _open_subprocess(self):\n # Force bufsize=0 on all Python versions to avoid writing the\n # unflushed buffer when closing a broken input pipe\n self.player = subprocess.Popen(self._create_arguments(),\n stdin=self.stdin, bufsize=0,\n stdout=self.stdout,\n stderr=self.stderr)\n # Wait 0.5 seconds to see if program exited prematurely\n if not self.running:\n raise OSError(\"Process exited prematurely\")\n\n if self.namedpipe:\n self.namedpipe.open(\"wb\")\n elif self.http:\n self.http.open()\n\n def _close(self):\n # Close input to the player first to signal the end of the\n # stream and allow the player to terminate of its own accord\n if self.namedpipe:\n self.namedpipe.close()\n elif self.http:\n self.http.close()\n elif not self.filename:\n self.player.stdin.close()\n\n if self.kill:\n with ignored(Exception):\n self.player.kill()\n self.player.wait()\n\n def _write(self, data):\n if self.namedpipe:\n self.namedpipe.write(data)\n elif self.http:\n self.http.write(data)\n else:\n self.player.stdin.write(data)\n\n\n__all__ = [\"PlayerOutput\", \"FileOutput\"]\n", "path": "src/streamlink_cli/output.py"}], "after_files": [{"content": "import os\nimport shlex\nimport subprocess\nimport sys\n\nfrom time import sleep\n\nimport re\n\nfrom .compat import is_win32, stdout\nfrom .constants import DEFAULT_PLAYER_ARGUMENTS\nfrom .utils import ignored\n\nif is_win32:\n import msvcrt\n\n\nclass Output(object):\n def __init__(self):\n self.opened = False\n\n def open(self):\n self._open()\n self.opened = True\n\n def close(self):\n if self.opened:\n self._close()\n\n self.opened = False\n\n def write(self, data):\n if not self.opened:\n raise IOError(\"Output is not opened\")\n\n return self._write(data)\n\n def _open(self):\n pass\n\n def _close(self):\n pass\n\n def _write(self, data):\n pass\n\n\nclass FileOutput(Output):\n def __init__(self, filename=None, fd=None):\n super(FileOutput, self).__init__()\n self.filename = filename\n self.fd = fd\n\n def _open(self):\n if self.filename:\n self.fd = open(self.filename, \"wb\")\n\n if is_win32:\n msvcrt.setmode(self.fd.fileno(), os.O_BINARY)\n\n def _close(self):\n if self.fd is not stdout:\n self.fd.close()\n\n def _write(self, data):\n self.fd.write(data)\n\n\nclass PlayerOutput(Output):\n PLAYER_TERMINATE_TIMEOUT = 10.0\n\n def __init__(self, cmd, args=DEFAULT_PLAYER_ARGUMENTS, filename=None, quiet=True, kill=True, call=False, http=False,\n namedpipe=None):\n super(PlayerOutput, self).__init__()\n self.cmd = cmd\n self.args = args\n self.kill = kill\n self.call = call\n self.quiet = quiet\n\n self.filename = filename\n self.namedpipe = namedpipe\n self.http = http\n\n if self.namedpipe or self.filename or self.http:\n self.stdin = sys.stdin\n else:\n 
self.stdin = subprocess.PIPE\n\n if self.quiet:\n self.stdout = open(os.devnull, \"w\")\n self.stderr = open(os.devnull, \"w\")\n else:\n self.stdout = sys.stdout\n self.stderr = sys.stderr\n\n @property\n def running(self):\n sleep(0.5)\n self.player.poll()\n return self.player.returncode is None\n\n def _create_arguments(self):\n if self.namedpipe:\n filename = self.namedpipe.path\n elif self.filename:\n filename = self.filename\n elif self.http:\n filename = self.http.url\n else:\n filename = \"-\"\n\n args = self.args.format(filename=filename)\n cmd = self.cmd\n if is_win32:\n return cmd + \" \" + args\n\n return shlex.split(cmd) + shlex.split(args)\n\n def _open(self):\n try:\n if self.call and self.filename:\n self._open_call()\n else:\n self._open_subprocess()\n finally:\n if self.quiet:\n # Output streams no longer needed in parent process\n self.stdout.close()\n self.stderr.close()\n\n def _open_call(self):\n subprocess.call(self._create_arguments(),\n stdout=self.stdout,\n stderr=self.stderr)\n\n def _open_subprocess(self):\n # Force bufsize=0 on all Python versions to avoid writing the\n # unflushed buffer when closing a broken input pipe\n self.player = subprocess.Popen(self._create_arguments(),\n stdin=self.stdin, bufsize=0,\n stdout=self.stdout,\n stderr=self.stderr)\n # Wait 0.5 seconds to see if program exited prematurely\n if not self.running:\n raise OSError(\"Process exited prematurely\")\n\n if self.namedpipe:\n self.namedpipe.open(\"wb\")\n elif self.http:\n self.http.open()\n\n def _close(self):\n # Close input to the player first to signal the end of the\n # stream and allow the player to terminate of its own accord\n if self.namedpipe:\n self.namedpipe.close()\n elif self.http:\n self.http.close()\n elif not self.filename:\n self.player.stdin.close()\n\n if self.kill:\n with ignored(Exception):\n self.player.terminate()\n if not is_win32:\n t, timeout = 0.0, self.PLAYER_TERMINATE_TIMEOUT\n while not self.player.poll() and t < timeout:\n sleep(0.5)\n t += 0.5\n\n if not self.player.returncode:\n self.player.kill()\n self.player.wait()\n\n def _write(self, data):\n if self.namedpipe:\n self.namedpipe.write(data)\n elif self.http:\n self.http.write(data)\n else:\n self.player.stdin.write(data)\n\n\n__all__ = [\"PlayerOutput\", \"FileOutput\"]\n", "path": "src/streamlink_cli/output.py"}]}
| 1,948 | 238 |
gh_patches_debug_3223 | rasdani/github-patches | git_diff | searx__searx-2454 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Input turns language to Chinese
<!-- PLEASE FILL THESE FIELDS, IT REALLY HELPS THE MAINTAINERS OF SEARX -->
**Version of Searx, commit number if you are using on master branch and stipulate if you forked Searx**
0.17.0-17b48ff6e858b0c74116068cf6444bd578bbb747
<!-- If you are running on master branch using git execute this command
in order to fetch the latest commit ID:
```
git log -1
```
If you are using searx-docker then look at the bottom of the Searx page
and check for the version after "Powered by searx"
Please also stipulate if you are using a forked version of Searx and
include a link to the fork source code.
-->
**How did you install Searx?**
Manual install
<!-- Did you install Searx using the official wiki or using searx-docker
or manually by executing the searx/webapp.py file? -->
**What happened?**
If I search the phrase `parser error : invalid character in attribute value`, the search language changes to `zh`.
<!-- A clear and concise description of what the bug is. -->
**How To Reproduce**
This works on every searx instance I can find. Just search the phrase `parser error : invalid character in attribute value`.
<!-- How can we reproduce this issue? (as minimally and as precisely as possible) -->
**Expected behavior**
Results in the language chosen.
<!-- A clear and concise description of what you expected to happen. -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `searx/query.py`
Content:
```
1 #!/usr/bin/env python
2
3 '''
4 searx is free software: you can redistribute it and/or modify
5 it under the terms of the GNU Affero General Public License as published by
6 the Free Software Foundation, either version 3 of the License, or
7 (at your option) any later version.
8
9 searx is distributed in the hope that it will be useful,
10 but WITHOUT ANY WARRANTY; without even the implied warranty of
11 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12 GNU Affero General Public License for more details.
13
14 You should have received a copy of the GNU Affero General Public License
15 along with searx. If not, see < http://www.gnu.org/licenses/ >.
16
17 (C) 2014 by Thomas Pointhuber, <[email protected]>
18 '''
19
20 import re
21
22 from searx.languages import language_codes
23 from searx.engines import categories, engines, engine_shortcuts
24 from searx.search import EngineRef
25 from searx.webutils import VALID_LANGUAGE_CODE
26
27
28 class RawTextQuery:
29 """parse raw text query (the value from the html input)"""
30
31 def __init__(self, query, disabled_engines):
32 assert isinstance(query, str)
33 self.query = query
34 self.disabled_engines = []
35
36 if disabled_engines:
37 self.disabled_engines = disabled_engines
38
39 self.query_parts = []
40 self.user_query_parts = []
41 self.enginerefs = []
42 self.languages = []
43 self.timeout_limit = None
44 self.external_bang = None
45 self.specific = False
46 self._parse_query()
47
48 # parse query, if tags are set, which
49 # change the search engine or search-language
50 def _parse_query(self):
51 self.query_parts = []
52
53 # split query, including whitespaces
54 raw_query_parts = re.split(r'(\s+)', self.query)
55
56 for query_part in raw_query_parts:
57 searx_query_part = False
58
59 # part does only contain spaces, skip
60 if query_part.isspace()\
61 or query_part == '':
62 continue
63
64 # this force the timeout
65 if query_part[0] == '<':
66 try:
67 raw_timeout_limit = int(query_part[1:])
68 if raw_timeout_limit < 100:
69 # below 100, the unit is the second ( <3 = 3 seconds timeout )
70 self.timeout_limit = float(raw_timeout_limit)
71 else:
72 # 100 or above, the unit is the millisecond ( <850 = 850 milliseconds timeout )
73 self.timeout_limit = raw_timeout_limit / 1000.0
74 searx_query_part = True
75 except ValueError:
76 # error not reported to the user
77 pass
78
79 # this force a language
80 if query_part[0] == ':':
81 lang = query_part[1:].lower().replace('_', '-')
82
83 # check if any language-code is equal with
84 # declared language-codes
85 for lc in language_codes:
86 lang_id, lang_name, country, english_name = map(str.lower, lc)
87
88 # if correct language-code is found
89 # set it as new search-language
90 if (lang == lang_id
91 or lang == lang_name
92 or lang == english_name
93 or lang.replace('-', ' ') == country)\
94 and lang not in self.languages:
95 searx_query_part = True
96 lang_parts = lang_id.split('-')
97 if len(lang_parts) == 2:
98 self.languages.append(lang_parts[0] + '-' + lang_parts[1].upper())
99 else:
100 self.languages.append(lang_id)
101 # to ensure best match (first match is not necessarily the best one)
102 if lang == lang_id:
103 break
104
105 # user may set a valid, yet not selectable language
106 if VALID_LANGUAGE_CODE.match(lang):
107 lang_parts = lang.split('-')
108 if len(lang_parts) > 1:
109 lang = lang_parts[0].lower() + '-' + lang_parts[1].upper()
110 if lang not in self.languages:
111 self.languages.append(lang)
112 searx_query_part = True
113
114 # external bang
115 if query_part[0:2] == "!!":
116 self.external_bang = query_part[2:]
117 searx_query_part = True
118 continue
119 # this force a engine or category
120 if query_part[0] == '!' or query_part[0] == '?':
121 prefix = query_part[1:].replace('-', ' ').replace('_', ' ')
122
123 # check if prefix is equal with engine shortcut
124 if prefix in engine_shortcuts:
125 searx_query_part = True
126 engine_name = engine_shortcuts[prefix]
127 if engine_name in engines:
128 self.enginerefs.append(EngineRef(engine_name, 'none'))
129
130 # check if prefix is equal with engine name
131 elif prefix in engines:
132 searx_query_part = True
133 self.enginerefs.append(EngineRef(prefix, 'none'))
134
135 # check if prefix is equal with categorie name
136 elif prefix in categories:
137 # using all engines for that search, which
138 # are declared under that categorie name
139 searx_query_part = True
140 self.enginerefs.extend(EngineRef(engine.name, prefix)
141 for engine in categories[prefix]
142 if (engine.name, prefix) not in self.disabled_engines)
143
144 if query_part[0] == '!':
145 self.specific = True
146
147 # append query part to query_part list
148 if searx_query_part:
149 self.query_parts.append(query_part)
150 else:
151 self.user_query_parts.append(query_part)
152
153 def changeQuery(self, query):
154 self.user_query_parts = query.strip().split()
155 return self
156
157 def getQuery(self):
158 return ' '.join(self.user_query_parts)
159
160 def getFullQuery(self):
161 # get full querry including whitespaces
162 return '{0} {1}'.format(''.join(self.query_parts), self.getQuery()).strip()
163
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/searx/query.py b/searx/query.py
--- a/searx/query.py
+++ b/searx/query.py
@@ -77,7 +77,7 @@
pass
# this force a language
- if query_part[0] == ':':
+ if query_part[0] == ':' and len(query_part) > 1:
lang = query_part[1:].lower().replace('_', '-')
# check if any language-code is equal with
|
{"golden_diff": "diff --git a/searx/query.py b/searx/query.py\n--- a/searx/query.py\n+++ b/searx/query.py\n@@ -77,7 +77,7 @@\n pass\n \n # this force a language\n- if query_part[0] == ':':\n+ if query_part[0] == ':' and len(query_part) > 1:\n lang = query_part[1:].lower().replace('_', '-')\n \n # check if any language-code is equal with\n", "issue": "Input turns language to Chinese\n<!-- PLEASE FILL THESE FIELDS, IT REALLY HELPS THE MAINTAINERS OF SEARX -->\r\n\r\n**Version of Searx, commit number if you are using on master branch and stipulate if you forked Searx**\r\n0.17.0-17b48ff6e858b0c74116068cf6444bd578bbb747\r\n<!-- If you are running on master branch using git execute this command\r\nin order to fetch the latest commit ID:\r\n```\r\ngit log -1\r\n``` \r\nIf you are using searx-docker then look at the bottom of the Searx page\r\nand check for the version after \"Powered by searx\"\r\n\r\nPlease also stipulate if you are using a forked version of Searx and\r\ninclude a link to the fork source code.\r\n-->\r\n**How did you install Searx?**\r\nManual install\r\n<!-- Did you install Searx using the official wiki or using searx-docker\r\nor manually by executing the searx/webapp.py file? -->\r\n**What happened?**\r\nIf I search the phrase `parser error : invalid character in attribute value`, the search language changes to `zh`.\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n**How To Reproduce**\r\nThis works on every searx instance I can find. Just search the phrase `parser error : invalid character in attribute value`.\r\n<!-- How can we reproduce this issue? (as minimally and as precisely as possible) -->\r\n\r\n**Expected behavior**\r\nResults in the language chosen.\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n'''\nsearx is free software: you can redistribute it and/or modify\nit under the terms of the GNU Affero General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\nsearx is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU Affero General Public License for more details.\n\nYou should have received a copy of the GNU Affero General Public License\nalong with searx. 
If not, see < http://www.gnu.org/licenses/ >.\n\n(C) 2014 by Thomas Pointhuber, <[email protected]>\n'''\n\nimport re\n\nfrom searx.languages import language_codes\nfrom searx.engines import categories, engines, engine_shortcuts\nfrom searx.search import EngineRef\nfrom searx.webutils import VALID_LANGUAGE_CODE\n\n\nclass RawTextQuery:\n \"\"\"parse raw text query (the value from the html input)\"\"\"\n\n def __init__(self, query, disabled_engines):\n assert isinstance(query, str)\n self.query = query\n self.disabled_engines = []\n\n if disabled_engines:\n self.disabled_engines = disabled_engines\n\n self.query_parts = []\n self.user_query_parts = []\n self.enginerefs = []\n self.languages = []\n self.timeout_limit = None\n self.external_bang = None\n self.specific = False\n self._parse_query()\n\n # parse query, if tags are set, which\n # change the search engine or search-language\n def _parse_query(self):\n self.query_parts = []\n\n # split query, including whitespaces\n raw_query_parts = re.split(r'(\\s+)', self.query)\n\n for query_part in raw_query_parts:\n searx_query_part = False\n\n # part does only contain spaces, skip\n if query_part.isspace()\\\n or query_part == '':\n continue\n\n # this force the timeout\n if query_part[0] == '<':\n try:\n raw_timeout_limit = int(query_part[1:])\n if raw_timeout_limit < 100:\n # below 100, the unit is the second ( <3 = 3 seconds timeout )\n self.timeout_limit = float(raw_timeout_limit)\n else:\n # 100 or above, the unit is the millisecond ( <850 = 850 milliseconds timeout )\n self.timeout_limit = raw_timeout_limit / 1000.0\n searx_query_part = True\n except ValueError:\n # error not reported to the user\n pass\n\n # this force a language\n if query_part[0] == ':':\n lang = query_part[1:].lower().replace('_', '-')\n\n # check if any language-code is equal with\n # declared language-codes\n for lc in language_codes:\n lang_id, lang_name, country, english_name = map(str.lower, lc)\n\n # if correct language-code is found\n # set it as new search-language\n if (lang == lang_id\n or lang == lang_name\n or lang == english_name\n or lang.replace('-', ' ') == country)\\\n and lang not in self.languages:\n searx_query_part = True\n lang_parts = lang_id.split('-')\n if len(lang_parts) == 2:\n self.languages.append(lang_parts[0] + '-' + lang_parts[1].upper())\n else:\n self.languages.append(lang_id)\n # to ensure best match (first match is not necessarily the best one)\n if lang == lang_id:\n break\n\n # user may set a valid, yet not selectable language\n if VALID_LANGUAGE_CODE.match(lang):\n lang_parts = lang.split('-')\n if len(lang_parts) > 1:\n lang = lang_parts[0].lower() + '-' + lang_parts[1].upper()\n if lang not in self.languages:\n self.languages.append(lang)\n searx_query_part = True\n\n # external bang\n if query_part[0:2] == \"!!\":\n self.external_bang = query_part[2:]\n searx_query_part = True\n continue\n # this force a engine or category\n if query_part[0] == '!' 
or query_part[0] == '?':\n prefix = query_part[1:].replace('-', ' ').replace('_', ' ')\n\n # check if prefix is equal with engine shortcut\n if prefix in engine_shortcuts:\n searx_query_part = True\n engine_name = engine_shortcuts[prefix]\n if engine_name in engines:\n self.enginerefs.append(EngineRef(engine_name, 'none'))\n\n # check if prefix is equal with engine name\n elif prefix in engines:\n searx_query_part = True\n self.enginerefs.append(EngineRef(prefix, 'none'))\n\n # check if prefix is equal with categorie name\n elif prefix in categories:\n # using all engines for that search, which\n # are declared under that categorie name\n searx_query_part = True\n self.enginerefs.extend(EngineRef(engine.name, prefix)\n for engine in categories[prefix]\n if (engine.name, prefix) not in self.disabled_engines)\n\n if query_part[0] == '!':\n self.specific = True\n\n # append query part to query_part list\n if searx_query_part:\n self.query_parts.append(query_part)\n else:\n self.user_query_parts.append(query_part)\n\n def changeQuery(self, query):\n self.user_query_parts = query.strip().split()\n return self\n\n def getQuery(self):\n return ' '.join(self.user_query_parts)\n\n def getFullQuery(self):\n # get full querry including whitespaces\n return '{0} {1}'.format(''.join(self.query_parts), self.getQuery()).strip()\n", "path": "searx/query.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n'''\nsearx is free software: you can redistribute it and/or modify\nit under the terms of the GNU Affero General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\nsearx is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU Affero General Public License for more details.\n\nYou should have received a copy of the GNU Affero General Public License\nalong with searx. 
If not, see < http://www.gnu.org/licenses/ >.\n\n(C) 2014 by Thomas Pointhuber, <[email protected]>\n'''\n\nimport re\n\nfrom searx.languages import language_codes\nfrom searx.engines import categories, engines, engine_shortcuts\nfrom searx.search import EngineRef\nfrom searx.webutils import VALID_LANGUAGE_CODE\n\n\nclass RawTextQuery:\n \"\"\"parse raw text query (the value from the html input)\"\"\"\n\n def __init__(self, query, disabled_engines):\n assert isinstance(query, str)\n self.query = query\n self.disabled_engines = []\n\n if disabled_engines:\n self.disabled_engines = disabled_engines\n\n self.query_parts = []\n self.user_query_parts = []\n self.enginerefs = []\n self.languages = []\n self.timeout_limit = None\n self.external_bang = None\n self.specific = False\n self._parse_query()\n\n # parse query, if tags are set, which\n # change the search engine or search-language\n def _parse_query(self):\n self.query_parts = []\n\n # split query, including whitespaces\n raw_query_parts = re.split(r'(\\s+)', self.query)\n\n for query_part in raw_query_parts:\n searx_query_part = False\n\n # part does only contain spaces, skip\n if query_part.isspace()\\\n or query_part == '':\n continue\n\n # this force the timeout\n if query_part[0] == '<':\n try:\n raw_timeout_limit = int(query_part[1:])\n if raw_timeout_limit < 100:\n # below 100, the unit is the second ( <3 = 3 seconds timeout )\n self.timeout_limit = float(raw_timeout_limit)\n else:\n # 100 or above, the unit is the millisecond ( <850 = 850 milliseconds timeout )\n self.timeout_limit = raw_timeout_limit / 1000.0\n searx_query_part = True\n except ValueError:\n # error not reported to the user\n pass\n\n # this force a language\n if query_part[0] == ':' and len(query_part) > 1:\n lang = query_part[1:].lower().replace('_', '-')\n\n # check if any language-code is equal with\n # declared language-codes\n for lc in language_codes:\n lang_id, lang_name, country, english_name = map(str.lower, lc)\n\n # if correct language-code is found\n # set it as new search-language\n if (lang == lang_id\n or lang == lang_name\n or lang == english_name\n or lang.replace('-', ' ') == country)\\\n and lang not in self.languages:\n searx_query_part = True\n lang_parts = lang_id.split('-')\n if len(lang_parts) == 2:\n self.languages.append(lang_parts[0] + '-' + lang_parts[1].upper())\n else:\n self.languages.append(lang_id)\n # to ensure best match (first match is not necessarily the best one)\n if lang == lang_id:\n break\n\n # user may set a valid, yet not selectable language\n if VALID_LANGUAGE_CODE.match(lang):\n lang_parts = lang.split('-')\n if len(lang_parts) > 1:\n lang = lang_parts[0].lower() + '-' + lang_parts[1].upper()\n if lang not in self.languages:\n self.languages.append(lang)\n searx_query_part = True\n\n # external bang\n if query_part[0:2] == \"!!\":\n self.external_bang = query_part[2:]\n searx_query_part = True\n continue\n # this force a engine or category\n if query_part[0] == '!' 
or query_part[0] == '?':\n prefix = query_part[1:].replace('-', ' ').replace('_', ' ')\n\n # check if prefix is equal with engine shortcut\n if prefix in engine_shortcuts:\n searx_query_part = True\n engine_name = engine_shortcuts[prefix]\n if engine_name in engines:\n self.enginerefs.append(EngineRef(engine_name, 'none'))\n\n # check if prefix is equal with engine name\n elif prefix in engines:\n searx_query_part = True\n self.enginerefs.append(EngineRef(prefix, 'none'))\n\n # check if prefix is equal with categorie name\n elif prefix in categories:\n # using all engines for that search, which\n # are declared under that categorie name\n searx_query_part = True\n self.enginerefs.extend(EngineRef(engine.name, prefix)\n for engine in categories[prefix]\n if (engine.name, prefix) not in self.disabled_engines)\n\n if query_part[0] == '!':\n self.specific = True\n\n # append query part to query_part list\n if searx_query_part:\n self.query_parts.append(query_part)\n else:\n self.user_query_parts.append(query_part)\n\n def changeQuery(self, query):\n self.user_query_parts = query.strip().split()\n return self\n\n def getQuery(self):\n return ' '.join(self.user_query_parts)\n\n def getFullQuery(self):\n # get full querry including whitespaces\n return '{0} {1}'.format(''.join(self.query_parts), self.getQuery()).strip()\n", "path": "searx/query.py"}]}
| 2,319 | 109 |
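For context on the searx patch above: the lone `:` token produced by splitting the query is what triggered the spurious language switch, and the added `len(query_part) > 1` guard skips it. Below is a minimal, self-contained sketch of that guard; the helper name and sample queries are illustrative assumptions, not part of searx.

```python
import re

def language_tokens(query):
    """Return the query parts that would be treated as language prefixes."""
    tokens = []
    for part in re.split(r'(\s+)', query):
        if not part or part.isspace():
            continue
        # The patched condition: a bare ":" is no longer a language prefix.
        if part[0] == ':' and len(part) > 1:
            tokens.append(part[1:].lower().replace('_', '-'))
    return tokens

print(language_tokens('parser error : invalid character in attribute value'))  # []
print(language_tokens(':de parser error'))                                     # ['de']
```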
gh_patches_debug_24906
|
rasdani/github-patches
|
git_diff
|
dotkom__onlineweb4-2341
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
OW doesn't support use of employee mails
**Describe the bug**
Apparently OW4 doesn't support use of employee mails, which means the user has issues with verifying their membership if their main email is their employee mail.
**Expected behavior**
A user should be able to verify their membership and use OW as long as they are a student too.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/dataporten/views.py`
Content:
```
1 import logging
2
3 from django.conf import settings
4 from django.contrib import messages
5 from django.contrib.auth.decorators import login_required
6 from django.db import IntegrityError
7 from django.shortcuts import redirect
8 from oic import rndstr
9 from oic.oauth2 import AuthorizationResponse, ResponseError
10
11 from apps.dataporten.study.tasks import (fetch_groups_information, find_user_study_and_update,
12 set_ntnu_username)
13
14 from .client import client_setup
15
16 logger = logging.getLogger(__name__)
17
18 DATAPORTEN_CLIENT_ID = settings.DATAPORTEN.get('STUDY', {}).get('CLIENT_ID')
19 DATAPORTEN_CLIENT_SECRET = settings.DATAPORTEN.get('STUDY', {}).get('CLIENT_SECRET')
20 DATAPORTEN_REDIRECT_URI = settings.DATAPORTEN.get('STUDY', {}).get('REDIRECT_URI')
21 DATAPORTEN_SCOPES = settings.DATAPORTEN.get('STUDY', {}).get('SCOPES')
22
23
24 @login_required()
25 def study(request):
26 """This view redirects the user to Dataporten to request authorization for fetching information about the
27 user's groups membership, which can be used to verify eligibility for membership of Online."""
28
29 # If the user already is a member we can return early. However, if we're in testing, we want to skip the check.
30 if settings.DATAPORTEN.get('STUDY').get('ENABLED') and request.user.is_member:
31 messages.info(request, 'Du er allerede registrert som medlem.')
32 return redirect('profiles_active', active_tab='membership')
33
34 logger.debug(
35 '{} wants to automatically confirm study programme through Dataporten.'.format(request.user),
36 extra={'user': request.user}
37 )
38
39 client = client_setup(DATAPORTEN_CLIENT_ID, DATAPORTEN_CLIENT_SECRET)
40
41 # Generate random values used to verify that it's the same user when in the callback.
42 state = rndstr()
43 nonce = rndstr()
44
45 request.session['dataporten_study_state'] = state
46 request.session['dataporten_study_nonce'] = nonce
47
48 args = {
49 'client_id': DATAPORTEN_CLIENT_ID,
50 'response_type': 'code',
51 'scope': DATAPORTEN_SCOPES,
52 'redirect_uri': DATAPORTEN_REDIRECT_URI,
53 'nonce': nonce,
54 'state': state,
55 }
56
57 logger.debug(
58 'Constructing authorization request and redirecting user to authorize through Dataporten.',
59 extra={'user': request.user}
60 )
61
62 auth_req = client.construct_AuthorizationRequest(request_args=args)
63 login_url = auth_req.request(client.authorization_endpoint)
64
65 return redirect(login_url)
66
67
68 @login_required() # noqa: C901
69 def study_callback(request):
70 """This view fetches information from Dataporten to verify the eligibility. This is done by fetching
71 the /me/groups-API from Dataporten and further processing the fetched groups to find group membership.
72
73 Dataporten Groups API: https://docs.dataporten.no/docs/groups/"""
74 logger.debug('Fetching study programme for user {}'.format(request.user), extra={'user': request.user})
75
76 client = client_setup(DATAPORTEN_CLIENT_ID, DATAPORTEN_CLIENT_SECRET)
77
78 queryparams = request.GET.urlencode()
79
80 try:
81 auth_resp = client.parse_response(AuthorizationResponse, info=queryparams, sformat='urlencoded')
82 except ResponseError:
83 messages.error(request, 'Forespørselen mangler påkrevde felter, vennligst prøv igjen.')
84 return redirect('profiles_active', active_tab='membership')
85
86 if not request.session.get('dataporten_study_state', '') or \
87 request.session['dataporten_study_state'] != auth_resp['state']:
88 logger.warning('Dataporten state did not equal the one in session!')
89 messages.error(request, 'Verifisering av forespørselen feilet. Vennligst prøv igjen.')
90 return redirect('profiles_active', active_tab='membership')
91
92 args = {
93 'code': auth_resp['code'],
94 'redirect_uri': DATAPORTEN_REDIRECT_URI,
95 }
96
97 token_request = client.do_access_token_request(
98 state=auth_resp['state'], request_args=args, authn_method='client_secret_basic',
99 )
100
101 access_token = token_request.get('access_token')
102
103 # Do user info request
104 userinfo = client.do_user_info_request(state=auth_resp['state'], behavior='use_authorization_header')
105 ntnu_username_dataporten = userinfo.get('email').split('@')[0]
106 if request.user.ntnu_username and request.user.ntnu_username != ntnu_username_dataporten:
107 logger.warning(
108 '{} tried to authorize, but the registered ntnu_username and the one received from Dataporten differ.'
109 .format(request.user),
110 extra={
111 'user': request.user,
112 'ntnu_username__ow4': request.user.ntnu_username,
113 'ntnu_username__dataporten': ntnu_username_dataporten
114 }
115 )
116 messages.error(
117 request,
118 'Brukernavnet for brukerkontoen brukt til verifisering i Dataporten stemmer ikke overens med '
119 'kontoen du er logget inn med hos Online. Pass på at du er logget inn på din egen konto begge '
120 'steder og prøv igjen.'
121 )
122 return redirect('profiles_active', active_tab='membership')
123 elif not request.user.ntnu_username:
124 pass
125 # @ToDo: Register email address. Maybe store it, but ask user to confirm? -> resend auth email
126
127 # Getting information about study of the user
128 groups = fetch_groups_information(access_token)
129
130 try:
131 if not request.user.ntnu_username:
132 set_ntnu_username(request.user, ntnu_username_dataporten)
133 studies_info = find_user_study_and_update(request.user, groups)
134
135 if not studies_info:
136 logger.warning(
137 'Dataporten groups do not match groups for informatics',
138 extra={
139 'user': request.user,
140 'groups': groups,
141 }
142 )
143 messages.error(
144 request,
145 'Studieretningen du studerer ved gir ikke medlemskap i Online. ',
146 'Hvis du mener dette er en feil; ta vennligst kontakt Dotkom slik at vi kan feilsøke prosessen.'
147 )
148 return redirect('profiles_active', active_tab='membership')
149
150 studies_informatics, study_name, study_year = studies_info
151 except IntegrityError:
152 messages.error(
153 request,
154 'En bruker er allerede knyttet til denne NTNU-kontoen. '
155 'Dersom du har glemt passordet til din andre bruker kan du bruke "glemt passord"-funksjonen.'
156 )
157 return redirect('profiles_active', active_tab='membership')
158
159 if studies_informatics:
160 messages.success(
161 request,
162 'Bekreftet studieretning som {} i {}. klasse. Dersom dette er feil, '
163 'kontakt dotkom slik at vi kan rette opp og finne ut hva som gikk galt.'
164 .format(study_name, study_year)
165 )
166 else:
167 messages.error(
168 request,
169 'Det ser ikke ut som du tar informatikkfag. Dersom du mener dette er galt kan du sende inn en søknad '
170 'manuelt. Ta gjerne kontakt med dotkom slik at vi kan feilsøke prosessen.'
171 )
172
173 return redirect('profiles_active', active_tab='membership')
174
```
Path: `apps/dataporten/settings.py`
Content:
```
1 from decouple import config
2
3 DATAPORTEN = {
4 'STUDY': {
5 'ENABLED': config('OW4_DP_STUDY_ENABLED', cast=bool, default=False),
6 'TESTING': config('OW4_DP_STUDY_TESTING', cast=bool, default=True),
7 'CLIENT_ID': config('OW4_DP_STUDY_CLIENT_ID', default=''),
8 'CLIENT_SECRET': config('OW4_DP_STUDY_CLIENT_SECRET', default=''),
9 'REDIRECT_URI': config('OW4_DP_STUDY_REDIRECT_URI', default=''),
10 'PROVIDER_URL': 'https://auth.dataporten.no/oauth/token',
11 'SCOPES': ['openid', 'userid', 'profile', 'groups', 'email'],
12 }
13 }
14
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/apps/dataporten/settings.py b/apps/dataporten/settings.py
--- a/apps/dataporten/settings.py
+++ b/apps/dataporten/settings.py
@@ -8,6 +8,6 @@
'CLIENT_SECRET': config('OW4_DP_STUDY_CLIENT_SECRET', default=''),
'REDIRECT_URI': config('OW4_DP_STUDY_REDIRECT_URI', default=''),
'PROVIDER_URL': 'https://auth.dataporten.no/oauth/token',
- 'SCOPES': ['openid', 'userid', 'profile', 'groups', 'email'],
+ 'SCOPES': ['openid', 'userid-feide', 'profile', 'groups', 'email'],
}
}
diff --git a/apps/dataporten/views.py b/apps/dataporten/views.py
--- a/apps/dataporten/views.py
+++ b/apps/dataporten/views.py
@@ -102,7 +102,8 @@
# Do user info request
userinfo = client.do_user_info_request(state=auth_resp['state'], behavior='use_authorization_header')
- ntnu_username_dataporten = userinfo.get('email').split('@')[0]
+ # connect-userid_sec format is array with "feide:[email protected]"
+ ntnu_username_dataporten = userinfo.get('connect-userid_sec')[0].split(':')[1].split('@')[0]
if request.user.ntnu_username and request.user.ntnu_username != ntnu_username_dataporten:
logger.warning(
'{} tried to authorize, but the registered ntnu_username and the one received from Dataporten differ.'
|
{"golden_diff": "diff --git a/apps/dataporten/settings.py b/apps/dataporten/settings.py\n--- a/apps/dataporten/settings.py\n+++ b/apps/dataporten/settings.py\n@@ -8,6 +8,6 @@\n 'CLIENT_SECRET': config('OW4_DP_STUDY_CLIENT_SECRET', default=''),\n 'REDIRECT_URI': config('OW4_DP_STUDY_REDIRECT_URI', default=''),\n 'PROVIDER_URL': 'https://auth.dataporten.no/oauth/token',\n- 'SCOPES': ['openid', 'userid', 'profile', 'groups', 'email'],\n+ 'SCOPES': ['openid', 'userid-feide', 'profile', 'groups', 'email'],\n }\n }\ndiff --git a/apps/dataporten/views.py b/apps/dataporten/views.py\n--- a/apps/dataporten/views.py\n+++ b/apps/dataporten/views.py\n@@ -102,7 +102,8 @@\n \n # Do user info request\n userinfo = client.do_user_info_request(state=auth_resp['state'], behavior='use_authorization_header')\n- ntnu_username_dataporten = userinfo.get('email').split('@')[0]\n+ # connect-userid_sec format is array with \"feide:[email protected]\"\n+ ntnu_username_dataporten = userinfo.get('connect-userid_sec')[0].split(':')[1].split('@')[0]\n if request.user.ntnu_username and request.user.ntnu_username != ntnu_username_dataporten:\n logger.warning(\n '{} tried to authorize, but the registered ntnu_username and the one received from Dataporten differ.'\n", "issue": "OW doesnt support use of employee mails\n**Describe the bug**\r\nApperantly OW4 doesnt support use of employee mails, which means the user has issues with verifying their membership if their main email is their employee mail\r\n\r\n**Expected behavior**\r\nAn user should be able to verify their membership and use OW as long as they are a student too.\r\n\n", "before_files": [{"content": "import logging\n\nfrom django.conf import settings\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.db import IntegrityError\nfrom django.shortcuts import redirect\nfrom oic import rndstr\nfrom oic.oauth2 import AuthorizationResponse, ResponseError\n\nfrom apps.dataporten.study.tasks import (fetch_groups_information, find_user_study_and_update,\n set_ntnu_username)\n\nfrom .client import client_setup\n\nlogger = logging.getLogger(__name__)\n\nDATAPORTEN_CLIENT_ID = settings.DATAPORTEN.get('STUDY', {}).get('CLIENT_ID')\nDATAPORTEN_CLIENT_SECRET = settings.DATAPORTEN.get('STUDY', {}).get('CLIENT_SECRET')\nDATAPORTEN_REDIRECT_URI = settings.DATAPORTEN.get('STUDY', {}).get('REDIRECT_URI')\nDATAPORTEN_SCOPES = settings.DATAPORTEN.get('STUDY', {}).get('SCOPES')\n\n\n@login_required()\ndef study(request):\n \"\"\"This view redirects the user to Dataporten to request authorization for fetching information about the\n user's groups membership, which can be used to verify eligibility for membership of Online.\"\"\"\n\n # If the user already is a member we can return early. 
However, if we're in testing, we want to skip the check.\n if settings.DATAPORTEN.get('STUDY').get('ENABLED') and request.user.is_member:\n messages.info(request, 'Du er allerede registrert som medlem.')\n return redirect('profiles_active', active_tab='membership')\n\n logger.debug(\n '{} wants to automatically confirm study programme through Dataporten.'.format(request.user),\n extra={'user': request.user}\n )\n\n client = client_setup(DATAPORTEN_CLIENT_ID, DATAPORTEN_CLIENT_SECRET)\n\n # Generate random values used to verify that it's the same user when in the callback.\n state = rndstr()\n nonce = rndstr()\n\n request.session['dataporten_study_state'] = state\n request.session['dataporten_study_nonce'] = nonce\n\n args = {\n 'client_id': DATAPORTEN_CLIENT_ID,\n 'response_type': 'code',\n 'scope': DATAPORTEN_SCOPES,\n 'redirect_uri': DATAPORTEN_REDIRECT_URI,\n 'nonce': nonce,\n 'state': state,\n }\n\n logger.debug(\n 'Constructing authorization request and redirecting user to authorize through Dataporten.',\n extra={'user': request.user}\n )\n\n auth_req = client.construct_AuthorizationRequest(request_args=args)\n login_url = auth_req.request(client.authorization_endpoint)\n\n return redirect(login_url)\n\n\n@login_required() # noqa: C901\ndef study_callback(request):\n \"\"\"This view fetches information from Dataporten to verify the eligibility. This is done by fetching\n the /me/groups-API from Dataporten and further processing the fetched groups to find group membership.\n\n Dataporten Groups API: https://docs.dataporten.no/docs/groups/\"\"\"\n logger.debug('Fetching study programme for user {}'.format(request.user), extra={'user': request.user})\n\n client = client_setup(DATAPORTEN_CLIENT_ID, DATAPORTEN_CLIENT_SECRET)\n\n queryparams = request.GET.urlencode()\n\n try:\n auth_resp = client.parse_response(AuthorizationResponse, info=queryparams, sformat='urlencoded')\n except ResponseError:\n messages.error(request, 'Foresp\u00f8rselen mangler p\u00e5krevde felter, vennligst pr\u00f8v igjen.')\n return redirect('profiles_active', active_tab='membership')\n\n if not request.session.get('dataporten_study_state', '') or \\\n request.session['dataporten_study_state'] != auth_resp['state']:\n logger.warning('Dataporten state did not equal the one in session!')\n messages.error(request, 'Verifisering av foresp\u00f8rselen feilet. Vennligst pr\u00f8v igjen.')\n return redirect('profiles_active', active_tab='membership')\n\n args = {\n 'code': auth_resp['code'],\n 'redirect_uri': DATAPORTEN_REDIRECT_URI,\n }\n\n token_request = client.do_access_token_request(\n state=auth_resp['state'], request_args=args, authn_method='client_secret_basic',\n )\n\n access_token = token_request.get('access_token')\n\n # Do user info request\n userinfo = client.do_user_info_request(state=auth_resp['state'], behavior='use_authorization_header')\n ntnu_username_dataporten = userinfo.get('email').split('@')[0]\n if request.user.ntnu_username and request.user.ntnu_username != ntnu_username_dataporten:\n logger.warning(\n '{} tried to authorize, but the registered ntnu_username and the one received from Dataporten differ.'\n .format(request.user),\n extra={\n 'user': request.user,\n 'ntnu_username__ow4': request.user.ntnu_username,\n 'ntnu_username__dataporten': ntnu_username_dataporten\n }\n )\n messages.error(\n request,\n 'Brukernavnet for brukerkontoen brukt til verifisering i Dataporten stemmer ikke overens med '\n 'kontoen du er logget inn med hos Online. 
Pass p\u00e5 at du er logget inn p\u00e5 din egen konto begge '\n 'steder og pr\u00f8v igjen.'\n )\n return redirect('profiles_active', active_tab='membership')\n elif not request.user.ntnu_username:\n pass\n # @ToDo: Register email address. Maybe store it, but ask user to confirm? -> resend auth email\n\n # Getting information about study of the user\n groups = fetch_groups_information(access_token)\n\n try:\n if not request.user.ntnu_username:\n set_ntnu_username(request.user, ntnu_username_dataporten)\n studies_info = find_user_study_and_update(request.user, groups)\n\n if not studies_info:\n logger.warning(\n 'Dataporten groups do not match groups for informatics',\n extra={\n 'user': request.user,\n 'groups': groups,\n }\n )\n messages.error(\n request,\n 'Studieretningen du studerer ved gir ikke medlemskap i Online. ',\n 'Hvis du mener dette er en feil; ta vennligst kontakt Dotkom slik at vi kan feils\u00f8ke prosessen.'\n )\n return redirect('profiles_active', active_tab='membership')\n\n studies_informatics, study_name, study_year = studies_info\n except IntegrityError:\n messages.error(\n request,\n 'En bruker er allerede knyttet til denne NTNU-kontoen. '\n 'Dersom du har glemt passordet til din andre bruker kan du bruke \"glemt passord\"-funksjonen.'\n )\n return redirect('profiles_active', active_tab='membership')\n\n if studies_informatics:\n messages.success(\n request,\n 'Bekreftet studieretning som {} i {}. klasse. Dersom dette er feil, '\n 'kontakt dotkom slik at vi kan rette opp og finne ut hva som gikk galt.'\n .format(study_name, study_year)\n )\n else:\n messages.error(\n request,\n 'Det ser ikke ut som du tar informatikkfag. Dersom du mener dette er galt kan du sende inn en s\u00f8knad '\n 'manuelt. Ta gjerne kontakt med dotkom slik at vi kan feils\u00f8ke prosessen.'\n )\n\n return redirect('profiles_active', active_tab='membership')\n", "path": "apps/dataporten/views.py"}, {"content": "from decouple import config\n\nDATAPORTEN = {\n 'STUDY': {\n 'ENABLED': config('OW4_DP_STUDY_ENABLED', cast=bool, default=False),\n 'TESTING': config('OW4_DP_STUDY_TESTING', cast=bool, default=True),\n 'CLIENT_ID': config('OW4_DP_STUDY_CLIENT_ID', default=''),\n 'CLIENT_SECRET': config('OW4_DP_STUDY_CLIENT_SECRET', default=''),\n 'REDIRECT_URI': config('OW4_DP_STUDY_REDIRECT_URI', default=''),\n 'PROVIDER_URL': 'https://auth.dataporten.no/oauth/token',\n 'SCOPES': ['openid', 'userid', 'profile', 'groups', 'email'],\n }\n}\n", "path": "apps/dataporten/settings.py"}], "after_files": [{"content": "import logging\n\nfrom django.conf import settings\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.db import IntegrityError\nfrom django.shortcuts import redirect\nfrom oic import rndstr\nfrom oic.oauth2 import AuthorizationResponse, ResponseError\n\nfrom apps.dataporten.study.tasks import (fetch_groups_information, find_user_study_and_update,\n set_ntnu_username)\n\nfrom .client import client_setup\n\nlogger = logging.getLogger(__name__)\n\nDATAPORTEN_CLIENT_ID = settings.DATAPORTEN.get('STUDY', {}).get('CLIENT_ID')\nDATAPORTEN_CLIENT_SECRET = settings.DATAPORTEN.get('STUDY', {}).get('CLIENT_SECRET')\nDATAPORTEN_REDIRECT_URI = settings.DATAPORTEN.get('STUDY', {}).get('REDIRECT_URI')\nDATAPORTEN_SCOPES = settings.DATAPORTEN.get('STUDY', {}).get('SCOPES')\n\n\n@login_required()\ndef study(request):\n \"\"\"This view redirects the user to Dataporten to request authorization for fetching information about the\n user's groups membership, 
which can be used to verify eligibility for membership of Online.\"\"\"\n\n # If the user already is a member we can return early. However, if we're in testing, we want to skip the check.\n if settings.DATAPORTEN.get('STUDY').get('ENABLED') and request.user.is_member:\n messages.info(request, 'Du er allerede registrert som medlem.')\n return redirect('profiles_active', active_tab='membership')\n\n logger.debug(\n '{} wants to automatically confirm study programme through Dataporten.'.format(request.user),\n extra={'user': request.user}\n )\n\n client = client_setup(DATAPORTEN_CLIENT_ID, DATAPORTEN_CLIENT_SECRET)\n\n # Generate random values used to verify that it's the same user when in the callback.\n state = rndstr()\n nonce = rndstr()\n\n request.session['dataporten_study_state'] = state\n request.session['dataporten_study_nonce'] = nonce\n\n args = {\n 'client_id': DATAPORTEN_CLIENT_ID,\n 'response_type': 'code',\n 'scope': DATAPORTEN_SCOPES,\n 'redirect_uri': DATAPORTEN_REDIRECT_URI,\n 'nonce': nonce,\n 'state': state,\n }\n\n logger.debug(\n 'Constructing authorization request and redirecting user to authorize through Dataporten.',\n extra={'user': request.user}\n )\n\n auth_req = client.construct_AuthorizationRequest(request_args=args)\n login_url = auth_req.request(client.authorization_endpoint)\n\n return redirect(login_url)\n\n\n@login_required() # noqa: C901\ndef study_callback(request):\n \"\"\"This view fetches information from Dataporten to verify the eligibility. This is done by fetching\n the /me/groups-API from Dataporten and further processing the fetched groups to find group membership.\n\n Dataporten Groups API: https://docs.dataporten.no/docs/groups/\"\"\"\n logger.debug('Fetching study programme for user {}'.format(request.user), extra={'user': request.user})\n\n client = client_setup(DATAPORTEN_CLIENT_ID, DATAPORTEN_CLIENT_SECRET)\n\n queryparams = request.GET.urlencode()\n\n try:\n auth_resp = client.parse_response(AuthorizationResponse, info=queryparams, sformat='urlencoded')\n except ResponseError:\n messages.error(request, 'Foresp\u00f8rselen mangler p\u00e5krevde felter, vennligst pr\u00f8v igjen.')\n return redirect('profiles_active', active_tab='membership')\n\n if not request.session.get('dataporten_study_state', '') or \\\n request.session['dataporten_study_state'] != auth_resp['state']:\n logger.warning('Dataporten state did not equal the one in session!')\n messages.error(request, 'Verifisering av foresp\u00f8rselen feilet. 
Vennligst pr\u00f8v igjen.')\n return redirect('profiles_active', active_tab='membership')\n\n args = {\n 'code': auth_resp['code'],\n 'redirect_uri': DATAPORTEN_REDIRECT_URI,\n }\n\n token_request = client.do_access_token_request(\n state=auth_resp['state'], request_args=args, authn_method='client_secret_basic',\n )\n\n access_token = token_request.get('access_token')\n\n # Do user info request\n userinfo = client.do_user_info_request(state=auth_resp['state'], behavior='use_authorization_header')\n # connect-userid_sec format is array with \"feide:[email protected]\"\n ntnu_username_dataporten = userinfo.get('connect-userid_sec')[0].split(':')[1].split('@')[0]\n if request.user.ntnu_username and request.user.ntnu_username != ntnu_username_dataporten:\n logger.warning(\n '{} tried to authorize, but the registered ntnu_username and the one received from Dataporten differ.'\n .format(request.user),\n extra={\n 'user': request.user,\n 'ntnu_username__ow4': request.user.ntnu_username,\n 'ntnu_username__dataporten': ntnu_username_dataporten\n }\n )\n messages.error(\n request,\n 'Brukernavnet for brukerkontoen brukt til verifisering i Dataporten stemmer ikke overens med '\n 'kontoen du er logget inn med hos Online. Pass p\u00e5 at du er logget inn p\u00e5 din egen konto begge '\n 'steder og pr\u00f8v igjen.'\n )\n return redirect('profiles_active', active_tab='membership')\n elif not request.user.ntnu_username:\n pass\n # @ToDo: Register email address. Maybe store it, but ask user to confirm? -> resend auth email\n\n # Getting information about study of the user\n groups = fetch_groups_information(access_token)\n\n try:\n if not request.user.ntnu_username:\n set_ntnu_username(request.user, ntnu_username_dataporten)\n studies_info = find_user_study_and_update(request.user, groups)\n\n if not studies_info:\n logger.warning(\n 'Dataporten groups do not match groups for informatics',\n extra={\n 'user': request.user,\n 'groups': groups,\n }\n )\n messages.error(\n request,\n 'Studieretningen du studerer ved gir ikke medlemskap i Online. ',\n 'Hvis du mener dette er en feil; ta vennligst kontakt Dotkom slik at vi kan feils\u00f8ke prosessen.'\n )\n return redirect('profiles_active', active_tab='membership')\n\n studies_informatics, study_name, study_year = studies_info\n except IntegrityError:\n messages.error(\n request,\n 'En bruker er allerede knyttet til denne NTNU-kontoen. '\n 'Dersom du har glemt passordet til din andre bruker kan du bruke \"glemt passord\"-funksjonen.'\n )\n return redirect('profiles_active', active_tab='membership')\n\n if studies_informatics:\n messages.success(\n request,\n 'Bekreftet studieretning som {} i {}. klasse. Dersom dette er feil, '\n 'kontakt dotkom slik at vi kan rette opp og finne ut hva som gikk galt.'\n .format(study_name, study_year)\n )\n else:\n messages.error(\n request,\n 'Det ser ikke ut som du tar informatikkfag. Dersom du mener dette er galt kan du sende inn en s\u00f8knad '\n 'manuelt. 
Ta gjerne kontakt med dotkom slik at vi kan feils\u00f8ke prosessen.'\n )\n\n return redirect('profiles_active', active_tab='membership')\n", "path": "apps/dataporten/views.py"}, {"content": "from decouple import config\n\nDATAPORTEN = {\n 'STUDY': {\n 'ENABLED': config('OW4_DP_STUDY_ENABLED', cast=bool, default=False),\n 'TESTING': config('OW4_DP_STUDY_TESTING', cast=bool, default=True),\n 'CLIENT_ID': config('OW4_DP_STUDY_CLIENT_ID', default=''),\n 'CLIENT_SECRET': config('OW4_DP_STUDY_CLIENT_SECRET', default=''),\n 'REDIRECT_URI': config('OW4_DP_STUDY_REDIRECT_URI', default=''),\n 'PROVIDER_URL': 'https://auth.dataporten.no/oauth/token',\n 'SCOPES': ['openid', 'userid-feide', 'profile', 'groups', 'email'],\n }\n}\n", "path": "apps/dataporten/settings.py"}]}
| 2,625 | 364 |
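For context on the onlineweb4 patch above: the fix stops deriving the NTNU username from the `email` claim (which can be an employee address) and instead parses the Feide identifier from `connect-userid_sec`. A minimal sketch of that parsing step follows; the sample userinfo values are assumptions for illustration only.

```python
def ntnu_username_from_userinfo(userinfo):
    # "connect-userid_sec" entries look like "feide:[email protected]";
    # take the part between the first ":" and the "@".
    feide_id = userinfo['connect-userid_sec'][0]
    return feide_id.split(':', 1)[1].split('@', 1)[0]

sample_userinfo = {
    'email': '[email protected]',                  # employee mail: wrong source for the username
    'connect-userid_sec': ['feide:[email protected]'],
}
print(ntnu_username_from_userinfo(sample_userinfo))  # olanor
```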
gh_patches_debug_5184
|
rasdani/github-patches
|
git_diff
|
googleapis__python-bigquery-51
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BigQuery: TypeError: from_arrays() takes at least 2 positional arguments (1 given)
Hi all, I tried the BQ client in Python with the default example. Since I moved from 1.23.1 to 1.24.0 last week I get the following issue.
It's related to pyarrow, but I was not upgrading pyarrow (it worked with it before).
#### Environment details
- Python 3.7.6
- bigquery.__version__ '1.24.0'
- pyarrow.__version__ '0.11.1'
- Linux jupyter-generic 4.15.0-1057-aws #59-Ubuntu SMP Wed Dec 4 10:02:00 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
- Name: google-cloud-bigquery
- Version: 1.24.0
- Summary: Google BigQuery API client library
- Location: /opt/conda/lib/python3.7/site-packages
- Requires: google-cloud-core, google-auth, six, google-resumable-media, protobuf, google-api-core
- Required-by: pandas-gbq
#### Steps to reproduce
just running a default example from the web: https://cloud.google.com/bigquery/docs/bigquery-storage-python-pandas
```python
import google.auth
from google.cloud import bigquery
client = bigquery.Client.from_service_account_json('cred.json')
# Download query results.
query_string = """
SELECT
CONCAT(
'https://stackoverflow.com/questions/',
CAST(id as STRING)) as url,
view_count
FROM `bigquery-public-data.stackoverflow.posts_questions`
WHERE tags like '%google-bigquery%'
ORDER BY view_count DESC
"""
dataframe = (
client.query(query_string)
.result()
.to_dataframe()
)
print(dataframe.head())
```
#### Stack trace
```
--------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-11-61d06599dbdd> in <module>
12
13 dataframe = (
---> 14 client.query(query_string)
15 .result()
16 .to_dataframe()
/opt/conda/lib/python3.7/site-packages/google/cloud/bigquery/table.py in to_dataframe(self, bqstorage_client, dtypes, progress_bar_type, create_bqstorage_client)
1727 progress_bar_type=progress_bar_type,
1728 bqstorage_client=bqstorage_client,
-> 1729 create_bqstorage_client=create_bqstorage_client,
1730 )
1731 df = record_batch.to_pandas()
/opt/conda/lib/python3.7/site-packages/google/cloud/bigquery/table.py in to_arrow(self, progress_bar_type, bqstorage_client, create_bqstorage_client)
1541 record_batches = []
1542 for record_batch in self._to_arrow_iterable(
-> 1543 bqstorage_client=bqstorage_client
1544 ):
1545 record_batches.append(record_batch)
/opt/conda/lib/python3.7/site-packages/google/cloud/bigquery/table.py in _to_page_iterable(self, bqstorage_download, tabledata_list_download, bqstorage_client)
1433 )
1434 )
-> 1435 for item in tabledata_list_download():
1436 yield item
1437
/opt/conda/lib/python3.7/site-packages/google/cloud/bigquery/_pandas_helpers.py in download_arrow_tabledata_list(pages, bq_schema)
523
524 for page in pages:
--> 525 yield _tabledata_list_page_to_arrow(page, column_names, arrow_types)
526
527
/opt/conda/lib/python3.7/site-packages/google/cloud/bigquery/_pandas_helpers.py in _tabledata_list_page_to_arrow(page, column_names, arrow_types)
499
500 if isinstance(column_names, pyarrow.Schema):
--> 501 return pyarrow.RecordBatch.from_arrays(arrays, schema=column_names)
502 return pyarrow.RecordBatch.from_arrays(arrays, names=column_names)
503
/opt/conda/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.RecordBatch.from_arrays()
TypeError: from_arrays() takes at least 2 positional arguments (1 given)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16 import os
17
18 import setuptools
19
20
21 # Package metadata.
22
23 name = "google-cloud-bigquery"
24 description = "Google BigQuery API client library"
25 version = "1.24.0"
26 # Should be one of:
27 # 'Development Status :: 3 - Alpha'
28 # 'Development Status :: 4 - Beta'
29 # 'Development Status :: 5 - Production/Stable'
30 release_status = "Development Status :: 5 - Production/Stable"
31 dependencies = [
32 'enum34; python_version < "3.4"',
33 "google-auth >= 1.9.0, < 2.0dev",
34 "google-api-core >= 1.15.0, < 2.0dev",
35 "google-cloud-core >= 1.1.0, < 2.0dev",
36 "google-resumable-media >= 0.5.0, < 0.6dev",
37 "protobuf >= 3.6.0",
38 "six >=1.13.0,< 2.0.0dev",
39 ]
40 extras = {
41 "bqstorage": [
42 "google-cloud-bigquery-storage >= 0.6.0, <2.0.0dev",
43 # Bad Linux release for 0.14.0.
44 # https://issues.apache.org/jira/browse/ARROW-5868
45 "pyarrow>=0.13.0, != 0.14.0",
46 ],
47 "pandas": ["pandas>=0.17.1"],
48 # Exclude PyArrow dependency from Windows Python 2.7.
49 'pyarrow: platform_system != "Windows" or python_version >= "3.4"': [
50 # Bad Linux release for 0.14.0.
51 # https://issues.apache.org/jira/browse/ARROW-5868
52 "pyarrow>=0.4.1, != 0.14.0"
53 ],
54 "tqdm": ["tqdm >= 4.0.0, <5.0.0dev"],
55 "fastparquet": ["fastparquet", "python-snappy"],
56 }
57
58 all_extras = []
59
60 for extra in extras:
61 if extra == "fastparquet":
62 # Skip fastparquet from "all" because it is redundant with pyarrow and
63 # creates a dependency on pre-release versions of numpy. See:
64 # https://github.com/googleapis/google-cloud-python/issues/8549
65 continue
66 all_extras.extend(extras[extra])
67
68 extras["all"] = all_extras
69
70 # Setup boilerplate below this line.
71
72 package_root = os.path.abspath(os.path.dirname(__file__))
73
74 readme_filename = os.path.join(package_root, "README.rst")
75 with io.open(readme_filename, encoding="utf-8") as readme_file:
76 readme = readme_file.read()
77
78 # Only include packages under the 'google' namespace. Do not include tests,
79 # benchmarks, etc.
80 packages = [
81 package for package in setuptools.find_packages() if package.startswith("google")
82 ]
83
84 # Determine which namespaces are needed.
85 namespaces = ["google"]
86 if "google.cloud" in packages:
87 namespaces.append("google.cloud")
88
89
90 setuptools.setup(
91 name=name,
92 version=version,
93 description=description,
94 long_description=readme,
95 author="Google LLC",
96 author_email="[email protected]",
97 license="Apache 2.0",
98 url="https://github.com/googleapis/python-bigquery",
99 classifiers=[
100 release_status,
101 "Intended Audience :: Developers",
102 "License :: OSI Approved :: Apache Software License",
103 "Programming Language :: Python",
104 "Programming Language :: Python :: 2",
105 "Programming Language :: Python :: 2.7",
106 "Programming Language :: Python :: 3",
107 "Programming Language :: Python :: 3.5",
108 "Programming Language :: Python :: 3.6",
109 "Programming Language :: Python :: 3.7",
110 "Operating System :: OS Independent",
111 "Topic :: Internet",
112 ],
113 platforms="Posix; MacOS X; Windows",
114 packages=packages,
115 namespace_packages=namespaces,
116 install_requires=dependencies,
117 extras_require=extras,
118 python_requires=">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*",
119 include_package_data=True,
120 zip_safe=False,
121 )
122
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -40,9 +40,7 @@
extras = {
"bqstorage": [
"google-cloud-bigquery-storage >= 0.6.0, <2.0.0dev",
- # Bad Linux release for 0.14.0.
- # https://issues.apache.org/jira/browse/ARROW-5868
- "pyarrow>=0.13.0, != 0.14.0",
+ "pyarrow>=0.16.0, < 2.0dev",
],
"pandas": ["pandas>=0.17.1"],
# Exclude PyArrow dependency from Windows Python 2.7.
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -40,9 +40,7 @@\n extras = {\n \"bqstorage\": [\n \"google-cloud-bigquery-storage >= 0.6.0, <2.0.0dev\",\n- # Bad Linux release for 0.14.0.\n- # https://issues.apache.org/jira/browse/ARROW-5868\n- \"pyarrow>=0.13.0, != 0.14.0\",\n+ \"pyarrow>=0.16.0, < 2.0dev\",\n ],\n \"pandas\": [\"pandas>=0.17.1\"],\n # Exclude PyArrow dependency from Windows Python 2.7.\n", "issue": "BigQuery: TypeError: from_arrays() takes at least 2 positional arguments (1 given)\nHi all, i tried bq client in python with the default example. Since i moved from 1.23.1 to 1.24.0 last week i get the following issue.\r\n\r\nIts related to pyarrow but i was not upgrading pyarrow (worked with it before)\r\n\r\n#### Environment details\r\n- Python 3.7.6\r\n- bigquery.__version__ '1.24.0'\r\n- pyarrow.__version__ '0.11.1'\r\n- Linux jupyter-generic 4.15.0-1057-aws googleapis/google-cloud-python#59-Ubuntu SMP Wed Dec 4 10:02:00 UTC 2019 x86_64 \r\n\r\n- x86_64 x86_64 GNU/Linux\r\n- Name: google-cloud-bigquery\r\n- Version: 1.24.0\r\n- Summary: Google BigQuery API client library\r\n- Location: /opt/conda/lib/python3.7/site-packages\r\n- Requires: google-cloud-core, google-auth, six, google-resumable-media, protobuf, google-api-core\r\n- Required-by: pandas-gbq\r\n\r\n#### Steps to reproduce\r\njust running a default example form the webhttps://cloud.google.com/bigquery/docs/bigquery-storage-python-pandas\r\n\r\n```python\r\nimport google.auth\r\nfrom google.cloud import bigquery\r\nclient = bigquery.Client.from_service_account_json('cred.json')\r\n\r\n# Download query results.\r\nquery_string = \"\"\"\r\nSELECT\r\nCONCAT(\r\n 'https://stackoverflow.com/questions/',\r\n CAST(id as STRING)) as url,\r\nview_count\r\nFROM `bigquery-public-data.stackoverflow.posts_questions`\r\nWHERE tags like '%google-bigquery%'\r\nORDER BY view_count DESC\r\n\"\"\"\r\n\r\ndataframe = (\r\n client.query(query_string)\r\n .result()\r\n .to_dataframe()\r\n)\r\nprint(dataframe.head())\r\n```\r\n\r\n\r\n#### Stack trace\r\n```\r\n--------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-11-61d06599dbdd> in <module>\r\n 12 \r\n 13 dataframe = (\r\n---> 14 client.query(query_string)\r\n 15 .result()\r\n 16 .to_dataframe()\r\n\r\n/opt/conda/lib/python3.7/site-packages/google/cloud/bigquery/table.py in to_dataframe(self, bqstorage_client, dtypes, progress_bar_type, create_bqstorage_client)\r\n 1727 progress_bar_type=progress_bar_type,\r\n 1728 bqstorage_client=bqstorage_client,\r\n-> 1729 create_bqstorage_client=create_bqstorage_client,\r\n 1730 )\r\n 1731 df = record_batch.to_pandas()\r\n\r\n/opt/conda/lib/python3.7/site-packages/google/cloud/bigquery/table.py in to_arrow(self, progress_bar_type, bqstorage_client, create_bqstorage_client)\r\n 1541 record_batches = []\r\n 1542 for record_batch in self._to_arrow_iterable(\r\n-> 1543 bqstorage_client=bqstorage_client\r\n 1544 ):\r\n 1545 record_batches.append(record_batch)\r\n\r\n/opt/conda/lib/python3.7/site-packages/google/cloud/bigquery/table.py in _to_page_iterable(self, bqstorage_download, tabledata_list_download, bqstorage_client)\r\n 1433 )\r\n 1434 )\r\n-> 1435 for item in tabledata_list_download():\r\n 1436 yield item\r\n 1437 \r\n\r\n/opt/conda/lib/python3.7/site-packages/google/cloud/bigquery/_pandas_helpers.py in download_arrow_tabledata_list(pages, bq_schema)\r\n 523 \r\n 524 for page in pages:\r\n--> 525 yield 
_tabledata_list_page_to_arrow(page, column_names, arrow_types)\r\n 526 \r\n 527 \r\n\r\n/opt/conda/lib/python3.7/site-packages/google/cloud/bigquery/_pandas_helpers.py in _tabledata_list_page_to_arrow(page, column_names, arrow_types)\r\n 499 \r\n 500 if isinstance(column_names, pyarrow.Schema):\r\n--> 501 return pyarrow.RecordBatch.from_arrays(arrays, schema=column_names)\r\n 502 return pyarrow.RecordBatch.from_arrays(arrays, names=column_names)\r\n 503 \r\n\r\n/opt/conda/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.RecordBatch.from_arrays()\r\n\r\nTypeError: from_arrays() takes at least 2 positional arguments (1 given)\r\n\r\n```\r\n\r\n\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\n\n# Package metadata.\n\nname = \"google-cloud-bigquery\"\ndescription = \"Google BigQuery API client library\"\nversion = \"1.24.0\"\n# Should be one of:\n# 'Development Status :: 3 - Alpha'\n# 'Development Status :: 4 - Beta'\n# 'Development Status :: 5 - Production/Stable'\nrelease_status = \"Development Status :: 5 - Production/Stable\"\ndependencies = [\n 'enum34; python_version < \"3.4\"',\n \"google-auth >= 1.9.0, < 2.0dev\",\n \"google-api-core >= 1.15.0, < 2.0dev\",\n \"google-cloud-core >= 1.1.0, < 2.0dev\",\n \"google-resumable-media >= 0.5.0, < 0.6dev\",\n \"protobuf >= 3.6.0\",\n \"six >=1.13.0,< 2.0.0dev\",\n]\nextras = {\n \"bqstorage\": [\n \"google-cloud-bigquery-storage >= 0.6.0, <2.0.0dev\",\n # Bad Linux release for 0.14.0.\n # https://issues.apache.org/jira/browse/ARROW-5868\n \"pyarrow>=0.13.0, != 0.14.0\",\n ],\n \"pandas\": [\"pandas>=0.17.1\"],\n # Exclude PyArrow dependency from Windows Python 2.7.\n 'pyarrow: platform_system != \"Windows\" or python_version >= \"3.4\"': [\n # Bad Linux release for 0.14.0.\n # https://issues.apache.org/jira/browse/ARROW-5868\n \"pyarrow>=0.4.1, != 0.14.0\"\n ],\n \"tqdm\": [\"tqdm >= 4.0.0, <5.0.0dev\"],\n \"fastparquet\": [\"fastparquet\", \"python-snappy\"],\n}\n\nall_extras = []\n\nfor extra in extras:\n if extra == \"fastparquet\":\n # Skip fastparquet from \"all\" because it is redundant with pyarrow and\n # creates a dependency on pre-release versions of numpy. See:\n # https://github.com/googleapis/google-cloud-python/issues/8549\n continue\n all_extras.extend(extras[extra])\n\nextras[\"all\"] = all_extras\n\n# Setup boilerplate below this line.\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, \"README.rst\")\nwith io.open(readme_filename, encoding=\"utf-8\") as readme_file:\n readme = readme_file.read()\n\n# Only include packages under the 'google' namespace. 
Do not include tests,\n# benchmarks, etc.\npackages = [\n package for package in setuptools.find_packages() if package.startswith(\"google\")\n]\n\n# Determine which namespaces are needed.\nnamespaces = [\"google\"]\nif \"google.cloud\" in packages:\n namespaces.append(\"google.cloud\")\n\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author=\"Google LLC\",\n author_email=\"[email protected]\",\n license=\"Apache 2.0\",\n url=\"https://github.com/googleapis/python-bigquery\",\n classifiers=[\n release_status,\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet\",\n ],\n platforms=\"Posix; MacOS X; Windows\",\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n python_requires=\">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*\",\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\n\n# Package metadata.\n\nname = \"google-cloud-bigquery\"\ndescription = \"Google BigQuery API client library\"\nversion = \"1.24.0\"\n# Should be one of:\n# 'Development Status :: 3 - Alpha'\n# 'Development Status :: 4 - Beta'\n# 'Development Status :: 5 - Production/Stable'\nrelease_status = \"Development Status :: 5 - Production/Stable\"\ndependencies = [\n 'enum34; python_version < \"3.4\"',\n \"google-auth >= 1.9.0, < 2.0dev\",\n \"google-api-core >= 1.15.0, < 2.0dev\",\n \"google-cloud-core >= 1.1.0, < 2.0dev\",\n \"google-resumable-media >= 0.5.0, < 0.6dev\",\n \"protobuf >= 3.6.0\",\n \"six >=1.13.0,< 2.0.0dev\",\n]\nextras = {\n \"bqstorage\": [\n \"google-cloud-bigquery-storage >= 0.6.0, <2.0.0dev\",\n \"pyarrow>=0.16.0, < 2.0dev\",\n ],\n \"pandas\": [\"pandas>=0.17.1\"],\n # Exclude PyArrow dependency from Windows Python 2.7.\n 'pyarrow: platform_system != \"Windows\" or python_version >= \"3.4\"': [\n # Bad Linux release for 0.14.0.\n # https://issues.apache.org/jira/browse/ARROW-5868\n \"pyarrow>=0.4.1, != 0.14.0\"\n ],\n \"tqdm\": [\"tqdm >= 4.0.0, <5.0.0dev\"],\n \"fastparquet\": [\"fastparquet\", \"python-snappy\"],\n}\n\nall_extras = []\n\nfor extra in extras:\n if extra == \"fastparquet\":\n # Skip fastparquet from \"all\" because it is redundant with pyarrow and\n # creates a dependency on pre-release versions of numpy. 
See:\n # https://github.com/googleapis/google-cloud-python/issues/8549\n continue\n all_extras.extend(extras[extra])\n\nextras[\"all\"] = all_extras\n\n# Setup boilerplate below this line.\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, \"README.rst\")\nwith io.open(readme_filename, encoding=\"utf-8\") as readme_file:\n readme = readme_file.read()\n\n# Only include packages under the 'google' namespace. Do not include tests,\n# benchmarks, etc.\npackages = [\n package for package in setuptools.find_packages() if package.startswith(\"google\")\n]\n\n# Determine which namespaces are needed.\nnamespaces = [\"google\"]\nif \"google.cloud\" in packages:\n namespaces.append(\"google.cloud\")\n\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author=\"Google LLC\",\n author_email=\"[email protected]\",\n license=\"Apache 2.0\",\n url=\"https://github.com/googleapis/python-bigquery\",\n classifiers=[\n release_status,\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet\",\n ],\n platforms=\"Posix; MacOS X; Windows\",\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n python_requires=\">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*\",\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "setup.py"}]}
| 2,612 | 174 |
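An illustrative aside on the entry above: the traceback in its prompt shows `pyarrow.RecordBatch.from_arrays(arrays, schema=column_names)` failing on an older pyarrow whose signature still required `names` positionally, which is why the patch raises the dependency floor to `pyarrow>=0.16.0`. A minimal sketch of the affected call pattern, assuming only that a pyarrow build supporting both keyword forms is installed:

```python
# Illustrative sketch only, not the library's patch. It assumes a pyarrow
# build where RecordBatch.from_arrays accepts `names=` (all versions) and
# `schema=` (0.16.0 and later), which is what the raised floor guarantees.
import pyarrow


def page_to_record_batch(arrays, column_names):
    # `column_names` is either a pyarrow.Schema (the failing path in the
    # traceback) or a plain list of column names.
    if isinstance(column_names, pyarrow.Schema):
        # Works on pyarrow >= 0.16.0; older releases required `names`
        # positionally and raised the TypeError shown above.
        return pyarrow.RecordBatch.from_arrays(arrays, schema=column_names)
    return pyarrow.RecordBatch.from_arrays(arrays, names=column_names)
```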
gh_patches_debug_7018
|
rasdani/github-patches
|
git_diff
|
bookwyrm-social__bookwyrm-592
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Wrong baseurl in remote @-mention notification
When someone on a remote server @-mentions me on Bookwyrm, the "status" link in notifications has the wrong baseUrl.
Example:

The url that the username `Flancian` links to is `https://bookwyrm.social/user/[email protected]`. The url that `status` links to is `https://social.coop/users/flancian/status/14794`, which takes me to a 404 on the `social.coop` server. The correct url that status should be linking to is `https://bookwyrm.social/user/[email protected]/status/14794`.
I've confirmed this happens for @-mentions from any remote server.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bookwyrm/models/base_model.py`
Content:
```
1 ''' base model with default fields '''
2 from base64 import b64encode
3 from functools import reduce
4 import operator
5 from uuid import uuid4
6
7 from Crypto.PublicKey import RSA
8 from Crypto.Signature import pkcs1_15
9 from Crypto.Hash import SHA256
10 from django.core.paginator import Paginator
11 from django.db import models
12 from django.db.models import Q
13 from django.dispatch import receiver
14
15 from bookwyrm import activitypub
16 from bookwyrm.settings import DOMAIN, PAGE_LENGTH
17 from .fields import ImageField, ManyToManyField, RemoteIdField
18
19
20 class BookWyrmModel(models.Model):
21 ''' shared fields '''
22 created_date = models.DateTimeField(auto_now_add=True)
23 updated_date = models.DateTimeField(auto_now=True)
24 remote_id = RemoteIdField(null=True, activitypub_field='id')
25
26 def get_remote_id(self):
27 ''' generate a url that resolves to the local object '''
28 base_path = 'https://%s' % DOMAIN
29 if hasattr(self, 'user'):
30 base_path = self.user.remote_id
31 model_name = type(self).__name__.lower()
32 return '%s/%s/%d' % (base_path, model_name, self.id)
33
34 class Meta:
35 ''' this is just here to provide default fields for other models '''
36 abstract = True
37
38 @property
39 def local_path(self):
40 ''' how to link to this object in the local app '''
41 return self.get_remote_id().replace('https://%s' % DOMAIN, '')
42
43
44 @receiver(models.signals.post_save)
45 #pylint: disable=unused-argument
46 def execute_after_save(sender, instance, created, *args, **kwargs):
47 ''' set the remote_id after save (when the id is available) '''
48 if not created or not hasattr(instance, 'get_remote_id'):
49 return
50 if not instance.remote_id:
51 instance.remote_id = instance.get_remote_id()
52 instance.save()
53
54
55 def unfurl_related_field(related_field, sort_field=None):
56 ''' load reverse lookups (like public key owner or Status attachment '''
57 if hasattr(related_field, 'all'):
58 return [unfurl_related_field(i) for i in related_field.order_by(
59 sort_field).all()]
60 if related_field.reverse_unfurl:
61 return related_field.field_to_activity()
62 return related_field.remote_id
63
64
65 class ActivitypubMixin:
66 ''' add this mixin for models that are AP serializable '''
67 activity_serializer = lambda: {}
68 reverse_unfurl = False
69
70 def __init__(self, *args, **kwargs):
71 ''' collect some info on model fields '''
72 self.image_fields = []
73 self.many_to_many_fields = []
74 self.simple_fields = [] # "simple"
75 for field in self._meta.get_fields():
76 if not hasattr(field, 'field_to_activity'):
77 continue
78
79 if isinstance(field, ImageField):
80 self.image_fields.append(field)
81 elif isinstance(field, ManyToManyField):
82 self.many_to_many_fields.append(field)
83 else:
84 self.simple_fields.append(field)
85
86 self.activity_fields = self.image_fields + \
87 self.many_to_many_fields + self.simple_fields
88
89 self.deserialize_reverse_fields = self.deserialize_reverse_fields \
90 if hasattr(self, 'deserialize_reverse_fields') else []
91 self.serialize_reverse_fields = self.serialize_reverse_fields \
92 if hasattr(self, 'serialize_reverse_fields') else []
93
94 super().__init__(*args, **kwargs)
95
96
97 @classmethod
98 def find_existing_by_remote_id(cls, remote_id):
99 ''' look up a remote id in the db '''
100 return cls.find_existing({'id': remote_id})
101
102 @classmethod
103 def find_existing(cls, data):
104 ''' compare data to fields that can be used for deduplation.
105 This always includes remote_id, but can also be unique identifiers
106 like an isbn for an edition '''
107 filters = []
108 for field in cls._meta.get_fields():
109 if not hasattr(field, 'deduplication_field') or \
110 not field.deduplication_field:
111 continue
112
113 value = data.get(field.get_activitypub_field())
114 if not value:
115 continue
116 filters.append({field.name: value})
117
118 if hasattr(cls, 'origin_id') and 'id' in data:
119 # kinda janky, but this handles special case for books
120 filters.append({'origin_id': data['id']})
121
122 if not filters:
123 # if there are no deduplication fields, it will match the first
124 # item no matter what. this shouldn't happen but just in case.
125 return None
126
127 objects = cls.objects
128 if hasattr(objects, 'select_subclasses'):
129 objects = objects.select_subclasses()
130
131 # an OR operation on all the match fields
132 match = objects.filter(
133 reduce(
134 operator.or_, (Q(**f) for f in filters)
135 )
136 )
137 # there OUGHT to be only one match
138 return match.first()
139
140
141 def to_activity(self):
142 ''' convert from a model to an activity '''
143 activity = generate_activity(self)
144 return self.activity_serializer(**activity).serialize()
145
146
147 def to_create_activity(self, user, **kwargs):
148 ''' returns the object wrapped in a Create activity '''
149 activity_object = self.to_activity(**kwargs)
150
151 signature = None
152 create_id = self.remote_id + '/activity'
153 if 'content' in activity_object:
154 signer = pkcs1_15.new(RSA.import_key(user.key_pair.private_key))
155 content = activity_object['content']
156 signed_message = signer.sign(SHA256.new(content.encode('utf8')))
157
158 signature = activitypub.Signature(
159 creator='%s#main-key' % user.remote_id,
160 created=activity_object['published'],
161 signatureValue=b64encode(signed_message).decode('utf8')
162 )
163
164 return activitypub.Create(
165 id=create_id,
166 actor=user.remote_id,
167 to=activity_object['to'],
168 cc=activity_object['cc'],
169 object=activity_object,
170 signature=signature,
171 ).serialize()
172
173
174 def to_delete_activity(self, user):
175 ''' notice of deletion '''
176 return activitypub.Delete(
177 id=self.remote_id + '/activity',
178 actor=user.remote_id,
179 to=['%s/followers' % user.remote_id],
180 cc=['https://www.w3.org/ns/activitystreams#Public'],
181 object=self.to_activity(),
182 ).serialize()
183
184
185 def to_update_activity(self, user):
186 ''' wrapper for Updates to an activity '''
187 activity_id = '%s#update/%s' % (self.remote_id, uuid4())
188 return activitypub.Update(
189 id=activity_id,
190 actor=user.remote_id,
191 to=['https://www.w3.org/ns/activitystreams#Public'],
192 object=self.to_activity()
193 ).serialize()
194
195
196 def to_undo_activity(self, user):
197 ''' undo an action '''
198 return activitypub.Undo(
199 id='%s#undo' % self.remote_id,
200 actor=user.remote_id,
201 object=self.to_activity()
202 ).serialize()
203
204
205 class OrderedCollectionPageMixin(ActivitypubMixin):
206 ''' just the paginator utilities, so you don't HAVE to
207 override ActivitypubMixin's to_activity (ie, for outbox '''
208 @property
209 def collection_remote_id(self):
210 ''' this can be overriden if there's a special remote id, ie outbox '''
211 return self.remote_id
212
213
214 def to_ordered_collection(self, queryset, \
215 remote_id=None, page=False, collection_only=False, **kwargs):
216 ''' an ordered collection of whatevers '''
217 if not queryset.ordered:
218 raise RuntimeError('queryset must be ordered')
219
220 remote_id = remote_id or self.remote_id
221 if page:
222 return to_ordered_collection_page(
223 queryset, remote_id, **kwargs)
224
225 if collection_only or not hasattr(self, 'activity_serializer'):
226 serializer = activitypub.OrderedCollection
227 activity = {}
228 else:
229 serializer = self.activity_serializer
230 # a dict from the model fields
231 activity = generate_activity(self)
232
233 if remote_id:
234 activity['id'] = remote_id
235
236 paginated = Paginator(queryset, PAGE_LENGTH)
237 # add computed fields specific to orderd collections
238 activity['totalItems'] = paginated.count
239 activity['first'] = '%s?page=1' % remote_id
240 activity['last'] = '%s?page=%d' % (remote_id, paginated.num_pages)
241
242 return serializer(**activity).serialize()
243
244
245 # pylint: disable=unused-argument
246 def to_ordered_collection_page(
247 queryset, remote_id, id_only=False, page=1, **kwargs):
248 ''' serialize and pagiante a queryset '''
249 paginated = Paginator(queryset, PAGE_LENGTH)
250
251 activity_page = paginated.page(page)
252 if id_only:
253 items = [s.remote_id for s in activity_page.object_list]
254 else:
255 items = [s.to_activity() for s in activity_page.object_list]
256
257 prev_page = next_page = None
258 if activity_page.has_next():
259 next_page = '%s?page=%d' % (remote_id, activity_page.next_page_number())
260 if activity_page.has_previous():
261 prev_page = '%s?page=%d' % \
262 (remote_id, activity_page.previous_page_number())
263 return activitypub.OrderedCollectionPage(
264 id='%s?page=%s' % (remote_id, page),
265 partOf=remote_id,
266 orderedItems=items,
267 next=next_page,
268 prev=prev_page
269 ).serialize()
270
271
272 class OrderedCollectionMixin(OrderedCollectionPageMixin):
273 ''' extends activitypub models to work as ordered collections '''
274 @property
275 def collection_queryset(self):
276 ''' usually an ordered collection model aggregates a different model '''
277 raise NotImplementedError('Model must define collection_queryset')
278
279 activity_serializer = activitypub.OrderedCollection
280
281 def to_activity(self, **kwargs):
282 ''' an ordered collection of the specified model queryset '''
283 return self.to_ordered_collection(self.collection_queryset, **kwargs)
284
285
286 def generate_activity(obj):
287 ''' go through the fields on an object '''
288 activity = {}
289 for field in obj.activity_fields:
290 field.set_activity_from_field(activity, obj)
291
292 if hasattr(obj, 'serialize_reverse_fields'):
293 # for example, editions of a work
294 for model_field_name, activity_field_name, sort_field in \
295 obj.serialize_reverse_fields:
296 related_field = getattr(obj, model_field_name)
297 activity[activity_field_name] = \
298 unfurl_related_field(related_field, sort_field)
299
300 if not activity.get('id'):
301 activity['id'] = obj.get_remote_id()
302 return activity
303
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bookwyrm/models/base_model.py b/bookwyrm/models/base_model.py
--- a/bookwyrm/models/base_model.py
+++ b/bookwyrm/models/base_model.py
@@ -27,7 +27,7 @@
''' generate a url that resolves to the local object '''
base_path = 'https://%s' % DOMAIN
if hasattr(self, 'user'):
- base_path = self.user.remote_id
+ base_path = '%s%s' % (base_path, self.user.local_path)
model_name = type(self).__name__.lower()
return '%s/%s/%d' % (base_path, model_name, self.id)
|
{"golden_diff": "diff --git a/bookwyrm/models/base_model.py b/bookwyrm/models/base_model.py\n--- a/bookwyrm/models/base_model.py\n+++ b/bookwyrm/models/base_model.py\n@@ -27,7 +27,7 @@\n ''' generate a url that resolves to the local object '''\n base_path = 'https://%s' % DOMAIN\n if hasattr(self, 'user'):\n- base_path = self.user.remote_id\n+ base_path = '%s%s' % (base_path, self.user.local_path)\n model_name = type(self).__name__.lower()\n return '%s/%s/%d' % (base_path, model_name, self.id)\n", "issue": "Wrong baseurl in remote @-mention notification\nWhen someone on a remote server @-mentions me on Bookwyrm, the \"status\" link in notifications has the wrong baseUrl.\r\n\r\nExample:\r\n\r\n\r\n\r\nThe url that the username `Flancian` links to is `https://bookwyrm.social/user/[email protected]`. The url that `status` links to is `https://social.coop/users/flancian/status/14794`, which takes me to a 404 on the `social.coop` server. The correct url that status should be linking to is `https://bookwyrm.social/user/[email protected]/status/14794`.\r\n\r\nI've confirmed this happens for @-mentions from any remote server.\n", "before_files": [{"content": "''' base model with default fields '''\nfrom base64 import b64encode\nfrom functools import reduce\nimport operator\nfrom uuid import uuid4\n\nfrom Crypto.PublicKey import RSA\nfrom Crypto.Signature import pkcs1_15\nfrom Crypto.Hash import SHA256\nfrom django.core.paginator import Paginator\nfrom django.db import models\nfrom django.db.models import Q\nfrom django.dispatch import receiver\n\nfrom bookwyrm import activitypub\nfrom bookwyrm.settings import DOMAIN, PAGE_LENGTH\nfrom .fields import ImageField, ManyToManyField, RemoteIdField\n\n\nclass BookWyrmModel(models.Model):\n ''' shared fields '''\n created_date = models.DateTimeField(auto_now_add=True)\n updated_date = models.DateTimeField(auto_now=True)\n remote_id = RemoteIdField(null=True, activitypub_field='id')\n\n def get_remote_id(self):\n ''' generate a url that resolves to the local object '''\n base_path = 'https://%s' % DOMAIN\n if hasattr(self, 'user'):\n base_path = self.user.remote_id\n model_name = type(self).__name__.lower()\n return '%s/%s/%d' % (base_path, model_name, self.id)\n\n class Meta:\n ''' this is just here to provide default fields for other models '''\n abstract = True\n\n @property\n def local_path(self):\n ''' how to link to this object in the local app '''\n return self.get_remote_id().replace('https://%s' % DOMAIN, '')\n\n\n@receiver(models.signals.post_save)\n#pylint: disable=unused-argument\ndef execute_after_save(sender, instance, created, *args, **kwargs):\n ''' set the remote_id after save (when the id is available) '''\n if not created or not hasattr(instance, 'get_remote_id'):\n return\n if not instance.remote_id:\n instance.remote_id = instance.get_remote_id()\n instance.save()\n\n\ndef unfurl_related_field(related_field, sort_field=None):\n ''' load reverse lookups (like public key owner or Status attachment '''\n if hasattr(related_field, 'all'):\n return [unfurl_related_field(i) for i in related_field.order_by(\n sort_field).all()]\n if related_field.reverse_unfurl:\n return related_field.field_to_activity()\n return related_field.remote_id\n\n\nclass ActivitypubMixin:\n ''' add this mixin for models that are AP serializable '''\n activity_serializer = lambda: {}\n reverse_unfurl = False\n\n def __init__(self, *args, **kwargs):\n ''' collect some info on model fields '''\n self.image_fields = []\n self.many_to_many_fields = []\n 
self.simple_fields = [] # \"simple\"\n for field in self._meta.get_fields():\n if not hasattr(field, 'field_to_activity'):\n continue\n\n if isinstance(field, ImageField):\n self.image_fields.append(field)\n elif isinstance(field, ManyToManyField):\n self.many_to_many_fields.append(field)\n else:\n self.simple_fields.append(field)\n\n self.activity_fields = self.image_fields + \\\n self.many_to_many_fields + self.simple_fields\n\n self.deserialize_reverse_fields = self.deserialize_reverse_fields \\\n if hasattr(self, 'deserialize_reverse_fields') else []\n self.serialize_reverse_fields = self.serialize_reverse_fields \\\n if hasattr(self, 'serialize_reverse_fields') else []\n\n super().__init__(*args, **kwargs)\n\n\n @classmethod\n def find_existing_by_remote_id(cls, remote_id):\n ''' look up a remote id in the db '''\n return cls.find_existing({'id': remote_id})\n\n @classmethod\n def find_existing(cls, data):\n ''' compare data to fields that can be used for deduplation.\n This always includes remote_id, but can also be unique identifiers\n like an isbn for an edition '''\n filters = []\n for field in cls._meta.get_fields():\n if not hasattr(field, 'deduplication_field') or \\\n not field.deduplication_field:\n continue\n\n value = data.get(field.get_activitypub_field())\n if not value:\n continue\n filters.append({field.name: value})\n\n if hasattr(cls, 'origin_id') and 'id' in data:\n # kinda janky, but this handles special case for books\n filters.append({'origin_id': data['id']})\n\n if not filters:\n # if there are no deduplication fields, it will match the first\n # item no matter what. this shouldn't happen but just in case.\n return None\n\n objects = cls.objects\n if hasattr(objects, 'select_subclasses'):\n objects = objects.select_subclasses()\n\n # an OR operation on all the match fields\n match = objects.filter(\n reduce(\n operator.or_, (Q(**f) for f in filters)\n )\n )\n # there OUGHT to be only one match\n return match.first()\n\n\n def to_activity(self):\n ''' convert from a model to an activity '''\n activity = generate_activity(self)\n return self.activity_serializer(**activity).serialize()\n\n\n def to_create_activity(self, user, **kwargs):\n ''' returns the object wrapped in a Create activity '''\n activity_object = self.to_activity(**kwargs)\n\n signature = None\n create_id = self.remote_id + '/activity'\n if 'content' in activity_object:\n signer = pkcs1_15.new(RSA.import_key(user.key_pair.private_key))\n content = activity_object['content']\n signed_message = signer.sign(SHA256.new(content.encode('utf8')))\n\n signature = activitypub.Signature(\n creator='%s#main-key' % user.remote_id,\n created=activity_object['published'],\n signatureValue=b64encode(signed_message).decode('utf8')\n )\n\n return activitypub.Create(\n id=create_id,\n actor=user.remote_id,\n to=activity_object['to'],\n cc=activity_object['cc'],\n object=activity_object,\n signature=signature,\n ).serialize()\n\n\n def to_delete_activity(self, user):\n ''' notice of deletion '''\n return activitypub.Delete(\n id=self.remote_id + '/activity',\n actor=user.remote_id,\n to=['%s/followers' % user.remote_id],\n cc=['https://www.w3.org/ns/activitystreams#Public'],\n object=self.to_activity(),\n ).serialize()\n\n\n def to_update_activity(self, user):\n ''' wrapper for Updates to an activity '''\n activity_id = '%s#update/%s' % (self.remote_id, uuid4())\n return activitypub.Update(\n id=activity_id,\n actor=user.remote_id,\n to=['https://www.w3.org/ns/activitystreams#Public'],\n object=self.to_activity()\n 
).serialize()\n\n\n def to_undo_activity(self, user):\n ''' undo an action '''\n return activitypub.Undo(\n id='%s#undo' % self.remote_id,\n actor=user.remote_id,\n object=self.to_activity()\n ).serialize()\n\n\nclass OrderedCollectionPageMixin(ActivitypubMixin):\n ''' just the paginator utilities, so you don't HAVE to\n override ActivitypubMixin's to_activity (ie, for outbox '''\n @property\n def collection_remote_id(self):\n ''' this can be overriden if there's a special remote id, ie outbox '''\n return self.remote_id\n\n\n def to_ordered_collection(self, queryset, \\\n remote_id=None, page=False, collection_only=False, **kwargs):\n ''' an ordered collection of whatevers '''\n if not queryset.ordered:\n raise RuntimeError('queryset must be ordered')\n\n remote_id = remote_id or self.remote_id\n if page:\n return to_ordered_collection_page(\n queryset, remote_id, **kwargs)\n\n if collection_only or not hasattr(self, 'activity_serializer'):\n serializer = activitypub.OrderedCollection\n activity = {}\n else:\n serializer = self.activity_serializer\n # a dict from the model fields\n activity = generate_activity(self)\n\n if remote_id:\n activity['id'] = remote_id\n\n paginated = Paginator(queryset, PAGE_LENGTH)\n # add computed fields specific to orderd collections\n activity['totalItems'] = paginated.count\n activity['first'] = '%s?page=1' % remote_id\n activity['last'] = '%s?page=%d' % (remote_id, paginated.num_pages)\n\n return serializer(**activity).serialize()\n\n\n# pylint: disable=unused-argument\ndef to_ordered_collection_page(\n queryset, remote_id, id_only=False, page=1, **kwargs):\n ''' serialize and pagiante a queryset '''\n paginated = Paginator(queryset, PAGE_LENGTH)\n\n activity_page = paginated.page(page)\n if id_only:\n items = [s.remote_id for s in activity_page.object_list]\n else:\n items = [s.to_activity() for s in activity_page.object_list]\n\n prev_page = next_page = None\n if activity_page.has_next():\n next_page = '%s?page=%d' % (remote_id, activity_page.next_page_number())\n if activity_page.has_previous():\n prev_page = '%s?page=%d' % \\\n (remote_id, activity_page.previous_page_number())\n return activitypub.OrderedCollectionPage(\n id='%s?page=%s' % (remote_id, page),\n partOf=remote_id,\n orderedItems=items,\n next=next_page,\n prev=prev_page\n ).serialize()\n\n\nclass OrderedCollectionMixin(OrderedCollectionPageMixin):\n ''' extends activitypub models to work as ordered collections '''\n @property\n def collection_queryset(self):\n ''' usually an ordered collection model aggregates a different model '''\n raise NotImplementedError('Model must define collection_queryset')\n\n activity_serializer = activitypub.OrderedCollection\n\n def to_activity(self, **kwargs):\n ''' an ordered collection of the specified model queryset '''\n return self.to_ordered_collection(self.collection_queryset, **kwargs)\n\n\ndef generate_activity(obj):\n ''' go through the fields on an object '''\n activity = {}\n for field in obj.activity_fields:\n field.set_activity_from_field(activity, obj)\n\n if hasattr(obj, 'serialize_reverse_fields'):\n # for example, editions of a work\n for model_field_name, activity_field_name, sort_field in \\\n obj.serialize_reverse_fields:\n related_field = getattr(obj, model_field_name)\n activity[activity_field_name] = \\\n unfurl_related_field(related_field, sort_field)\n\n if not activity.get('id'):\n activity['id'] = obj.get_remote_id()\n return activity\n", "path": "bookwyrm/models/base_model.py"}], "after_files": [{"content": "''' base model with 
default fields '''\nfrom base64 import b64encode\nfrom functools import reduce\nimport operator\nfrom uuid import uuid4\n\nfrom Crypto.PublicKey import RSA\nfrom Crypto.Signature import pkcs1_15\nfrom Crypto.Hash import SHA256\nfrom django.core.paginator import Paginator\nfrom django.db import models\nfrom django.db.models import Q\nfrom django.dispatch import receiver\n\nfrom bookwyrm import activitypub\nfrom bookwyrm.settings import DOMAIN, PAGE_LENGTH\nfrom .fields import ImageField, ManyToManyField, RemoteIdField\n\n\nclass BookWyrmModel(models.Model):\n ''' shared fields '''\n created_date = models.DateTimeField(auto_now_add=True)\n updated_date = models.DateTimeField(auto_now=True)\n remote_id = RemoteIdField(null=True, activitypub_field='id')\n\n def get_remote_id(self):\n ''' generate a url that resolves to the local object '''\n base_path = 'https://%s' % DOMAIN\n if hasattr(self, 'user'):\n base_path = '%s%s' % (base_path, self.user.local_path)\n model_name = type(self).__name__.lower()\n return '%s/%s/%d' % (base_path, model_name, self.id)\n\n class Meta:\n ''' this is just here to provide default fields for other models '''\n abstract = True\n\n @property\n def local_path(self):\n ''' how to link to this object in the local app '''\n return self.get_remote_id().replace('https://%s' % DOMAIN, '')\n\n\n@receiver(models.signals.post_save)\n#pylint: disable=unused-argument\ndef execute_after_save(sender, instance, created, *args, **kwargs):\n ''' set the remote_id after save (when the id is available) '''\n if not created or not hasattr(instance, 'get_remote_id'):\n return\n if not instance.remote_id:\n instance.remote_id = instance.get_remote_id()\n instance.save()\n\n\ndef unfurl_related_field(related_field, sort_field=None):\n ''' load reverse lookups (like public key owner or Status attachment '''\n if hasattr(related_field, 'all'):\n return [unfurl_related_field(i) for i in related_field.order_by(\n sort_field).all()]\n if related_field.reverse_unfurl:\n return related_field.field_to_activity()\n return related_field.remote_id\n\n\nclass ActivitypubMixin:\n ''' add this mixin for models that are AP serializable '''\n activity_serializer = lambda: {}\n reverse_unfurl = False\n\n def __init__(self, *args, **kwargs):\n ''' collect some info on model fields '''\n self.image_fields = []\n self.many_to_many_fields = []\n self.simple_fields = [] # \"simple\"\n for field in self._meta.get_fields():\n if not hasattr(field, 'field_to_activity'):\n continue\n\n if isinstance(field, ImageField):\n self.image_fields.append(field)\n elif isinstance(field, ManyToManyField):\n self.many_to_many_fields.append(field)\n else:\n self.simple_fields.append(field)\n\n self.activity_fields = self.image_fields + \\\n self.many_to_many_fields + self.simple_fields\n\n self.deserialize_reverse_fields = self.deserialize_reverse_fields \\\n if hasattr(self, 'deserialize_reverse_fields') else []\n self.serialize_reverse_fields = self.serialize_reverse_fields \\\n if hasattr(self, 'serialize_reverse_fields') else []\n\n super().__init__(*args, **kwargs)\n\n\n @classmethod\n def find_existing_by_remote_id(cls, remote_id):\n ''' look up a remote id in the db '''\n return cls.find_existing({'id': remote_id})\n\n @classmethod\n def find_existing(cls, data):\n ''' compare data to fields that can be used for deduplation.\n This always includes remote_id, but can also be unique identifiers\n like an isbn for an edition '''\n filters = []\n for field in cls._meta.get_fields():\n if not hasattr(field, 
'deduplication_field') or \\\n not field.deduplication_field:\n continue\n\n value = data.get(field.get_activitypub_field())\n if not value:\n continue\n filters.append({field.name: value})\n\n if hasattr(cls, 'origin_id') and 'id' in data:\n # kinda janky, but this handles special case for books\n filters.append({'origin_id': data['id']})\n\n if not filters:\n # if there are no deduplication fields, it will match the first\n # item no matter what. this shouldn't happen but just in case.\n return None\n\n objects = cls.objects\n if hasattr(objects, 'select_subclasses'):\n objects = objects.select_subclasses()\n\n # an OR operation on all the match fields\n match = objects.filter(\n reduce(\n operator.or_, (Q(**f) for f in filters)\n )\n )\n # there OUGHT to be only one match\n return match.first()\n\n\n def to_activity(self):\n ''' convert from a model to an activity '''\n activity = generate_activity(self)\n return self.activity_serializer(**activity).serialize()\n\n\n def to_create_activity(self, user, **kwargs):\n ''' returns the object wrapped in a Create activity '''\n activity_object = self.to_activity(**kwargs)\n\n signature = None\n create_id = self.remote_id + '/activity'\n if 'content' in activity_object:\n signer = pkcs1_15.new(RSA.import_key(user.key_pair.private_key))\n content = activity_object['content']\n signed_message = signer.sign(SHA256.new(content.encode('utf8')))\n\n signature = activitypub.Signature(\n creator='%s#main-key' % user.remote_id,\n created=activity_object['published'],\n signatureValue=b64encode(signed_message).decode('utf8')\n )\n\n return activitypub.Create(\n id=create_id,\n actor=user.remote_id,\n to=activity_object['to'],\n cc=activity_object['cc'],\n object=activity_object,\n signature=signature,\n ).serialize()\n\n\n def to_delete_activity(self, user):\n ''' notice of deletion '''\n return activitypub.Delete(\n id=self.remote_id + '/activity',\n actor=user.remote_id,\n to=['%s/followers' % user.remote_id],\n cc=['https://www.w3.org/ns/activitystreams#Public'],\n object=self.to_activity(),\n ).serialize()\n\n\n def to_update_activity(self, user):\n ''' wrapper for Updates to an activity '''\n activity_id = '%s#update/%s' % (self.remote_id, uuid4())\n return activitypub.Update(\n id=activity_id,\n actor=user.remote_id,\n to=['https://www.w3.org/ns/activitystreams#Public'],\n object=self.to_activity()\n ).serialize()\n\n\n def to_undo_activity(self, user):\n ''' undo an action '''\n return activitypub.Undo(\n id='%s#undo' % self.remote_id,\n actor=user.remote_id,\n object=self.to_activity()\n ).serialize()\n\n\nclass OrderedCollectionPageMixin(ActivitypubMixin):\n ''' just the paginator utilities, so you don't HAVE to\n override ActivitypubMixin's to_activity (ie, for outbox '''\n @property\n def collection_remote_id(self):\n ''' this can be overriden if there's a special remote id, ie outbox '''\n return self.remote_id\n\n\n def to_ordered_collection(self, queryset, \\\n remote_id=None, page=False, collection_only=False, **kwargs):\n ''' an ordered collection of whatevers '''\n if not queryset.ordered:\n raise RuntimeError('queryset must be ordered')\n\n remote_id = remote_id or self.remote_id\n if page:\n return to_ordered_collection_page(\n queryset, remote_id, **kwargs)\n\n if collection_only or not hasattr(self, 'activity_serializer'):\n serializer = activitypub.OrderedCollection\n activity = {}\n else:\n serializer = self.activity_serializer\n # a dict from the model fields\n activity = generate_activity(self)\n\n if remote_id:\n activity['id'] 
= remote_id\n\n paginated = Paginator(queryset, PAGE_LENGTH)\n # add computed fields specific to orderd collections\n activity['totalItems'] = paginated.count\n activity['first'] = '%s?page=1' % remote_id\n activity['last'] = '%s?page=%d' % (remote_id, paginated.num_pages)\n\n return serializer(**activity).serialize()\n\n\n# pylint: disable=unused-argument\ndef to_ordered_collection_page(\n queryset, remote_id, id_only=False, page=1, **kwargs):\n ''' serialize and pagiante a queryset '''\n paginated = Paginator(queryset, PAGE_LENGTH)\n\n activity_page = paginated.page(page)\n if id_only:\n items = [s.remote_id for s in activity_page.object_list]\n else:\n items = [s.to_activity() for s in activity_page.object_list]\n\n prev_page = next_page = None\n if activity_page.has_next():\n next_page = '%s?page=%d' % (remote_id, activity_page.next_page_number())\n if activity_page.has_previous():\n prev_page = '%s?page=%d' % \\\n (remote_id, activity_page.previous_page_number())\n return activitypub.OrderedCollectionPage(\n id='%s?page=%s' % (remote_id, page),\n partOf=remote_id,\n orderedItems=items,\n next=next_page,\n prev=prev_page\n ).serialize()\n\n\nclass OrderedCollectionMixin(OrderedCollectionPageMixin):\n ''' extends activitypub models to work as ordered collections '''\n @property\n def collection_queryset(self):\n ''' usually an ordered collection model aggregates a different model '''\n raise NotImplementedError('Model must define collection_queryset')\n\n activity_serializer = activitypub.OrderedCollection\n\n def to_activity(self, **kwargs):\n ''' an ordered collection of the specified model queryset '''\n return self.to_ordered_collection(self.collection_queryset, **kwargs)\n\n\ndef generate_activity(obj):\n ''' go through the fields on an object '''\n activity = {}\n for field in obj.activity_fields:\n field.set_activity_from_field(activity, obj)\n\n if hasattr(obj, 'serialize_reverse_fields'):\n # for example, editions of a work\n for model_field_name, activity_field_name, sort_field in \\\n obj.serialize_reverse_fields:\n related_field = getattr(obj, model_field_name)\n activity[activity_field_name] = \\\n unfurl_related_field(related_field, sort_field)\n\n if not activity.get('id'):\n activity['id'] = obj.get_remote_id()\n return activity\n", "path": "bookwyrm/models/base_model.py"}]}
| 3,565 | 144 |
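A hedged illustration of the patch in the entry above: `get_remote_id()` now builds child-object URLs from the local domain plus the user's `local_path` instead of the user's (possibly remote) `remote_id`. The user object and ids below are invented for the example; only the two resulting URLs come from the issue text:

```python
# Invented user/ids for illustration; only the two resulting URLs are taken
# from the issue text in the entry above.
DOMAIN = 'bookwyrm.social'


class FakeUser:
    remote_id = 'https://social.coop/users/flancian'    # remote actor id
    local_path = '/user/[email protected]'         # local link to them


user = FakeUser()
status_id = 14794

# before the patch: children of a remote user inherit the remote base path
old = '%s/%s/%d' % (user.remote_id, 'status', status_id)
# -> https://social.coop/users/flancian/status/14794   (404 on social.coop)

# after the patch: base path is always this server's domain + local_path
new = 'https://%s%s/%s/%d' % (DOMAIN, user.local_path, 'status', status_id)
# -> https://bookwyrm.social/user/[email protected]/status/14794
```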
gh_patches_debug_15039
|
rasdani/github-patches
|
git_diff
|
googleapis__google-api-python-client-469
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`TypeError` when trying to issue `BatchHttpRequest` when service was built using application default credentials
This issue is very similar to #211. My team has an app-engine application that uses batch requests in the Gmail API. After an upgrade of this client library, we started seeing failures:
```
...
File "/base/data/home/apps/s~app-id/modname:version/path/to/my/code.py", line 241, in _user_method
return batch_request.execute(http=self._http)
File "/base/data/home/apps/s~app-id/modname:version/third_party/oauth2client/_helpers.py", line 133, in positional_wrapper
return wrapped(*args, **kwargs)
File "/base/data/home/apps/s~app-id/modname:version/third_party/googleapiclient/http.py", line 1417, in execute
self._execute(http, self._order, self._requests)
File "/base/data/home/apps/s~app-id/modname:version/third_party/googleapiclient/http.py", line 1333, in _execute
body = self._serialize_request(request)
File "/base/data/home/apps/s~app-id/modname:version/third_party/googleapiclient/http.py", line 1204, in _serialize_request
request.http.request.credentials.apply(headers)
File "/base/data/home/apps/s~app-id/modname:version/third_party/oauth2client/client.py", line 558, in apply
headers['Authorization'] = 'Bearer ' + self.access_token
TypeError: cannot concatenate 'str' and 'NoneType' objects
```
Our usage of this client library looks like the following:
```python
from googleapiclient.discovery import build
gmail_service = build('gmail', 'v1') # Uses application credentials
...
authorized_http = user_credentials.authorize(httplib2.Http()) # Uses end user credentials
batch_request = gmail_service.new_batch_http_request()
batch_request.add(
gmail_service.users().threads().get(id=thread_id, userId='me'),
callback=callback,
request_id=thread_id)
batch_request.execute(http=authorized_http)
```
The `gmail_service` is cached at the application level to avoid needing to do an API call for each request (which is why it ends up with the application credentials). What I believe is happening is that
```py
gmail_service.users().threads().get(id=thread_id, userId='me')
```
ends up creating an `HttpRequest` that uses the same `http` that was used to construct `gmail_service` (i.e. the application credentials) and I believe that the application credentials have no `access_token`. This results in the `TypeError` seen above. The fix in [#232](https://github.com/google/google-api-python-client/pull/232/files) won't help (the `http` that is getting passed to `batch_request.execute` is valid).
It seems to me that the application of credentials in [_serialize_request](https://github.com/google/google-api-python-client/blob/master/googleapiclient/http.py#L1202) should be conditional on the credentials actually having an access token:
```python
if request.http is not None and hasattr(request.http.request,
'credentials'):
if request.http.request.credentials.access_token:
request.http.request.credentials.apply(headers)
```
Otherwise, if the sub-request isn't authenticated then the authentication from the outer request should be used (at least, if the [gmail docs](https://developers.google.com/gmail/api/guides/batch) are any indicator):
> The HTTP headers for the outer batch request, except for the Content- headers such as Content-Type, apply to every request in the batch. If you specify a given HTTP header in both the outer request and an individual call, then the individual call header's value overrides the outer batch request header's value. The headers for an individual call apply only to that call.
> For example, if you provide an Authorization header for a specific call, then that header applies only to that call. If you provide an Authorization header for the outer request, then that header applies to all of the individual calls unless they override it with Authorization headers of their own.
--------
In the event that this behavior is actually working as intended or if my proposed fix isn't satisfactory and a real fix would be too difficult, a work-around to this issue is possible by updating the requests that you add to the batch with an appropriately authorized http instance. i.e changing the above to:
```python
batch_request = gmail_service.new_batch_http_request()
request = gmail_service.users().threads().get(id=thread_id, userId='me')
request.http = authorized_http # explicitly set the authorization on the request.
batch_request.add(
request,
callback=callback,
request_id=thread_id)
batch_request.execute(http=authorized_http)
```
Seems to fix the issue.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `googleapiclient/_auth.py`
Content:
```
1 # Copyright 2016 Google Inc. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Helpers for authentication using oauth2client or google-auth."""
16
17 import httplib2
18
19 try:
20 import google.auth
21 import google.auth.credentials
22 HAS_GOOGLE_AUTH = True
23 except ImportError: # pragma: NO COVER
24 HAS_GOOGLE_AUTH = False
25
26 try:
27 import google_auth_httplib2
28 except ImportError: # pragma: NO COVER
29 google_auth_httplib2 = None
30
31 try:
32 import oauth2client
33 import oauth2client.client
34 HAS_OAUTH2CLIENT = True
35 except ImportError: # pragma: NO COVER
36 HAS_OAUTH2CLIENT = False
37
38
39 def default_credentials():
40 """Returns Application Default Credentials."""
41 if HAS_GOOGLE_AUTH:
42 credentials, _ = google.auth.default()
43 return credentials
44 elif HAS_OAUTH2CLIENT:
45 return oauth2client.client.GoogleCredentials.get_application_default()
46 else:
47 raise EnvironmentError(
48 'No authentication library is available. Please install either '
49 'google-auth or oauth2client.')
50
51
52 def with_scopes(credentials, scopes):
53 """Scopes the credentials if necessary.
54
55 Args:
56 credentials (Union[
57 google.auth.credentials.Credentials,
58 oauth2client.client.Credentials]): The credentials to scope.
59 scopes (Sequence[str]): The list of scopes.
60
61 Returns:
62 Union[google.auth.credentials.Credentials,
63 oauth2client.client.Credentials]: The scoped credentials.
64 """
65 if HAS_GOOGLE_AUTH and isinstance(
66 credentials, google.auth.credentials.Credentials):
67 return google.auth.credentials.with_scopes_if_required(
68 credentials, scopes)
69 else:
70 try:
71 if credentials.create_scoped_required():
72 return credentials.create_scoped(scopes)
73 else:
74 return credentials
75 except AttributeError:
76 return credentials
77
78
79 def authorized_http(credentials):
80 """Returns an http client that is authorized with the given credentials.
81
82 Args:
83 credentials (Union[
84 google.auth.credentials.Credentials,
85 oauth2client.client.Credentials]): The credentials to use.
86
87 Returns:
88 Union[httplib2.Http, google_auth_httplib2.AuthorizedHttp]: An
89 authorized http client.
90 """
91 from googleapiclient.http import build_http
92
93 if HAS_GOOGLE_AUTH and isinstance(
94 credentials, google.auth.credentials.Credentials):
95 if google_auth_httplib2 is None:
96 raise ValueError(
97 'Credentials from google.auth specified, but '
98 'google-api-python-client is unable to use these credentials '
99 'unless google-auth-httplib2 is installed. Please install '
100 'google-auth-httplib2.')
101 return google_auth_httplib2.AuthorizedHttp(credentials,
102 http=build_http())
103 else:
104 return credentials.authorize(build_http())
105
106
107 def refresh_credentials(credentials):
108 # Refresh must use a new http instance, as the one associated with the
109 # credentials could be a AuthorizedHttp or an oauth2client-decorated
110 # Http instance which would cause a weird recursive loop of refreshing
111 # and likely tear a hole in spacetime.
112 refresh_http = httplib2.Http()
113 if HAS_GOOGLE_AUTH and isinstance(
114 credentials, google.auth.credentials.Credentials):
115 request = google_auth_httplib2.Request(refresh_http)
116 return credentials.refresh(request)
117 else:
118 return credentials.refresh(refresh_http)
119
120
121 def apply_credentials(credentials, headers):
122 # oauth2client and google-auth have the same interface for this.
123 return credentials.apply(headers)
124
125
126 def is_valid(credentials):
127 if HAS_GOOGLE_AUTH and isinstance(
128 credentials, google.auth.credentials.Credentials):
129 return credentials.valid
130 else:
131 return not credentials.access_token_expired
132
133
134 def get_credentials_from_http(http):
135 if http is None:
136 return None
137 elif hasattr(http.request, 'credentials'):
138 return http.request.credentials
139 elif (hasattr(http, 'credentials')
140 and not isinstance(http.credentials, httplib2.Credentials)):
141 return http.credentials
142 else:
143 return None
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/googleapiclient/_auth.py b/googleapiclient/_auth.py
--- a/googleapiclient/_auth.py
+++ b/googleapiclient/_auth.py
@@ -120,6 +120,8 @@
def apply_credentials(credentials, headers):
# oauth2client and google-auth have the same interface for this.
+ if not is_valid(credentials):
+ refresh_credentials(credentials)
return credentials.apply(headers)
@@ -128,7 +130,9 @@
credentials, google.auth.credentials.Credentials):
return credentials.valid
else:
- return not credentials.access_token_expired
+ return (
+ credentials.access_token is not None and
+ not credentials.access_token_expired)
def get_credentials_from_http(http):
|
{"golden_diff": "diff --git a/googleapiclient/_auth.py b/googleapiclient/_auth.py\n--- a/googleapiclient/_auth.py\n+++ b/googleapiclient/_auth.py\n@@ -120,6 +120,8 @@\n \n def apply_credentials(credentials, headers):\n # oauth2client and google-auth have the same interface for this.\n+ if not is_valid(credentials):\n+ refresh_credentials(credentials)\n return credentials.apply(headers)\n \n \n@@ -128,7 +130,9 @@\n credentials, google.auth.credentials.Credentials):\n return credentials.valid\n else:\n- return not credentials.access_token_expired\n+ return (\n+ credentials.access_token is not None and\n+ not credentials.access_token_expired)\n \n \n def get_credentials_from_http(http):\n", "issue": "`TypeError` when trying to issue `BatchHttpRequest` when service was built using application default credentials\nThis issue is very similar to #211. My team has an app-engine application that uses batch requests in the Gmail API. After an upgrade of this client library, we started seeing failures:\r\n\r\n```\r\n...\r\n File \"/base/data/home/apps/s~app-id/modname:version/path/to/my/code.py\", line 241, in _user_method\r\n return batch_request.execute(http=self._http)\r\n File \"/base/data/home/apps/s~app-id/modname:version/third_party/oauth2client/_helpers.py\", line 133, in positional_wrapper\r\n return wrapped(*args, **kwargs)\r\n File \"/base/data/home/apps/s~app-id/modname:version/third_party/googleapiclient/http.py\", line 1417, in execute\r\n self._execute(http, self._order, self._requests)\r\n File \"/base/data/home/apps/s~app-id/modname:version/third_party/googleapiclient/http.py\", line 1333, in _execute\r\n body = self._serialize_request(request)\r\n File \"/base/data/home/apps/s~app-id/modname:version/third_party/googleapiclient/http.py\", line 1204, in _serialize_request\r\n request.http.request.credentials.apply(headers)\r\n File \"/base/data/home/apps/s~app-id/modname:version/third_party/oauth2client/client.py\", line 558, in apply\r\n headers['Authorization'] = 'Bearer ' + self.access_token\r\nTypeError: cannot concatenate 'str' and 'NoneType' objects\r\n```\r\n\r\nOur usage of this client library looks like the following:\r\n\r\n```python\r\nfrom googleapiclient.discovery import build\r\ngmail_service = build('gmail', 'v1') # Uses application credentials\r\n\r\n...\r\n\r\nauthorized_http = user_credentials.authorize(httplib2.Http()) # Uses end user credentials\r\n\r\nbatch_request = gmail_service.new_batch_http_request()\r\nbatch_request.add(\r\n gmail_service.users().threads().get(id=thread_id, userId='me'),\r\n callback=callback,\r\n request_id=thread_id)\r\n\r\nbatch_request.execute(http=authorized_http)\r\n```\r\n\r\nThe `gmail_service` is cached at the application level to avoid needing to do an API call for each request (which is why it ends up with the application credentials). What I believe is happening is that\r\n\r\n```py\r\ngmail_service.users().threads().get(id=thread_id, userId='me')\r\n```\r\n\r\nends up creating an `HttpRequest` that uses the same `http` that was used to construct `gmail_service` (i.e. the application credentials) and I believe that the application credentials have no `access_token`. This results in the `TypeError` seen above. 
The fix in [#232](https://github.com/google/google-api-python-client/pull/232/files) won't help (the `http` that is getting passed to `batch_request.execute` is valid).\r\n\r\nIt seems to me that the application of credentials in [_serialize_request](https://github.com/google/google-api-python-client/blob/master/googleapiclient/http.py#L1202) should be conditional on the credentials actually having an access token:\r\n\r\n```python\r\nif request.http is not None and hasattr(request.http.request,\r\n 'credentials'):\r\n if request.http.request.credentials.access_token:\r\n request.http.request.credentials.apply(headers)\r\n```\r\n\r\nOtherwise, if the sub-request isn't authenticated then the authentication from the outer request should be used (at least, if the [gmail docs](https://developers.google.com/gmail/api/guides/batch) are any indicator):\r\n\r\n> The HTTP headers for the outer batch request, except for the Content- headers such as Content-Type, apply to every request in the batch. If you specify a given HTTP header in both the outer request and an individual call, then the individual call header's value overrides the outer batch request header's value. The headers for an individual call apply only to that call.\r\n\r\n> For example, if you provide an Authorization header for a specific call, then that header applies only to that call. If you provide an Authorization header for the outer request, then that header applies to all of the individual calls unless they override it with Authorization headers of their own.\r\n\r\n--------\r\n\r\nIn the event that this behavior is actually working as intended or if my proposed fix isn't satisfactory and a real fix would be too difficult, a work-around to this issue is possible by updating the requests that you add to the batch with an appropriately authorized http instance. i.e changing the above to:\r\n\r\n```python\r\n\r\nbatch_request = gmail_service.new_batch_http_request()\r\n\r\nrequest = gmail_service.users().threads().get(id=thread_id, userId='me')\r\nrequest.http = authorized_http # explicitly set the authorization on the request.\r\n\r\nbatch_request.add(\r\n request,\r\n callback=callback,\r\n request_id=thread_id)\r\n\r\nbatch_request.execute(http=authorized_http)\r\n```\r\n\r\nSeems to fix the issue.\n", "before_files": [{"content": "# Copyright 2016 Google Inc. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Helpers for authentication using oauth2client or google-auth.\"\"\"\n\nimport httplib2\n\ntry:\n import google.auth\n import google.auth.credentials\n HAS_GOOGLE_AUTH = True\nexcept ImportError: # pragma: NO COVER\n HAS_GOOGLE_AUTH = False\n\ntry:\n import google_auth_httplib2\nexcept ImportError: # pragma: NO COVER\n google_auth_httplib2 = None\n\ntry:\n import oauth2client\n import oauth2client.client\n HAS_OAUTH2CLIENT = True\nexcept ImportError: # pragma: NO COVER\n HAS_OAUTH2CLIENT = False\n\n\ndef default_credentials():\n \"\"\"Returns Application Default Credentials.\"\"\"\n if HAS_GOOGLE_AUTH:\n credentials, _ = google.auth.default()\n return credentials\n elif HAS_OAUTH2CLIENT:\n return oauth2client.client.GoogleCredentials.get_application_default()\n else:\n raise EnvironmentError(\n 'No authentication library is available. Please install either '\n 'google-auth or oauth2client.')\n\n\ndef with_scopes(credentials, scopes):\n \"\"\"Scopes the credentials if necessary.\n\n Args:\n credentials (Union[\n google.auth.credentials.Credentials,\n oauth2client.client.Credentials]): The credentials to scope.\n scopes (Sequence[str]): The list of scopes.\n\n Returns:\n Union[google.auth.credentials.Credentials,\n oauth2client.client.Credentials]: The scoped credentials.\n \"\"\"\n if HAS_GOOGLE_AUTH and isinstance(\n credentials, google.auth.credentials.Credentials):\n return google.auth.credentials.with_scopes_if_required(\n credentials, scopes)\n else:\n try:\n if credentials.create_scoped_required():\n return credentials.create_scoped(scopes)\n else:\n return credentials\n except AttributeError:\n return credentials\n\n\ndef authorized_http(credentials):\n \"\"\"Returns an http client that is authorized with the given credentials.\n\n Args:\n credentials (Union[\n google.auth.credentials.Credentials,\n oauth2client.client.Credentials]): The credentials to use.\n\n Returns:\n Union[httplib2.Http, google_auth_httplib2.AuthorizedHttp]: An\n authorized http client.\n \"\"\"\n from googleapiclient.http import build_http\n\n if HAS_GOOGLE_AUTH and isinstance(\n credentials, google.auth.credentials.Credentials):\n if google_auth_httplib2 is None:\n raise ValueError(\n 'Credentials from google.auth specified, but '\n 'google-api-python-client is unable to use these credentials '\n 'unless google-auth-httplib2 is installed. 
Please install '\n 'google-auth-httplib2.')\n return google_auth_httplib2.AuthorizedHttp(credentials,\n http=build_http())\n else:\n return credentials.authorize(build_http())\n\n\ndef refresh_credentials(credentials):\n # Refresh must use a new http instance, as the one associated with the\n # credentials could be a AuthorizedHttp or an oauth2client-decorated\n # Http instance which would cause a weird recursive loop of refreshing\n # and likely tear a hole in spacetime.\n refresh_http = httplib2.Http()\n if HAS_GOOGLE_AUTH and isinstance(\n credentials, google.auth.credentials.Credentials):\n request = google_auth_httplib2.Request(refresh_http)\n return credentials.refresh(request)\n else:\n return credentials.refresh(refresh_http)\n\n\ndef apply_credentials(credentials, headers):\n # oauth2client and google-auth have the same interface for this.\n return credentials.apply(headers)\n\n\ndef is_valid(credentials):\n if HAS_GOOGLE_AUTH and isinstance(\n credentials, google.auth.credentials.Credentials):\n return credentials.valid\n else:\n return not credentials.access_token_expired\n\n\ndef get_credentials_from_http(http):\n if http is None:\n return None\n elif hasattr(http.request, 'credentials'):\n return http.request.credentials\n elif (hasattr(http, 'credentials')\n and not isinstance(http.credentials, httplib2.Credentials)):\n return http.credentials\n else:\n return None\n", "path": "googleapiclient/_auth.py"}], "after_files": [{"content": "# Copyright 2016 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Helpers for authentication using oauth2client or google-auth.\"\"\"\n\nimport httplib2\n\ntry:\n import google.auth\n import google.auth.credentials\n HAS_GOOGLE_AUTH = True\nexcept ImportError: # pragma: NO COVER\n HAS_GOOGLE_AUTH = False\n\ntry:\n import google_auth_httplib2\nexcept ImportError: # pragma: NO COVER\n google_auth_httplib2 = None\n\ntry:\n import oauth2client\n import oauth2client.client\n HAS_OAUTH2CLIENT = True\nexcept ImportError: # pragma: NO COVER\n HAS_OAUTH2CLIENT = False\n\n\ndef default_credentials():\n \"\"\"Returns Application Default Credentials.\"\"\"\n if HAS_GOOGLE_AUTH:\n credentials, _ = google.auth.default()\n return credentials\n elif HAS_OAUTH2CLIENT:\n return oauth2client.client.GoogleCredentials.get_application_default()\n else:\n raise EnvironmentError(\n 'No authentication library is available. 
Please install either '\n 'google-auth or oauth2client.')\n\n\ndef with_scopes(credentials, scopes):\n \"\"\"Scopes the credentials if necessary.\n\n Args:\n credentials (Union[\n google.auth.credentials.Credentials,\n oauth2client.client.Credentials]): The credentials to scope.\n scopes (Sequence[str]): The list of scopes.\n\n Returns:\n Union[google.auth.credentials.Credentials,\n oauth2client.client.Credentials]: The scoped credentials.\n \"\"\"\n if HAS_GOOGLE_AUTH and isinstance(\n credentials, google.auth.credentials.Credentials):\n return google.auth.credentials.with_scopes_if_required(\n credentials, scopes)\n else:\n try:\n if credentials.create_scoped_required():\n return credentials.create_scoped(scopes)\n else:\n return credentials\n except AttributeError:\n return credentials\n\n\ndef authorized_http(credentials):\n \"\"\"Returns an http client that is authorized with the given credentials.\n\n Args:\n credentials (Union[\n google.auth.credentials.Credentials,\n oauth2client.client.Credentials]): The credentials to use.\n\n Returns:\n Union[httplib2.Http, google_auth_httplib2.AuthorizedHttp]: An\n authorized http client.\n \"\"\"\n from googleapiclient.http import build_http\n\n if HAS_GOOGLE_AUTH and isinstance(\n credentials, google.auth.credentials.Credentials):\n if google_auth_httplib2 is None:\n raise ValueError(\n 'Credentials from google.auth specified, but '\n 'google-api-python-client is unable to use these credentials '\n 'unless google-auth-httplib2 is installed. Please install '\n 'google-auth-httplib2.')\n return google_auth_httplib2.AuthorizedHttp(credentials,\n http=build_http())\n else:\n return credentials.authorize(build_http())\n\n\ndef refresh_credentials(credentials):\n # Refresh must use a new http instance, as the one associated with the\n # credentials could be a AuthorizedHttp or an oauth2client-decorated\n # Http instance which would cause a weird recursive loop of refreshing\n # and likely tear a hole in spacetime.\n refresh_http = httplib2.Http()\n if HAS_GOOGLE_AUTH and isinstance(\n credentials, google.auth.credentials.Credentials):\n request = google_auth_httplib2.Request(refresh_http)\n return credentials.refresh(request)\n else:\n return credentials.refresh(refresh_http)\n\n\ndef apply_credentials(credentials, headers):\n # oauth2client and google-auth have the same interface for this.\n if not is_valid(credentials):\n refresh_credentials(credentials)\n return credentials.apply(headers)\n\n\ndef is_valid(credentials):\n if HAS_GOOGLE_AUTH and isinstance(\n credentials, google.auth.credentials.Credentials):\n return credentials.valid\n else:\n return (\n credentials.access_token is not None and\n not credentials.access_token_expired)\n\n\ndef get_credentials_from_http(http):\n if http is None:\n return None\n elif hasattr(http.request, 'credentials'):\n return http.request.credentials\n elif (hasattr(http, 'credentials')\n and not isinstance(http.credentials, httplib2.Credentials)):\n return http.credentials\n else:\n return None\n", "path": "googleapiclient/_auth.py"}]}
| 2,601 | 168 |
gh_patches_debug_18802
|
rasdani/github-patches
|
git_diff
|
cobbler__cobbler-3620
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
scm_track: Push script not working
### Describe the bug
After the refactoring of
### Steps to reproduce
1. Enable `scm_track`
2. Perform any change action in Cobbler
3. See error in logs
Note: The error with pathspec is already fixed on `main` through #3021.
### Expected behavior
Cobbler can push the commits to the specified remote.
### Cobbler version
<!--- Paste output from `cobbler version` -->
````paste below
cobbler:~ # cobbler version
Cobbler 3.3.3
source: ?, ?
build time: Thu Dec 19 12:00:00 2019
````
### Operating system
SLES 15 SP5
### Cobbler log
<!--- Paste (partial) output from `/var/log/cobbler/cobbler.log` -->
````paste below
[Thread-20] 2024-02-12T16:03:34 - DEBUG | running python triggers from /var/lib/cobbler/triggers/change/*
[Thread-20] 2024-02-12T16:03:34 - DEBUG | running python trigger cobbler.modules.scm_track
[Thread-20] 2024-02-12T16:03:34 - INFO | running: ['git', 'add', '--all', 'collections']
[Thread-20] 2024-02-12T16:03:34 - INFO | received on stdout:
[Thread-20] 2024-02-12T16:03:34 - DEBUG | received on stderr:
[Thread-20] 2024-02-12T16:03:34 - INFO | running: ['git', 'add', '--all', 'templates']
[Thread-20] 2024-02-12T16:03:34 - INFO | received on stdout:
[Thread-20] 2024-02-12T16:03:34 - DEBUG | received on stderr:
[Thread-20] 2024-02-12T16:03:34 - INFO | running: ['git', 'add', '--all', 'snippets']
[Thread-20] 2024-02-12T16:03:34 - INFO | received on stdout:
[Thread-20] 2024-02-12T16:03:34 - DEBUG | received on stderr:
[Thread-20] 2024-02-12T16:03:34 - INFO | running: ['git', 'commit', '-m', 'API', 'update', '--author', 'Cobbler <[email protected]>']
[Thread-20] 2024-02-12T16:03:34 - INFO | received on stdout:
[Thread-20] 2024-02-12T16:03:34 - DEBUG | received on stderr: error: pathspec 'update' did not match any file(s) known to git
````
### Screenshots
None
### Additional information
Snippet for from the settings:
```yaml
scm_track_enabled: true
scm_track_mode: "git"
scm_track_author: "Cobbler <[email protected]>"
# scm_push_script: "git push"
scm_push_script: ""
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cobbler/modules/scm_track.py`
Content:
```
1 """
2 Cobbler Trigger Module that puts the content of the Cobbler data directory under version control. Depending on
3 ``scm_track_mode`` in the settings, this can either be git or Mercurial.
4 """
5
6 # SPDX-License-Identifier: GPL-2.0-or-later
7 # SPDX-FileCopyrightText: Copyright 2009, Red Hat Inc.
8 # SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
9
10
11 import os
12 from typing import TYPE_CHECKING, Any
13
14 from cobbler import utils
15 from cobbler.cexceptions import CX
16
17 if TYPE_CHECKING:
18 from cobbler.api import CobblerAPI
19
20
21 def register() -> str:
22 """
23 This pure python trigger acts as if it were a legacy shell-trigger, but is much faster. The return of this method
24 indicates the trigger type
25 :return: Always: ``/var/lib/cobbler/triggers/change/*``
26 """
27
28 return "/var/lib/cobbler/triggers/change/*"
29
30
31 def run(api: "CobblerAPI", args: Any):
32 """
33 Runs the trigger, meaning in this case track any changed which happen to a config or data file.
34
35 :param api: The api instance of the Cobbler server. Used to look up if scm_track_enabled is true.
36 :param args: The parameter is currently unused for this trigger.
37 :return: 0 on success, otherwise an exception is risen.
38 """
39 settings = api.settings()
40
41 if not settings.scm_track_enabled:
42 # feature disabled
43 return 0
44
45 mode = str(settings.scm_track_mode).lower()
46 author = str(settings.scm_track_author)
47 push_script = str(settings.scm_push_script)
48
49 if mode == "git":
50 old_dir = os.getcwd()
51 os.chdir("/var/lib/cobbler")
52 if os.getcwd() != "/var/lib/cobbler":
53 raise CX("danger will robinson")
54
55 if not os.path.exists("/var/lib/cobbler/.git"):
56 utils.subprocess_call(["git", "init"], shell=False)
57
58 # FIXME: If we know the remote user of an XMLRPC call use them as the author
59 utils.subprocess_call(["git", "add", "--all", "collections"], shell=False)
60 utils.subprocess_call(["git", "add", "--all", "templates"], shell=False)
61 utils.subprocess_call(["git", "add", "--all", "snippets"], shell=False)
62 utils.subprocess_call(
63 ["git", "commit", "-m", "API update", "--author", author], shell=False
64 )
65
66 if push_script:
67 utils.subprocess_call([push_script], shell=False)
68
69 os.chdir(old_dir)
70 return 0
71
72 if mode == "hg":
73 # use mercurial
74 old_dir = os.getcwd()
75 os.chdir("/var/lib/cobbler")
76 if os.getcwd() != "/var/lib/cobbler":
77 raise CX("danger will robinson")
78
79 if not os.path.exists("/var/lib/cobbler/.hg"):
80 utils.subprocess_call(["hg", "init"], shell=False)
81
82 # FIXME: If we know the remote user of an XMLRPC call use them as the user
83 utils.subprocess_call(["hg", "add collections"], shell=False)
84 utils.subprocess_call(["hg", "add templates"], shell=False)
85 utils.subprocess_call(["hg", "add snippets"], shell=False)
86 utils.subprocess_call(
87 ["hg", "commit", "-m", "API", "update", "--user", author], shell=False
88 )
89
90 if push_script:
91 utils.subprocess_call([push_script], shell=False)
92
93 os.chdir(old_dir)
94 return 0
95
96 raise CX(f"currently unsupported SCM type: {mode}")
97
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/cobbler/modules/scm_track.py b/cobbler/modules/scm_track.py
--- a/cobbler/modules/scm_track.py
+++ b/cobbler/modules/scm_track.py
@@ -64,7 +64,7 @@
)
if push_script:
- utils.subprocess_call([push_script], shell=False)
+ utils.subprocess_call(push_script.split(" "), shell=False)
os.chdir(old_dir)
return 0
@@ -84,11 +84,11 @@
utils.subprocess_call(["hg", "add templates"], shell=False)
utils.subprocess_call(["hg", "add snippets"], shell=False)
utils.subprocess_call(
- ["hg", "commit", "-m", "API", "update", "--user", author], shell=False
+ ["hg", "commit", "-m", "API update", "--user", author], shell=False
)
if push_script:
- utils.subprocess_call([push_script], shell=False)
+ utils.subprocess_call(push_script.split(" "), shell=False)
os.chdir(old_dir)
return 0
|
{"golden_diff": "diff --git a/cobbler/modules/scm_track.py b/cobbler/modules/scm_track.py\n--- a/cobbler/modules/scm_track.py\n+++ b/cobbler/modules/scm_track.py\n@@ -64,7 +64,7 @@\n )\n \n if push_script:\n- utils.subprocess_call([push_script], shell=False)\n+ utils.subprocess_call(push_script.split(\" \"), shell=False)\n \n os.chdir(old_dir)\n return 0\n@@ -84,11 +84,11 @@\n utils.subprocess_call([\"hg\", \"add templates\"], shell=False)\n utils.subprocess_call([\"hg\", \"add snippets\"], shell=False)\n utils.subprocess_call(\n- [\"hg\", \"commit\", \"-m\", \"API\", \"update\", \"--user\", author], shell=False\n+ [\"hg\", \"commit\", \"-m\", \"API update\", \"--user\", author], shell=False\n )\n \n if push_script:\n- utils.subprocess_call([push_script], shell=False)\n+ utils.subprocess_call(push_script.split(\" \"), shell=False)\n \n os.chdir(old_dir)\n return 0\n", "issue": "scm_track: Push script not working\n### Describe the bug\r\n\r\nAfter the refactoring of \r\n\r\n### Steps to reproduce\r\n\r\n1. Enable `scm_track` \r\n2. Perform any change action in Cobbler\r\n3. See error in logs\r\n\r\nNote: The error with pathspec is already fixed on `main` through #3021.\r\n\r\n### Expected behavior\r\n\r\nCobbler can push the commits to the specified remote.\r\n\r\n### Cobbler version\r\n\r\n<!--- Paste output from `cobbler version` -->\r\n````paste below\r\ncobbler:~ # cobbler version\r\nCobbler 3.3.3\r\n source: ?, ?\r\n build time: Thu Dec 19 12:00:00 2019\r\n````\r\n\r\n### Operating system\r\n\r\nSLES 15 SP5\r\n\r\n### Cobbler log\r\n\r\n<!--- Paste (partial) output from `/var/log/cobbler/cobbler.log` -->\r\n````paste below\r\n[Thread-20] 2024-02-12T16:03:34 - DEBUG | running python triggers from /var/lib/cobbler/triggers/change/*\r\n[Thread-20] 2024-02-12T16:03:34 - DEBUG | running python trigger cobbler.modules.scm_track\r\n[Thread-20] 2024-02-12T16:03:34 - INFO | running: ['git', 'add', '--all', 'collections']\r\n[Thread-20] 2024-02-12T16:03:34 - INFO | received on stdout: \r\n[Thread-20] 2024-02-12T16:03:34 - DEBUG | received on stderr: \r\n[Thread-20] 2024-02-12T16:03:34 - INFO | running: ['git', 'add', '--all', 'templates']\r\n[Thread-20] 2024-02-12T16:03:34 - INFO | received on stdout: \r\n[Thread-20] 2024-02-12T16:03:34 - DEBUG | received on stderr: \r\n[Thread-20] 2024-02-12T16:03:34 - INFO | running: ['git', 'add', '--all', 'snippets']\r\n[Thread-20] 2024-02-12T16:03:34 - INFO | received on stdout: \r\n[Thread-20] 2024-02-12T16:03:34 - DEBUG | received on stderr: \r\n[Thread-20] 2024-02-12T16:03:34 - INFO | running: ['git', 'commit', '-m', 'API', 'update', '--author', 'Cobbler <[email protected]>']\r\n[Thread-20] 2024-02-12T16:03:34 - INFO | received on stdout: \r\n[Thread-20] 2024-02-12T16:03:34 - DEBUG | received on stderr: error: pathspec 'update' did not match any file(s) known to git\r\n````\r\n\r\n### Screenshots\r\n\r\nNone\r\n\r\n### Additional information\r\n\r\nSnippet for from the settings:\r\n\r\n```yaml\r\nscm_track_enabled: true\r\nscm_track_mode: \"git\"\r\nscm_track_author: \"Cobbler <[email protected]>\"\r\n# scm_push_script: \"git push\"\r\nscm_push_script: \"\"\r\n```\n", "before_files": [{"content": "\"\"\"\nCobbler Trigger Module that puts the content of the Cobbler data directory under version control. 
Depending on\n``scm_track_mode`` in the settings, this can either be git or Mercurial.\n\"\"\"\n\n# SPDX-License-Identifier: GPL-2.0-or-later\n# SPDX-FileCopyrightText: Copyright 2009, Red Hat Inc.\n# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>\n\n\nimport os\nfrom typing import TYPE_CHECKING, Any\n\nfrom cobbler import utils\nfrom cobbler.cexceptions import CX\n\nif TYPE_CHECKING:\n from cobbler.api import CobblerAPI\n\n\ndef register() -> str:\n \"\"\"\n This pure python trigger acts as if it were a legacy shell-trigger, but is much faster. The return of this method\n indicates the trigger type\n :return: Always: ``/var/lib/cobbler/triggers/change/*``\n \"\"\"\n\n return \"/var/lib/cobbler/triggers/change/*\"\n\n\ndef run(api: \"CobblerAPI\", args: Any):\n \"\"\"\n Runs the trigger, meaning in this case track any changed which happen to a config or data file.\n\n :param api: The api instance of the Cobbler server. Used to look up if scm_track_enabled is true.\n :param args: The parameter is currently unused for this trigger.\n :return: 0 on success, otherwise an exception is risen.\n \"\"\"\n settings = api.settings()\n\n if not settings.scm_track_enabled:\n # feature disabled\n return 0\n\n mode = str(settings.scm_track_mode).lower()\n author = str(settings.scm_track_author)\n push_script = str(settings.scm_push_script)\n\n if mode == \"git\":\n old_dir = os.getcwd()\n os.chdir(\"/var/lib/cobbler\")\n if os.getcwd() != \"/var/lib/cobbler\":\n raise CX(\"danger will robinson\")\n\n if not os.path.exists(\"/var/lib/cobbler/.git\"):\n utils.subprocess_call([\"git\", \"init\"], shell=False)\n\n # FIXME: If we know the remote user of an XMLRPC call use them as the author\n utils.subprocess_call([\"git\", \"add\", \"--all\", \"collections\"], shell=False)\n utils.subprocess_call([\"git\", \"add\", \"--all\", \"templates\"], shell=False)\n utils.subprocess_call([\"git\", \"add\", \"--all\", \"snippets\"], shell=False)\n utils.subprocess_call(\n [\"git\", \"commit\", \"-m\", \"API update\", \"--author\", author], shell=False\n )\n\n if push_script:\n utils.subprocess_call([push_script], shell=False)\n\n os.chdir(old_dir)\n return 0\n\n if mode == \"hg\":\n # use mercurial\n old_dir = os.getcwd()\n os.chdir(\"/var/lib/cobbler\")\n if os.getcwd() != \"/var/lib/cobbler\":\n raise CX(\"danger will robinson\")\n\n if not os.path.exists(\"/var/lib/cobbler/.hg\"):\n utils.subprocess_call([\"hg\", \"init\"], shell=False)\n\n # FIXME: If we know the remote user of an XMLRPC call use them as the user\n utils.subprocess_call([\"hg\", \"add collections\"], shell=False)\n utils.subprocess_call([\"hg\", \"add templates\"], shell=False)\n utils.subprocess_call([\"hg\", \"add snippets\"], shell=False)\n utils.subprocess_call(\n [\"hg\", \"commit\", \"-m\", \"API\", \"update\", \"--user\", author], shell=False\n )\n\n if push_script:\n utils.subprocess_call([push_script], shell=False)\n\n os.chdir(old_dir)\n return 0\n\n raise CX(f\"currently unsupported SCM type: {mode}\")\n", "path": "cobbler/modules/scm_track.py"}], "after_files": [{"content": "\"\"\"\nCobbler Trigger Module that puts the content of the Cobbler data directory under version control. 
Depending on\n``scm_track_mode`` in the settings, this can either be git or Mercurial.\n\"\"\"\n\n# SPDX-License-Identifier: GPL-2.0-or-later\n# SPDX-FileCopyrightText: Copyright 2009, Red Hat Inc.\n# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>\n\n\nimport os\nfrom typing import TYPE_CHECKING, Any\n\nfrom cobbler import utils\nfrom cobbler.cexceptions import CX\n\nif TYPE_CHECKING:\n from cobbler.api import CobblerAPI\n\n\ndef register() -> str:\n \"\"\"\n This pure python trigger acts as if it were a legacy shell-trigger, but is much faster. The return of this method\n indicates the trigger type\n :return: Always: ``/var/lib/cobbler/triggers/change/*``\n \"\"\"\n\n return \"/var/lib/cobbler/triggers/change/*\"\n\n\ndef run(api: \"CobblerAPI\", args: Any):\n \"\"\"\n Runs the trigger, meaning in this case track any changed which happen to a config or data file.\n\n :param api: The api instance of the Cobbler server. Used to look up if scm_track_enabled is true.\n :param args: The parameter is currently unused for this trigger.\n :return: 0 on success, otherwise an exception is risen.\n \"\"\"\n settings = api.settings()\n\n if not settings.scm_track_enabled:\n # feature disabled\n return 0\n\n mode = str(settings.scm_track_mode).lower()\n author = str(settings.scm_track_author)\n push_script = str(settings.scm_push_script)\n\n if mode == \"git\":\n old_dir = os.getcwd()\n os.chdir(\"/var/lib/cobbler\")\n if os.getcwd() != \"/var/lib/cobbler\":\n raise CX(\"danger will robinson\")\n\n if not os.path.exists(\"/var/lib/cobbler/.git\"):\n utils.subprocess_call([\"git\", \"init\"], shell=False)\n\n # FIXME: If we know the remote user of an XMLRPC call use them as the author\n utils.subprocess_call([\"git\", \"add\", \"--all\", \"collections\"], shell=False)\n utils.subprocess_call([\"git\", \"add\", \"--all\", \"templates\"], shell=False)\n utils.subprocess_call([\"git\", \"add\", \"--all\", \"snippets\"], shell=False)\n utils.subprocess_call(\n [\"git\", \"commit\", \"-m\", \"API update\", \"--author\", author], shell=False\n )\n\n if push_script:\n utils.subprocess_call(push_script.split(\" \"), shell=False)\n\n os.chdir(old_dir)\n return 0\n\n if mode == \"hg\":\n # use mercurial\n old_dir = os.getcwd()\n os.chdir(\"/var/lib/cobbler\")\n if os.getcwd() != \"/var/lib/cobbler\":\n raise CX(\"danger will robinson\")\n\n if not os.path.exists(\"/var/lib/cobbler/.hg\"):\n utils.subprocess_call([\"hg\", \"init\"], shell=False)\n\n # FIXME: If we know the remote user of an XMLRPC call use them as the user\n utils.subprocess_call([\"hg\", \"add collections\"], shell=False)\n utils.subprocess_call([\"hg\", \"add templates\"], shell=False)\n utils.subprocess_call([\"hg\", \"add snippets\"], shell=False)\n utils.subprocess_call(\n [\"hg\", \"commit\", \"-m\", \"API update\", \"--user\", author], shell=False\n )\n\n if push_script:\n utils.subprocess_call(push_script.split(\" \"), shell=False)\n\n os.chdir(old_dir)\n return 0\n\n raise CX(f\"currently unsupported SCM type: {mode}\")\n", "path": "cobbler/modules/scm_track.py"}]}
| 2,131 | 248 |
gh_patches_debug_4727
|
rasdani/github-patches
|
git_diff
|
kserve__kserve-658
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Help wanted] Add e2e test for canary rollout
/kind feature
**Describe the solution you'd like**
[A clear and concise description of what you want to happen.]
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/kfserving/kfserving/constants/constants.py`
Content:
```
1 # Copyright 2019 kubeflow.org.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16
17 # KFServing K8S constants
18 KFSERVING_GROUP = 'serving.kubeflow.org'
19 KFSERVING_KIND = 'InferenceService'
20 KFSERVING_PLURAL = 'inferenceservices'
21 KFSERVING_VERSION = os.environ.get('KFSERVING_VERSION', 'v1alpha2')
22
23 KFSERVING_LOGLEVEL = os.environ.get('KFSERVING_LOGLEVEL', 'INFO').upper()
24
25 # INFERENCESERVICE credentials common constants
26 INFERENCESERVICE_CONFIG_MAP_NAME = 'inferenceservice-config'
27 INFERENCESERVICE_SYSTEM_NAMESPACE = 'kfserving-system'
28 DEFAULT_SECRET_NAME = "kfserving-secret-"
29 DEFAULT_SA_NAME = "kfserving-service-credentials"
30
31 # S3 credentials constants
32 S3_ACCESS_KEY_ID_DEFAULT_NAME = "awsAccessKeyID"
33 S3_SECRET_ACCESS_KEY_DEFAULT_NAME = "awsSecretAccessKey"
34 S3_DEFAULT_CREDS_FILE = '~/.aws/credentials'
35
36 # GCS credentials constants
37 GCS_CREDS_FILE_DEFAULT_NAME = 'gcloud-application-credentials.json'
38 GCS_DEFAULT_CREDS_FILE = '~/.config/gcloud/application_default_credentials.json'
39
40 # Azure credentials constants
41 AZ_DEFAULT_CREDS_FILE = '~/.azure/azure_credentials.json'
42
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/python/kfserving/kfserving/constants/constants.py b/python/kfserving/kfserving/constants/constants.py
--- a/python/kfserving/kfserving/constants/constants.py
+++ b/python/kfserving/kfserving/constants/constants.py
@@ -19,6 +19,7 @@
KFSERVING_KIND = 'InferenceService'
KFSERVING_PLURAL = 'inferenceservices'
KFSERVING_VERSION = os.environ.get('KFSERVING_VERSION', 'v1alpha2')
+KFSERVING_API_VERSION = KFSERVING_GROUP + '/' + KFSERVING_VERSION
KFSERVING_LOGLEVEL = os.environ.get('KFSERVING_LOGLEVEL', 'INFO').upper()
|
{"golden_diff": "diff --git a/python/kfserving/kfserving/constants/constants.py b/python/kfserving/kfserving/constants/constants.py\n--- a/python/kfserving/kfserving/constants/constants.py\n+++ b/python/kfserving/kfserving/constants/constants.py\n@@ -19,6 +19,7 @@\n KFSERVING_KIND = 'InferenceService'\n KFSERVING_PLURAL = 'inferenceservices'\n KFSERVING_VERSION = os.environ.get('KFSERVING_VERSION', 'v1alpha2')\n+KFSERVING_API_VERSION = KFSERVING_GROUP + '/' + KFSERVING_VERSION\n \n KFSERVING_LOGLEVEL = os.environ.get('KFSERVING_LOGLEVEL', 'INFO').upper()\n", "issue": "[Help wanted] Add e2e test for canary rollout\n/kind feature\r\n\r\n**Describe the solution you'd like**\r\n[A clear and concise description of what you want to happen.]\r\n\r\n\r\n**Anything else you would like to add:**\r\n[Miscellaneous information that will assist in solving the issue.]\r\n\n", "before_files": [{"content": "# Copyright 2019 kubeflow.org.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\n# KFServing K8S constants\nKFSERVING_GROUP = 'serving.kubeflow.org'\nKFSERVING_KIND = 'InferenceService'\nKFSERVING_PLURAL = 'inferenceservices'\nKFSERVING_VERSION = os.environ.get('KFSERVING_VERSION', 'v1alpha2')\n\nKFSERVING_LOGLEVEL = os.environ.get('KFSERVING_LOGLEVEL', 'INFO').upper()\n\n# INFERENCESERVICE credentials common constants\nINFERENCESERVICE_CONFIG_MAP_NAME = 'inferenceservice-config'\nINFERENCESERVICE_SYSTEM_NAMESPACE = 'kfserving-system'\nDEFAULT_SECRET_NAME = \"kfserving-secret-\"\nDEFAULT_SA_NAME = \"kfserving-service-credentials\"\n\n# S3 credentials constants\nS3_ACCESS_KEY_ID_DEFAULT_NAME = \"awsAccessKeyID\"\nS3_SECRET_ACCESS_KEY_DEFAULT_NAME = \"awsSecretAccessKey\"\nS3_DEFAULT_CREDS_FILE = '~/.aws/credentials'\n\n# GCS credentials constants\nGCS_CREDS_FILE_DEFAULT_NAME = 'gcloud-application-credentials.json'\nGCS_DEFAULT_CREDS_FILE = '~/.config/gcloud/application_default_credentials.json'\n\n# Azure credentials constants\nAZ_DEFAULT_CREDS_FILE = '~/.azure/azure_credentials.json'\n", "path": "python/kfserving/kfserving/constants/constants.py"}], "after_files": [{"content": "# Copyright 2019 kubeflow.org.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\n# KFServing K8S constants\nKFSERVING_GROUP = 'serving.kubeflow.org'\nKFSERVING_KIND = 'InferenceService'\nKFSERVING_PLURAL = 'inferenceservices'\nKFSERVING_VERSION = os.environ.get('KFSERVING_VERSION', 'v1alpha2')\nKFSERVING_API_VERSION = KFSERVING_GROUP + '/' + KFSERVING_VERSION\n\nKFSERVING_LOGLEVEL = 
os.environ.get('KFSERVING_LOGLEVEL', 'INFO').upper()\n\n# INFERENCESERVICE credentials common constants\nINFERENCESERVICE_CONFIG_MAP_NAME = 'inferenceservice-config'\nINFERENCESERVICE_SYSTEM_NAMESPACE = 'kfserving-system'\nDEFAULT_SECRET_NAME = \"kfserving-secret-\"\nDEFAULT_SA_NAME = \"kfserving-service-credentials\"\n\n# S3 credentials constants\nS3_ACCESS_KEY_ID_DEFAULT_NAME = \"awsAccessKeyID\"\nS3_SECRET_ACCESS_KEY_DEFAULT_NAME = \"awsSecretAccessKey\"\nS3_DEFAULT_CREDS_FILE = '~/.aws/credentials'\n\n# GCS credentials constants\nGCS_CREDS_FILE_DEFAULT_NAME = 'gcloud-application-credentials.json'\nGCS_DEFAULT_CREDS_FILE = '~/.config/gcloud/application_default_credentials.json'\n\n# Azure credentials constants\nAZ_DEFAULT_CREDS_FILE = '~/.azure/azure_credentials.json'\n", "path": "python/kfserving/kfserving/constants/constants.py"}]}
| 801 | 154 |
gh_patches_debug_31066
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-python-434
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Exception: raise OSError("handle is closed")
When I initialized sentry_sdk and used concurrent.futures.process.ProcessPoolExecutor, the exception will be raised after python exit.
```
from concurrent.futures.process import ProcessPoolExecutor
import sentry_sdk
sentry_sdk.init(dsn="")
def test():
...
if __name__ == "__main__":
with ProcessPoolExecutor(max_workers=4) as worker:
worker.submit(test)
```
The exception:
```
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/Users/tony.li/miniconda3/lib/python3.7/concurrent/futures/process.py", line 101, in _python_exit
thread_wakeup.wakeup()
File "/Users/tony.li/miniconda3/lib/python3.7/concurrent/futures/process.py", line 89, in wakeup
self._writer.send_bytes(b"")
File "/Users/tony.li/miniconda3/lib/python3.7/multiprocessing/connection.py", line 183, in send_bytes
self._check_closed()
File "/Users/tony.li/miniconda3/lib/python3.7/multiprocessing/connection.py", line 136, in _check_closed
raise OSError("handle is closed")
OSError: handle is closed
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sentry_sdk/integrations/threading.py`
Content:
```
1 from __future__ import absolute_import
2
3 import sys
4
5 from threading import Thread
6
7 from sentry_sdk import Hub
8 from sentry_sdk._compat import reraise
9 from sentry_sdk.utils import event_from_exception
10 from sentry_sdk.integrations import Integration
11
12 from sentry_sdk._types import MYPY
13
14 if MYPY:
15 from typing import Any
16
17
18 class ThreadingIntegration(Integration):
19 identifier = "threading"
20
21 def __init__(self, propagate_hub=False):
22 self.propagate_hub = propagate_hub
23
24 @staticmethod
25 def setup_once():
26 # type: () -> None
27 old_start = Thread.start
28
29 def sentry_start(self, *a, **kw):
30 hub = Hub.current
31 integration = hub.get_integration(ThreadingIntegration)
32 if integration is not None:
33 if not integration.propagate_hub:
34 hub_ = None
35 else:
36 hub_ = Hub(hub)
37
38 self.run = _wrap_run(hub_, self.run)
39
40 return old_start(self, *a, **kw) # type: ignore
41
42 Thread.start = sentry_start # type: ignore
43
44
45 def _wrap_run(parent_hub, old_run):
46 def run(*a, **kw):
47 hub = parent_hub or Hub.current
48
49 with hub:
50 try:
51 return old_run(*a, **kw)
52 except Exception:
53 reraise(*_capture_exception())
54
55 return run
56
57
58 def _capture_exception():
59 hub = Hub.current
60 exc_info = sys.exc_info()
61
62 if hub.get_integration(ThreadingIntegration) is not None:
63 # If an integration is there, a client has to be there.
64 client = hub.client # type: Any
65
66 event, hint = event_from_exception(
67 exc_info,
68 client_options=client.options,
69 mechanism={"type": "threading", "handled": False},
70 )
71 hub.capture_event(event, hint=hint)
72
73 return exc_info
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sentry_sdk/integrations/threading.py b/sentry_sdk/integrations/threading.py
--- a/sentry_sdk/integrations/threading.py
+++ b/sentry_sdk/integrations/threading.py
@@ -1,15 +1,13 @@
from __future__ import absolute_import
import sys
-
-from threading import Thread
+from threading import Thread, current_thread
from sentry_sdk import Hub
from sentry_sdk._compat import reraise
-from sentry_sdk.utils import event_from_exception
-from sentry_sdk.integrations import Integration
-
from sentry_sdk._types import MYPY
+from sentry_sdk.integrations import Integration
+from sentry_sdk.utils import event_from_exception
if MYPY:
from typing import Any
@@ -34,21 +32,26 @@
hub_ = None
else:
hub_ = Hub(hub)
-
- self.run = _wrap_run(hub_, self.run)
+ # Patching instance methods in `start()` creates a reference cycle if
+ # done in a naive way. See
+ # https://github.com/getsentry/sentry-python/pull/434
+ #
+ # In threading module, using current_thread API will access current thread instance
+ # without holding it to avoid a reference cycle in an easier way.
+ self.run = _wrap_run(hub_, self.run.__func__)
return old_start(self, *a, **kw) # type: ignore
Thread.start = sentry_start # type: ignore
-def _wrap_run(parent_hub, old_run):
+def _wrap_run(parent_hub, old_run_func):
def run(*a, **kw):
hub = parent_hub or Hub.current
-
with hub:
try:
- return old_run(*a, **kw)
+ self = current_thread()
+ return old_run_func(self, *a, **kw)
except Exception:
reraise(*_capture_exception())
|
{"golden_diff": "diff --git a/sentry_sdk/integrations/threading.py b/sentry_sdk/integrations/threading.py\n--- a/sentry_sdk/integrations/threading.py\n+++ b/sentry_sdk/integrations/threading.py\n@@ -1,15 +1,13 @@\n from __future__ import absolute_import\n \n import sys\n-\n-from threading import Thread\n+from threading import Thread, current_thread\n \n from sentry_sdk import Hub\n from sentry_sdk._compat import reraise\n-from sentry_sdk.utils import event_from_exception\n-from sentry_sdk.integrations import Integration\n-\n from sentry_sdk._types import MYPY\n+from sentry_sdk.integrations import Integration\n+from sentry_sdk.utils import event_from_exception\n \n if MYPY:\n from typing import Any\n@@ -34,21 +32,26 @@\n hub_ = None\n else:\n hub_ = Hub(hub)\n-\n- self.run = _wrap_run(hub_, self.run)\n+ # Patching instance methods in `start()` creates a reference cycle if\n+ # done in a naive way. See\n+ # https://github.com/getsentry/sentry-python/pull/434\n+ #\n+ # In threading module, using current_thread API will access current thread instance\n+ # without holding it to avoid a reference cycle in an easier way.\n+ self.run = _wrap_run(hub_, self.run.__func__)\n \n return old_start(self, *a, **kw) # type: ignore\n \n Thread.start = sentry_start # type: ignore\n \n \n-def _wrap_run(parent_hub, old_run):\n+def _wrap_run(parent_hub, old_run_func):\n def run(*a, **kw):\n hub = parent_hub or Hub.current\n-\n with hub:\n try:\n- return old_run(*a, **kw)\n+ self = current_thread()\n+ return old_run_func(self, *a, **kw)\n except Exception:\n reraise(*_capture_exception())\n", "issue": "Exception: raise OSError(\"handle is closed\")\nWhen I initialized sentry_sdk and used concurrent.futures.process.ProcessPoolExecutor, the exception will be raised after python exit.\r\n\r\n```\r\nfrom concurrent.futures.process import ProcessPoolExecutor\r\n\r\nimport sentry_sdk\r\n\r\nsentry_sdk.init(dsn=\"\")\r\n\r\n\r\ndef test():\r\n ...\r\n\r\n\r\nif __name__ == \"__main__\":\r\n with ProcessPoolExecutor(max_workers=4) as worker:\r\n worker.submit(test)\r\n```\r\n\r\nThe exception:\r\n```\r\nError in atexit._run_exitfuncs:\r\nTraceback (most recent call last):\r\n File \"/Users/tony.li/miniconda3/lib/python3.7/concurrent/futures/process.py\", line 101, in _python_exit\r\n thread_wakeup.wakeup()\r\n File \"/Users/tony.li/miniconda3/lib/python3.7/concurrent/futures/process.py\", line 89, in wakeup\r\n self._writer.send_bytes(b\"\")\r\n File \"/Users/tony.li/miniconda3/lib/python3.7/multiprocessing/connection.py\", line 183, in send_bytes\r\n self._check_closed()\r\n File \"/Users/tony.li/miniconda3/lib/python3.7/multiprocessing/connection.py\", line 136, in _check_closed\r\n raise OSError(\"handle is closed\")\r\nOSError: handle is closed\r\n```\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport sys\n\nfrom threading import Thread\n\nfrom sentry_sdk import Hub\nfrom sentry_sdk._compat import reraise\nfrom sentry_sdk.utils import event_from_exception\nfrom sentry_sdk.integrations import Integration\n\nfrom sentry_sdk._types import MYPY\n\nif MYPY:\n from typing import Any\n\n\nclass ThreadingIntegration(Integration):\n identifier = \"threading\"\n\n def __init__(self, propagate_hub=False):\n self.propagate_hub = propagate_hub\n\n @staticmethod\n def setup_once():\n # type: () -> None\n old_start = Thread.start\n\n def sentry_start(self, *a, **kw):\n hub = Hub.current\n integration = hub.get_integration(ThreadingIntegration)\n if integration is not None:\n if not 
integration.propagate_hub:\n hub_ = None\n else:\n hub_ = Hub(hub)\n\n self.run = _wrap_run(hub_, self.run)\n\n return old_start(self, *a, **kw) # type: ignore\n\n Thread.start = sentry_start # type: ignore\n\n\ndef _wrap_run(parent_hub, old_run):\n def run(*a, **kw):\n hub = parent_hub or Hub.current\n\n with hub:\n try:\n return old_run(*a, **kw)\n except Exception:\n reraise(*_capture_exception())\n\n return run\n\n\ndef _capture_exception():\n hub = Hub.current\n exc_info = sys.exc_info()\n\n if hub.get_integration(ThreadingIntegration) is not None:\n # If an integration is there, a client has to be there.\n client = hub.client # type: Any\n\n event, hint = event_from_exception(\n exc_info,\n client_options=client.options,\n mechanism={\"type\": \"threading\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n\n return exc_info\n", "path": "sentry_sdk/integrations/threading.py"}], "after_files": [{"content": "from __future__ import absolute_import\n\nimport sys\nfrom threading import Thread, current_thread\n\nfrom sentry_sdk import Hub\nfrom sentry_sdk._compat import reraise\nfrom sentry_sdk._types import MYPY\nfrom sentry_sdk.integrations import Integration\nfrom sentry_sdk.utils import event_from_exception\n\nif MYPY:\n from typing import Any\n\n\nclass ThreadingIntegration(Integration):\n identifier = \"threading\"\n\n def __init__(self, propagate_hub=False):\n self.propagate_hub = propagate_hub\n\n @staticmethod\n def setup_once():\n # type: () -> None\n old_start = Thread.start\n\n def sentry_start(self, *a, **kw):\n hub = Hub.current\n integration = hub.get_integration(ThreadingIntegration)\n if integration is not None:\n if not integration.propagate_hub:\n hub_ = None\n else:\n hub_ = Hub(hub)\n # Patching instance methods in `start()` creates a reference cycle if\n # done in a naive way. See\n # https://github.com/getsentry/sentry-python/pull/434\n #\n # In threading module, using current_thread API will access current thread instance\n # without holding it to avoid a reference cycle in an easier way.\n self.run = _wrap_run(hub_, self.run.__func__)\n\n return old_start(self, *a, **kw) # type: ignore\n\n Thread.start = sentry_start # type: ignore\n\n\ndef _wrap_run(parent_hub, old_run_func):\n def run(*a, **kw):\n hub = parent_hub or Hub.current\n with hub:\n try:\n self = current_thread()\n return old_run_func(self, *a, **kw)\n except Exception:\n reraise(*_capture_exception())\n\n return run\n\n\ndef _capture_exception():\n hub = Hub.current\n exc_info = sys.exc_info()\n\n if hub.get_integration(ThreadingIntegration) is not None:\n # If an integration is there, a client has to be there.\n client = hub.client # type: Any\n\n event, hint = event_from_exception(\n exc_info,\n client_options=client.options,\n mechanism={\"type\": \"threading\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n\n return exc_info\n", "path": "sentry_sdk/integrations/threading.py"}]}
| 1,126 | 441 |
gh_patches_debug_18060
|
rasdani/github-patches
|
git_diff
|
scrapy__scrapy-4378
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove SCRAPY_SELECTORS_BACKEND from scrapy.settings.deprecated
There is no trace of `SCRAPY_SELECTORS_BACKEND` in the code anymore, so that line should go.
It would be a good chance to review the rest of the lines in that file, some others may be worth removing as well.
Related to https://github.com/scrapy/scrapy/issues/4356
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/settings/deprecated.py`
Content:
```
1 import warnings
2 from scrapy.exceptions import ScrapyDeprecationWarning
3
4 DEPRECATED_SETTINGS = [
5 ('TRACK_REFS', 'no longer needed (trackref is always enabled)'),
6 ('RESPONSE_CLASSES', 'no longer supported'),
7 ('DEFAULT_RESPONSE_ENCODING', 'no longer supported'),
8 ('BOT_VERSION', 'no longer used (user agent defaults to Scrapy now)'),
9 ('ENCODING_ALIASES', 'no longer needed (encoding discovery uses w3lib now)'),
10 ('STATS_ENABLED', 'no longer supported (change STATS_CLASS instead)'),
11 ('SQLITE_DB', 'no longer supported'),
12 ('SELECTORS_BACKEND', 'use SCRAPY_SELECTORS_BACKEND environment variable instead'),
13 ('AUTOTHROTTLE_MIN_DOWNLOAD_DELAY', 'use DOWNLOAD_DELAY instead'),
14 ('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),
15 ('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),
16 ('REDIRECT_MAX_METAREFRESH_DELAY', 'use METAREFRESH_MAXDELAY instead'),
17 ('LOG_UNSERIALIZABLE_REQUESTS', 'use SCHEDULER_DEBUG instead'),
18 ]
19
20
21 def check_deprecated_settings(settings):
22 deprecated = [x for x in DEPRECATED_SETTINGS if settings[x[0]] is not None]
23 if deprecated:
24 msg = "You are using the following settings which are deprecated or obsolete"
25 msg += " (ask [email protected] for alternatives):"
26 msg = msg + "\n " + "\n ".join("%s: %s" % x for x in deprecated)
27 warnings.warn(msg, ScrapyDeprecationWarning)
28
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scrapy/settings/deprecated.py b/scrapy/settings/deprecated.py
--- a/scrapy/settings/deprecated.py
+++ b/scrapy/settings/deprecated.py
@@ -9,10 +9,8 @@
('ENCODING_ALIASES', 'no longer needed (encoding discovery uses w3lib now)'),
('STATS_ENABLED', 'no longer supported (change STATS_CLASS instead)'),
('SQLITE_DB', 'no longer supported'),
- ('SELECTORS_BACKEND', 'use SCRAPY_SELECTORS_BACKEND environment variable instead'),
('AUTOTHROTTLE_MIN_DOWNLOAD_DELAY', 'use DOWNLOAD_DELAY instead'),
('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),
- ('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),
('REDIRECT_MAX_METAREFRESH_DELAY', 'use METAREFRESH_MAXDELAY instead'),
('LOG_UNSERIALIZABLE_REQUESTS', 'use SCHEDULER_DEBUG instead'),
]
|
{"golden_diff": "diff --git a/scrapy/settings/deprecated.py b/scrapy/settings/deprecated.py\n--- a/scrapy/settings/deprecated.py\n+++ b/scrapy/settings/deprecated.py\n@@ -9,10 +9,8 @@\n ('ENCODING_ALIASES', 'no longer needed (encoding discovery uses w3lib now)'),\n ('STATS_ENABLED', 'no longer supported (change STATS_CLASS instead)'),\n ('SQLITE_DB', 'no longer supported'),\n- ('SELECTORS_BACKEND', 'use SCRAPY_SELECTORS_BACKEND environment variable instead'),\n ('AUTOTHROTTLE_MIN_DOWNLOAD_DELAY', 'use DOWNLOAD_DELAY instead'),\n ('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),\n- ('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),\n ('REDIRECT_MAX_METAREFRESH_DELAY', 'use METAREFRESH_MAXDELAY instead'),\n ('LOG_UNSERIALIZABLE_REQUESTS', 'use SCHEDULER_DEBUG instead'),\n ]\n", "issue": "Remove SCRAPY_SELECTORS_BACKEND from scrapy.settings.deprecated\nThere is no trace of `SCRAPY_SELECTORS_BACKEND` in the code anymore, so that line should go.\r\n\r\nIt would be a good chance to review the rest of the lines in that file, some others may be worth removing as well.\r\n\r\nRelated to https://github.com/scrapy/scrapy/issues/4356\n", "before_files": [{"content": "import warnings\nfrom scrapy.exceptions import ScrapyDeprecationWarning\n\nDEPRECATED_SETTINGS = [\n ('TRACK_REFS', 'no longer needed (trackref is always enabled)'),\n ('RESPONSE_CLASSES', 'no longer supported'),\n ('DEFAULT_RESPONSE_ENCODING', 'no longer supported'),\n ('BOT_VERSION', 'no longer used (user agent defaults to Scrapy now)'),\n ('ENCODING_ALIASES', 'no longer needed (encoding discovery uses w3lib now)'),\n ('STATS_ENABLED', 'no longer supported (change STATS_CLASS instead)'),\n ('SQLITE_DB', 'no longer supported'),\n ('SELECTORS_BACKEND', 'use SCRAPY_SELECTORS_BACKEND environment variable instead'),\n ('AUTOTHROTTLE_MIN_DOWNLOAD_DELAY', 'use DOWNLOAD_DELAY instead'),\n ('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),\n ('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),\n ('REDIRECT_MAX_METAREFRESH_DELAY', 'use METAREFRESH_MAXDELAY instead'),\n ('LOG_UNSERIALIZABLE_REQUESTS', 'use SCHEDULER_DEBUG instead'),\n]\n\n\ndef check_deprecated_settings(settings):\n deprecated = [x for x in DEPRECATED_SETTINGS if settings[x[0]] is not None]\n if deprecated:\n msg = \"You are using the following settings which are deprecated or obsolete\"\n msg += \" (ask [email protected] for alternatives):\"\n msg = msg + \"\\n \" + \"\\n \".join(\"%s: %s\" % x for x in deprecated)\n warnings.warn(msg, ScrapyDeprecationWarning)\n", "path": "scrapy/settings/deprecated.py"}], "after_files": [{"content": "import warnings\nfrom scrapy.exceptions import ScrapyDeprecationWarning\n\nDEPRECATED_SETTINGS = [\n ('TRACK_REFS', 'no longer needed (trackref is always enabled)'),\n ('RESPONSE_CLASSES', 'no longer supported'),\n ('DEFAULT_RESPONSE_ENCODING', 'no longer supported'),\n ('BOT_VERSION', 'no longer used (user agent defaults to Scrapy now)'),\n ('ENCODING_ALIASES', 'no longer needed (encoding discovery uses w3lib now)'),\n ('STATS_ENABLED', 'no longer supported (change STATS_CLASS instead)'),\n ('SQLITE_DB', 'no longer supported'),\n ('AUTOTHROTTLE_MIN_DOWNLOAD_DELAY', 'use DOWNLOAD_DELAY instead'),\n ('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),\n ('REDIRECT_MAX_METAREFRESH_DELAY', 'use METAREFRESH_MAXDELAY instead'),\n ('LOG_UNSERIALIZABLE_REQUESTS', 'use SCHEDULER_DEBUG instead'),\n]\n\n\ndef 
check_deprecated_settings(settings):\n deprecated = [x for x in DEPRECATED_SETTINGS if settings[x[0]] is not None]\n if deprecated:\n msg = \"You are using the following settings which are deprecated or obsolete\"\n msg += \" (ask [email protected] for alternatives):\"\n msg = msg + \"\\n \" + \"\\n \".join(\"%s: %s\" % x for x in deprecated)\n warnings.warn(msg, ScrapyDeprecationWarning)\n", "path": "scrapy/settings/deprecated.py"}]}
| 740 | 218 |
gh_patches_debug_22000
|
rasdani/github-patches
|
git_diff
|
hpcaitech__ColossalAI-2442
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG]: colossalai run failed with unknown reason
### 🐛 Describe the bug
Some users have reported that they encounter launch failure when using `colossalai run`, but there is no error message to tell exactly what went wrong. A sample output is given below.
```text
Error: failed to run torchrun --nproc_per_node=4 --nnodes=1 --node_rank=0 --rdzv_backend=c10d --rdzv_endpoint=127.0.0.1:29500 --rdzv_id=colossalai-default-job train.py --config config.py -s on 127.0.0.1
```
### Environment
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `colossalai/cli/launcher/multinode_runner.py`
Content:
```
1 import fabric
2 from .hostinfo import HostInfo, HostInfoList
3 from multiprocessing import Pipe, Process
4 from multiprocessing import connection as mp_connection
5 import click
6
7
8 def run_on_host(hostinfo: HostInfo, workdir: str, recv_conn: mp_connection.Connection,
9 send_conn: mp_connection.Connection, env: dict) -> None:
10 """
11 Use fabric connection to execute command on local or remote hosts.
12
13 Args:
14 hostinfo (HostInfo): host information
15 workdir (str): the directory to execute the command
16 recv_conn (multiprocessing.connection.Connection): receive messages from the master sender
17 send_conn (multiprocessing.connection.Connection): send messages to the master receiver
18 env (dict): a dictionary for environment variables
19 """
20
21 fab_conn = fabric.Connection(hostinfo.hostname, port=hostinfo.port)
22 finish = False
23 env_msg = ' '.join([f'{k}=\"{v}\"' for k, v in env.items()])
24
25 # keep listening until exit
26 while not finish:
27 # receive cmd
28 cmds = recv_conn.recv()
29
30 if cmds == 'exit':
31 # exit from the loop
32 finish = True
33 break
34 else:
35 # execute the commands
36 try:
37 # cd to execute directory
38 with fab_conn.cd(workdir):
39 # propagate the runtime environment
40 with fab_conn.prefix(f"export {env_msg}"):
41 if hostinfo.is_local_host:
42 # execute on the local machine
43 fab_conn.local(cmds, hide=False)
44 else:
45 # execute on the remote machine
46 fab_conn.run(cmds, hide=False)
47 send_conn.send('success')
48 except:
49 click.echo(f"Error: failed to run {cmds} on {hostinfo.hostname}")
50 send_conn.send('failure')
51
52 # shutdown
53 send_conn.send("finish")
54 fab_conn.close()
55
56
57 class MultiNodeRunner:
58 """
59 A runner to execute commands on an array of machines. This runner
60 is inspired by Nezha (https://github.com/zhuzilin/NeZha).
61 """
62
63 def __init__(self):
64 self.processes = {}
65 self.master_send_conns = {}
66 self.master_recv_conns = {}
67
68 def connect(self, host_info_list: HostInfoList, workdir: str, env: dict) -> None:
69 """
70 Establish connections to a list of hosts
71
72 Args:
73 host_info_list (HostInfoList): a list of HostInfo objects
74 workdir (str): the directory where command is executed
75 env (dict): environment variables to propagate to hosts
76 """
77 for hostinfo in host_info_list:
78 master_send_conn, worker_recv_conn = Pipe()
79 master_recv_conn, worker_send_conn = Pipe()
80 p = Process(target=run_on_host, args=(hostinfo, workdir, worker_recv_conn, worker_send_conn, env))
81 p.start()
82 self.processes[hostinfo.hostname] = p
83 self.master_recv_conns[hostinfo.hostname] = master_recv_conn
84 self.master_send_conns[hostinfo.hostname] = master_send_conn
85
86 def send(self, hostinfo: HostInfo, cmd: str) -> None:
87 """
88 Send a command to a local/remote host.
89
90 Args:
91 hostinfo (HostInfo): host information
92 cmd (str): the command to execute
93 """
94
95 assert hostinfo.hostname in self.master_send_conns, \
96 f'{hostinfo} is not found in the current connections'
97 conn = self.master_send_conns[hostinfo.hostname]
98 conn.send(cmd)
99
100 def stop_all(self) -> None:
101 """
102 Stop connections to all hosts.
103 """
104
105 for hostname, conn in self.master_send_conns.items():
106 conn.send('exit')
107
108 def recv_from_all(self) -> dict:
109 """
110 Receive messages from all hosts
111
112 Returns:
113 msg_from_node (dict): a dictionry which contains messages from each node
114 """
115
116 msg_from_node = dict()
117 for hostname, conn in self.master_recv_conns.items():
118 msg_from_node[hostname] = conn.recv()
119 return msg_from_node
120
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/colossalai/cli/launcher/multinode_runner.py b/colossalai/cli/launcher/multinode_runner.py
--- a/colossalai/cli/launcher/multinode_runner.py
+++ b/colossalai/cli/launcher/multinode_runner.py
@@ -1,8 +1,10 @@
-import fabric
-from .hostinfo import HostInfo, HostInfoList
from multiprocessing import Pipe, Process
from multiprocessing import connection as mp_connection
+
import click
+import fabric
+
+from .hostinfo import HostInfo, HostInfoList
def run_on_host(hostinfo: HostInfo, workdir: str, recv_conn: mp_connection.Connection,
@@ -45,8 +47,10 @@
# execute on the remote machine
fab_conn.run(cmds, hide=False)
send_conn.send('success')
- except:
- click.echo(f"Error: failed to run {cmds} on {hostinfo.hostname}")
+ except Exception as e:
+ click.echo(
+ f"Error: failed to run {cmds} on {hostinfo.hostname}, is localhost: {hostinfo.is_local_host}, exception: {e}"
+ )
send_conn.send('failure')
# shutdown
|
{"golden_diff": "diff --git a/colossalai/cli/launcher/multinode_runner.py b/colossalai/cli/launcher/multinode_runner.py\n--- a/colossalai/cli/launcher/multinode_runner.py\n+++ b/colossalai/cli/launcher/multinode_runner.py\n@@ -1,8 +1,10 @@\n-import fabric\n-from .hostinfo import HostInfo, HostInfoList\n from multiprocessing import Pipe, Process\n from multiprocessing import connection as mp_connection\n+\n import click\n+import fabric\n+\n+from .hostinfo import HostInfo, HostInfoList\n \n \n def run_on_host(hostinfo: HostInfo, workdir: str, recv_conn: mp_connection.Connection,\n@@ -45,8 +47,10 @@\n # execute on the remote machine\n fab_conn.run(cmds, hide=False)\n send_conn.send('success')\n- except:\n- click.echo(f\"Error: failed to run {cmds} on {hostinfo.hostname}\")\n+ except Exception as e:\n+ click.echo(\n+ f\"Error: failed to run {cmds} on {hostinfo.hostname}, is localhost: {hostinfo.is_local_host}, exception: {e}\"\n+ )\n send_conn.send('failure')\n \n # shutdown\n", "issue": "[BUG]: colossalai run failed with unknown reason\n### \ud83d\udc1b Describe the bug\n\nSome users have reported that they encounter launch failure when using `colossalai run`, but there is no error message to tell exactly what went wrong. A sample output is given below.\r\n\r\n```text\r\nError: failed to run torchrun --nproc_per_node=4 --nnodes=1 --node_rank=0 --rdzv_backend=c10d --rdzv_endpoint=127.0.0.1:29500 --rdzv_id=colossalai-default-job train.py --config config.py -s on 127.0.0.1\r\n```\n\n### Environment\n\n_No response_\n", "before_files": [{"content": "import fabric\nfrom .hostinfo import HostInfo, HostInfoList\nfrom multiprocessing import Pipe, Process\nfrom multiprocessing import connection as mp_connection\nimport click\n\n\ndef run_on_host(hostinfo: HostInfo, workdir: str, recv_conn: mp_connection.Connection,\n send_conn: mp_connection.Connection, env: dict) -> None:\n \"\"\"\n Use fabric connection to execute command on local or remote hosts.\n\n Args:\n hostinfo (HostInfo): host information\n workdir (str): the directory to execute the command\n recv_conn (multiprocessing.connection.Connection): receive messages from the master sender\n send_conn (multiprocessing.connection.Connection): send messages to the master receiver\n env (dict): a dictionary for environment variables\n \"\"\"\n\n fab_conn = fabric.Connection(hostinfo.hostname, port=hostinfo.port)\n finish = False\n env_msg = ' '.join([f'{k}=\\\"{v}\\\"' for k, v in env.items()])\n\n # keep listening until exit\n while not finish:\n # receive cmd\n cmds = recv_conn.recv()\n\n if cmds == 'exit':\n # exit from the loop\n finish = True\n break\n else:\n # execute the commands\n try:\n # cd to execute directory\n with fab_conn.cd(workdir):\n # propagate the runtime environment\n with fab_conn.prefix(f\"export {env_msg}\"):\n if hostinfo.is_local_host:\n # execute on the local machine\n fab_conn.local(cmds, hide=False)\n else:\n # execute on the remote machine\n fab_conn.run(cmds, hide=False)\n send_conn.send('success')\n except:\n click.echo(f\"Error: failed to run {cmds} on {hostinfo.hostname}\")\n send_conn.send('failure')\n\n # shutdown\n send_conn.send(\"finish\")\n fab_conn.close()\n\n\nclass MultiNodeRunner:\n \"\"\"\n A runner to execute commands on an array of machines. 
This runner\n is inspired by Nezha (https://github.com/zhuzilin/NeZha).\n \"\"\"\n\n def __init__(self):\n self.processes = {}\n self.master_send_conns = {}\n self.master_recv_conns = {}\n\n def connect(self, host_info_list: HostInfoList, workdir: str, env: dict) -> None:\n \"\"\"\n Establish connections to a list of hosts\n\n Args:\n host_info_list (HostInfoList): a list of HostInfo objects\n workdir (str): the directory where command is executed\n env (dict): environment variables to propagate to hosts\n \"\"\"\n for hostinfo in host_info_list:\n master_send_conn, worker_recv_conn = Pipe()\n master_recv_conn, worker_send_conn = Pipe()\n p = Process(target=run_on_host, args=(hostinfo, workdir, worker_recv_conn, worker_send_conn, env))\n p.start()\n self.processes[hostinfo.hostname] = p\n self.master_recv_conns[hostinfo.hostname] = master_recv_conn\n self.master_send_conns[hostinfo.hostname] = master_send_conn\n\n def send(self, hostinfo: HostInfo, cmd: str) -> None:\n \"\"\"\n Send a command to a local/remote host.\n\n Args:\n hostinfo (HostInfo): host information\n cmd (str): the command to execute\n \"\"\"\n\n assert hostinfo.hostname in self.master_send_conns, \\\n f'{hostinfo} is not found in the current connections'\n conn = self.master_send_conns[hostinfo.hostname]\n conn.send(cmd)\n\n def stop_all(self) -> None:\n \"\"\"\n Stop connections to all hosts.\n \"\"\"\n\n for hostname, conn in self.master_send_conns.items():\n conn.send('exit')\n\n def recv_from_all(self) -> dict:\n \"\"\"\n Receive messages from all hosts\n\n Returns:\n msg_from_node (dict): a dictionry which contains messages from each node\n \"\"\"\n\n msg_from_node = dict()\n for hostname, conn in self.master_recv_conns.items():\n msg_from_node[hostname] = conn.recv()\n return msg_from_node\n", "path": "colossalai/cli/launcher/multinode_runner.py"}], "after_files": [{"content": "from multiprocessing import Pipe, Process\nfrom multiprocessing import connection as mp_connection\n\nimport click\nimport fabric\n\nfrom .hostinfo import HostInfo, HostInfoList\n\n\ndef run_on_host(hostinfo: HostInfo, workdir: str, recv_conn: mp_connection.Connection,\n send_conn: mp_connection.Connection, env: dict) -> None:\n \"\"\"\n Use fabric connection to execute command on local or remote hosts.\n\n Args:\n hostinfo (HostInfo): host information\n workdir (str): the directory to execute the command\n recv_conn (multiprocessing.connection.Connection): receive messages from the master sender\n send_conn (multiprocessing.connection.Connection): send messages to the master receiver\n env (dict): a dictionary for environment variables\n \"\"\"\n\n fab_conn = fabric.Connection(hostinfo.hostname, port=hostinfo.port)\n finish = False\n env_msg = ' '.join([f'{k}=\\\"{v}\\\"' for k, v in env.items()])\n\n # keep listening until exit\n while not finish:\n # receive cmd\n cmds = recv_conn.recv()\n\n if cmds == 'exit':\n # exit from the loop\n finish = True\n break\n else:\n # execute the commands\n try:\n # cd to execute directory\n with fab_conn.cd(workdir):\n # propagate the runtime environment\n with fab_conn.prefix(f\"export {env_msg}\"):\n if hostinfo.is_local_host:\n # execute on the local machine\n fab_conn.local(cmds, hide=False)\n else:\n # execute on the remote machine\n fab_conn.run(cmds, hide=False)\n send_conn.send('success')\n except Exception as e:\n click.echo(\n f\"Error: failed to run {cmds} on {hostinfo.hostname}, is localhost: {hostinfo.is_local_host}, exception: {e}\"\n )\n send_conn.send('failure')\n\n # shutdown\n 
send_conn.send(\"finish\")\n fab_conn.close()\n\n\nclass MultiNodeRunner:\n \"\"\"\n A runner to execute commands on an array of machines. This runner\n is inspired by Nezha (https://github.com/zhuzilin/NeZha).\n \"\"\"\n\n def __init__(self):\n self.processes = {}\n self.master_send_conns = {}\n self.master_recv_conns = {}\n\n def connect(self, host_info_list: HostInfoList, workdir: str, env: dict) -> None:\n \"\"\"\n Establish connections to a list of hosts\n\n Args:\n host_info_list (HostInfoList): a list of HostInfo objects\n workdir (str): the directory where command is executed\n env (dict): environment variables to propagate to hosts\n \"\"\"\n for hostinfo in host_info_list:\n master_send_conn, worker_recv_conn = Pipe()\n master_recv_conn, worker_send_conn = Pipe()\n p = Process(target=run_on_host, args=(hostinfo, workdir, worker_recv_conn, worker_send_conn, env))\n p.start()\n self.processes[hostinfo.hostname] = p\n self.master_recv_conns[hostinfo.hostname] = master_recv_conn\n self.master_send_conns[hostinfo.hostname] = master_send_conn\n\n def send(self, hostinfo: HostInfo, cmd: str) -> None:\n \"\"\"\n Send a command to a local/remote host.\n\n Args:\n hostinfo (HostInfo): host information\n cmd (str): the command to execute\n \"\"\"\n\n assert hostinfo.hostname in self.master_send_conns, \\\n f'{hostinfo} is not found in the current connections'\n conn = self.master_send_conns[hostinfo.hostname]\n conn.send(cmd)\n\n def stop_all(self) -> None:\n \"\"\"\n Stop connections to all hosts.\n \"\"\"\n\n for hostname, conn in self.master_send_conns.items():\n conn.send('exit')\n\n def recv_from_all(self) -> dict:\n \"\"\"\n Receive messages from all hosts\n\n Returns:\n msg_from_node (dict): a dictionry which contains messages from each node\n \"\"\"\n\n msg_from_node = dict()\n for hostname, conn in self.master_recv_conns.items():\n msg_from_node[hostname] = conn.recv()\n return msg_from_node\n", "path": "colossalai/cli/launcher/multinode_runner.py"}]}
| 1,566 | 268 |
gh_patches_debug_5847 | rasdani/github-patches | git_diff | enthought__chaco-401 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`six.moves` incorrectly imported in `errorbar_plot.py`
**Problem Description**
Noticed a stacktrace in my work that pointed out a `NameError` (`global name 'sm' is not defined`) at `errorbar_plot.py:77` of 4.7.1, which is:
l1, l2, l3 = sm.map(len, (index, value_low, value_high))
It seems that this is because normally `six.moves` is imported like `import six.moves as sm` and the author was used to this, but this file just has `import six.moves`. This seems to result from [`dc08831`](https://github.com/enthought/chaco/commit/dc08831d35c60057b0e26466e412e644dea1c89b#diff-7b3ca9023e76b4689bb2a0e42bf4d8f1)
**Reproduction Steps:**
I don't have clear cut reproduction steps, but a cursory glance at the code seems to make the cause of the error obvious. I'm afraid I'm not actually even able to easily modify the setup we have to create a more minimal example or to even test the proposed change (which should be to just add the alias to the import).
Especially since the traceback doesn't even contain any of our code, so I don't have an easy way to find the offending code, haha. Traceback is:
Traceback (most recent call last):
File "/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/qt4/base_window.py", line 202, in paintEvent
self.handler.paintEvent(event)
File "/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/qt4/base_window.py", line 54, in paintEvent
self._enable_window._paint(event)
File "/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/abstract_window.py", line 468, in _paint
self.component.draw(gc, view_bounds=(0, 0, size[0], size[1]))
File "/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/component.py", line 427, in draw
self._draw(gc, view_bounds, mode)
File "/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/component.py", line 769, in _draw
self._dispatch_draw(layer, bb, view_bounds, mode)
File "/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/container.py", line 272, in _dispatch_draw
component._dispatch_draw(layer, gc, new_bounds, mode)
File "/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/container.py", line 272, in _dispatch_draw
component._dispatch_draw(layer, gc, new_bounds, mode)
File "/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/component.py", line 799, in _dispatch_draw
handler(gc, view_bounds, mode)
File "/[removed]/site-packages/chaco-4.7.1-py2.7-linux-x86_64.egg/chaco/base_xy_plot.py", line 466, in _draw_plot
self._draw_component(gc, view_bounds, mode)
File "/[removed]/site-packages/chaco-4.7.1-py2.7-linux-x86_64.egg/chaco/base_xy_plot.py", line 473, in _draw_component
pts = self.get_screen_points()
File "/[removed]/site-packages/chaco-4.7.1-py2.7-linux-x86_64.egg/chaco/errorbar_plot.py", line 61, in get_screen_points
self._gather_points()
File "/[removed]/site-packages/chaco-4.7.1-py2.7-linux-x86_64.egg/chaco/errorbar_plot.py", line 77, in _gather_points
l1, l2, l3 = sm.map(len, (index, value_low, value_high))
NameError: global name 'sm' is not defined
**Expected behavior:**
The traceback does not occur.
**OS, Python version:**
Ubuntu 14.04, Python 2.7.14, and chaco 4.7.1 with enable 4.7.1.
--- END ISSUE ---
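For a quick sense of why line 77 blows up, here is a minimal illustration of the two import styles the report contrasts (a standalone sketch that only assumes the `six` package itself):

```python
# With the alias, `sm` is a real name, so the failing call pattern works:
import six.moves as sm

l1, l2, l3 = sm.map(len, ([0, 1], [2, 3], [4, 5]))
print(l1, l2, l3)  # 2 2 2

# With a bare `import six.moves`, the module is importable as `six.moves`
# but no `sm` binding exists, so `sm.map(...)` raises the reported NameError.
```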
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `chaco/errorbar_plot.py`
Content:
```
1
2 from __future__ import with_statement
3
4 import six
5 import six.moves
6
7 # Major library imports
8 from numpy import column_stack, compress, invert, isnan, transpose
9 import logging
10
11 # Enthought library imports
12 from traits.api import Any, Enum, Float, Instance
13
14 # Chaco imports
15 from .lineplot import LinePlot
16 from .abstract_data_source import AbstractDataSource
17
18 # Set up a logger for this module
19 logger = logging.getLogger(__name__)
20
21
22
23 class ErrorBarPlot(LinePlot):
24 """ Renders errorbars at various points.
25 """
26
27 # The datasource containing the low values
28 value_low = Instance(AbstractDataSource)
29
30 # The datasource containing the high values
31 value_high = Instance(AbstractDataSource)
32
33 # The screen-space width of the endcap bars
34 endcap_size = Float(5.0)
35
36 # The kind of encap to render on error bars
37 endcap_style = Enum("bar", "none", None)
38
39 # Override the inherited trait definition
40 _cached_data_pts = Any
41
42 def map_screen(self, data_array):
43 """ data_array can be Nx2 or Nx3. In the former case, each row is
44 treated as (index, value), and this method returns screen X and Y
45 coordinates. In the latter case, each row is treated as (index,
46 value_low, value_high), and the method returns either (x, ylow, yhigh)
47 or (y, xlow, xhigh) depending on self.orientation.
48 """
49 if len(data_array) == 0:
50 return []
51 elif data_array.shape[1] == 2:
52 return LinePlot.map_screen(self, data_array)
53 else:
54 x, ylow, yhigh = transpose(data_array)
55 sx = self.index_mapper.map_screen(x)
56 sylow = self.value_mapper.map_screen(ylow)
57 syhigh = self.value_mapper.map_screen(yhigh)
58 return column_stack((sx, sylow, syhigh))
59
60 def get_screen_points(self):
61 self._gather_points()
62 return self.map_screen(self._cached_data_pts)
63
64 def _gather_points(self):
65
66 if self._cache_valid:
67 return
68
69 if not self.index or not self.value_low or not self.value_high:
70 return
71
72 index, index_mask = self.index.get_data_mask()
73 value_low, value_low_mask = self.value_low.get_data_mask()
74 value_high, value_high_mask = self.value_high.get_data_mask()
75 value_mask = value_low_mask & value_high_mask
76
77 l1, l2, l3 = sm.map(len, (index, value_low, value_high))
78 if 0 in (l1, l2, l3) or not (l1 == l2 == l3):
79 logger.warn("Chaco: using empty dataset; index_len=%d, value_low_len=%d, value_high_len=%d." % (l1,l2,l3))
80 self._cached_data_pts = []
81 self._cache_valid = True
82 return
83
84 index_range_mask = self.index_mapper.range.mask_data(index)
85 value_low_mask = self.value_mapper.range.mask_data(value_low)
86 value_high_mask = self.value_mapper.range.mask_data(value_high)
87 value_range_mask = value_low_mask | value_high_mask
88
89 nan_mask = invert(isnan(index_mask) | isnan(value_mask))
90 point_mask = index_mask & value_mask & nan_mask & index_range_mask & value_range_mask
91
92 points = column_stack((index, value_low, value_high))
93
94 self._cached_data_pts = compress(point_mask, points, axis=0)
95 self._cache_valid = True
96 return
97
98 def _render(self, gc, points, icon_mode=False):
99 if len(points) == 0:
100 return
101
102 if not icon_mode:
103 gc.clip_to_rect(self.x, self.y, self.width, self.height)
104
105 with gc:
106 gc.set_antialias(False)
107 gc.set_stroke_color(self.color_)
108 gc.set_line_width(self.line_width)
109 gc.set_line_dash(self.line_style_)
110
111 if self.orientation == "h":
112 x, ylow, yhigh = transpose(points)
113 start, end = column_stack((x, ylow)), column_stack((x, yhigh))
114 gc.line_set(start, end)
115 axis = 0
116 low = ylow
117 high = yhigh
118
119 else:
120 y, xlow, xhigh = transpose(points)
121 start, end = column_stack((xlow, y)), column_stack((xhigh, y))
122 gc.line_set(start, end)
123 axis = 1
124 low = xlow
125 high = xhigh
126
127 if self.endcap_style == "bar":
128 self._render_bar_endcap(gc, start, end, low, high, axis)
129 else:
130 gc.stroke_path()
131
132 if not icon_mode:
133 self._draw_default_axes(gc)
134 return
135
136
137 def _render_bar_endcap(self, gc, start, end, low, high, axis):
138 """ Renders the endcaps for endcap_style == "bar". start and end are
139 the two endpoints of the bare errorbar. axis is the column index
140 corresponding to the index direction, so for orientation of 'h', axis
141 is 0.
142
143 This method modifies start and end.
144 """
145 delta = self.endcap_size / 2.0
146 start[:,axis] -= delta
147 end[:,axis] += delta
148
149 start[:,1-axis] = low
150 end[:,1-axis] = low
151 gc.line_set(start, end)
152
153 start[:,1-axis] = high
154 end[:,1-axis] = high
155 gc.line_set(start, end)
156 gc.stroke_path()
157 return
158
159
160 def _render_icon(self, gc, x, y, width, height):
161 pass
162
163
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/chaco/errorbar_plot.py b/chaco/errorbar_plot.py
--- a/chaco/errorbar_plot.py
+++ b/chaco/errorbar_plot.py
@@ -2,7 +2,7 @@
from __future__ import with_statement
import six
-import six.moves
+import six.moves as sm
# Major library imports
from numpy import column_stack, compress, invert, isnan, transpose
@@ -159,4 +159,3 @@
def _render_icon(self, gc, x, y, width, height):
pass
-
|
{"golden_diff": "diff --git a/chaco/errorbar_plot.py b/chaco/errorbar_plot.py\n--- a/chaco/errorbar_plot.py\n+++ b/chaco/errorbar_plot.py\n@@ -2,7 +2,7 @@\n from __future__ import with_statement\n \n import six\n-import six.moves\n+import six.moves as sm\n \n # Major library imports\n from numpy import column_stack, compress, invert, isnan, transpose\n@@ -159,4 +159,3 @@\n \n def _render_icon(self, gc, x, y, width, height):\n pass\n-\n", "issue": "`six.moves` incorrectly imported in `errorbar_plot.py`\n**Problem Description**\r\nNoticed a stacktrace in my work that pointed out a `NameError` (`global name 'sm' is not defined`) at `errorbar_plot.py:77` of 4.7.1, which is:\r\n\r\n l1, l2, l3 = sm.map(len, (index, value_low, value_high))\r\n\r\nIt seems that this is because normally `six.moves` is imported like `import six.moves as sm` and the author was used to this, but this file just has `import six.moves`. This seems to result from [`dc08831`](https://github.com/enthought/chaco/commit/dc08831d35c60057b0e26466e412e644dea1c89b#diff-7b3ca9023e76b4689bb2a0e42bf4d8f1)\r\n\r\n**Reproduction Steps:**\r\n\r\nI don't have clear cut reproduction steps, but a cursory glance at the code seems to make the cause of the error obvious. I'm afraid I'm not actually even able to easily modify the setup we have to create a more minimal example or to even test the proposed change (which should be to just add the alias to the import).\r\n\r\nEspecially since the traceback doesn't even contain any of our code, so I don't have an easy way to find the offending code, haha. Traceback is:\r\n\r\n Traceback (most recent call last):\r\n File \"/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/qt4/base_window.py\", line 202, in paintEvent\r\n self.handler.paintEvent(event)\r\n File \"/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/qt4/base_window.py\", line 54, in paintEvent\r\n self._enable_window._paint(event)\r\n File \"/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/abstract_window.py\", line 468, in _paint\r\n self.component.draw(gc, view_bounds=(0, 0, size[0], size[1]))\r\n File \"/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/component.py\", line 427, in draw\r\n self._draw(gc, view_bounds, mode)\r\n File \"/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/component.py\", line 769, in _draw\r\n self._dispatch_draw(layer, bb, view_bounds, mode)\r\n File \"/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/container.py\", line 272, in _dispatch_draw\r\n component._dispatch_draw(layer, gc, new_bounds, mode)\r\n File \"/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/container.py\", line 272, in _dispatch_draw\r\n component._dispatch_draw(layer, gc, new_bounds, mode)\r\n File \"/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/component.py\", line 799, in _dispatch_draw\r\n handler(gc, view_bounds, mode)\r\n File \"/[removed]/site-packages/chaco-4.7.1-py2.7-linux-x86_64.egg/chaco/base_xy_plot.py\", line 466, in _draw_plot\r\n self._draw_component(gc, view_bounds, mode)\r\n File \"/[removed]/site-packages/chaco-4.7.1-py2.7-linux-x86_64.egg/chaco/base_xy_plot.py\", line 473, in _draw_component\r\n pts = self.get_screen_points()\r\n File \"/[removed]/site-packages/chaco-4.7.1-py2.7-linux-x86_64.egg/chaco/errorbar_plot.py\", line 61, in get_screen_points\r\n self._gather_points()\r\n File 
\"/[removed]/site-packages/chaco-4.7.1-py2.7-linux-x86_64.egg/chaco/errorbar_plot.py\", line 77, in _gather_points\r\n l1, l2, l3 = sm.map(len, (index, value_low, value_high))\r\n NameError: global name 'sm' is not defined\r\n\r\n**Expected behavior:**\r\n\r\nThe traceback does not occur.\r\n\r\n**OS, Python version:**\r\n\r\nUbuntu 14.04, Python 2.7.14, and chaco 4.7.1 with enable 4.7.1.\r\n\n", "before_files": [{"content": "\nfrom __future__ import with_statement\n\nimport six\nimport six.moves\n\n# Major library imports\nfrom numpy import column_stack, compress, invert, isnan, transpose\nimport logging\n\n# Enthought library imports\nfrom traits.api import Any, Enum, Float, Instance\n\n# Chaco imports\nfrom .lineplot import LinePlot\nfrom .abstract_data_source import AbstractDataSource\n\n# Set up a logger for this module\nlogger = logging.getLogger(__name__)\n\n\n\nclass ErrorBarPlot(LinePlot):\n \"\"\" Renders errorbars at various points.\n \"\"\"\n\n # The datasource containing the low values\n value_low = Instance(AbstractDataSource)\n\n # The datasource containing the high values\n value_high = Instance(AbstractDataSource)\n\n # The screen-space width of the endcap bars\n endcap_size = Float(5.0)\n\n # The kind of encap to render on error bars\n endcap_style = Enum(\"bar\", \"none\", None)\n\n # Override the inherited trait definition\n _cached_data_pts = Any\n\n def map_screen(self, data_array):\n \"\"\" data_array can be Nx2 or Nx3. In the former case, each row is\n treated as (index, value), and this method returns screen X and Y\n coordinates. In the latter case, each row is treated as (index,\n value_low, value_high), and the method returns either (x, ylow, yhigh)\n or (y, xlow, xhigh) depending on self.orientation.\n \"\"\"\n if len(data_array) == 0:\n return []\n elif data_array.shape[1] == 2:\n return LinePlot.map_screen(self, data_array)\n else:\n x, ylow, yhigh = transpose(data_array)\n sx = self.index_mapper.map_screen(x)\n sylow = self.value_mapper.map_screen(ylow)\n syhigh = self.value_mapper.map_screen(yhigh)\n return column_stack((sx, sylow, syhigh))\n\n def get_screen_points(self):\n self._gather_points()\n return self.map_screen(self._cached_data_pts)\n\n def _gather_points(self):\n\n if self._cache_valid:\n return\n\n if not self.index or not self.value_low or not self.value_high:\n return\n\n index, index_mask = self.index.get_data_mask()\n value_low, value_low_mask = self.value_low.get_data_mask()\n value_high, value_high_mask = self.value_high.get_data_mask()\n value_mask = value_low_mask & value_high_mask\n\n l1, l2, l3 = sm.map(len, (index, value_low, value_high))\n if 0 in (l1, l2, l3) or not (l1 == l2 == l3):\n logger.warn(\"Chaco: using empty dataset; index_len=%d, value_low_len=%d, value_high_len=%d.\" % (l1,l2,l3))\n self._cached_data_pts = []\n self._cache_valid = True\n return\n\n index_range_mask = self.index_mapper.range.mask_data(index)\n value_low_mask = self.value_mapper.range.mask_data(value_low)\n value_high_mask = self.value_mapper.range.mask_data(value_high)\n value_range_mask = value_low_mask | value_high_mask\n\n nan_mask = invert(isnan(index_mask) | isnan(value_mask))\n point_mask = index_mask & value_mask & nan_mask & index_range_mask & value_range_mask\n\n points = column_stack((index, value_low, value_high))\n\n self._cached_data_pts = compress(point_mask, points, axis=0)\n self._cache_valid = True\n return\n\n def _render(self, gc, points, icon_mode=False):\n if len(points) == 0:\n return\n\n if not icon_mode:\n gc.clip_to_rect(self.x, 
self.y, self.width, self.height)\n\n with gc:\n gc.set_antialias(False)\n gc.set_stroke_color(self.color_)\n gc.set_line_width(self.line_width)\n gc.set_line_dash(self.line_style_)\n\n if self.orientation == \"h\":\n x, ylow, yhigh = transpose(points)\n start, end = column_stack((x, ylow)), column_stack((x, yhigh))\n gc.line_set(start, end)\n axis = 0\n low = ylow\n high = yhigh\n\n else:\n y, xlow, xhigh = transpose(points)\n start, end = column_stack((xlow, y)), column_stack((xhigh, y))\n gc.line_set(start, end)\n axis = 1\n low = xlow\n high = xhigh\n\n if self.endcap_style == \"bar\":\n self._render_bar_endcap(gc, start, end, low, high, axis)\n else:\n gc.stroke_path()\n\n if not icon_mode:\n self._draw_default_axes(gc)\n return\n\n\n def _render_bar_endcap(self, gc, start, end, low, high, axis):\n \"\"\" Renders the endcaps for endcap_style == \"bar\". start and end are\n the two endpoints of the bare errorbar. axis is the column index\n corresponding to the index direction, so for orientation of 'h', axis\n is 0.\n\n This method modifies start and end.\n \"\"\"\n delta = self.endcap_size / 2.0\n start[:,axis] -= delta\n end[:,axis] += delta\n\n start[:,1-axis] = low\n end[:,1-axis] = low\n gc.line_set(start, end)\n\n start[:,1-axis] = high\n end[:,1-axis] = high\n gc.line_set(start, end)\n gc.stroke_path()\n return\n\n\n def _render_icon(self, gc, x, y, width, height):\n pass\n\n", "path": "chaco/errorbar_plot.py"}], "after_files": [{"content": "\nfrom __future__ import with_statement\n\nimport six\nimport six.moves as sm\n\n# Major library imports\nfrom numpy import column_stack, compress, invert, isnan, transpose\nimport logging\n\n# Enthought library imports\nfrom traits.api import Any, Enum, Float, Instance\n\n# Chaco imports\nfrom .lineplot import LinePlot\nfrom .abstract_data_source import AbstractDataSource\n\n# Set up a logger for this module\nlogger = logging.getLogger(__name__)\n\n\n\nclass ErrorBarPlot(LinePlot):\n \"\"\" Renders errorbars at various points.\n \"\"\"\n\n # The datasource containing the low values\n value_low = Instance(AbstractDataSource)\n\n # The datasource containing the high values\n value_high = Instance(AbstractDataSource)\n\n # The screen-space width of the endcap bars\n endcap_size = Float(5.0)\n\n # The kind of encap to render on error bars\n endcap_style = Enum(\"bar\", \"none\", None)\n\n # Override the inherited trait definition\n _cached_data_pts = Any\n\n def map_screen(self, data_array):\n \"\"\" data_array can be Nx2 or Nx3. In the former case, each row is\n treated as (index, value), and this method returns screen X and Y\n coordinates. 
In the latter case, each row is treated as (index,\n value_low, value_high), and the method returns either (x, ylow, yhigh)\n or (y, xlow, xhigh) depending on self.orientation.\n \"\"\"\n if len(data_array) == 0:\n return []\n elif data_array.shape[1] == 2:\n return LinePlot.map_screen(self, data_array)\n else:\n x, ylow, yhigh = transpose(data_array)\n sx = self.index_mapper.map_screen(x)\n sylow = self.value_mapper.map_screen(ylow)\n syhigh = self.value_mapper.map_screen(yhigh)\n return column_stack((sx, sylow, syhigh))\n\n def get_screen_points(self):\n self._gather_points()\n return self.map_screen(self._cached_data_pts)\n\n def _gather_points(self):\n\n if self._cache_valid:\n return\n\n if not self.index or not self.value_low or not self.value_high:\n return\n\n index, index_mask = self.index.get_data_mask()\n value_low, value_low_mask = self.value_low.get_data_mask()\n value_high, value_high_mask = self.value_high.get_data_mask()\n value_mask = value_low_mask & value_high_mask\n\n l1, l2, l3 = sm.map(len, (index, value_low, value_high))\n if 0 in (l1, l2, l3) or not (l1 == l2 == l3):\n logger.warn(\"Chaco: using empty dataset; index_len=%d, value_low_len=%d, value_high_len=%d.\" % (l1,l2,l3))\n self._cached_data_pts = []\n self._cache_valid = True\n return\n\n index_range_mask = self.index_mapper.range.mask_data(index)\n value_low_mask = self.value_mapper.range.mask_data(value_low)\n value_high_mask = self.value_mapper.range.mask_data(value_high)\n value_range_mask = value_low_mask | value_high_mask\n\n nan_mask = invert(isnan(index_mask) | isnan(value_mask))\n point_mask = index_mask & value_mask & nan_mask & index_range_mask & value_range_mask\n\n points = column_stack((index, value_low, value_high))\n\n self._cached_data_pts = compress(point_mask, points, axis=0)\n self._cache_valid = True\n return\n\n def _render(self, gc, points, icon_mode=False):\n if len(points) == 0:\n return\n\n if not icon_mode:\n gc.clip_to_rect(self.x, self.y, self.width, self.height)\n\n with gc:\n gc.set_antialias(False)\n gc.set_stroke_color(self.color_)\n gc.set_line_width(self.line_width)\n gc.set_line_dash(self.line_style_)\n\n if self.orientation == \"h\":\n x, ylow, yhigh = transpose(points)\n start, end = column_stack((x, ylow)), column_stack((x, yhigh))\n gc.line_set(start, end)\n axis = 0\n low = ylow\n high = yhigh\n\n else:\n y, xlow, xhigh = transpose(points)\n start, end = column_stack((xlow, y)), column_stack((xhigh, y))\n gc.line_set(start, end)\n axis = 1\n low = xlow\n high = xhigh\n\n if self.endcap_style == \"bar\":\n self._render_bar_endcap(gc, start, end, low, high, axis)\n else:\n gc.stroke_path()\n\n if not icon_mode:\n self._draw_default_axes(gc)\n return\n\n\n def _render_bar_endcap(self, gc, start, end, low, high, axis):\n \"\"\" Renders the endcaps for endcap_style == \"bar\". start and end are\n the two endpoints of the bare errorbar. axis is the column index\n corresponding to the index direction, so for orientation of 'h', axis\n is 0.\n\n This method modifies start and end.\n \"\"\"\n delta = self.endcap_size / 2.0\n start[:,axis] -= delta\n end[:,axis] += delta\n\n start[:,1-axis] = low\n end[:,1-axis] = low\n gc.line_set(start, end)\n\n start[:,1-axis] = high\n end[:,1-axis] = high\n gc.line_set(start, end)\n gc.stroke_path()\n return\n\n\n def _render_icon(self, gc, x, y, width, height):\n pass\n", "path": "chaco/errorbar_plot.py"}]}
| 3,035 | 124 |
gh_patches_debug_17718 | rasdani/github-patches | git_diff | bokeh__bokeh-8795 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
DirectoryHandler does not handle ipynb files correctly
The documentation says that the ``DirectoryHandler`` allows serving main.ipynb notebook files inside a directory and the code does have codepaths that mention notebooks. However the implementation tries to load notebook files as if they were normal scripts, leading to immediate errors because notebook files are actually JSON. If the DirectoryHandler finds an ipynb file it should apply the same nbconvert transform used by the NotebookHandler.
--- END ISSUE ---
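The fix the issue points toward can be sketched roughly as follows; the handler classes are the ones shipped alongside `DirectoryHandler`, but the exact wiring here is an assumption rather than the project's actual patch:

```python
# Sketch: dispatch on the main file's extension so that notebooks are run
# through the nbconvert-based NotebookHandler rather than exec'd as a script.
from bokeh.application.handlers.notebook import NotebookHandler
from bokeh.application.handlers.script import ScriptHandler


def make_main_handler(main_path, argv):
    handler_cls = NotebookHandler if main_path.endswith(".ipynb") else ScriptHandler
    return handler_cls(filename=main_path, argv=argv)
```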
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bokeh/application/handlers/directory.py`
Content:
```
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2012 - 2019, Anaconda, Inc., and Bokeh Contributors.
3 # All rights reserved.
4 #
5 # The full license is in the file LICENSE.txt, distributed with this software.
6 #-----------------------------------------------------------------------------
7 ''' Provide a Bokeh Application Handler to build up documents by running
8 the code from ``main.py`` or ``main.ipynb`` files in specified directories.
9
10 The directory may also optionally contain:
11
12 * A ``server_lifecyle.py`` module to provide lifecycle callbacks for the
13 application and sessions.
14
15 * A ``static`` subdirectory containing app-specific static resources to
16 serve.
17
18 * A ``theme.yaml`` file containing a Bokeh theme to automatically apply to
19 all new documents.
20
21 * A ``templates`` subdirectory containing templates for app display
22
23 A full directory layout might look like:
24
25 .. code-block:: none
26
27 myapp
28 |
29 +---main.py
30 +---server_lifecycle.py
31 +---static
32 +---theme.yaml
33 +---templates
34 +---index.html
35
36 '''
37
38 #-----------------------------------------------------------------------------
39 # Boilerplate
40 #-----------------------------------------------------------------------------
41 from __future__ import absolute_import, division, print_function, unicode_literals
42
43 import logging
44 log = logging.getLogger(__name__)
45
46 #-----------------------------------------------------------------------------
47 # Imports
48 #-----------------------------------------------------------------------------
49
50 # Standard library imports
51 from os.path import basename, dirname, exists, join
52
53 # External imports
54 from jinja2 import Environment, FileSystemLoader
55
56 # Bokeh imports
57 from .handler import Handler
58 from .script import ScriptHandler
59 from .server_lifecycle import ServerLifecycleHandler
60
61 #-----------------------------------------------------------------------------
62 # Globals and constants
63 #-----------------------------------------------------------------------------
64
65 __all__ = (
66 'DirectoryHandler',
67 )
68
69 #-----------------------------------------------------------------------------
70 # General API
71 #-----------------------------------------------------------------------------
72
73 #-----------------------------------------------------------------------------
74 # Dev API
75 #-----------------------------------------------------------------------------
76
77 class DirectoryHandler(Handler):
78 ''' Load an application directory which modifies a Document.
79
80 '''
81
82 def __init__(self, *args, **kwargs):
83 '''
84 Keywords:
85 filename (str) : a path to an application directory with either "main.py" or "main.ipynb"
86
87 argv (list[str], optional) : a list of string arguments to make available as sys.argv to main.py
88 '''
89 super(DirectoryHandler, self).__init__(*args, **kwargs)
90
91 if 'filename' not in kwargs:
92 raise ValueError('Must pass a filename to DirectoryHandler')
93 src_path = kwargs['filename']
94 argv = kwargs.get('argv', [])
95
96 main_py = join(src_path, 'main.py')
97 main_ipy = join(src_path, 'main.ipynb')
98 if exists(main_py) and exists(main_ipy):
99 log.warning("Found both 'main.py' and 'main.ipynb' in %s, using 'main.py'" % (src_path))
100 main = main_py
101 elif exists(main_py):
102 main = main_py
103 elif exists(main_ipy):
104 main = main_ipy
105 else:
106 raise ValueError("No 'main.py' or 'main.ipynb' in %s" % (src_path))
107 self._path = src_path
108 self._main = main
109 self._main_handler = ScriptHandler(filename=self._main, argv=argv)
110
111 lifecycle = join(src_path, 'server_lifecycle.py')
112 if exists(lifecycle):
113 self._lifecycle = lifecycle
114 self._lifecycle_handler = ServerLifecycleHandler(filename=self._lifecycle, argv=argv)
115 else:
116 self._lifecycle = None
117 self._lifecycle_handler = Handler() # no-op handler
118
119 self._theme = None
120 themeyaml = join(src_path, 'theme.yaml')
121 if exists(themeyaml):
122 from bokeh.themes import Theme
123 self._theme = Theme(filename=themeyaml)
124
125 appstatic = join(src_path, 'static')
126 if exists(appstatic):
127 self._static = appstatic
128
129 self._template = None
130 appindex = join(src_path, 'templates', 'index.html')
131 if exists(appindex):
132 env = Environment(loader=FileSystemLoader(dirname(appindex)))
133 self._template = env.get_template('index.html')
134
135 # Properties --------------------------------------------------------------
136
137 @property
138 def error(self):
139 ''' If the handler fails, may contain a related error message.
140
141 '''
142 return self._main_handler.error or self._lifecycle_handler.error
143
144 @property
145 def error_detail(self):
146 ''' If the handler fails, may contain a traceback or other details.
147
148 '''
149 return self._main_handler.error_detail or self._lifecycle_handler.error_detail
150
151 @property
152 def failed(self):
153 ''' ``True`` if the handler failed to modify the doc
154
155 '''
156 return self._main_handler.failed or self._lifecycle_handler.failed
157
158 @property
159 def safe_to_fork(self):
160 ''' Whether it is still safe for the Bokeh server to fork new workers.
161
162 ``False`` if the configured code (script, notebook, etc.) has already
163 been run.
164
165 '''
166 return self._main_handler.safe_to_fork
167
168 # Public methods ----------------------------------------------------------
169
170 def modify_document(self, doc):
171 ''' Execute the configured ``main.py`` or ``main.ipynb`` to modify the
172 document.
173
174 This method will also search the app directory for any theme or
175 template files, and automatically configure the document with them
176 if they are found.
177
178 '''
179 if self._lifecycle_handler.failed:
180 return
181 # Note: we do NOT copy self._theme, which assumes the Theme
182 # class is immutable (has no setters)
183 if self._theme is not None:
184 doc.theme = self._theme
185
186 if self._template is not None:
187 doc.template = self._template
188
189 # This internal handler should never add a template
190 self._main_handler.modify_document(doc)
191
192 def on_server_loaded(self, server_context):
193 ''' Execute `on_server_unloaded`` from ``server_lifecycle.py`` (if
194 it is defined) when the server is first started.
195
196 Args:
197 server_context (ServerContext) :
198
199 '''
200 return self._lifecycle_handler.on_server_loaded(server_context)
201
202 def on_server_unloaded(self, server_context):
203 ''' Execute ``on_server_unloaded`` from ``server_lifecycle.py`` (if
204 it is defined) when the server cleanly exits. (Before stopping the
205 server's ``IOLoop``.)
206
207 Args:
208 server_context (ServerContext) :
209
210 .. warning::
211 In practice this code may not run, since servers are often killed
212 by a signal.
213
214
215 '''
216 return self._lifecycle_handler.on_server_unloaded(server_context)
217
218 def on_session_created(self, session_context):
219 ''' Execute ``on_session_created`` from ``server_lifecycle.py`` (if
220 it is defined) when a new session is created.
221
222 Args:
223 session_context (SessionContext) :
224
225 '''
226 return self._lifecycle_handler.on_session_created(session_context)
227
228 def on_session_destroyed(self, session_context):
229 ''' Execute ``on_session_destroyed`` from ``server_lifecycle.py`` (if
230 it is defined) when a session is destroyed.
231
232 Args:
233 session_context (SessionContext) :
234
235 '''
236 return self._lifecycle_handler.on_session_destroyed(session_context)
237
238 def url_path(self):
239 ''' The last path component for the basename of the path to the
240 configured directory.
241
242 '''
243 if self.failed:
244 return None
245 else:
246 # TODO should fix invalid URL characters
247 return '/' + basename(self._path)
248
249 #-----------------------------------------------------------------------------
250 # Private API
251 #-----------------------------------------------------------------------------
252
253 #-----------------------------------------------------------------------------
254 # Code
255 #-----------------------------------------------------------------------------
256
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bokeh/application/handlers/directory.py b/bokeh/application/handlers/directory.py
--- a/bokeh/application/handlers/directory.py
+++ b/bokeh/application/handlers/directory.py
@@ -55,6 +55,7 @@
# Bokeh imports
from .handler import Handler
+from .notebook import NotebookHandler
from .script import ScriptHandler
from .server_lifecycle import ServerLifecycleHandler
@@ -106,7 +107,9 @@
raise ValueError("No 'main.py' or 'main.ipynb' in %s" % (src_path))
self._path = src_path
self._main = main
- self._main_handler = ScriptHandler(filename=self._main, argv=argv)
+
+ handler = NotebookHandler if main.endswith('.ipynb') else ScriptHandler
+ self._main_handler = handler(filename=self._main, argv=argv)
lifecycle = join(src_path, 'server_lifecycle.py')
if exists(lifecycle):
|
{"golden_diff": "diff --git a/bokeh/application/handlers/directory.py b/bokeh/application/handlers/directory.py\n--- a/bokeh/application/handlers/directory.py\n+++ b/bokeh/application/handlers/directory.py\n@@ -55,6 +55,7 @@\n \n # Bokeh imports\n from .handler import Handler\n+from .notebook import NotebookHandler\n from .script import ScriptHandler\n from .server_lifecycle import ServerLifecycleHandler\n \n@@ -106,7 +107,9 @@\n raise ValueError(\"No 'main.py' or 'main.ipynb' in %s\" % (src_path))\n self._path = src_path\n self._main = main\n- self._main_handler = ScriptHandler(filename=self._main, argv=argv)\n+\n+ handler = NotebookHandler if main.endswith('.ipynb') else ScriptHandler\n+ self._main_handler = handler(filename=self._main, argv=argv)\n \n lifecycle = join(src_path, 'server_lifecycle.py')\n if exists(lifecycle):\n", "issue": "DirectoryHandler does not handle ipynb files correctly\nThe documentation says that the ``DirectoryHandler`` allows serving main.ipynb notebook files inside a directory and the code does have codepaths that mention notebooks. However the implementation tries to load notebook files as if they were normal scripts, leading to immediate errors because notebook files are actually JSON. If the DirectoryHandler finds an ipynb file it should apply the same nbconvert transform used by the NotebookHandler.\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2019, Anaconda, Inc., and Bokeh Contributors.\n# All rights reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n''' Provide a Bokeh Application Handler to build up documents by running\nthe code from ``main.py`` or ``main.ipynb`` files in specified directories.\n\nThe directory may also optionally contain:\n\n* A ``server_lifecyle.py`` module to provide lifecycle callbacks for the\n application and sessions.\n\n* A ``static`` subdirectory containing app-specific static resources to\n serve.\n\n* A ``theme.yaml`` file containing a Bokeh theme to automatically apply to\n all new documents.\n\n* A ``templates`` subdirectory containing templates for app display\n\nA full directory layout might look like:\n\n.. 
code-block:: none\n\n myapp\n |\n +---main.py\n +---server_lifecycle.py\n +---static\n +---theme.yaml\n +---templates\n +---index.html\n\n'''\n\n#-----------------------------------------------------------------------------\n# Boilerplate\n#-----------------------------------------------------------------------------\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport logging\nlog = logging.getLogger(__name__)\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\n# Standard library imports\nfrom os.path import basename, dirname, exists, join\n\n# External imports\nfrom jinja2 import Environment, FileSystemLoader\n\n# Bokeh imports\nfrom .handler import Handler\nfrom .script import ScriptHandler\nfrom .server_lifecycle import ServerLifecycleHandler\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\n__all__ = (\n 'DirectoryHandler',\n)\n\n#-----------------------------------------------------------------------------\n# General API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Dev API\n#-----------------------------------------------------------------------------\n\nclass DirectoryHandler(Handler):\n ''' Load an application directory which modifies a Document.\n\n '''\n\n def __init__(self, *args, **kwargs):\n '''\n Keywords:\n filename (str) : a path to an application directory with either \"main.py\" or \"main.ipynb\"\n\n argv (list[str], optional) : a list of string arguments to make available as sys.argv to main.py\n '''\n super(DirectoryHandler, self).__init__(*args, **kwargs)\n\n if 'filename' not in kwargs:\n raise ValueError('Must pass a filename to DirectoryHandler')\n src_path = kwargs['filename']\n argv = kwargs.get('argv', [])\n\n main_py = join(src_path, 'main.py')\n main_ipy = join(src_path, 'main.ipynb')\n if exists(main_py) and exists(main_ipy):\n log.warning(\"Found both 'main.py' and 'main.ipynb' in %s, using 'main.py'\" % (src_path))\n main = main_py\n elif exists(main_py):\n main = main_py\n elif exists(main_ipy):\n main = main_ipy\n else:\n raise ValueError(\"No 'main.py' or 'main.ipynb' in %s\" % (src_path))\n self._path = src_path\n self._main = main\n self._main_handler = ScriptHandler(filename=self._main, argv=argv)\n\n lifecycle = join(src_path, 'server_lifecycle.py')\n if exists(lifecycle):\n self._lifecycle = lifecycle\n self._lifecycle_handler = ServerLifecycleHandler(filename=self._lifecycle, argv=argv)\n else:\n self._lifecycle = None\n self._lifecycle_handler = Handler() # no-op handler\n\n self._theme = None\n themeyaml = join(src_path, 'theme.yaml')\n if exists(themeyaml):\n from bokeh.themes import Theme\n self._theme = Theme(filename=themeyaml)\n\n appstatic = join(src_path, 'static')\n if exists(appstatic):\n self._static = appstatic\n\n self._template = None\n appindex = join(src_path, 'templates', 'index.html')\n if exists(appindex):\n env = Environment(loader=FileSystemLoader(dirname(appindex)))\n self._template = env.get_template('index.html')\n\n # Properties --------------------------------------------------------------\n\n @property\n def error(self):\n ''' If the handler fails, may contain a related error message.\n\n '''\n return 
self._main_handler.error or self._lifecycle_handler.error\n\n @property\n def error_detail(self):\n ''' If the handler fails, may contain a traceback or other details.\n\n '''\n return self._main_handler.error_detail or self._lifecycle_handler.error_detail\n\n @property\n def failed(self):\n ''' ``True`` if the handler failed to modify the doc\n\n '''\n return self._main_handler.failed or self._lifecycle_handler.failed\n\n @property\n def safe_to_fork(self):\n ''' Whether it is still safe for the Bokeh server to fork new workers.\n\n ``False`` if the configured code (script, notebook, etc.) has already\n been run.\n\n '''\n return self._main_handler.safe_to_fork\n\n # Public methods ----------------------------------------------------------\n\n def modify_document(self, doc):\n ''' Execute the configured ``main.py`` or ``main.ipynb`` to modify the\n document.\n\n This method will also search the app directory for any theme or\n template files, and automatically configure the document with them\n if they are found.\n\n '''\n if self._lifecycle_handler.failed:\n return\n # Note: we do NOT copy self._theme, which assumes the Theme\n # class is immutable (has no setters)\n if self._theme is not None:\n doc.theme = self._theme\n\n if self._template is not None:\n doc.template = self._template\n\n # This internal handler should never add a template\n self._main_handler.modify_document(doc)\n\n def on_server_loaded(self, server_context):\n ''' Execute `on_server_unloaded`` from ``server_lifecycle.py`` (if\n it is defined) when the server is first started.\n\n Args:\n server_context (ServerContext) :\n\n '''\n return self._lifecycle_handler.on_server_loaded(server_context)\n\n def on_server_unloaded(self, server_context):\n ''' Execute ``on_server_unloaded`` from ``server_lifecycle.py`` (if\n it is defined) when the server cleanly exits. (Before stopping the\n server's ``IOLoop``.)\n\n Args:\n server_context (ServerContext) :\n\n .. 
warning::\n In practice this code may not run, since servers are often killed\n by a signal.\n\n\n '''\n return self._lifecycle_handler.on_server_unloaded(server_context)\n\n def on_session_created(self, session_context):\n ''' Execute ``on_session_created`` from ``server_lifecycle.py`` (if\n it is defined) when a new session is created.\n\n Args:\n session_context (SessionContext) :\n\n '''\n return self._lifecycle_handler.on_session_created(session_context)\n\n def on_session_destroyed(self, session_context):\n ''' Execute ``on_session_destroyed`` from ``server_lifecycle.py`` (if\n it is defined) when a session is destroyed.\n\n Args:\n session_context (SessionContext) :\n\n '''\n return self._lifecycle_handler.on_session_destroyed(session_context)\n\n def url_path(self):\n ''' The last path component for the basename of the path to the\n configured directory.\n\n '''\n if self.failed:\n return None\n else:\n # TODO should fix invalid URL characters\n return '/' + basename(self._path)\n\n#-----------------------------------------------------------------------------\n# Private API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Code\n#-----------------------------------------------------------------------------\n", "path": "bokeh/application/handlers/directory.py"}], "after_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2019, Anaconda, Inc., and Bokeh Contributors.\n# All rights reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n''' Provide a Bokeh Application Handler to build up documents by running\nthe code from ``main.py`` or ``main.ipynb`` files in specified directories.\n\nThe directory may also optionally contain:\n\n* A ``server_lifecyle.py`` module to provide lifecycle callbacks for the\n application and sessions.\n\n* A ``static`` subdirectory containing app-specific static resources to\n serve.\n\n* A ``theme.yaml`` file containing a Bokeh theme to automatically apply to\n all new documents.\n\n* A ``templates`` subdirectory containing templates for app display\n\nA full directory layout might look like:\n\n.. 
code-block:: none\n\n myapp\n |\n +---main.py\n +---server_lifecycle.py\n +---static\n +---theme.yaml\n +---templates\n +---index.html\n\n'''\n\n#-----------------------------------------------------------------------------\n# Boilerplate\n#-----------------------------------------------------------------------------\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport logging\nlog = logging.getLogger(__name__)\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\n# Standard library imports\nfrom os.path import basename, dirname, exists, join\n\n# External imports\nfrom jinja2 import Environment, FileSystemLoader\n\n# Bokeh imports\nfrom .handler import Handler\nfrom .notebook import NotebookHandler\nfrom .script import ScriptHandler\nfrom .server_lifecycle import ServerLifecycleHandler\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\n__all__ = (\n 'DirectoryHandler',\n)\n\n#-----------------------------------------------------------------------------\n# General API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Dev API\n#-----------------------------------------------------------------------------\n\nclass DirectoryHandler(Handler):\n ''' Load an application directory which modifies a Document.\n\n '''\n\n def __init__(self, *args, **kwargs):\n '''\n Keywords:\n filename (str) : a path to an application directory with either \"main.py\" or \"main.ipynb\"\n\n argv (list[str], optional) : a list of string arguments to make available as sys.argv to main.py\n '''\n super(DirectoryHandler, self).__init__(*args, **kwargs)\n\n if 'filename' not in kwargs:\n raise ValueError('Must pass a filename to DirectoryHandler')\n src_path = kwargs['filename']\n argv = kwargs.get('argv', [])\n\n main_py = join(src_path, 'main.py')\n main_ipy = join(src_path, 'main.ipynb')\n if exists(main_py) and exists(main_ipy):\n log.warning(\"Found both 'main.py' and 'main.ipynb' in %s, using 'main.py'\" % (src_path))\n main = main_py\n elif exists(main_py):\n main = main_py\n elif exists(main_ipy):\n main = main_ipy\n else:\n raise ValueError(\"No 'main.py' or 'main.ipynb' in %s\" % (src_path))\n self._path = src_path\n self._main = main\n\n handler = NotebookHandler if main.endswith('.ipynb') else ScriptHandler\n self._main_handler = handler(filename=self._main, argv=argv)\n\n lifecycle = join(src_path, 'server_lifecycle.py')\n if exists(lifecycle):\n self._lifecycle = lifecycle\n self._lifecycle_handler = ServerLifecycleHandler(filename=self._lifecycle, argv=argv)\n else:\n self._lifecycle = None\n self._lifecycle_handler = Handler() # no-op handler\n\n self._theme = None\n themeyaml = join(src_path, 'theme.yaml')\n if exists(themeyaml):\n from bokeh.themes import Theme\n self._theme = Theme(filename=themeyaml)\n\n appstatic = join(src_path, 'static')\n if exists(appstatic):\n self._static = appstatic\n\n self._template = None\n appindex = join(src_path, 'templates', 'index.html')\n if exists(appindex):\n env = Environment(loader=FileSystemLoader(dirname(appindex)))\n self._template = env.get_template('index.html')\n\n # Properties --------------------------------------------------------------\n\n 
@property\n def error(self):\n ''' If the handler fails, may contain a related error message.\n\n '''\n return self._main_handler.error or self._lifecycle_handler.error\n\n @property\n def error_detail(self):\n ''' If the handler fails, may contain a traceback or other details.\n\n '''\n return self._main_handler.error_detail or self._lifecycle_handler.error_detail\n\n @property\n def failed(self):\n ''' ``True`` if the handler failed to modify the doc\n\n '''\n return self._main_handler.failed or self._lifecycle_handler.failed\n\n @property\n def safe_to_fork(self):\n ''' Whether it is still safe for the Bokeh server to fork new workers.\n\n ``False`` if the configured code (script, notebook, etc.) has already\n been run.\n\n '''\n return self._main_handler.safe_to_fork\n\n # Public methods ----------------------------------------------------------\n\n def modify_document(self, doc):\n ''' Execute the configured ``main.py`` or ``main.ipynb`` to modify the\n document.\n\n This method will also search the app directory for any theme or\n template files, and automatically configure the document with them\n if they are found.\n\n '''\n if self._lifecycle_handler.failed:\n return\n # Note: we do NOT copy self._theme, which assumes the Theme\n # class is immutable (has no setters)\n if self._theme is not None:\n doc.theme = self._theme\n\n if self._template is not None:\n doc.template = self._template\n\n # This internal handler should never add a template\n self._main_handler.modify_document(doc)\n\n def on_server_loaded(self, server_context):\n ''' Execute `on_server_unloaded`` from ``server_lifecycle.py`` (if\n it is defined) when the server is first started.\n\n Args:\n server_context (ServerContext) :\n\n '''\n return self._lifecycle_handler.on_server_loaded(server_context)\n\n def on_server_unloaded(self, server_context):\n ''' Execute ``on_server_unloaded`` from ``server_lifecycle.py`` (if\n it is defined) when the server cleanly exits. (Before stopping the\n server's ``IOLoop``.)\n\n Args:\n server_context (ServerContext) :\n\n .. warning::\n In practice this code may not run, since servers are often killed\n by a signal.\n\n\n '''\n return self._lifecycle_handler.on_server_unloaded(server_context)\n\n def on_session_created(self, session_context):\n ''' Execute ``on_session_created`` from ``server_lifecycle.py`` (if\n it is defined) when a new session is created.\n\n Args:\n session_context (SessionContext) :\n\n '''\n return self._lifecycle_handler.on_session_created(session_context)\n\n def on_session_destroyed(self, session_context):\n ''' Execute ``on_session_destroyed`` from ``server_lifecycle.py`` (if\n it is defined) when a session is destroyed.\n\n Args:\n session_context (SessionContext) :\n\n '''\n return self._lifecycle_handler.on_session_destroyed(session_context)\n\n def url_path(self):\n ''' The last path component for the basename of the path to the\n configured directory.\n\n '''\n if self.failed:\n return None\n else:\n # TODO should fix invalid URL characters\n return '/' + basename(self._path)\n\n#-----------------------------------------------------------------------------\n# Private API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Code\n#-----------------------------------------------------------------------------\n", "path": "bokeh/application/handlers/directory.py"}]}
| 2,693 | 230 |
gh_patches_debug_32894 | rasdani/github-patches | git_diff | facebookresearch__hydra-609 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Feature Request] Allow @hydra.main() to take a config object and pass it through
# 🚀 Feature Request
Allow @hydra.main() to take a config and pass it through
--- END ISSUE ---
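One way the requested behaviour could look from the caller's side, sketched with an assumed keyword argument name (`cfg_passthrough`) purely for illustration:

```python
import hydra
from omegaconf import DictConfig, OmegaConf


@hydra.main(config_name="config")
def my_app(cfg: DictConfig) -> None:
    print(OmegaConf.to_yaml(cfg))


if __name__ == "__main__":
    # Instead of letting Hydra compose the config from files and CLI overrides,
    # hand an already-built config object straight through to the task function.
    my_app(cfg_passthrough=OmegaConf.create({"db": {"driver": "mysql"}}))
```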
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hydra/main.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 import functools
3 from typing import Callable, Optional
4
5 from ._internal.utils import get_args_parser, run_hydra
6 from .types import TaskFunction
7
8
9 def main(
10 config_path: Optional[str] = None,
11 config_name: Optional[str] = None,
12 strict: Optional[bool] = None,
13 ) -> Callable[[TaskFunction], Callable[[], None]]:
14 """
15 :param config_path: the config path, a directory relative to the declaring python file.
16 :param config_name: the name of the config (usually the file name without the .yaml extension)
17 :param strict: (Deprecated) strict mode, will throw an error if command line overrides are not changing an
18 existing key or if the code is accessing a non existent key
19 """
20
21 def main_decorator(task_function: TaskFunction) -> Callable[[], None]:
22 @functools.wraps(task_function)
23 def decorated_main() -> None:
24 run_hydra(
25 args_parser=get_args_parser(),
26 task_function=task_function,
27 config_path=config_path,
28 config_name=config_name,
29 strict=strict,
30 )
31
32 return decorated_main
33
34 return main_decorator
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/hydra/main.py b/hydra/main.py
--- a/hydra/main.py
+++ b/hydra/main.py
@@ -1,6 +1,8 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
import functools
-from typing import Callable, Optional
+from typing import Any, Callable, Optional
+
+from omegaconf import DictConfig
from ._internal.utils import get_args_parser, run_hydra
from .types import TaskFunction
@@ -10,7 +12,7 @@
config_path: Optional[str] = None,
config_name: Optional[str] = None,
strict: Optional[bool] = None,
-) -> Callable[[TaskFunction], Callable[[], None]]:
+) -> Callable[[TaskFunction], Any]:
"""
:param config_path: the config path, a directory relative to the declaring python file.
:param config_name: the name of the config (usually the file name without the .yaml extension)
@@ -20,14 +22,20 @@
def main_decorator(task_function: TaskFunction) -> Callable[[], None]:
@functools.wraps(task_function)
- def decorated_main() -> None:
- run_hydra(
- args_parser=get_args_parser(),
- task_function=task_function,
- config_path=config_path,
- config_name=config_name,
- strict=strict,
- )
+ def decorated_main(cfg_passthrough: Optional[DictConfig] = None) -> Any:
+ if cfg_passthrough is not None:
+ return task_function(cfg_passthrough)
+ else:
+ args = get_args_parser()
+ # no return value from run_hydra() as it may sometime actually run the task_function
+ # multiple times (--multirun)
+ run_hydra(
+ args_parser=args,
+ task_function=task_function,
+ config_path=config_path,
+ config_name=config_name,
+ strict=strict,
+ )
return decorated_main
|
{"golden_diff": "diff --git a/hydra/main.py b/hydra/main.py\n--- a/hydra/main.py\n+++ b/hydra/main.py\n@@ -1,6 +1,8 @@\n # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n import functools\n-from typing import Callable, Optional\n+from typing import Any, Callable, Optional\n+\n+from omegaconf import DictConfig\n \n from ._internal.utils import get_args_parser, run_hydra\n from .types import TaskFunction\n@@ -10,7 +12,7 @@\n config_path: Optional[str] = None,\n config_name: Optional[str] = None,\n strict: Optional[bool] = None,\n-) -> Callable[[TaskFunction], Callable[[], None]]:\n+) -> Callable[[TaskFunction], Any]:\n \"\"\"\n :param config_path: the config path, a directory relative to the declaring python file.\n :param config_name: the name of the config (usually the file name without the .yaml extension)\n@@ -20,14 +22,20 @@\n \n def main_decorator(task_function: TaskFunction) -> Callable[[], None]:\n @functools.wraps(task_function)\n- def decorated_main() -> None:\n- run_hydra(\n- args_parser=get_args_parser(),\n- task_function=task_function,\n- config_path=config_path,\n- config_name=config_name,\n- strict=strict,\n- )\n+ def decorated_main(cfg_passthrough: Optional[DictConfig] = None) -> Any:\n+ if cfg_passthrough is not None:\n+ return task_function(cfg_passthrough)\n+ else:\n+ args = get_args_parser()\n+ # no return value from run_hydra() as it may sometime actually run the task_function\n+ # multiple times (--multirun)\n+ run_hydra(\n+ args_parser=args,\n+ task_function=task_function,\n+ config_path=config_path,\n+ config_name=config_name,\n+ strict=strict,\n+ )\n \n return decorated_main\n", "issue": "[Feature Request] Allow @hydra.main() to take a config object and pass it through\n# \ud83d\ude80 Feature Request\r\n\r\nAllow @hydra.main() to take a config and pass it through\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport functools\nfrom typing import Callable, Optional\n\nfrom ._internal.utils import get_args_parser, run_hydra\nfrom .types import TaskFunction\n\n\ndef main(\n config_path: Optional[str] = None,\n config_name: Optional[str] = None,\n strict: Optional[bool] = None,\n) -> Callable[[TaskFunction], Callable[[], None]]:\n \"\"\"\n :param config_path: the config path, a directory relative to the declaring python file.\n :param config_name: the name of the config (usually the file name without the .yaml extension)\n :param strict: (Deprecated) strict mode, will throw an error if command line overrides are not changing an\n existing key or if the code is accessing a non existent key\n \"\"\"\n\n def main_decorator(task_function: TaskFunction) -> Callable[[], None]:\n @functools.wraps(task_function)\n def decorated_main() -> None:\n run_hydra(\n args_parser=get_args_parser(),\n task_function=task_function,\n config_path=config_path,\n config_name=config_name,\n strict=strict,\n )\n\n return decorated_main\n\n return main_decorator\n", "path": "hydra/main.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved\nimport functools\nfrom typing import Any, Callable, Optional\n\nfrom omegaconf import DictConfig\n\nfrom ._internal.utils import get_args_parser, run_hydra\nfrom .types import TaskFunction\n\n\ndef main(\n config_path: Optional[str] = None,\n config_name: Optional[str] = None,\n strict: Optional[bool] = None,\n) -> Callable[[TaskFunction], Any]:\n \"\"\"\n :param config_path: the config path, a directory relative to the declaring python file.\n :param config_name: the name of the config (usually the file name without the .yaml extension)\n :param strict: (Deprecated) strict mode, will throw an error if command line overrides are not changing an\n existing key or if the code is accessing a non existent key\n \"\"\"\n\n def main_decorator(task_function: TaskFunction) -> Callable[[], None]:\n @functools.wraps(task_function)\n def decorated_main(cfg_passthrough: Optional[DictConfig] = None) -> Any:\n if cfg_passthrough is not None:\n return task_function(cfg_passthrough)\n else:\n args = get_args_parser()\n # no return value from run_hydra() as it may sometime actually run the task_function\n # multiple times (--multirun)\n run_hydra(\n args_parser=args,\n task_function=task_function,\n config_path=config_path,\n config_name=config_name,\n strict=strict,\n )\n\n return decorated_main\n\n return main_decorator\n", "path": "hydra/main.py"}]}
| 627 | 444 |
gh_patches_debug_44261
|
rasdani/github-patches
|
git_diff
|
Project-MONAI__MONAI-3308
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Niftisaver doesn't save new voxelspacing
### Discussed in https://github.com/Project-MONAI/MONAI/discussions/3299 as well as https://github.com/Project-MONAI/MONAI/discussions/2029
<div type='discussions-op-text'>
<sup>Originally posted by **Gijz33** November 10, 2021</sup>
Hi,
I am changing the voxel spacing of my CT data using monai.transforms.Spacingd and saving the CT with new spacing into a .nii.gz-file using monai.data.NiftiSaver.
The voxel size transformation is successful inside the Python script. However, when opening the .nii.gz file using different software (3D-slicer), the spacing of the old/original CT is used. Can someone help me out? A snippet of my code is shown below:
```python
train_images = sorted(glob.glob(os.path.join(data_folder, "*_ct.nii.gz")))
train_labels = sorted(glob.glob(os.path.join(data_folder, "*_seg.nii.gz")))
data_dicts = [ {"image": image_name, "label": label_name} for image_name, label_name in zip(train_images, train_labels) ]
loader = LoadImaged(keys=("image","label"))
data_dict = loader(data_dicts[0])
add_channel = AddChanneld(keys=["image", "label"])
data_dict = add_channel(data_dict)
orientation = Orientationd(keys=["image", "label"], axcodes="LPS")
data_dict = orientation(data_dict)
spacing = Spacingd(keys=["image", "label"], pixdim=(0.8, 0.8, 3.0), mode=("bilinear"))
data_dict = spacing(data_dict)
saver = NiftiSaver(output_dir="./", output_postfix="test" ,output_ext=".nii.gz",mode="nearest")
saver.save(data_dict["image"], data_dict['image_meta_dict'])
```
</div>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `monai/data/nifti_saver.py`
Content:
```
1 # Copyright 2020 - 2021 MONAI Consortium
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 # http://www.apache.org/licenses/LICENSE-2.0
6 # Unless required by applicable law or agreed to in writing, software
7 # distributed under the License is distributed on an "AS IS" BASIS,
8 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
9 # See the License for the specific language governing permissions and
10 # limitations under the License.
11
12 from pathlib import Path
13 from typing import Dict, Optional, Union
14
15 import numpy as np
16 import torch
17
18 from monai.config import DtypeLike
19 from monai.data.nifti_writer import write_nifti
20 from monai.data.utils import create_file_basename
21 from monai.utils import GridSampleMode, GridSamplePadMode
22 from monai.utils import ImageMetaKey as Key
23
24
25 class NiftiSaver:
26 """
27 Save the data as NIfTI file, it can support single data content or a batch of data.
28 Typically, the data can be segmentation predictions, call `save` for single data
29 or call `save_batch` to save a batch of data together.
30 The name of saved file will be `{input_image_name}_{output_postfix}{output_ext}`,
31 where the input image name is extracted from the provided meta data dictionary.
32 If no meta data provided, use index from 0 as the filename prefix.
33
34 Note: image should include channel dimension: [B],C,H,W,[D].
35
36 """
37
38 def __init__(
39 self,
40 output_dir: Union[Path, str] = "./",
41 output_postfix: str = "seg",
42 output_ext: str = ".nii.gz",
43 resample: bool = True,
44 mode: Union[GridSampleMode, str] = GridSampleMode.BILINEAR,
45 padding_mode: Union[GridSamplePadMode, str] = GridSamplePadMode.BORDER,
46 align_corners: bool = False,
47 dtype: DtypeLike = np.float64,
48 output_dtype: DtypeLike = np.float32,
49 squeeze_end_dims: bool = True,
50 data_root_dir: str = "",
51 separate_folder: bool = True,
52 print_log: bool = True,
53 ) -> None:
54 """
55 Args:
56 output_dir: output image directory.
57 output_postfix: a string appended to all output file names.
58 output_ext: output file extension name.
59 resample: whether to resample before saving the data array.
60 mode: {``"bilinear"``, ``"nearest"``}
61 This option is used when ``resample = True``.
62 Interpolation mode to calculate output values. Defaults to ``"bilinear"``.
63 See also: https://pytorch.org/docs/stable/nn.functional.html#grid-sample
64 padding_mode: {``"zeros"``, ``"border"``, ``"reflection"``}
65 This option is used when ``resample = True``.
66 Padding mode for outside grid values. Defaults to ``"border"``.
67 See also: https://pytorch.org/docs/stable/nn.functional.html#grid-sample
68 align_corners: Geometrically, we consider the pixels of the input as squares rather than points.
69 See also: https://pytorch.org/docs/stable/nn.functional.html#grid-sample
70 dtype: data type for resampling computation. Defaults to ``np.float64`` for best precision.
71 If None, use the data type of input data.
72 output_dtype: data type for saving data. Defaults to ``np.float32``.
73 squeeze_end_dims: if True, any trailing singleton dimensions will be removed (after the channel
74 has been moved to the end). So if input is (C,H,W,D), this will be altered to (H,W,D,C), and
75 then if C==1, it will be saved as (H,W,D). If D also ==1, it will be saved as (H,W). If false,
76 image will always be saved as (H,W,D,C).
77 data_root_dir: if not empty, it specifies the beginning parts of the input file's
78 absolute path. it's used to compute `input_file_rel_path`, the relative path to the file from
79 `data_root_dir` to preserve folder structure when saving in case there are files in different
80 folders with the same file names. for example:
81 input_file_name: /foo/bar/test1/image.nii,
82 postfix: seg
83 output_ext: nii.gz
84 output_dir: /output,
85 data_root_dir: /foo/bar,
86 output will be: /output/test1/image/image_seg.nii.gz
87 separate_folder: whether to save every file in a separate folder, for example: if input filename is
88 `image.nii`, postfix is `seg` and folder_path is `output`, if `True`, save as:
89 `output/image/image_seg.nii`, if `False`, save as `output/image_seg.nii`. default to `True`.
90 print_log: whether to print log about the saved NIfTI file path, etc. default to `True`.
91
92 """
93 self.output_dir = output_dir
94 self.output_postfix = output_postfix
95 self.output_ext = output_ext
96 self.resample = resample
97 self.mode: GridSampleMode = GridSampleMode(mode)
98 self.padding_mode: GridSamplePadMode = GridSamplePadMode(padding_mode)
99 self.align_corners = align_corners
100 self.dtype = dtype
101 self.output_dtype = output_dtype
102 self._data_index = 0
103 self.squeeze_end_dims = squeeze_end_dims
104 self.data_root_dir = data_root_dir
105 self.separate_folder = separate_folder
106 self.print_log = print_log
107
108 def save(self, data: Union[torch.Tensor, np.ndarray], meta_data: Optional[Dict] = None) -> None:
109 """
110 Save data into a Nifti file.
111 The meta_data could optionally have the following keys:
112
113 - ``'filename_or_obj'`` -- for output file name creation, corresponding to filename or object.
114 - ``'original_affine'`` -- for data orientation handling, defaulting to an identity matrix.
115 - ``'affine'`` -- for data output affine, defaulting to an identity matrix.
116 - ``'spatial_shape'`` -- for data output shape.
117 - ``'patch_index'`` -- if the data is a patch of big image, append the patch index to filename.
118
119 When meta_data is specified, the saver will try to resample batch data from the space
120 defined by "affine" to the space defined by "original_affine".
121
122 If meta_data is None, use the default index (starting from 0) as the filename.
123
124 Args:
125 data: target data content that to be saved as a NIfTI format file.
126 Assuming the data shape starts with a channel dimension and followed by spatial dimensions.
127 meta_data: the meta data information corresponding to the data.
128
129 See Also
130 :py:meth:`monai.data.nifti_writer.write_nifti`
131 """
132 filename = meta_data[Key.FILENAME_OR_OBJ] if meta_data else str(self._data_index)
133 self._data_index += 1
134 original_affine = meta_data.get("original_affine", None) if meta_data else None
135 affine = meta_data.get("affine", None) if meta_data else None
136 spatial_shape = meta_data.get("spatial_shape", None) if meta_data else None
137 patch_index = meta_data.get(Key.PATCH_INDEX, None) if meta_data else None
138
139 if isinstance(data, torch.Tensor):
140 data = data.detach().cpu().numpy()
141
142 path = create_file_basename(
143 postfix=self.output_postfix,
144 input_file_name=filename,
145 folder_path=self.output_dir,
146 data_root_dir=self.data_root_dir,
147 separate_folder=self.separate_folder,
148 patch_index=patch_index,
149 )
150 path = f"{path}{self.output_ext}"
151 # change data shape to be (channel, h, w, d)
152 while len(data.shape) < 4:
153 data = np.expand_dims(data, -1)
154 # change data to "channel last" format and write to nifti format file
155 data = np.moveaxis(np.asarray(data), 0, -1)
156
157 # if desired, remove trailing singleton dimensions
158 if self.squeeze_end_dims:
159 while data.shape[-1] == 1:
160 data = np.squeeze(data, -1)
161
162 write_nifti(
163 data,
164 file_name=path,
165 affine=affine,
166 target_affine=original_affine,
167 resample=self.resample,
168 output_spatial_shape=spatial_shape,
169 mode=self.mode,
170 padding_mode=self.padding_mode,
171 align_corners=self.align_corners,
172 dtype=self.dtype,
173 output_dtype=self.output_dtype,
174 )
175
176 if self.print_log:
177 print(f"file written: {path}.")
178
179 def save_batch(self, batch_data: Union[torch.Tensor, np.ndarray], meta_data: Optional[Dict] = None) -> None:
180 """
181 Save a batch of data into Nifti format files.
182
183 Spatially it supports up to three dimensions, that is, H, HW, HWD for
184 1D, 2D, 3D respectively (with resampling supports for 2D and 3D only).
185
186 When saving multiple time steps or multiple channels `batch_data`,
187 time and/or modality axes should be appended after the batch dimensions.
188 For example, the shape of a batch of 2D eight-class
189 segmentation probabilities to be saved could be `(batch, 8, 64, 64)`;
190 in this case each item in the batch will be saved as (64, 64, 1, 8)
191 NIfTI file (the third dimension is reserved as a spatial dimension).
192
193 Args:
194 batch_data: target batch data content that save into NIfTI format.
195 meta_data: every key-value in the meta_data is corresponding to a batch of data.
196
197 """
198 for i, data in enumerate(batch_data): # save a batch of files
199 self.save(data=data, meta_data={k: meta_data[k][i] for k in meta_data} if meta_data is not None else None)
200
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/monai/data/nifti_saver.py b/monai/data/nifti_saver.py
--- a/monai/data/nifti_saver.py
+++ b/monai/data/nifti_saver.py
@@ -56,7 +56,8 @@
output_dir: output image directory.
output_postfix: a string appended to all output file names.
output_ext: output file extension name.
- resample: whether to resample before saving the data array.
+ resample: whether to convert the data array to it's original coordinate system
+ based on `original_affine` in the `meta_data`.
mode: {``"bilinear"``, ``"nearest"``}
This option is used when ``resample = True``.
Interpolation mode to calculate output values. Defaults to ``"bilinear"``.
@@ -107,7 +108,7 @@
def save(self, data: Union[torch.Tensor, np.ndarray], meta_data: Optional[Dict] = None) -> None:
"""
- Save data into a Nifti file.
+ Save data into a NIfTI file.
The meta_data could optionally have the following keys:
- ``'filename_or_obj'`` -- for output file name creation, corresponding to filename or object.
@@ -116,7 +117,7 @@
- ``'spatial_shape'`` -- for data output shape.
- ``'patch_index'`` -- if the data is a patch of big image, append the patch index to filename.
- When meta_data is specified, the saver will try to resample batch data from the space
+ When meta_data is specified and `resample=True`, the saver will try to resample batch data from the space
defined by "affine" to the space defined by "original_affine".
If meta_data is None, use the default index (starting from 0) as the filename.
@@ -131,7 +132,7 @@
"""
filename = meta_data[Key.FILENAME_OR_OBJ] if meta_data else str(self._data_index)
self._data_index += 1
- original_affine = meta_data.get("original_affine", None) if meta_data else None
+ original_affine = meta_data.get("original_affine", None) if meta_data and self.resample else None
affine = meta_data.get("affine", None) if meta_data else None
spatial_shape = meta_data.get("spatial_shape", None) if meta_data else None
patch_index = meta_data.get(Key.PATCH_INDEX, None) if meta_data else None
@@ -151,7 +152,7 @@
# change data shape to be (channel, h, w, d)
while len(data.shape) < 4:
data = np.expand_dims(data, -1)
- # change data to "channel last" format and write to nifti format file
+ # change data to "channel last" format and write to NIfTI format file
data = np.moveaxis(np.asarray(data), 0, -1)
# if desired, remove trailing singleton dimensions
@@ -164,7 +165,7 @@
file_name=path,
affine=affine,
target_affine=original_affine,
- resample=self.resample,
+ resample=True,
output_spatial_shape=spatial_shape,
mode=self.mode,
padding_mode=self.padding_mode,
@@ -178,7 +179,7 @@
def save_batch(self, batch_data: Union[torch.Tensor, np.ndarray], meta_data: Optional[Dict] = None) -> None:
"""
- Save a batch of data into Nifti format files.
+ Save a batch of data into NIfTI format files.
Spatially it supports up to three dimensions, that is, H, HW, HWD for
1D, 2D, 3D respectively (with resampling supports for 2D and 3D only).
|
{"golden_diff": "diff --git a/monai/data/nifti_saver.py b/monai/data/nifti_saver.py\n--- a/monai/data/nifti_saver.py\n+++ b/monai/data/nifti_saver.py\n@@ -56,7 +56,8 @@\n output_dir: output image directory.\n output_postfix: a string appended to all output file names.\n output_ext: output file extension name.\n- resample: whether to resample before saving the data array.\n+ resample: whether to convert the data array to it's original coordinate system\n+ based on `original_affine` in the `meta_data`.\n mode: {``\"bilinear\"``, ``\"nearest\"``}\n This option is used when ``resample = True``.\n Interpolation mode to calculate output values. Defaults to ``\"bilinear\"``.\n@@ -107,7 +108,7 @@\n \n def save(self, data: Union[torch.Tensor, np.ndarray], meta_data: Optional[Dict] = None) -> None:\n \"\"\"\n- Save data into a Nifti file.\n+ Save data into a NIfTI file.\n The meta_data could optionally have the following keys:\n \n - ``'filename_or_obj'`` -- for output file name creation, corresponding to filename or object.\n@@ -116,7 +117,7 @@\n - ``'spatial_shape'`` -- for data output shape.\n - ``'patch_index'`` -- if the data is a patch of big image, append the patch index to filename.\n \n- When meta_data is specified, the saver will try to resample batch data from the space\n+ When meta_data is specified and `resample=True`, the saver will try to resample batch data from the space\n defined by \"affine\" to the space defined by \"original_affine\".\n \n If meta_data is None, use the default index (starting from 0) as the filename.\n@@ -131,7 +132,7 @@\n \"\"\"\n filename = meta_data[Key.FILENAME_OR_OBJ] if meta_data else str(self._data_index)\n self._data_index += 1\n- original_affine = meta_data.get(\"original_affine\", None) if meta_data else None\n+ original_affine = meta_data.get(\"original_affine\", None) if meta_data and self.resample else None\n affine = meta_data.get(\"affine\", None) if meta_data else None\n spatial_shape = meta_data.get(\"spatial_shape\", None) if meta_data else None\n patch_index = meta_data.get(Key.PATCH_INDEX, None) if meta_data else None\n@@ -151,7 +152,7 @@\n # change data shape to be (channel, h, w, d)\n while len(data.shape) < 4:\n data = np.expand_dims(data, -1)\n- # change data to \"channel last\" format and write to nifti format file\n+ # change data to \"channel last\" format and write to NIfTI format file\n data = np.moveaxis(np.asarray(data), 0, -1)\n \n # if desired, remove trailing singleton dimensions\n@@ -164,7 +165,7 @@\n file_name=path,\n affine=affine,\n target_affine=original_affine,\n- resample=self.resample,\n+ resample=True,\n output_spatial_shape=spatial_shape,\n mode=self.mode,\n padding_mode=self.padding_mode,\n@@ -178,7 +179,7 @@\n \n def save_batch(self, batch_data: Union[torch.Tensor, np.ndarray], meta_data: Optional[Dict] = None) -> None:\n \"\"\"\n- Save a batch of data into Nifti format files.\n+ Save a batch of data into NIfTI format files.\n \n Spatially it supports up to three dimensions, that is, H, HW, HWD for\n 1D, 2D, 3D respectively (with resampling supports for 2D and 3D only).\n", "issue": "Niftisaver doesn't save new voxelspacing\n### Discussed in https://github.com/Project-MONAI/MONAI/discussions/3299 as well as https://github.com/Project-MONAI/MONAI/discussions/2029\r\n\r\n\r\n<div type='discussions-op-text'>\r\n\r\n<sup>Originally posted by **Gijz33** November 10, 2021</sup>\r\nHi,\r\n\r\nI am changing the voxel spacing of my CT data using monai.transforms.Spacingd and saving the CT with new spacing into a .nii.gz-file 
using monai.data.NiftiSaver.\r\nThe voxel size transformation is succesful inside the python script. However, opening the .nii.gz file using different software (3D-slicer) the spacing of the old/orginal CT is used. Can someone help me out? A snippet of my code is shown below:\r\n\r\n\r\n```python\r\ntrain_images = sorted(glob.glob(os.path.join(data_folder, \"*_ct.nii.gz\")))\r\ntrain_labels = sorted(glob.glob(os.path.join(data_folder, \"*_seg.nii.gz\")))\r\n\r\ndata_dicts = [ {\"image\": image_name, \"label\": label_name} for image_name, label_name in zip(train_images, train_labels) ]\r\n\r\nloader = LoadImaged(keys=(\"image\",\"label\"))\r\ndata_dict = loader(data_dicts[0])\r\n \r\nadd_channel = AddChanneld(keys=[\"image\", \"label\"])\r\ndata_dict = add_channel(data_dict)\r\n\r\norientation = Orientationd(keys=[\"image\", \"label\"], axcodes=\"LPS\")\r\ndata_dict = orientation(data_dict)\r\n\r\nspacing = Spacingd(keys=[\"image\", \"label\"], pixdim=(0.8, 0.8, 3.0), mode=(\"bilinear\")) \r\ndata_dict = spacing(data_dict)\r\n\r\n\r\nsaver = NiftiSaver(output_dir=\"./\", output_postfix=\"test\" ,output_ext=\".nii.gz\",mode=\"nearest\")\r\nsaver.save(data_dict[\"image\"], data_dict['image_meta_dict'])\r\n```\r\n\r\n\r\n\r\n</div>\n", "before_files": [{"content": "# Copyright 2020 - 2021 MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom pathlib import Path\nfrom typing import Dict, Optional, Union\n\nimport numpy as np\nimport torch\n\nfrom monai.config import DtypeLike\nfrom monai.data.nifti_writer import write_nifti\nfrom monai.data.utils import create_file_basename\nfrom monai.utils import GridSampleMode, GridSamplePadMode\nfrom monai.utils import ImageMetaKey as Key\n\n\nclass NiftiSaver:\n \"\"\"\n Save the data as NIfTI file, it can support single data content or a batch of data.\n Typically, the data can be segmentation predictions, call `save` for single data\n or call `save_batch` to save a batch of data together.\n The name of saved file will be `{input_image_name}_{output_postfix}{output_ext}`,\n where the input image name is extracted from the provided meta data dictionary.\n If no meta data provided, use index from 0 as the filename prefix.\n\n Note: image should include channel dimension: [B],C,H,W,[D].\n\n \"\"\"\n\n def __init__(\n self,\n output_dir: Union[Path, str] = \"./\",\n output_postfix: str = \"seg\",\n output_ext: str = \".nii.gz\",\n resample: bool = True,\n mode: Union[GridSampleMode, str] = GridSampleMode.BILINEAR,\n padding_mode: Union[GridSamplePadMode, str] = GridSamplePadMode.BORDER,\n align_corners: bool = False,\n dtype: DtypeLike = np.float64,\n output_dtype: DtypeLike = np.float32,\n squeeze_end_dims: bool = True,\n data_root_dir: str = \"\",\n separate_folder: bool = True,\n print_log: bool = True,\n ) -> None:\n \"\"\"\n Args:\n output_dir: output image directory.\n output_postfix: a string appended to all output file names.\n output_ext: output file extension name.\n resample: whether to resample before saving the data array.\n mode: 
{``\"bilinear\"``, ``\"nearest\"``}\n This option is used when ``resample = True``.\n Interpolation mode to calculate output values. Defaults to ``\"bilinear\"``.\n See also: https://pytorch.org/docs/stable/nn.functional.html#grid-sample\n padding_mode: {``\"zeros\"``, ``\"border\"``, ``\"reflection\"``}\n This option is used when ``resample = True``.\n Padding mode for outside grid values. Defaults to ``\"border\"``.\n See also: https://pytorch.org/docs/stable/nn.functional.html#grid-sample\n align_corners: Geometrically, we consider the pixels of the input as squares rather than points.\n See also: https://pytorch.org/docs/stable/nn.functional.html#grid-sample\n dtype: data type for resampling computation. Defaults to ``np.float64`` for best precision.\n If None, use the data type of input data.\n output_dtype: data type for saving data. Defaults to ``np.float32``.\n squeeze_end_dims: if True, any trailing singleton dimensions will be removed (after the channel\n has been moved to the end). So if input is (C,H,W,D), this will be altered to (H,W,D,C), and\n then if C==1, it will be saved as (H,W,D). If D also ==1, it will be saved as (H,W). If false,\n image will always be saved as (H,W,D,C).\n data_root_dir: if not empty, it specifies the beginning parts of the input file's\n absolute path. it's used to compute `input_file_rel_path`, the relative path to the file from\n `data_root_dir` to preserve folder structure when saving in case there are files in different\n folders with the same file names. for example:\n input_file_name: /foo/bar/test1/image.nii,\n postfix: seg\n output_ext: nii.gz\n output_dir: /output,\n data_root_dir: /foo/bar,\n output will be: /output/test1/image/image_seg.nii.gz\n separate_folder: whether to save every file in a separate folder, for example: if input filename is\n `image.nii`, postfix is `seg` and folder_path is `output`, if `True`, save as:\n `output/image/image_seg.nii`, if `False`, save as `output/image_seg.nii`. default to `True`.\n print_log: whether to print log about the saved NIfTI file path, etc. 
default to `True`.\n\n \"\"\"\n self.output_dir = output_dir\n self.output_postfix = output_postfix\n self.output_ext = output_ext\n self.resample = resample\n self.mode: GridSampleMode = GridSampleMode(mode)\n self.padding_mode: GridSamplePadMode = GridSamplePadMode(padding_mode)\n self.align_corners = align_corners\n self.dtype = dtype\n self.output_dtype = output_dtype\n self._data_index = 0\n self.squeeze_end_dims = squeeze_end_dims\n self.data_root_dir = data_root_dir\n self.separate_folder = separate_folder\n self.print_log = print_log\n\n def save(self, data: Union[torch.Tensor, np.ndarray], meta_data: Optional[Dict] = None) -> None:\n \"\"\"\n Save data into a Nifti file.\n The meta_data could optionally have the following keys:\n\n - ``'filename_or_obj'`` -- for output file name creation, corresponding to filename or object.\n - ``'original_affine'`` -- for data orientation handling, defaulting to an identity matrix.\n - ``'affine'`` -- for data output affine, defaulting to an identity matrix.\n - ``'spatial_shape'`` -- for data output shape.\n - ``'patch_index'`` -- if the data is a patch of big image, append the patch index to filename.\n\n When meta_data is specified, the saver will try to resample batch data from the space\n defined by \"affine\" to the space defined by \"original_affine\".\n\n If meta_data is None, use the default index (starting from 0) as the filename.\n\n Args:\n data: target data content that to be saved as a NIfTI format file.\n Assuming the data shape starts with a channel dimension and followed by spatial dimensions.\n meta_data: the meta data information corresponding to the data.\n\n See Also\n :py:meth:`monai.data.nifti_writer.write_nifti`\n \"\"\"\n filename = meta_data[Key.FILENAME_OR_OBJ] if meta_data else str(self._data_index)\n self._data_index += 1\n original_affine = meta_data.get(\"original_affine\", None) if meta_data else None\n affine = meta_data.get(\"affine\", None) if meta_data else None\n spatial_shape = meta_data.get(\"spatial_shape\", None) if meta_data else None\n patch_index = meta_data.get(Key.PATCH_INDEX, None) if meta_data else None\n\n if isinstance(data, torch.Tensor):\n data = data.detach().cpu().numpy()\n\n path = create_file_basename(\n postfix=self.output_postfix,\n input_file_name=filename,\n folder_path=self.output_dir,\n data_root_dir=self.data_root_dir,\n separate_folder=self.separate_folder,\n patch_index=patch_index,\n )\n path = f\"{path}{self.output_ext}\"\n # change data shape to be (channel, h, w, d)\n while len(data.shape) < 4:\n data = np.expand_dims(data, -1)\n # change data to \"channel last\" format and write to nifti format file\n data = np.moveaxis(np.asarray(data), 0, -1)\n\n # if desired, remove trailing singleton dimensions\n if self.squeeze_end_dims:\n while data.shape[-1] == 1:\n data = np.squeeze(data, -1)\n\n write_nifti(\n data,\n file_name=path,\n affine=affine,\n target_affine=original_affine,\n resample=self.resample,\n output_spatial_shape=spatial_shape,\n mode=self.mode,\n padding_mode=self.padding_mode,\n align_corners=self.align_corners,\n dtype=self.dtype,\n output_dtype=self.output_dtype,\n )\n\n if self.print_log:\n print(f\"file written: {path}.\")\n\n def save_batch(self, batch_data: Union[torch.Tensor, np.ndarray], meta_data: Optional[Dict] = None) -> None:\n \"\"\"\n Save a batch of data into Nifti format files.\n\n Spatially it supports up to three dimensions, that is, H, HW, HWD for\n 1D, 2D, 3D respectively (with resampling supports for 2D and 3D only).\n\n When saving multiple 
time steps or multiple channels `batch_data`,\n time and/or modality axes should be appended after the batch dimensions.\n For example, the shape of a batch of 2D eight-class\n segmentation probabilities to be saved could be `(batch, 8, 64, 64)`;\n in this case each item in the batch will be saved as (64, 64, 1, 8)\n NIfTI file (the third dimension is reserved as a spatial dimension).\n\n Args:\n batch_data: target batch data content that save into NIfTI format.\n meta_data: every key-value in the meta_data is corresponding to a batch of data.\n\n \"\"\"\n for i, data in enumerate(batch_data): # save a batch of files\n self.save(data=data, meta_data={k: meta_data[k][i] for k in meta_data} if meta_data is not None else None)\n", "path": "monai/data/nifti_saver.py"}], "after_files": [{"content": "# Copyright 2020 - 2021 MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom pathlib import Path\nfrom typing import Dict, Optional, Union\n\nimport numpy as np\nimport torch\n\nfrom monai.config import DtypeLike\nfrom monai.data.nifti_writer import write_nifti\nfrom monai.data.utils import create_file_basename\nfrom monai.utils import GridSampleMode, GridSamplePadMode\nfrom monai.utils import ImageMetaKey as Key\n\n\nclass NiftiSaver:\n \"\"\"\n Save the data as NIfTI file, it can support single data content or a batch of data.\n Typically, the data can be segmentation predictions, call `save` for single data\n or call `save_batch` to save a batch of data together.\n The name of saved file will be `{input_image_name}_{output_postfix}{output_ext}`,\n where the input image name is extracted from the provided meta data dictionary.\n If no meta data provided, use index from 0 as the filename prefix.\n\n Note: image should include channel dimension: [B],C,H,W,[D].\n\n \"\"\"\n\n def __init__(\n self,\n output_dir: Union[Path, str] = \"./\",\n output_postfix: str = \"seg\",\n output_ext: str = \".nii.gz\",\n resample: bool = True,\n mode: Union[GridSampleMode, str] = GridSampleMode.BILINEAR,\n padding_mode: Union[GridSamplePadMode, str] = GridSamplePadMode.BORDER,\n align_corners: bool = False,\n dtype: DtypeLike = np.float64,\n output_dtype: DtypeLike = np.float32,\n squeeze_end_dims: bool = True,\n data_root_dir: str = \"\",\n separate_folder: bool = True,\n print_log: bool = True,\n ) -> None:\n \"\"\"\n Args:\n output_dir: output image directory.\n output_postfix: a string appended to all output file names.\n output_ext: output file extension name.\n resample: whether to convert the data array to it's original coordinate system\n based on `original_affine` in the `meta_data`.\n mode: {``\"bilinear\"``, ``\"nearest\"``}\n This option is used when ``resample = True``.\n Interpolation mode to calculate output values. Defaults to ``\"bilinear\"``.\n See also: https://pytorch.org/docs/stable/nn.functional.html#grid-sample\n padding_mode: {``\"zeros\"``, ``\"border\"``, ``\"reflection\"``}\n This option is used when ``resample = True``.\n Padding mode for outside grid values. 
Defaults to ``\"border\"``.\n See also: https://pytorch.org/docs/stable/nn.functional.html#grid-sample\n align_corners: Geometrically, we consider the pixels of the input as squares rather than points.\n See also: https://pytorch.org/docs/stable/nn.functional.html#grid-sample\n dtype: data type for resampling computation. Defaults to ``np.float64`` for best precision.\n If None, use the data type of input data.\n output_dtype: data type for saving data. Defaults to ``np.float32``.\n squeeze_end_dims: if True, any trailing singleton dimensions will be removed (after the channel\n has been moved to the end). So if input is (C,H,W,D), this will be altered to (H,W,D,C), and\n then if C==1, it will be saved as (H,W,D). If D also ==1, it will be saved as (H,W). If false,\n image will always be saved as (H,W,D,C).\n data_root_dir: if not empty, it specifies the beginning parts of the input file's\n absolute path. it's used to compute `input_file_rel_path`, the relative path to the file from\n `data_root_dir` to preserve folder structure when saving in case there are files in different\n folders with the same file names. for example:\n input_file_name: /foo/bar/test1/image.nii,\n postfix: seg\n output_ext: nii.gz\n output_dir: /output,\n data_root_dir: /foo/bar,\n output will be: /output/test1/image/image_seg.nii.gz\n separate_folder: whether to save every file in a separate folder, for example: if input filename is\n `image.nii`, postfix is `seg` and folder_path is `output`, if `True`, save as:\n `output/image/image_seg.nii`, if `False`, save as `output/image_seg.nii`. default to `True`.\n print_log: whether to print log about the saved NIfTI file path, etc. default to `True`.\n\n \"\"\"\n self.output_dir = output_dir\n self.output_postfix = output_postfix\n self.output_ext = output_ext\n self.resample = resample\n self.mode: GridSampleMode = GridSampleMode(mode)\n self.padding_mode: GridSamplePadMode = GridSamplePadMode(padding_mode)\n self.align_corners = align_corners\n self.dtype = dtype\n self.output_dtype = output_dtype\n self._data_index = 0\n self.squeeze_end_dims = squeeze_end_dims\n self.data_root_dir = data_root_dir\n self.separate_folder = separate_folder\n self.print_log = print_log\n\n def save(self, data: Union[torch.Tensor, np.ndarray], meta_data: Optional[Dict] = None) -> None:\n \"\"\"\n Save data into a NIfTI file.\n The meta_data could optionally have the following keys:\n\n - ``'filename_or_obj'`` -- for output file name creation, corresponding to filename or object.\n - ``'original_affine'`` -- for data orientation handling, defaulting to an identity matrix.\n - ``'affine'`` -- for data output affine, defaulting to an identity matrix.\n - ``'spatial_shape'`` -- for data output shape.\n - ``'patch_index'`` -- if the data is a patch of big image, append the patch index to filename.\n\n When meta_data is specified and `resample=True`, the saver will try to resample batch data from the space\n defined by \"affine\" to the space defined by \"original_affine\".\n\n If meta_data is None, use the default index (starting from 0) as the filename.\n\n Args:\n data: target data content that to be saved as a NIfTI format file.\n Assuming the data shape starts with a channel dimension and followed by spatial dimensions.\n meta_data: the meta data information corresponding to the data.\n\n See Also\n :py:meth:`monai.data.nifti_writer.write_nifti`\n \"\"\"\n filename = meta_data[Key.FILENAME_OR_OBJ] if meta_data else str(self._data_index)\n self._data_index += 1\n original_affine = 
meta_data.get(\"original_affine\", None) if meta_data and self.resample else None\n affine = meta_data.get(\"affine\", None) if meta_data else None\n spatial_shape = meta_data.get(\"spatial_shape\", None) if meta_data else None\n patch_index = meta_data.get(Key.PATCH_INDEX, None) if meta_data else None\n\n if isinstance(data, torch.Tensor):\n data = data.detach().cpu().numpy()\n\n path = create_file_basename(\n postfix=self.output_postfix,\n input_file_name=filename,\n folder_path=self.output_dir,\n data_root_dir=self.data_root_dir,\n separate_folder=self.separate_folder,\n patch_index=patch_index,\n )\n path = f\"{path}{self.output_ext}\"\n # change data shape to be (channel, h, w, d)\n while len(data.shape) < 4:\n data = np.expand_dims(data, -1)\n # change data to \"channel last\" format and write to NIfTI format file\n data = np.moveaxis(np.asarray(data), 0, -1)\n\n # if desired, remove trailing singleton dimensions\n if self.squeeze_end_dims:\n while data.shape[-1] == 1:\n data = np.squeeze(data, -1)\n\n write_nifti(\n data,\n file_name=path,\n affine=affine,\n target_affine=original_affine,\n resample=True,\n output_spatial_shape=spatial_shape,\n mode=self.mode,\n padding_mode=self.padding_mode,\n align_corners=self.align_corners,\n dtype=self.dtype,\n output_dtype=self.output_dtype,\n )\n\n if self.print_log:\n print(f\"file written: {path}.\")\n\n def save_batch(self, batch_data: Union[torch.Tensor, np.ndarray], meta_data: Optional[Dict] = None) -> None:\n \"\"\"\n Save a batch of data into NIfTI format files.\n\n Spatially it supports up to three dimensions, that is, H, HW, HWD for\n 1D, 2D, 3D respectively (with resampling supports for 2D and 3D only).\n\n When saving multiple time steps or multiple channels `batch_data`,\n time and/or modality axes should be appended after the batch dimensions.\n For example, the shape of a batch of 2D eight-class\n segmentation probabilities to be saved could be `(batch, 8, 64, 64)`;\n in this case each item in the batch will be saved as (64, 64, 1, 8)\n NIfTI file (the third dimension is reserved as a spatial dimension).\n\n Args:\n batch_data: target batch data content that save into NIfTI format.\n meta_data: every key-value in the meta_data is corresponding to a batch of data.\n\n \"\"\"\n for i, data in enumerate(batch_data): # save a batch of files\n self.save(data=data, meta_data={k: meta_data[k][i] for k in meta_data} if meta_data is not None else None)\n", "path": "monai/data/nifti_saver.py"}]}
| 3,440 | 899 |
gh_patches_debug_41639
|
rasdani/github-patches
|
git_diff
|
mathesar-foundation__mathesar-477
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Undesirable record grouping behaviours
## Description
Record grouping has a set of behaviours that are not desirable.
* It considers order_by, which leads to the formation of an incorrect query on the backend if we don't group by the sorted column.

* It considers limit and offset. These apply to the grouped result itself, and are unrelated to the record limit & offset.


## Expected behavior
* It should not consider order_by.
* It should not consider limit and offset.
We could also probably have a dedicated API for this. It could also obtain the values for columns to filter the grouped results. Having it as part of the records API makes less sense, since the group count is not a reflection of the record results.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `db/records.py`
Content:
```
1 import logging
2 from sqlalchemy import delete, select, Column, func
3 from sqlalchemy.inspection import inspect
4 from sqlalchemy_filters import apply_filters, apply_sort
5 from sqlalchemy_filters.exceptions import FieldNotFound
6
7 from db.constants import ID
8
9 logger = logging.getLogger(__name__)
10
11
12 # Grouping exceptions follow the sqlalchemy_filters exceptions patterns
13 class BadGroupFormat(Exception):
14 pass
15
16
17 class GroupFieldNotFound(FieldNotFound):
18 pass
19
20
21 def _get_primary_key_column(table):
22 primary_key_list = list(inspect(table).primary_key)
23 # We do not support getting by composite primary keys
24 assert len(primary_key_list) == 1
25 return primary_key_list[0]
26
27
28 def _create_col_objects(table, column_list):
29 return [
30 table.columns[col] if type(col) == str else col
31 for col in column_list
32 ]
33
34
35 def get_record(table, engine, id_value):
36 primary_key_column = _get_primary_key_column(table)
37 query = select(table).where(primary_key_column == id_value)
38 with engine.begin() as conn:
39 result = conn.execute(query).fetchall()
40 assert len(result) <= 1
41 return result[0] if result else None
42
43
44 def get_records(
45 table, engine, limit=None, offset=None, order_by=[], filters=[],
46 ):
47 """
48 Returns records from a table.
49
50 Args:
51 table: SQLAlchemy table object
52 engine: SQLAlchemy engine object
53 limit: int, gives number of rows to return
54 offset: int, gives number of rows to skip
55 order_by: list of dictionaries, where each dictionary has a 'field' and
56 'direction' field.
57 See: https://github.com/centerofci/sqlalchemy-filters#sort-format
58 filters: list of dictionaries, where each dictionary has a 'field' and 'op'
59 field, in addition to an 'value' field if appropriate.
60 See: https://github.com/centerofci/sqlalchemy-filters#filters-format
61 """
62 query = select(table).limit(limit).offset(offset)
63 if order_by is not None:
64 query = apply_sort(query, order_by)
65 if filters is not None:
66 query = apply_filters(query, filters)
67 with engine.begin() as conn:
68 return conn.execute(query).fetchall()
69
70
71 def get_group_counts(
72 table, engine, group_by, limit=None, offset=None, order_by=[], filters=[],
73 ):
74 """
75 Returns counts by specified groupings
76
77 Args:
78 table: SQLAlchemy table object
79 engine: SQLAlchemy engine object
80 limit: int, gives number of rows to return
81 offset: int, gives number of rows to skip
82 group_by: list or tuple of column names or column objects to group by
83 order_by: list of dictionaries, where each dictionary has a 'field' and
84 'direction' field.
85 See: https://github.com/centerofci/sqlalchemy-filters#sort-format
86 filters: list of dictionaries, where each dictionary has a 'field' and 'op'
87 field, in addition to an 'value' field if appropriate.
88 See: https://github.com/centerofci/sqlalchemy-filters#filters-format
89 """
90 if type(group_by) not in (tuple, list):
91 raise BadGroupFormat(f"Group spec {group_by} must be list or tuple.")
92 for field in group_by:
93 if type(field) not in (str, Column):
94 raise BadGroupFormat(f"Group field {field} must be a string or Column.")
95 field_name = field if type(field) == str else field.name
96 if field_name not in table.c:
97 raise GroupFieldNotFound(f"Group field {field} not found in {table}.")
98
99 query = (
100 select(table)
101 .limit(limit)
102 .offset(offset)
103 )
104 if order_by is not None:
105 query = apply_sort(query, order_by)
106 if filters is not None:
107 query = apply_filters(query, filters)
108 subquery = query.subquery()
109
110 group_by = [
111 subquery.columns[col] if type(col) == str else subquery.columns[col.name]
112 for col in group_by
113 ]
114 query = select(*group_by, func.count(subquery.c[ID])).group_by(*group_by)
115 with engine.begin() as conn:
116 records = conn.execute(query).fetchall()
117
118 # Last field is the count, preceding fields are the group by fields
119 counts = {
120 (*record[:-1],): record[-1]
121 for record in records
122 }
123 return counts
124
125
126 def get_distinct_tuple_values(
127 column_list, engine, table=None, limit=None, offset=None,
128 ):
129 """
130 Returns distinct tuples from a given list of columns.
131
132 Args:
133 column_list: list of column names or SQLAlchemy column objects
134 engine: SQLAlchemy engine object
135 table: SQLAlchemy table object
136 limit: int, gives number of rows to return
137 offset: int, gives number of rows to skip
138
139 If no table is given, the column_list must consist entirely of
140 SQLAlchemy column objects associated with a table.
141 """
142 if table is not None:
143 column_objects = _create_col_objects(table, column_list)
144 else:
145 column_objects = column_list
146 try:
147 assert all([type(col) == Column for col in column_objects])
148 except AssertionError as e:
149 logger.error("All columns must be str or sqlalchemy.Column type")
150 raise e
151
152 query = (
153 select(*column_objects)
154 .distinct()
155 .limit(limit)
156 .offset(offset)
157 )
158 with engine.begin() as conn:
159 res = conn.execute(query).fetchall()
160 return [tuple(zip(column_objects, row)) for row in res]
161
162
163 def distinct_tuples_to_filter(distinct_tuples):
164 filters = []
165 for col, value in distinct_tuples:
166 filters.append({
167 "field": col,
168 "op": "==",
169 "value": value,
170 })
171 return filters
172
173
174 def create_record_or_records(table, engine, record_data):
175 """
176 record_data can be a dictionary, tuple, or list of dictionaries or tuples.
177 if record_data is a list, it creates multiple records.
178 """
179 id_value = None
180 with engine.begin() as connection:
181 result = connection.execute(table.insert(), record_data)
182 # If there was only a single record created, return the record.
183 if result.rowcount == 1:
184 # We need to manually commit insertion so that we can retrieve the record.
185 connection.commit()
186 id_value = result.inserted_primary_key[0]
187 if id_value is not None:
188 return get_record(table, engine, id_value)
189 # Do not return any records if multiple rows were added.
190 return None
191
192
193 def create_records_from_csv(
194 table,
195 engine,
196 csv_filename,
197 column_names,
198 header,
199 delimiter=None,
200 escape=None,
201 quote=None,
202 ):
203 with open(csv_filename, 'rb') as csv_file:
204 with engine.begin() as conn:
205 cursor = conn.connection.cursor()
206 relation = '.'.join('"{}"'.format(part) for part in (table.schema, table.name))
207 formatted_columns = '({})'.format(','.join([f'"{column_name}"' for column_name in column_names]))
208
209 copy_sql = f'COPY {relation} {formatted_columns} FROM STDIN CSV'
210 if header:
211 copy_sql += " HEADER"
212 if delimiter:
213 copy_sql += f" DELIMITER E'{delimiter}'"
214 if escape:
215 copy_sql += f" ESCAPE '{escape}'"
216 if quote:
217 if quote == "'":
218 quote = "''"
219 copy_sql += f" QUOTE '{quote}'"
220
221 cursor.copy_expert(copy_sql, csv_file)
222
223
224 def update_record(table, engine, id_value, record_data):
225 primary_key_column = _get_primary_key_column(table)
226 with engine.begin() as connection:
227 connection.execute(
228 table.update().where(primary_key_column == id_value).values(record_data)
229 )
230 return get_record(table, engine, id_value)
231
232
233 def delete_record(table, engine, id_value):
234 primary_key_column = _get_primary_key_column(table)
235 query = delete(table).where(primary_key_column == id_value)
236 with engine.begin() as conn:
237 return conn.execute(query)
238
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/db/records.py b/db/records.py
--- a/db/records.py
+++ b/db/records.py
@@ -4,7 +4,6 @@
from sqlalchemy_filters import apply_filters, apply_sort
from sqlalchemy_filters.exceptions import FieldNotFound
-from db.constants import ID
logger = logging.getLogger(__name__)
@@ -32,13 +31,27 @@
]
+def _get_query(table, limit, offset, order_by, filters):
+ query = select(table).limit(limit).offset(offset)
+ if order_by is not None:
+ query = apply_sort(query, order_by)
+ if filters is not None:
+ query = apply_filters(query, filters)
+ return query
+
+
+def _execute_query(query, engine):
+ with engine.begin() as conn:
+ records = conn.execute(query).fetchall()
+ return records
+
+
def get_record(table, engine, id_value):
primary_key_column = _get_primary_key_column(table)
query = select(table).where(primary_key_column == id_value)
- with engine.begin() as conn:
- result = conn.execute(query).fetchall()
- assert len(result) <= 1
- return result[0] if result else None
+ result = _execute_query(query, engine)
+ assert len(result) <= 1
+ return result[0] if result else None
def get_records(
@@ -59,13 +72,8 @@
field, in addition to an 'value' field if appropriate.
See: https://github.com/centerofci/sqlalchemy-filters#filters-format
"""
- query = select(table).limit(limit).offset(offset)
- if order_by is not None:
- query = apply_sort(query, order_by)
- if filters is not None:
- query = apply_filters(query, filters)
- with engine.begin() as conn:
- return conn.execute(query).fetchall()
+ query = _get_query(table, limit, offset, order_by, filters)
+ return _execute_query(query, engine)
def get_group_counts(
@@ -96,24 +104,17 @@
if field_name not in table.c:
raise GroupFieldNotFound(f"Group field {field} not found in {table}.")
- query = (
- select(table)
- .limit(limit)
- .offset(offset)
- )
- if order_by is not None:
- query = apply_sort(query, order_by)
- if filters is not None:
- query = apply_filters(query, filters)
- subquery = query.subquery()
+ # Get the list of groups that we should count.
+ # We're considering limit and offset here so that we only count relevant groups
+ relevant_groups_query = _get_query(table, limit, offset, order_by, filters)
+ subquery = relevant_groups_query.subquery()
- group_by = [
+ columns = [
subquery.columns[col] if type(col) == str else subquery.columns[col.name]
for col in group_by
]
- query = select(*group_by, func.count(subquery.c[ID])).group_by(*group_by)
- with engine.begin() as conn:
- records = conn.execute(query).fetchall()
+ count_query = select(*columns, func.count(columns[0])).group_by(*columns)
+ records = _execute_query(count_query, engine)
# Last field is the count, preceding fields are the group by fields
counts = {
@@ -155,9 +156,8 @@
.limit(limit)
.offset(offset)
)
- with engine.begin() as conn:
- res = conn.execute(query).fetchall()
- return [tuple(zip(column_objects, row)) for row in res]
+ result = _execute_query(query, engine)
+ return [tuple(zip(column_objects, row)) for row in result]
def distinct_tuples_to_filter(distinct_tuples):
|
{"golden_diff": "diff --git a/db/records.py b/db/records.py\n--- a/db/records.py\n+++ b/db/records.py\n@@ -4,7 +4,6 @@\n from sqlalchemy_filters import apply_filters, apply_sort\n from sqlalchemy_filters.exceptions import FieldNotFound\n \n-from db.constants import ID\n \n logger = logging.getLogger(__name__)\n \n@@ -32,13 +31,27 @@\n ]\n \n \n+def _get_query(table, limit, offset, order_by, filters):\n+ query = select(table).limit(limit).offset(offset)\n+ if order_by is not None:\n+ query = apply_sort(query, order_by)\n+ if filters is not None:\n+ query = apply_filters(query, filters)\n+ return query\n+\n+\n+def _execute_query(query, engine):\n+ with engine.begin() as conn:\n+ records = conn.execute(query).fetchall()\n+ return records\n+\n+\n def get_record(table, engine, id_value):\n primary_key_column = _get_primary_key_column(table)\n query = select(table).where(primary_key_column == id_value)\n- with engine.begin() as conn:\n- result = conn.execute(query).fetchall()\n- assert len(result) <= 1\n- return result[0] if result else None\n+ result = _execute_query(query, engine)\n+ assert len(result) <= 1\n+ return result[0] if result else None\n \n \n def get_records(\n@@ -59,13 +72,8 @@\n field, in addition to an 'value' field if appropriate.\n See: https://github.com/centerofci/sqlalchemy-filters#filters-format\n \"\"\"\n- query = select(table).limit(limit).offset(offset)\n- if order_by is not None:\n- query = apply_sort(query, order_by)\n- if filters is not None:\n- query = apply_filters(query, filters)\n- with engine.begin() as conn:\n- return conn.execute(query).fetchall()\n+ query = _get_query(table, limit, offset, order_by, filters)\n+ return _execute_query(query, engine)\n \n \n def get_group_counts(\n@@ -96,24 +104,17 @@\n if field_name not in table.c:\n raise GroupFieldNotFound(f\"Group field {field} not found in {table}.\")\n \n- query = (\n- select(table)\n- .limit(limit)\n- .offset(offset)\n- )\n- if order_by is not None:\n- query = apply_sort(query, order_by)\n- if filters is not None:\n- query = apply_filters(query, filters)\n- subquery = query.subquery()\n+ # Get the list of groups that we should count.\n+ # We're considering limit and offset here so that we only count relevant groups\n+ relevant_groups_query = _get_query(table, limit, offset, order_by, filters)\n+ subquery = relevant_groups_query.subquery()\n \n- group_by = [\n+ columns = [\n subquery.columns[col] if type(col) == str else subquery.columns[col.name]\n for col in group_by\n ]\n- query = select(*group_by, func.count(subquery.c[ID])).group_by(*group_by)\n- with engine.begin() as conn:\n- records = conn.execute(query).fetchall()\n+ count_query = select(*columns, func.count(columns[0])).group_by(*columns)\n+ records = _execute_query(count_query, engine)\n \n # Last field is the count, preceding fields are the group by fields\n counts = {\n@@ -155,9 +156,8 @@\n .limit(limit)\n .offset(offset)\n )\n- with engine.begin() as conn:\n- res = conn.execute(query).fetchall()\n- return [tuple(zip(column_objects, row)) for row in res]\n+ result = _execute_query(query, engine)\n+ return [tuple(zip(column_objects, row)) for row in result]\n \n \n def distinct_tuples_to_filter(distinct_tuples):\n", "issue": "Undesirable record grouping behaviours\n## Description\r\nRecord grouping has a set of behaviours, that are not desirable.\r\n* It considers order_by, which leads to formation of incorrect query on the backend, if we don't group by the sorted column.\r\n\r\n\r\n* It considers limit and offset. 
These apply on the grouped result itself, and is unrelated to the record limit & offset.\r\n\r\n\r\n\r\n\r\n## Expected behavior\r\n* It should not consider order_by.\r\n* It should not consider limit and offset.\r\n\r\nWe could also probably have a dedicated API for this. It could also obtain the values for columns, to filter the grouped results. Having it as part of records API makes less sense, since the group count is not a reflection of the record results.\n", "before_files": [{"content": "import logging\nfrom sqlalchemy import delete, select, Column, func\nfrom sqlalchemy.inspection import inspect\nfrom sqlalchemy_filters import apply_filters, apply_sort\nfrom sqlalchemy_filters.exceptions import FieldNotFound\n\nfrom db.constants import ID\n\nlogger = logging.getLogger(__name__)\n\n\n# Grouping exceptions follow the sqlalchemy_filters exceptions patterns\nclass BadGroupFormat(Exception):\n pass\n\n\nclass GroupFieldNotFound(FieldNotFound):\n pass\n\n\ndef _get_primary_key_column(table):\n primary_key_list = list(inspect(table).primary_key)\n # We do not support getting by composite primary keys\n assert len(primary_key_list) == 1\n return primary_key_list[0]\n\n\ndef _create_col_objects(table, column_list):\n return [\n table.columns[col] if type(col) == str else col\n for col in column_list\n ]\n\n\ndef get_record(table, engine, id_value):\n primary_key_column = _get_primary_key_column(table)\n query = select(table).where(primary_key_column == id_value)\n with engine.begin() as conn:\n result = conn.execute(query).fetchall()\n assert len(result) <= 1\n return result[0] if result else None\n\n\ndef get_records(\n table, engine, limit=None, offset=None, order_by=[], filters=[],\n):\n \"\"\"\n Returns records from a table.\n\n Args:\n table: SQLAlchemy table object\n engine: SQLAlchemy engine object\n limit: int, gives number of rows to return\n offset: int, gives number of rows to skip\n order_by: list of dictionaries, where each dictionary has a 'field' and\n 'direction' field.\n See: https://github.com/centerofci/sqlalchemy-filters#sort-format\n filters: list of dictionaries, where each dictionary has a 'field' and 'op'\n field, in addition to an 'value' field if appropriate.\n See: https://github.com/centerofci/sqlalchemy-filters#filters-format\n \"\"\"\n query = select(table).limit(limit).offset(offset)\n if order_by is not None:\n query = apply_sort(query, order_by)\n if filters is not None:\n query = apply_filters(query, filters)\n with engine.begin() as conn:\n return conn.execute(query).fetchall()\n\n\ndef get_group_counts(\n table, engine, group_by, limit=None, offset=None, order_by=[], filters=[],\n):\n \"\"\"\n Returns counts by specified groupings\n\n Args:\n table: SQLAlchemy table object\n engine: SQLAlchemy engine object\n limit: int, gives number of rows to return\n offset: int, gives number of rows to skip\n group_by: list or tuple of column names or column objects to group by\n order_by: list of dictionaries, where each dictionary has a 'field' and\n 'direction' field.\n See: https://github.com/centerofci/sqlalchemy-filters#sort-format\n filters: list of dictionaries, where each dictionary has a 'field' and 'op'\n field, in addition to an 'value' field if appropriate.\n See: https://github.com/centerofci/sqlalchemy-filters#filters-format\n \"\"\"\n if type(group_by) not in (tuple, list):\n raise BadGroupFormat(f\"Group spec {group_by} must be list or tuple.\")\n for field in group_by:\n if type(field) not in (str, Column):\n raise BadGroupFormat(f\"Group field 
{field} must be a string or Column.\")\n field_name = field if type(field) == str else field.name\n if field_name not in table.c:\n raise GroupFieldNotFound(f\"Group field {field} not found in {table}.\")\n\n query = (\n select(table)\n .limit(limit)\n .offset(offset)\n )\n if order_by is not None:\n query = apply_sort(query, order_by)\n if filters is not None:\n query = apply_filters(query, filters)\n subquery = query.subquery()\n\n group_by = [\n subquery.columns[col] if type(col) == str else subquery.columns[col.name]\n for col in group_by\n ]\n query = select(*group_by, func.count(subquery.c[ID])).group_by(*group_by)\n with engine.begin() as conn:\n records = conn.execute(query).fetchall()\n\n # Last field is the count, preceding fields are the group by fields\n counts = {\n (*record[:-1],): record[-1]\n for record in records\n }\n return counts\n\n\ndef get_distinct_tuple_values(\n column_list, engine, table=None, limit=None, offset=None,\n):\n \"\"\"\n Returns distinct tuples from a given list of columns.\n\n Args:\n column_list: list of column names or SQLAlchemy column objects\n engine: SQLAlchemy engine object\n table: SQLAlchemy table object\n limit: int, gives number of rows to return\n offset: int, gives number of rows to skip\n\n If no table is given, the column_list must consist entirely of\n SQLAlchemy column objects associated with a table.\n \"\"\"\n if table is not None:\n column_objects = _create_col_objects(table, column_list)\n else:\n column_objects = column_list\n try:\n assert all([type(col) == Column for col in column_objects])\n except AssertionError as e:\n logger.error(\"All columns must be str or sqlalchemy.Column type\")\n raise e\n\n query = (\n select(*column_objects)\n .distinct()\n .limit(limit)\n .offset(offset)\n )\n with engine.begin() as conn:\n res = conn.execute(query).fetchall()\n return [tuple(zip(column_objects, row)) for row in res]\n\n\ndef distinct_tuples_to_filter(distinct_tuples):\n filters = []\n for col, value in distinct_tuples:\n filters.append({\n \"field\": col,\n \"op\": \"==\",\n \"value\": value,\n })\n return filters\n\n\ndef create_record_or_records(table, engine, record_data):\n \"\"\"\n record_data can be a dictionary, tuple, or list of dictionaries or tuples.\n if record_data is a list, it creates multiple records.\n \"\"\"\n id_value = None\n with engine.begin() as connection:\n result = connection.execute(table.insert(), record_data)\n # If there was only a single record created, return the record.\n if result.rowcount == 1:\n # We need to manually commit insertion so that we can retrieve the record.\n connection.commit()\n id_value = result.inserted_primary_key[0]\n if id_value is not None:\n return get_record(table, engine, id_value)\n # Do not return any records if multiple rows were added.\n return None\n\n\ndef create_records_from_csv(\n table,\n engine,\n csv_filename,\n column_names,\n header,\n delimiter=None,\n escape=None,\n quote=None,\n):\n with open(csv_filename, 'rb') as csv_file:\n with engine.begin() as conn:\n cursor = conn.connection.cursor()\n relation = '.'.join('\"{}\"'.format(part) for part in (table.schema, table.name))\n formatted_columns = '({})'.format(','.join([f'\"{column_name}\"' for column_name in column_names]))\n\n copy_sql = f'COPY {relation} {formatted_columns} FROM STDIN CSV'\n if header:\n copy_sql += \" HEADER\"\n if delimiter:\n copy_sql += f\" DELIMITER E'{delimiter}'\"\n if escape:\n copy_sql += f\" ESCAPE '{escape}'\"\n if quote:\n if quote == \"'\":\n quote = \"''\"\n copy_sql += f\" 
QUOTE '{quote}'\"\n\n cursor.copy_expert(copy_sql, csv_file)\n\n\ndef update_record(table, engine, id_value, record_data):\n primary_key_column = _get_primary_key_column(table)\n with engine.begin() as connection:\n connection.execute(\n table.update().where(primary_key_column == id_value).values(record_data)\n )\n return get_record(table, engine, id_value)\n\n\ndef delete_record(table, engine, id_value):\n primary_key_column = _get_primary_key_column(table)\n query = delete(table).where(primary_key_column == id_value)\n with engine.begin() as conn:\n return conn.execute(query)\n", "path": "db/records.py"}], "after_files": [{"content": "import logging\nfrom sqlalchemy import delete, select, Column, func\nfrom sqlalchemy.inspection import inspect\nfrom sqlalchemy_filters import apply_filters, apply_sort\nfrom sqlalchemy_filters.exceptions import FieldNotFound\n\n\nlogger = logging.getLogger(__name__)\n\n\n# Grouping exceptions follow the sqlalchemy_filters exceptions patterns\nclass BadGroupFormat(Exception):\n pass\n\n\nclass GroupFieldNotFound(FieldNotFound):\n pass\n\n\ndef _get_primary_key_column(table):\n primary_key_list = list(inspect(table).primary_key)\n # We do not support getting by composite primary keys\n assert len(primary_key_list) == 1\n return primary_key_list[0]\n\n\ndef _create_col_objects(table, column_list):\n return [\n table.columns[col] if type(col) == str else col\n for col in column_list\n ]\n\n\ndef _get_query(table, limit, offset, order_by, filters):\n query = select(table).limit(limit).offset(offset)\n if order_by is not None:\n query = apply_sort(query, order_by)\n if filters is not None:\n query = apply_filters(query, filters)\n return query\n\n\ndef _execute_query(query, engine):\n with engine.begin() as conn:\n records = conn.execute(query).fetchall()\n return records\n\n\ndef get_record(table, engine, id_value):\n primary_key_column = _get_primary_key_column(table)\n query = select(table).where(primary_key_column == id_value)\n result = _execute_query(query, engine)\n assert len(result) <= 1\n return result[0] if result else None\n\n\ndef get_records(\n table, engine, limit=None, offset=None, order_by=[], filters=[],\n):\n \"\"\"\n Returns records from a table.\n\n Args:\n table: SQLAlchemy table object\n engine: SQLAlchemy engine object\n limit: int, gives number of rows to return\n offset: int, gives number of rows to skip\n order_by: list of dictionaries, where each dictionary has a 'field' and\n 'direction' field.\n See: https://github.com/centerofci/sqlalchemy-filters#sort-format\n filters: list of dictionaries, where each dictionary has a 'field' and 'op'\n field, in addition to an 'value' field if appropriate.\n See: https://github.com/centerofci/sqlalchemy-filters#filters-format\n \"\"\"\n query = _get_query(table, limit, offset, order_by, filters)\n return _execute_query(query, engine)\n\n\ndef get_group_counts(\n table, engine, group_by, limit=None, offset=None, order_by=[], filters=[],\n):\n \"\"\"\n Returns counts by specified groupings\n\n Args:\n table: SQLAlchemy table object\n engine: SQLAlchemy engine object\n limit: int, gives number of rows to return\n offset: int, gives number of rows to skip\n group_by: list or tuple of column names or column objects to group by\n order_by: list of dictionaries, where each dictionary has a 'field' and\n 'direction' field.\n See: https://github.com/centerofci/sqlalchemy-filters#sort-format\n filters: list of dictionaries, where each dictionary has a 'field' and 'op'\n field, in addition to an 'value' 
field if appropriate.\n See: https://github.com/centerofci/sqlalchemy-filters#filters-format\n \"\"\"\n if type(group_by) not in (tuple, list):\n raise BadGroupFormat(f\"Group spec {group_by} must be list or tuple.\")\n for field in group_by:\n if type(field) not in (str, Column):\n raise BadGroupFormat(f\"Group field {field} must be a string or Column.\")\n field_name = field if type(field) == str else field.name\n if field_name not in table.c:\n raise GroupFieldNotFound(f\"Group field {field} not found in {table}.\")\n\n # Get the list of groups that we should count.\n # We're considering limit and offset here so that we only count relevant groups\n relevant_groups_query = _get_query(table, limit, offset, order_by, filters)\n subquery = relevant_groups_query.subquery()\n\n columns = [\n subquery.columns[col] if type(col) == str else subquery.columns[col.name]\n for col in group_by\n ]\n count_query = select(*columns, func.count(columns[0])).group_by(*columns)\n records = _execute_query(count_query, engine)\n\n # Last field is the count, preceding fields are the group by fields\n counts = {\n (*record[:-1],): record[-1]\n for record in records\n }\n return counts\n\n\ndef get_distinct_tuple_values(\n column_list, engine, table=None, limit=None, offset=None,\n):\n \"\"\"\n Returns distinct tuples from a given list of columns.\n\n Args:\n column_list: list of column names or SQLAlchemy column objects\n engine: SQLAlchemy engine object\n table: SQLAlchemy table object\n limit: int, gives number of rows to return\n offset: int, gives number of rows to skip\n\n If no table is given, the column_list must consist entirely of\n SQLAlchemy column objects associated with a table.\n \"\"\"\n if table is not None:\n column_objects = _create_col_objects(table, column_list)\n else:\n column_objects = column_list\n try:\n assert all([type(col) == Column for col in column_objects])\n except AssertionError as e:\n logger.error(\"All columns must be str or sqlalchemy.Column type\")\n raise e\n\n query = (\n select(*column_objects)\n .distinct()\n .limit(limit)\n .offset(offset)\n )\n result = _execute_query(query, engine)\n return [tuple(zip(column_objects, row)) for row in result]\n\n\ndef distinct_tuples_to_filter(distinct_tuples):\n filters = []\n for col, value in distinct_tuples:\n filters.append({\n \"field\": col,\n \"op\": \"==\",\n \"value\": value,\n })\n return filters\n\n\ndef create_record_or_records(table, engine, record_data):\n \"\"\"\n record_data can be a dictionary, tuple, or list of dictionaries or tuples.\n if record_data is a list, it creates multiple records.\n \"\"\"\n id_value = None\n with engine.begin() as connection:\n result = connection.execute(table.insert(), record_data)\n # If there was only a single record created, return the record.\n if result.rowcount == 1:\n # We need to manually commit insertion so that we can retrieve the record.\n connection.commit()\n id_value = result.inserted_primary_key[0]\n if id_value is not None:\n return get_record(table, engine, id_value)\n # Do not return any records if multiple rows were added.\n return None\n\n\ndef create_records_from_csv(\n table,\n engine,\n csv_filename,\n column_names,\n header,\n delimiter=None,\n escape=None,\n quote=None,\n):\n with open(csv_filename, 'rb') as csv_file:\n with engine.begin() as conn:\n cursor = conn.connection.cursor()\n relation = '.'.join('\"{}\"'.format(part) for part in (table.schema, table.name))\n formatted_columns = '({})'.format(','.join([f'\"{column_name}\"' for column_name in 
column_names]))\n\n copy_sql = f'COPY {relation} {formatted_columns} FROM STDIN CSV'\n if header:\n copy_sql += \" HEADER\"\n if delimiter:\n copy_sql += f\" DELIMITER E'{delimiter}'\"\n if escape:\n copy_sql += f\" ESCAPE '{escape}'\"\n if quote:\n if quote == \"'\":\n quote = \"''\"\n copy_sql += f\" QUOTE '{quote}'\"\n\n cursor.copy_expert(copy_sql, csv_file)\n\n\ndef update_record(table, engine, id_value, record_data):\n primary_key_column = _get_primary_key_column(table)\n with engine.begin() as connection:\n connection.execute(\n table.update().where(primary_key_column == id_value).values(record_data)\n )\n return get_record(table, engine, id_value)\n\n\ndef delete_record(table, engine, id_value):\n primary_key_column = _get_primary_key_column(table)\n query = delete(table).where(primary_key_column == id_value)\n with engine.begin() as conn:\n return conn.execute(query)\n", "path": "db/records.py"}]}
| 3,063 | 877 |
gh_patches_debug_12083
|
rasdani/github-patches
|
git_diff
|
huggingface__text-generation-inference-609
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Can't load local flan-small models due to weight conversion failure
### System Info
OS Version:
Distributor ID: Ubuntu
Description: Ubuntu 20.04.3 LTS
Release: 20.04
Codename: focal
8 A-100 GPUS
Using latest text-generation-inference docker version.
I've run fine-tuning on a [Flan-T5-Small](https://huggingface.co/google/flan-t5-small) model and saved the checkpoint in my local directory. I've stored this local model checkpoint in my data2 volume and run the command as follows:
docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data2 ghcr.io/huggingface/text-generation-inference:0.9 --model-id /data2/checkpoint-20 --num-shard $num_shard
But I run into errors when converting the weights, as mentioned below. 
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
Run docker command above.
I get this error now:
2023-07-12T05:45:31.707548Z INFO text_generation_launcher: Args { model_id: "/data2/checkpoint-20", revision: None, sharded: None, num_shard: Some(2), quantize: None, dtype: None, trust_remote_code: false, max_concurrent_requests: 128, max_best_of: 2, max_stop_sequences: 4, max_input_length: 1024, max_total_tokens: 2048, waiting_served_ratio: 1.2, max_batch_prefill_tokens: 4096, max_batch_total_tokens: 16000, max_waiting_tokens: 20, hostname: "0341f92fe465", port: 80, shard_uds_path: "/tmp/text-generation-server", master_addr: "localhost", master_port: 29500, huggingface_hub_cache: Some("/data"), weights_cache_override: None, disable_custom_kernels: false, json_output: false, otlp_endpoint: None, cors_allow_origin: [], watermark_gamma: None, watermark_delta: None, ngrok: false, ngrok_authtoken: None, ngrok_domain: None, ngrok_username: None, ngrok_password: None, env: false }
2023-07-12T05:45:31.707602Z INFO text_generation_launcher: Sharding model on 2 processes
2023-07-12T05:45:31.707781Z INFO text_generation_launcher: Starting download process.
2023-07-12T05:45:33.261253Z WARN download: text_generation_launcher: No safetensors weights found for model /data2/checkpoint-20 at revision None. Converting PyTorch weights to safetensors.
2023-07-12T05:45:33.711218Z ERROR text_generation_launcher: Download encountered an error: Traceback (most recent call last):
File "/opt/conda/bin/text-generation-server", line 8, in <module>
sys.exit(app())
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/cli.py", line 164, in download_weights
utils.convert_files(local_pt_files, local_st_files)
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/utils/convert.py", line 53, in convert_files
convert_file(pt_file, sf_file)
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/utils/convert.py", line 21, in convert_file
if "state_dict" in loaded:
TypeError: argument of type 'Seq2SeqTrainingArguments' is not iterable
Error: DownloadError
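
A minimal sketch of what the traceback suggests is going on (the loop and file names below are illustrative assumptions, not the actual text-generation-inference code): the conversion step appears to `torch.load` every `.bin` file it finds, and a Trainer checkpoint directory also contains `training_args.bin`, which unpickles to a `Seq2SeqTrainingArguments` object rather than a dict of tensors, so the `"state_dict" in loaded` membership test raises `TypeError`.

``` python
import torch
from pathlib import Path

def convert_checkpoint_dir(checkpoint_dir):
    """Sketch: only convert real weight files, skip pickled trainer state."""
    for pt_file in sorted(Path(checkpoint_dir).glob("*.bin")):
        loaded = torch.load(pt_file, map_location="cpu")
        # training_args.bin unpickles to a Seq2SeqTrainingArguments object;
        # `"state_dict" in loaded` then raises TypeError because that object
        # does not support membership tests.
        if not isinstance(loaded, dict):
            print(f"skipping non-weight file {pt_file.name}")
            continue
        state_dict = loaded.get("state_dict", loaded)
        print(f"would convert {pt_file.name} ({len(state_dict)} tensors)")
```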
### Expected behavior
I would expect the local model to load just as the models from the Hugging Face Hub do. Appreciate any help!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `server/text_generation_server/utils/convert.py`
Content:
```
1 import datetime
2 import torch
3 import os
4
5 from loguru import logger
6 from pathlib import Path
7 from safetensors.torch import save_file, load_file, _find_shared_tensors, _is_complete
8 from typing import List, Dict
9 from collections import defaultdict
10
11
12 def _remove_duplicate_names(
13 state_dict: Dict[str, torch.Tensor],
14 *,
15 preferred_names: List[str] = None,
16 discard_names: List[str] = None,
17 ) -> Dict[str, List[str]]:
18 if preferred_names is None:
19 preferred_names = []
20 preferred_names = set(preferred_names)
21 if discard_names is None:
22 discard_names = []
23 discard_names = set(discard_names)
24
25 shareds = _find_shared_tensors(state_dict)
26 to_remove = defaultdict(list)
27 for shared in shareds:
28 complete_names = set(
29 [name for name in shared if _is_complete(state_dict[name])]
30 )
31 if not complete_names:
32 raise RuntimeError(
33 f"Error while trying to find names to remove to save state dict, but found no suitable name to keep for saving amongst: {shared}. None is covering the entire storage.Refusing to save/load the model since you could be storing much more memory than needed. Please refer to https://huggingface.co/docs/safetensors/torch_shared_tensors for more information. Or open an issue."
34 )
35
36 keep_name = sorted(list(complete_names))[0]
37
38 # Mecanism to preferentially select keys to keep
39 # coming from the on-disk file to allow
40 # loading models saved with a different choice
41 # of keep_name
42 preferred = complete_names.difference(discard_names)
43 if preferred:
44 keep_name = sorted(list(preferred))[0]
45
46 if preferred_names:
47 preferred = preferred_names.intersection(complete_names)
48 if preferred:
49 keep_name = sorted(list(preferred))[0]
50 for name in sorted(shared):
51 if name != keep_name:
52 to_remove[keep_name].append(name)
53 return to_remove
54
55
56 def convert_file(pt_file: Path, sf_file: Path, discard_names: List[str]):
57 """
58 Convert a pytorch file to a safetensors file
59 This will remove duplicate tensors from the file.
60
61 Unfortunately, this might not respect *transformers* convention.
62 Forcing us to check for potentially different keys during load when looking
63 for specific tensors (making tensor sharing explicit).
64 """
65 loaded = torch.load(pt_file, map_location="cpu")
66 if "state_dict" in loaded:
67 loaded = loaded["state_dict"]
68 to_removes = _remove_duplicate_names(loaded, discard_names=discard_names)
69
70 metadata = {"format": "pt"}
71 for kept_name, to_remove_group in to_removes.items():
72 for to_remove in to_remove_group:
73 if to_remove not in metadata:
74 metadata[to_remove] = kept_name
75 del loaded[to_remove]
76 # Force tensors to be contiguous
77 loaded = {k: v.contiguous() for k, v in loaded.items()}
78
79 dirname = os.path.dirname(sf_file)
80 os.makedirs(dirname, exist_ok=True)
81 save_file(loaded, sf_file, metadata=metadata)
82 reloaded = load_file(sf_file)
83 for k in loaded:
84 pt_tensor = loaded[k]
85 sf_tensor = reloaded[k]
86 if not torch.equal(pt_tensor, sf_tensor):
87 raise RuntimeError(f"The output tensors do not match for key {k}")
88
89
90 def convert_files(pt_files: List[Path], sf_files: List[Path], discard_names: List[str]):
91 assert len(pt_files) == len(sf_files)
92
93 N = len(pt_files)
94 # We do this instead of using tqdm because we want to parse the logs with the launcher
95
96 for i, (pt_file, sf_file) in enumerate(zip(pt_files, sf_files)):
97 start = datetime.datetime.now()
98 convert_file(pt_file, sf_file, discard_names)
99 elapsed = datetime.datetime.now() - start
100 logger.info(f"Convert: [{i + 1}/{N}] -- Took: {elapsed}")
101
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/server/text_generation_server/utils/convert.py b/server/text_generation_server/utils/convert.py
--- a/server/text_generation_server/utils/convert.py
+++ b/server/text_generation_server/utils/convert.py
@@ -94,6 +94,14 @@
# We do this instead of using tqdm because we want to parse the logs with the launcher
for i, (pt_file, sf_file) in enumerate(zip(pt_files, sf_files)):
+ # Skip blacklisted files
+ if (
+ "arguments" in pt_file.name
+ or "args" in pt_file.name
+ or "training" in pt_file.name
+ ):
+ continue
+
start = datetime.datetime.now()
convert_file(pt_file, sf_file, discard_names)
elapsed = datetime.datetime.now() - start
|
{"golden_diff": "diff --git a/server/text_generation_server/utils/convert.py b/server/text_generation_server/utils/convert.py\n--- a/server/text_generation_server/utils/convert.py\n+++ b/server/text_generation_server/utils/convert.py\n@@ -94,6 +94,14 @@\n # We do this instead of using tqdm because we want to parse the logs with the launcher\n \n for i, (pt_file, sf_file) in enumerate(zip(pt_files, sf_files)):\n+ # Skip blacklisted files\n+ if (\n+ \"arguments\" in pt_file.name\n+ or \"args\" in pt_file.name\n+ or \"training\" in pt_file.name\n+ ):\n+ continue\n+\n start = datetime.datetime.now()\n convert_file(pt_file, sf_file, discard_names)\n elapsed = datetime.datetime.now() - start\n", "issue": "Can't load local flan-small models due to weight conversion failure \n### System Info\n\nOS Version: \r\nDistributor ID: Ubuntu\r\nDescription: Ubuntu 20.04.3 LTS\r\nRelease: 20.04\r\nCodename: focal\r\n\r\n8 A-100 GPUS\r\n\r\nUsing latest text-generation-inference docker version. \r\n\r\nI've run fine-tuning on a [Flan-T5-Small](https://huggingface.co/google/flan-t5-small) model and saved the checkpoint in my local directory. I've stored this local model checkpoint in my data2 volume and run the command as follows:\r\ndocker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data2 ghcr.io/huggingface/text-generation-inference:0.9 --model-id /data2/checkpoint-20 --num-shard $num_shard\r\n\r\nBut I run into errors with the converting weights as mentioned below. \n\n### Information\n\n- [X] Docker\n- [ ] The CLI directly\n\n### Tasks\n\n- [X] An officially supported command\n- [ ] My own modifications\n\n### Reproduction\n\nRun docker command above. \r\n\r\nI get this error now: \r\n\r\n2023-07-12T05:45:31.707548Z INFO text_generation_launcher: Args { model_id: \"/data2/checkpoint-20\", revision: None, sharded: None, num_shard: Some(2), quantize: None, dtype: None, trust_remote_code: false, max_concurrent_requests: 128, max_best_of: 2, max_stop_sequences: 4, max_input_length: 1024, max_total_tokens: 2048, waiting_served_ratio: 1.2, max_batch_prefill_tokens: 4096, max_batch_total_tokens: 16000, max_waiting_tokens: 20, hostname: \"0341f92fe465\", port: 80, shard_uds_path: \"/tmp/text-generation-server\", master_addr: \"localhost\", master_port: 29500, huggingface_hub_cache: Some(\"/data\"), weights_cache_override: None, disable_custom_kernels: false, json_output: false, otlp_endpoint: None, cors_allow_origin: [], watermark_gamma: None, watermark_delta: None, ngrok: false, ngrok_authtoken: None, ngrok_domain: None, ngrok_username: None, ngrok_password: None, env: false }\r\n2023-07-12T05:45:31.707602Z INFO text_generation_launcher: Sharding model on 2 processes\r\n2023-07-12T05:45:31.707781Z INFO text_generation_launcher: Starting download process.\r\n2023-07-12T05:45:33.261253Z WARN download: text_generation_launcher: No safetensors weights found for model /data2/checkpoint-20 at revision None. 
Converting PyTorch weights to safetensors.\r\n\r\n2023-07-12T05:45:33.711218Z ERROR text_generation_launcher: Download encountered an error: Traceback (most recent call last):\r\n\r\n File \"/opt/conda/bin/text-generation-server\", line 8, in <module>\r\n sys.exit(app())\r\n\r\n File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/cli.py\", line 164, in download_weights\r\n utils.convert_files(local_pt_files, local_st_files)\r\n\r\n File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/utils/convert.py\", line 53, in convert_files\r\n convert_file(pt_file, sf_file)\r\n\r\n File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/utils/convert.py\", line 21, in convert_file\r\n if \"state_dict\" in loaded:\r\n\r\nTypeError: argument of type 'Seq2SeqTrainingArguments' is not iterable\r\n\r\n\r\nError: DownloadError\r\n\n\n### Expected behavior\n\nI would expect the local model to load as do the models from the hugging-face library. Appreciate any help!\n", "before_files": [{"content": "import datetime\nimport torch\nimport os\n\nfrom loguru import logger\nfrom pathlib import Path\nfrom safetensors.torch import save_file, load_file, _find_shared_tensors, _is_complete\nfrom typing import List, Dict\nfrom collections import defaultdict\n\n\ndef _remove_duplicate_names(\n state_dict: Dict[str, torch.Tensor],\n *,\n preferred_names: List[str] = None,\n discard_names: List[str] = None,\n) -> Dict[str, List[str]]:\n if preferred_names is None:\n preferred_names = []\n preferred_names = set(preferred_names)\n if discard_names is None:\n discard_names = []\n discard_names = set(discard_names)\n\n shareds = _find_shared_tensors(state_dict)\n to_remove = defaultdict(list)\n for shared in shareds:\n complete_names = set(\n [name for name in shared if _is_complete(state_dict[name])]\n )\n if not complete_names:\n raise RuntimeError(\n f\"Error while trying to find names to remove to save state dict, but found no suitable name to keep for saving amongst: {shared}. None is covering the entire storage.Refusing to save/load the model since you could be storing much more memory than needed. Please refer to https://huggingface.co/docs/safetensors/torch_shared_tensors for more information. 
Or open an issue.\"\n )\n\n keep_name = sorted(list(complete_names))[0]\n\n # Mecanism to preferentially select keys to keep\n # coming from the on-disk file to allow\n # loading models saved with a different choice\n # of keep_name\n preferred = complete_names.difference(discard_names)\n if preferred:\n keep_name = sorted(list(preferred))[0]\n\n if preferred_names:\n preferred = preferred_names.intersection(complete_names)\n if preferred:\n keep_name = sorted(list(preferred))[0]\n for name in sorted(shared):\n if name != keep_name:\n to_remove[keep_name].append(name)\n return to_remove\n\n\ndef convert_file(pt_file: Path, sf_file: Path, discard_names: List[str]):\n \"\"\"\n Convert a pytorch file to a safetensors file\n This will remove duplicate tensors from the file.\n\n Unfortunately, this might not respect *transformers* convention.\n Forcing us to check for potentially different keys during load when looking\n for specific tensors (making tensor sharing explicit).\n \"\"\"\n loaded = torch.load(pt_file, map_location=\"cpu\")\n if \"state_dict\" in loaded:\n loaded = loaded[\"state_dict\"]\n to_removes = _remove_duplicate_names(loaded, discard_names=discard_names)\n\n metadata = {\"format\": \"pt\"}\n for kept_name, to_remove_group in to_removes.items():\n for to_remove in to_remove_group:\n if to_remove not in metadata:\n metadata[to_remove] = kept_name\n del loaded[to_remove]\n # Force tensors to be contiguous\n loaded = {k: v.contiguous() for k, v in loaded.items()}\n\n dirname = os.path.dirname(sf_file)\n os.makedirs(dirname, exist_ok=True)\n save_file(loaded, sf_file, metadata=metadata)\n reloaded = load_file(sf_file)\n for k in loaded:\n pt_tensor = loaded[k]\n sf_tensor = reloaded[k]\n if not torch.equal(pt_tensor, sf_tensor):\n raise RuntimeError(f\"The output tensors do not match for key {k}\")\n\n\ndef convert_files(pt_files: List[Path], sf_files: List[Path], discard_names: List[str]):\n assert len(pt_files) == len(sf_files)\n\n N = len(pt_files)\n # We do this instead of using tqdm because we want to parse the logs with the launcher\n\n for i, (pt_file, sf_file) in enumerate(zip(pt_files, sf_files)):\n start = datetime.datetime.now()\n convert_file(pt_file, sf_file, discard_names)\n elapsed = datetime.datetime.now() - start\n logger.info(f\"Convert: [{i + 1}/{N}] -- Took: {elapsed}\")\n", "path": "server/text_generation_server/utils/convert.py"}], "after_files": [{"content": "import datetime\nimport torch\nimport os\n\nfrom loguru import logger\nfrom pathlib import Path\nfrom safetensors.torch import save_file, load_file, _find_shared_tensors, _is_complete\nfrom typing import List, Dict\nfrom collections import defaultdict\n\n\ndef _remove_duplicate_names(\n state_dict: Dict[str, torch.Tensor],\n *,\n preferred_names: List[str] = None,\n discard_names: List[str] = None,\n) -> Dict[str, List[str]]:\n if preferred_names is None:\n preferred_names = []\n preferred_names = set(preferred_names)\n if discard_names is None:\n discard_names = []\n discard_names = set(discard_names)\n\n shareds = _find_shared_tensors(state_dict)\n to_remove = defaultdict(list)\n for shared in shareds:\n complete_names = set(\n [name for name in shared if _is_complete(state_dict[name])]\n )\n if not complete_names:\n raise RuntimeError(\n f\"Error while trying to find names to remove to save state dict, but found no suitable name to keep for saving amongst: {shared}. None is covering the entire storage.Refusing to save/load the model since you could be storing much more memory than needed. 
Please refer to https://huggingface.co/docs/safetensors/torch_shared_tensors for more information. Or open an issue.\"\n )\n\n keep_name = sorted(list(complete_names))[0]\n\n # Mecanism to preferentially select keys to keep\n # coming from the on-disk file to allow\n # loading models saved with a different choice\n # of keep_name\n preferred = complete_names.difference(discard_names)\n if preferred:\n keep_name = sorted(list(preferred))[0]\n\n if preferred_names:\n preferred = preferred_names.intersection(complete_names)\n if preferred:\n keep_name = sorted(list(preferred))[0]\n for name in sorted(shared):\n if name != keep_name:\n to_remove[keep_name].append(name)\n return to_remove\n\n\ndef convert_file(pt_file: Path, sf_file: Path, discard_names: List[str]):\n \"\"\"\n Convert a pytorch file to a safetensors file\n This will remove duplicate tensors from the file.\n\n Unfortunately, this might not respect *transformers* convention.\n Forcing us to check for potentially different keys during load when looking\n for specific tensors (making tensor sharing explicit).\n \"\"\"\n loaded = torch.load(pt_file, map_location=\"cpu\")\n if \"state_dict\" in loaded:\n loaded = loaded[\"state_dict\"]\n to_removes = _remove_duplicate_names(loaded, discard_names=discard_names)\n\n metadata = {\"format\": \"pt\"}\n for kept_name, to_remove_group in to_removes.items():\n for to_remove in to_remove_group:\n if to_remove not in metadata:\n metadata[to_remove] = kept_name\n del loaded[to_remove]\n # Force tensors to be contiguous\n loaded = {k: v.contiguous() for k, v in loaded.items()}\n\n dirname = os.path.dirname(sf_file)\n os.makedirs(dirname, exist_ok=True)\n save_file(loaded, sf_file, metadata=metadata)\n reloaded = load_file(sf_file)\n for k in loaded:\n pt_tensor = loaded[k]\n sf_tensor = reloaded[k]\n if not torch.equal(pt_tensor, sf_tensor):\n raise RuntimeError(f\"The output tensors do not match for key {k}\")\n\n\ndef convert_files(pt_files: List[Path], sf_files: List[Path], discard_names: List[str]):\n assert len(pt_files) == len(sf_files)\n\n N = len(pt_files)\n # We do this instead of using tqdm because we want to parse the logs with the launcher\n\n for i, (pt_file, sf_file) in enumerate(zip(pt_files, sf_files)):\n # Skip blacklisted files\n if (\n \"arguments\" in pt_file.name\n or \"args\" in pt_file.name\n or \"training\" in pt_file.name\n ):\n continue\n\n start = datetime.datetime.now()\n convert_file(pt_file, sf_file, discard_names)\n elapsed = datetime.datetime.now() - start\n logger.info(f\"Convert: [{i + 1}/{N}] -- Took: {elapsed}\")\n", "path": "server/text_generation_server/utils/convert.py"}]}
| 2,269 | 178 |
gh_patches_debug_354
|
rasdani/github-patches
|
git_diff
|
sanic-org__sanic-1343
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pin versions for LTS release
I think that the versions of (some) dependencies should be allowed to float, but when we are ready for an LTS release, the versions should be pinned at that time.
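
As a rough illustration of the difference (the package names and version numbers below are placeholders, not a concrete proposal), floating vs. pinned requirement strings in `setup.py` would look something like this:

``` python
# Floating ranges: new upstream releases are picked up automatically.
floating_requirements = [
    'httptools>=0.0.10',
    'aiofiles>=0.3.0',
]

# Pinned for an LTS release: every install resolves to the exact versions
# that were tested at release time (numbers here are placeholders).
pinned_requirements = [
    'httptools==0.0.10',
    'aiofiles==0.3.0',
]
```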
@r0fls @ahopkins @seemethere @ashleysommer @yunstanford @ahopkins
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 """
2 Sanic
3 """
4 import codecs
5 import os
6 import re
7 from distutils.errors import DistutilsPlatformError
8 from distutils.util import strtobool
9
10 from setuptools import setup
11
12
13 def open_local(paths, mode='r', encoding='utf8'):
14 path = os.path.join(
15 os.path.abspath(os.path.dirname(__file__)),
16 *paths
17 )
18
19 return codecs.open(path, mode, encoding)
20
21
22 with open_local(['sanic', '__init__.py'], encoding='latin1') as fp:
23 try:
24 version = re.findall(r"^__version__ = '([^']+)'\r?$",
25 fp.read(), re.M)[0]
26 except IndexError:
27 raise RuntimeError('Unable to determine version.')
28
29
30 with open_local(['README.rst']) as rm:
31 long_description = rm.read()
32
33 setup_kwargs = {
34 'name': 'sanic',
35 'version': version,
36 'url': 'http://github.com/channelcat/sanic/',
37 'license': 'MIT',
38 'author': 'Channel Cat',
39 'author_email': '[email protected]',
40 'description': (
41 'A microframework based on uvloop, httptools, and learnings of flask'),
42 'long_description': long_description,
43 'packages': ['sanic'],
44 'platforms': 'any',
45 'classifiers': [
46 'Development Status :: 4 - Beta',
47 'Environment :: Web Environment',
48 'License :: OSI Approved :: MIT License',
49 'Programming Language :: Python :: 3.5',
50 'Programming Language :: Python :: 3.6',
51 ],
52 }
53
54 env_dependency = '; sys_platform != "win32" and implementation_name == "cpython"'
55 ujson = 'ujson>=1.35' + env_dependency
56 uvloop = 'uvloop>=0.5.3' + env_dependency
57
58 requirements = [
59 'httptools>=0.0.9',
60 uvloop,
61 ujson,
62 'aiofiles>=0.3.0',
63 'websockets>=5.0,<6.0',
64 'multidict>=4.0,<5.0',
65 ]
66 if strtobool(os.environ.get("SANIC_NO_UJSON", "no")):
67 print("Installing without uJSON")
68 requirements.remove(ujson)
69
70 # 'nt' means windows OS
71 if strtobool(os.environ.get("SANIC_NO_UVLOOP", "no")):
72 print("Installing without uvLoop")
73 requirements.remove(uvloop)
74
75 setup_kwargs['install_requires'] = requirements
76 setup(**setup_kwargs)
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -56,7 +56,7 @@
uvloop = 'uvloop>=0.5.3' + env_dependency
requirements = [
- 'httptools>=0.0.9',
+ 'httptools>=0.0.10',
uvloop,
ujson,
'aiofiles>=0.3.0',
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -56,7 +56,7 @@\n uvloop = 'uvloop>=0.5.3' + env_dependency\n \n requirements = [\n- 'httptools>=0.0.9',\n+ 'httptools>=0.0.10',\n uvloop,\n ujson,\n 'aiofiles>=0.3.0',\n", "issue": "Pin versions for LTS release\nI think that versions of (some) should be allowed to float but when we are ready for an LTS release, the versions should be pinned at that time.\r\n\r\n@r0fls @ahopkins @seemethere @ashleysommer @yunstanford @ahopkins \n", "before_files": [{"content": "\"\"\"\nSanic\n\"\"\"\nimport codecs\nimport os\nimport re\nfrom distutils.errors import DistutilsPlatformError\nfrom distutils.util import strtobool\n\nfrom setuptools import setup\n\n\ndef open_local(paths, mode='r', encoding='utf8'):\n path = os.path.join(\n os.path.abspath(os.path.dirname(__file__)),\n *paths\n )\n\n return codecs.open(path, mode, encoding)\n\n\nwith open_local(['sanic', '__init__.py'], encoding='latin1') as fp:\n try:\n version = re.findall(r\"^__version__ = '([^']+)'\\r?$\",\n fp.read(), re.M)[0]\n except IndexError:\n raise RuntimeError('Unable to determine version.')\n\n\nwith open_local(['README.rst']) as rm:\n long_description = rm.read()\n\nsetup_kwargs = {\n 'name': 'sanic',\n 'version': version,\n 'url': 'http://github.com/channelcat/sanic/',\n 'license': 'MIT',\n 'author': 'Channel Cat',\n 'author_email': '[email protected]',\n 'description': (\n 'A microframework based on uvloop, httptools, and learnings of flask'),\n 'long_description': long_description,\n 'packages': ['sanic'],\n 'platforms': 'any',\n 'classifiers': [\n 'Development Status :: 4 - Beta',\n 'Environment :: Web Environment',\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n ],\n}\n\nenv_dependency = '; sys_platform != \"win32\" and implementation_name == \"cpython\"'\nujson = 'ujson>=1.35' + env_dependency\nuvloop = 'uvloop>=0.5.3' + env_dependency\n\nrequirements = [\n 'httptools>=0.0.9',\n uvloop,\n ujson,\n 'aiofiles>=0.3.0',\n 'websockets>=5.0,<6.0',\n 'multidict>=4.0,<5.0',\n]\nif strtobool(os.environ.get(\"SANIC_NO_UJSON\", \"no\")):\n print(\"Installing without uJSON\")\n requirements.remove(ujson)\n\n# 'nt' means windows OS\nif strtobool(os.environ.get(\"SANIC_NO_UVLOOP\", \"no\")):\n print(\"Installing without uvLoop\")\n requirements.remove(uvloop)\n\nsetup_kwargs['install_requires'] = requirements\nsetup(**setup_kwargs)\n", "path": "setup.py"}], "after_files": [{"content": "\"\"\"\nSanic\n\"\"\"\nimport codecs\nimport os\nimport re\nfrom distutils.errors import DistutilsPlatformError\nfrom distutils.util import strtobool\n\nfrom setuptools import setup\n\n\ndef open_local(paths, mode='r', encoding='utf8'):\n path = os.path.join(\n os.path.abspath(os.path.dirname(__file__)),\n *paths\n )\n\n return codecs.open(path, mode, encoding)\n\n\nwith open_local(['sanic', '__init__.py'], encoding='latin1') as fp:\n try:\n version = re.findall(r\"^__version__ = '([^']+)'\\r?$\",\n fp.read(), re.M)[0]\n except IndexError:\n raise RuntimeError('Unable to determine version.')\n\n\nwith open_local(['README.rst']) as rm:\n long_description = rm.read()\n\nsetup_kwargs = {\n 'name': 'sanic',\n 'version': version,\n 'url': 'http://github.com/channelcat/sanic/',\n 'license': 'MIT',\n 'author': 'Channel Cat',\n 'author_email': '[email protected]',\n 'description': (\n 'A microframework based on uvloop, httptools, and learnings of flask'),\n 'long_description': long_description,\n 
'packages': ['sanic'],\n 'platforms': 'any',\n 'classifiers': [\n 'Development Status :: 4 - Beta',\n 'Environment :: Web Environment',\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n ],\n}\n\nenv_dependency = '; sys_platform != \"win32\" and implementation_name == \"cpython\"'\nujson = 'ujson>=1.35' + env_dependency\nuvloop = 'uvloop>=0.5.3' + env_dependency\n\nrequirements = [\n 'httptools>=0.0.10',\n uvloop,\n ujson,\n 'aiofiles>=0.3.0',\n 'websockets>=5.0,<6.0',\n 'multidict>=4.0,<5.0',\n]\nif strtobool(os.environ.get(\"SANIC_NO_UJSON\", \"no\")):\n print(\"Installing without uJSON\")\n requirements.remove(ujson)\n\n# 'nt' means windows OS\nif strtobool(os.environ.get(\"SANIC_NO_UVLOOP\", \"no\")):\n print(\"Installing without uvLoop\")\n requirements.remove(uvloop)\n\nsetup_kwargs['install_requires'] = requirements\nsetup(**setup_kwargs)\n", "path": "setup.py"}]}
| 1,019 | 99 |
gh_patches_debug_55601
|
rasdani/github-patches
|
git_diff
|
xonsh__xonsh-138
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
In .xonshrc, import does not create a global name
xonsh: git checkout f44013b31756ba5491f2a7e1dffb7ad64513b28e
python: 3.4.1
OS: Fedora 21
With this as your .xonshrc:
``` python
import subprocess
def get_tty():
tty = subprocess.check_output('tty').decode().strip()
segments = tty.split('/')
return '/'.join(segments[-2:])
$PROMPT='{tty}@{{hostname}}$ '.format(tty=get_tty())
```
Trying to start xonsh with this .xonshrc yields a traceback:
```
Traceback (most recent call last):
File "scripts/xonsh", line 3, in <module>
main()
File "/srv/git/wishlist/xonsh/xonsh/main.py", line 36, in main
shell = Shell()
File "/srv/git/wishlist/xonsh/xonsh/shell.py", line 94, in __init__
execer=self.execer)
File "/srv/git/wishlist/xonsh/xonsh/environ.py", line 168, in xonshrc_context
execer.exec(rc, glbs={}, locs=env)
File "/srv/git/wishlist/xonsh/xonsh/execer.py", line 110, in exec
return exec(code, glbs, locs)
File "/home/badger/.xonshrc", line 7, in <module>
File "/home/badger/.xonshrc", line 259, in get_tty
NameError: name 'subprocess' is not defined
Exception ignored in: <bound method Shell.__del__ of <xonsh.shell.Shell object at 0x7f383127e4e0>>
Traceback (most recent call last):
File "/srv/git/wishlist/xonsh/xonsh/shell.py", line 102, in __del__
teardown_readline()
File "/srv/git/wishlist/xonsh/xonsh/shell.py", line 65, in teardown_readline
import readline
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2222, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 2164, in _find_spec
File "<frozen importlib._bootstrap>", line 1940, in find_spec
File "<frozen importlib._bootstrap>", line 1908, in _get_spec
TypeError: 'NoneType' object is not iterable
```
If I change .xonshrc to have the subprocess import inside of the function then it starts up fine. So it seems like importing does not create a globally available name. The other things I tried such as:
``` python
import subprocess as subprocess
subprocess = __import__('subprocess')
```
also lead to the same traceback.
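
A minimal standalone sketch of the Python behaviour that seems to be at play here (plain CPython, no xonsh involved): when `exec` is given two different dicts for globals and locals, top-level bindings such as imports land in the locals dict, while functions defined by the executed code look up free names in the globals dict, so they never see those imports.

``` python
code = """
import subprocess

def get_name():
    return subprocess.__name__

result = get_name()
"""

# Separate globals and locals, like execer.exec(rc, glbs={}, locs=env):
try:
    exec(code, {}, {})
except NameError as err:
    print("separate dicts:", err)   # name 'subprocess' is not defined

# A single namespace makes the import visible inside the function again:
namespace = {}
exec(code, namespace)
print("single dict:", namespace["result"])  # subprocess
```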
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `xonsh/environ.py`
Content:
```
1 """Environment for the xonsh shell.
2 """
3 import os
4 import re
5 import socket
6 import locale
7 import builtins
8 import platform
9 import subprocess
10 from warnings import warn
11
12 from xonsh.tools import TERM_COLORS
13
14 def current_branch(cwd=None):
15 """Gets the branch for a current working directory. Returns None
16 if the cwd is not a repository. This currently only works for git,
17 bust should be extended in the future.
18 """
19 branch = None
20 cwd = os.getcwd() if cwd is None else cwd
21
22 # step out completely if git is not installed
23 try:
24 binary_location = subprocess.check_output(['which', 'git'], cwd=cwd,
25 stderr=subprocess.PIPE,
26 universal_newlines=True)
27 if not binary_location:
28 return branch
29 except subprocess.CalledProcessError:
30 return branch
31
32 prompt_scripts = [
33 '/usr/lib/git-core/git-sh-prompt',
34 '/usr/local/etc/bash_completion.d/git-prompt.sh'
35 ]
36
37 for script in prompt_scripts:
38 # note that this is about 10x faster than bash -i "__git_ps1"
39 _input = ('source {}; __git_ps1 "${{1:-%s}}"'.format(script))
40 try:
41 branch = subprocess.check_output(['bash',], cwd=cwd, input=_input,
42 stderr=subprocess.PIPE,
43 universal_newlines=True) or None
44 except subprocess.CalledProcessError:
45 continue
46
47 # fall back to using the git binary if the above failed
48 if branch is None:
49 try:
50 s = subprocess.check_output(['git', 'rev-parse','--abbrev-ref', 'HEAD'],
51 stderr=subprocess.PIPE, cwd=cwd,
52 universal_newlines=True)
53 s = s.strip()
54 if len(s) > 0:
55 branch = s
56 except subprocess.CalledProcessError:
57 pass
58
59 return branch
60
61
62 default_prompt = ('{BOLD_GREEN}{user}@{hostname}{BOLD_BLUE} '
63 '{cwd}{BOLD_RED}{curr_branch} {BOLD_BLUE}${NO_COLOR} ')
64 default_title = '{user}@{hostname}: {cwd} | xonsh'
65
66 def format_prompt(template=default_prompt):
67 """Formats a xonsh prompt template string.
68
69 The following keyword arguments are recognized in the template string:
70
71 + user -- Name of current user
72 + hostname -- Name of host computer
73 + cwd -- Current working directory
74 + curr_branch -- Name of current git branch (preceded by a space), if any
75 + (QUALIFIER\_)COLORNAME -- Inserts an ANSI color code
76 - COLORNAME can be any of:
77 BLACK, RED, GREEN, YELLOW, BLUE, PURPLE, CYAN, WHITE
78 - QUALIFIER is optional and can be any of:
79 BOLD, UNDERLINE, BACKGROUND, INTENSE,
80 BOLD_INTENSE, BACKGROUND_INTENSE
81 + NO_COLOR -- Resets any previously used color codes
82 """
83 env = builtins.__xonsh_env__
84 cwd = env['PWD']
85 branch = current_branch(cwd=cwd)
86 branch = '' if branch is None else ' ' + branch
87 p = template.format(
88 user=env.get('USER', '<user>'),
89 hostname=socket.gethostname(),
90 cwd=cwd.replace(env['HOME'], '~'),
91 curr_branch=branch,
92 **TERM_COLORS
93 )
94 return p
95
96
97 RE_HIDDEN = re.compile('\001.*?\002')
98
99 def multiline_prompt():
100 """Returns the filler text for the prompt in multiline scenarios."""
101 curr = builtins.__xonsh_env__.get('PROMPT', "set '$PROMPT = ...' $ ")
102 curr = curr() if callable(curr) else curr
103 curr = format_prompt(curr)
104 line = curr.rsplit('\n', 1)[1] if '\n' in curr else curr
105 line = RE_HIDDEN.sub('', line) # gets rid of colors
106 # most prompts end in whitespace, head is the part before that.
107 head = line.rstrip()
108 headlen = len(head)
109 # tail is the trailing whitespace
110 tail = line if headlen == 0 else line.rsplit(head[-1], 1)[1]
111 # now to constuct the actual string
112 dots = builtins.__xonsh_env__.get('MULTILINE_PROMPT', '.')
113 dots = dots() if callable(dots) else dots
114 if dots is None or len(dots) == 0:
115 return ''
116 return (dots*(headlen//len(dots))) + dots[:headlen%len(dots)] + tail
117
118
119 BASE_ENV = {
120 'INDENT': ' ',
121 'PROMPT': default_prompt,
122 'TITLE': default_title,
123 'MULTILINE_PROMPT': '.',
124 'XONSHRC': os.path.expanduser('~/.xonshrc'),
125 'XONSH_HISTORY_SIZE': 8128,
126 'XONSH_HISTORY_FILE': os.path.expanduser('~/.xonsh_history'),
127 'LC_CTYPE': locale.setlocale(locale.LC_CTYPE),
128 'LC_COLLATE': locale.setlocale(locale.LC_COLLATE),
129 'LC_TIME': locale.setlocale(locale.LC_TIME),
130 'LC_MONETARY': locale.setlocale(locale.LC_MONETARY),
131 'LC_MESSAGES': locale.setlocale(locale.LC_MESSAGES),
132 'LC_NUMERIC': locale.setlocale(locale.LC_NUMERIC),
133 }
134
135 if platform.system() == 'Darwin':
136 BASE_ENV['BASH_COMPLETIONS'] = []
137 else:
138 BASE_ENV['BASH_COMPLETIONS'] = ['/etc/bash_completion',
139 '/usr/share/bash-completion/completions/git']
140
141 def bash_env():
142 """Attempts to compute the bash envinronment variables."""
143 currenv = None
144 if hasattr(builtins, '__xonsh_env__'):
145 currenv = builtins.__xonsh_env__.detype()
146 try:
147 s = subprocess.check_output(['bash', '-i'], input='env', env=currenv,
148 stderr=subprocess.PIPE,
149 universal_newlines=True)
150 except subprocess.CalledProcessError:
151 s = ''
152 items = [line.split('=', 1) for line in s.splitlines() if '=' in line]
153 env = dict(items)
154 return env
155
156 def xonshrc_context(rcfile=None, execer=None):
157 """Attempts to read in xonshrc file, and return the contents."""
158 if rcfile is None or execer is None or not os.path.isfile(rcfile):
159 return {}
160 with open(rcfile, 'r') as f:
161 rc = f.read()
162 if not rc.endswith('\n'):
163 rc += '\n'
164 fname = execer.filename
165 env = {}
166 try:
167 execer.filename = rcfile
168 execer.exec(rc, glbs={}, locs=env)
169 except SyntaxError as err:
170 msg = 'syntax error in xonsh run control file {0!r}: {1!s}'
171 warn(msg.format(rcfile, err), RuntimeWarning)
172 finally:
173 execer.filename = fname
174 return env
175
176 def default_env(env=None):
177 """Constructs a default xonsh environment."""
178 # in order of increasing precedence
179 ctx = dict(BASE_ENV)
180 ctx.update(os.environ)
181 ctx.update(bash_env())
182 if env is not None:
183 ctx.update(env)
184 return ctx
185
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/xonsh/environ.py b/xonsh/environ.py
--- a/xonsh/environ.py
+++ b/xonsh/environ.py
@@ -165,7 +165,7 @@
env = {}
try:
execer.filename = rcfile
- execer.exec(rc, glbs={}, locs=env)
+ execer.exec(rc, glbs=env)
except SyntaxError as err:
msg = 'syntax error in xonsh run control file {0!r}: {1!s}'
warn(msg.format(rcfile, err), RuntimeWarning)
|
{"golden_diff": "diff --git a/xonsh/environ.py b/xonsh/environ.py\n--- a/xonsh/environ.py\n+++ b/xonsh/environ.py\n@@ -165,7 +165,7 @@\n env = {}\n try:\n execer.filename = rcfile\n- execer.exec(rc, glbs={}, locs=env)\n+ execer.exec(rc, glbs=env)\n except SyntaxError as err:\n msg = 'syntax error in xonsh run control file {0!r}: {1!s}'\n warn(msg.format(rcfile, err), RuntimeWarning)\n", "issue": "In .xonshrc, import does not create a global name\nxonsh: git checkout f44013b31756ba5491f2a7e1dffb7ad64513b28e\npython: 3.4.1\nOS: Fedora 21\n\nWith this as your .xonshrc:\n\n``` python\nimport subprocess\n\ndef get_tty():\n tty = subprocess.check_output('tty').decode().strip()\n segments = tty.split('/')\n return '/'.join(segments[-2:])\n\n$PROMPT='{tty}@{{hostname}}$ '.format(tty=get_tty())\n```\n\nTrying to start .xonshrc yields a traceback:\n\n```\nTraceback (most recent call last):\n File \"scripts/xonsh\", line 3, in <module>\n main()\n File \"/srv/git/wishlist/xonsh/xonsh/main.py\", line 36, in main\n shell = Shell()\n File \"/srv/git/wishlist/xonsh/xonsh/shell.py\", line 94, in __init__\n execer=self.execer)\n File \"/srv/git/wishlist/xonsh/xonsh/environ.py\", line 168, in xonshrc_context\n execer.exec(rc, glbs={}, locs=env)\n File \"/srv/git/wishlist/xonsh/xonsh/execer.py\", line 110, in exec\n return exec(code, glbs, locs)\n File \"/home/badger/.xonshrc\", line 7, in <module>\n\n File \"/home/badger/.xonshrc\", line 259, in get_tty\nNameError: name 'subprocess' is not defined\nException ignored in: <bound method Shell.__del__ of <xonsh.shell.Shell object at 0x7f383127e4e0>>\nTraceback (most recent call last):\n File \"/srv/git/wishlist/xonsh/xonsh/shell.py\", line 102, in __del__\n teardown_readline()\n File \"/srv/git/wishlist/xonsh/xonsh/shell.py\", line 65, in teardown_readline\n import readline\n File \"<frozen importlib._bootstrap>\", line 2237, in _find_and_load\n File \"<frozen importlib._bootstrap>\", line 2222, in _find_and_load_unlocked\n File \"<frozen importlib._bootstrap>\", line 2164, in _find_spec\n File \"<frozen importlib._bootstrap>\", line 1940, in find_spec\n File \"<frozen importlib._bootstrap>\", line 1908, in _get_spec\nTypeError: 'NoneType' object is not iterable\n```\n\nIf I change .xonshrc to have the subprocess import inside of the function then it starts up fine. So it seems like importing does not create a globally available name. The other things I tried such as:\n\n``` python\nimport subprocess as subprocess\nsubprocess = __import__('subprocess')\n```\n\nalso lead to the same traceback.\n\n", "before_files": [{"content": "\"\"\"Environment for the xonsh shell.\n\"\"\"\nimport os\nimport re\nimport socket\nimport locale\nimport builtins\nimport platform\nimport subprocess\nfrom warnings import warn\n\nfrom xonsh.tools import TERM_COLORS\n\ndef current_branch(cwd=None):\n \"\"\"Gets the branch for a current working directory. Returns None\n if the cwd is not a repository. 
This currently only works for git, \n bust should be extended in the future.\n \"\"\"\n branch = None\n cwd = os.getcwd() if cwd is None else cwd\n\n # step out completely if git is not installed\n try:\n binary_location = subprocess.check_output(['which', 'git'], cwd=cwd,\n stderr=subprocess.PIPE,\n universal_newlines=True)\n if not binary_location:\n return branch\n except subprocess.CalledProcessError:\n return branch\n\n prompt_scripts = [\n '/usr/lib/git-core/git-sh-prompt',\n '/usr/local/etc/bash_completion.d/git-prompt.sh'\n ]\n\n for script in prompt_scripts:\n # note that this is about 10x faster than bash -i \"__git_ps1\"\n _input = ('source {}; __git_ps1 \"${{1:-%s}}\"'.format(script))\n try:\n branch = subprocess.check_output(['bash',], cwd=cwd, input=_input,\n stderr=subprocess.PIPE,\n universal_newlines=True) or None\n except subprocess.CalledProcessError:\n continue\n\n # fall back to using the git binary if the above failed\n if branch is None:\n try:\n s = subprocess.check_output(['git', 'rev-parse','--abbrev-ref', 'HEAD'],\n stderr=subprocess.PIPE, cwd=cwd,\n universal_newlines=True) \n s = s.strip()\n if len(s) > 0:\n branch = s\n except subprocess.CalledProcessError:\n pass\n\n return branch\n\n\ndefault_prompt = ('{BOLD_GREEN}{user}@{hostname}{BOLD_BLUE} '\n '{cwd}{BOLD_RED}{curr_branch} {BOLD_BLUE}${NO_COLOR} ')\ndefault_title = '{user}@{hostname}: {cwd} | xonsh'\n\ndef format_prompt(template=default_prompt):\n \"\"\"Formats a xonsh prompt template string.\n\n The following keyword arguments are recognized in the template string:\n\n + user -- Name of current user\n + hostname -- Name of host computer\n + cwd -- Current working directory\n + curr_branch -- Name of current git branch (preceded by a space), if any\n + (QUALIFIER\\_)COLORNAME -- Inserts an ANSI color code\n - COLORNAME can be any of:\n BLACK, RED, GREEN, YELLOW, BLUE, PURPLE, CYAN, WHITE\n - QUALIFIER is optional and can be any of:\n BOLD, UNDERLINE, BACKGROUND, INTENSE,\n BOLD_INTENSE, BACKGROUND_INTENSE\n + NO_COLOR -- Resets any previously used color codes\n \"\"\"\n env = builtins.__xonsh_env__\n cwd = env['PWD']\n branch = current_branch(cwd=cwd)\n branch = '' if branch is None else ' ' + branch\n p = template.format(\n user=env.get('USER', '<user>'),\n hostname=socket.gethostname(),\n cwd=cwd.replace(env['HOME'], '~'),\n curr_branch=branch,\n **TERM_COLORS\n )\n return p\n\n\nRE_HIDDEN = re.compile('\\001.*?\\002')\n\ndef multiline_prompt():\n \"\"\"Returns the filler text for the prompt in multiline scenarios.\"\"\"\n curr = builtins.__xonsh_env__.get('PROMPT', \"set '$PROMPT = ...' 
$ \")\n curr = curr() if callable(curr) else curr\n curr = format_prompt(curr)\n line = curr.rsplit('\\n', 1)[1] if '\\n' in curr else curr\n line = RE_HIDDEN.sub('', line) # gets rid of colors\n # most prompts end in whitespace, head is the part before that.\n head = line.rstrip()\n headlen = len(head)\n # tail is the trailing whitespace\n tail = line if headlen == 0 else line.rsplit(head[-1], 1)[1]\n # now to constuct the actual string\n dots = builtins.__xonsh_env__.get('MULTILINE_PROMPT', '.')\n dots = dots() if callable(dots) else dots\n if dots is None or len(dots) == 0:\n return ''\n return (dots*(headlen//len(dots))) + dots[:headlen%len(dots)] + tail\n\n\nBASE_ENV = {\n 'INDENT': ' ',\n 'PROMPT': default_prompt,\n 'TITLE': default_title,\n 'MULTILINE_PROMPT': '.',\n 'XONSHRC': os.path.expanduser('~/.xonshrc'),\n 'XONSH_HISTORY_SIZE': 8128,\n 'XONSH_HISTORY_FILE': os.path.expanduser('~/.xonsh_history'),\n 'LC_CTYPE': locale.setlocale(locale.LC_CTYPE),\n 'LC_COLLATE': locale.setlocale(locale.LC_COLLATE),\n 'LC_TIME': locale.setlocale(locale.LC_TIME),\n 'LC_MONETARY': locale.setlocale(locale.LC_MONETARY),\n 'LC_MESSAGES': locale.setlocale(locale.LC_MESSAGES),\n 'LC_NUMERIC': locale.setlocale(locale.LC_NUMERIC),\n }\n\nif platform.system() == 'Darwin':\n BASE_ENV['BASH_COMPLETIONS'] = []\nelse:\n BASE_ENV['BASH_COMPLETIONS'] = ['/etc/bash_completion', \n '/usr/share/bash-completion/completions/git']\n\ndef bash_env():\n \"\"\"Attempts to compute the bash envinronment variables.\"\"\"\n currenv = None\n if hasattr(builtins, '__xonsh_env__'):\n currenv = builtins.__xonsh_env__.detype()\n try:\n s = subprocess.check_output(['bash', '-i'], input='env', env=currenv, \n stderr=subprocess.PIPE,\n universal_newlines=True)\n except subprocess.CalledProcessError:\n s = ''\n items = [line.split('=', 1) for line in s.splitlines() if '=' in line]\n env = dict(items)\n return env\n\ndef xonshrc_context(rcfile=None, execer=None):\n \"\"\"Attempts to read in xonshrc file, and return the contents.\"\"\"\n if rcfile is None or execer is None or not os.path.isfile(rcfile):\n return {}\n with open(rcfile, 'r') as f:\n rc = f.read()\n if not rc.endswith('\\n'):\n rc += '\\n'\n fname = execer.filename\n env = {}\n try:\n execer.filename = rcfile\n execer.exec(rc, glbs={}, locs=env)\n except SyntaxError as err:\n msg = 'syntax error in xonsh run control file {0!r}: {1!s}'\n warn(msg.format(rcfile, err), RuntimeWarning)\n finally:\n execer.filename = fname\n return env\n\ndef default_env(env=None):\n \"\"\"Constructs a default xonsh environment.\"\"\"\n # in order of increasing precedence\n ctx = dict(BASE_ENV)\n ctx.update(os.environ)\n ctx.update(bash_env())\n if env is not None:\n ctx.update(env)\n return ctx\n", "path": "xonsh/environ.py"}], "after_files": [{"content": "\"\"\"Environment for the xonsh shell.\n\"\"\"\nimport os\nimport re\nimport socket\nimport locale\nimport builtins\nimport platform\nimport subprocess\nfrom warnings import warn\n\nfrom xonsh.tools import TERM_COLORS\n\ndef current_branch(cwd=None):\n \"\"\"Gets the branch for a current working directory. Returns None\n if the cwd is not a repository. 
This currently only works for git, \n bust should be extended in the future.\n \"\"\"\n branch = None\n cwd = os.getcwd() if cwd is None else cwd\n\n # step out completely if git is not installed\n try:\n binary_location = subprocess.check_output(['which', 'git'], cwd=cwd,\n stderr=subprocess.PIPE,\n universal_newlines=True)\n if not binary_location:\n return branch\n except subprocess.CalledProcessError:\n return branch\n\n prompt_scripts = [\n '/usr/lib/git-core/git-sh-prompt',\n '/usr/local/etc/bash_completion.d/git-prompt.sh'\n ]\n\n for script in prompt_scripts:\n # note that this is about 10x faster than bash -i \"__git_ps1\"\n _input = ('source {}; __git_ps1 \"${{1:-%s}}\"'.format(script))\n try:\n branch = subprocess.check_output(['bash',], cwd=cwd, input=_input,\n stderr=subprocess.PIPE,\n universal_newlines=True) or None\n except subprocess.CalledProcessError:\n continue\n\n # fall back to using the git binary if the above failed\n if branch is None:\n try:\n s = subprocess.check_output(['git', 'rev-parse','--abbrev-ref', 'HEAD'],\n stderr=subprocess.PIPE, cwd=cwd,\n universal_newlines=True) \n s = s.strip()\n if len(s) > 0:\n branch = s\n except subprocess.CalledProcessError:\n pass\n\n return branch\n\n\ndefault_prompt = ('{BOLD_GREEN}{user}@{hostname}{BOLD_BLUE} '\n '{cwd}{BOLD_RED}{curr_branch} {BOLD_BLUE}${NO_COLOR} ')\ndefault_title = '{user}@{hostname}: {cwd} | xonsh'\n\ndef format_prompt(template=default_prompt):\n \"\"\"Formats a xonsh prompt template string.\n\n The following keyword arguments are recognized in the template string:\n\n + user -- Name of current user\n + hostname -- Name of host computer\n + cwd -- Current working directory\n + curr_branch -- Name of current git branch (preceded by a space), if any\n + (QUALIFIER\\_)COLORNAME -- Inserts an ANSI color code\n - COLORNAME can be any of:\n BLACK, RED, GREEN, YELLOW, BLUE, PURPLE, CYAN, WHITE\n - QUALIFIER is optional and can be any of:\n BOLD, UNDERLINE, BACKGROUND, INTENSE,\n BOLD_INTENSE, BACKGROUND_INTENSE\n + NO_COLOR -- Resets any previously used color codes\n \"\"\"\n env = builtins.__xonsh_env__\n cwd = env['PWD']\n branch = current_branch(cwd=cwd)\n branch = '' if branch is None else ' ' + branch\n p = template.format(\n user=env.get('USER', '<user>'),\n hostname=socket.gethostname(),\n cwd=cwd.replace(env['HOME'], '~'),\n curr_branch=branch,\n **TERM_COLORS\n )\n return p\n\n\nRE_HIDDEN = re.compile('\\001.*?\\002')\n\ndef multiline_prompt():\n \"\"\"Returns the filler text for the prompt in multiline scenarios.\"\"\"\n curr = builtins.__xonsh_env__.get('PROMPT', \"set '$PROMPT = ...' 
$ \")\n curr = curr() if callable(curr) else curr\n curr = format_prompt(curr)\n line = curr.rsplit('\\n', 1)[1] if '\\n' in curr else curr\n line = RE_HIDDEN.sub('', line) # gets rid of colors\n # most prompts end in whitespace, head is the part before that.\n head = line.rstrip()\n headlen = len(head)\n # tail is the trailing whitespace\n tail = line if headlen == 0 else line.rsplit(head[-1], 1)[1]\n # now to constuct the actual string\n dots = builtins.__xonsh_env__.get('MULTILINE_PROMPT', '.')\n dots = dots() if callable(dots) else dots\n if dots is None or len(dots) == 0:\n return ''\n return (dots*(headlen//len(dots))) + dots[:headlen%len(dots)] + tail\n\n\nBASE_ENV = {\n 'INDENT': ' ',\n 'PROMPT': default_prompt,\n 'TITLE': default_title,\n 'MULTILINE_PROMPT': '.',\n 'XONSHRC': os.path.expanduser('~/.xonshrc'),\n 'XONSH_HISTORY_SIZE': 8128,\n 'XONSH_HISTORY_FILE': os.path.expanduser('~/.xonsh_history'),\n 'LC_CTYPE': locale.setlocale(locale.LC_CTYPE),\n 'LC_COLLATE': locale.setlocale(locale.LC_COLLATE),\n 'LC_TIME': locale.setlocale(locale.LC_TIME),\n 'LC_MONETARY': locale.setlocale(locale.LC_MONETARY),\n 'LC_MESSAGES': locale.setlocale(locale.LC_MESSAGES),\n 'LC_NUMERIC': locale.setlocale(locale.LC_NUMERIC),\n }\n\nif platform.system() == 'Darwin':\n BASE_ENV['BASH_COMPLETIONS'] = []\nelse:\n BASE_ENV['BASH_COMPLETIONS'] = ['/etc/bash_completion', \n '/usr/share/bash-completion/completions/git']\n\ndef bash_env():\n \"\"\"Attempts to compute the bash envinronment variables.\"\"\"\n currenv = None\n if hasattr(builtins, '__xonsh_env__'):\n currenv = builtins.__xonsh_env__.detype()\n try:\n s = subprocess.check_output(['bash', '-i'], input='env', env=currenv, \n stderr=subprocess.PIPE,\n universal_newlines=True)\n except subprocess.CalledProcessError:\n s = ''\n items = [line.split('=', 1) for line in s.splitlines() if '=' in line]\n env = dict(items)\n return env\n\ndef xonshrc_context(rcfile=None, execer=None):\n \"\"\"Attempts to read in xonshrc file, and return the contents.\"\"\"\n if rcfile is None or execer is None or not os.path.isfile(rcfile):\n return {}\n with open(rcfile, 'r') as f:\n rc = f.read()\n if not rc.endswith('\\n'):\n rc += '\\n'\n fname = execer.filename\n env = {}\n try:\n execer.filename = rcfile\n execer.exec(rc, glbs=env)\n except SyntaxError as err:\n msg = 'syntax error in xonsh run control file {0!r}: {1!s}'\n warn(msg.format(rcfile, err), RuntimeWarning)\n finally:\n execer.filename = fname\n return env\n\ndef default_env(env=None):\n \"\"\"Constructs a default xonsh environment.\"\"\"\n # in order of increasing precedence\n ctx = dict(BASE_ENV)\n ctx.update(os.environ)\n ctx.update(bash_env())\n if env is not None:\n ctx.update(env)\n return ctx\n", "path": "xonsh/environ.py"}]}
| 2,986 | 134 |
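The xonsh patch recorded in the row above replaces `execer.exec(rc, glbs={}, locs=env)` with a single namespace dict (`glbs=env`). A minimal sketch of why the split-namespace call misbehaves, assuming xonsh's `Execer.exec` follows the semantics of the builtin `exec`:

```python
# With separate globals/locals dicts, top-level names from the rc file land in
# the locals dict, while functions defined in it resolve free names against the
# (empty) globals dict, so rc-defined functions cannot see rc-defined names.
rc = "x = 1\ndef show():\n    return x\n"

glbs, locs = {}, {}
exec(rc, glbs, locs)
try:
    locs["show"]()
except NameError as err:
    print("split namespaces:", err)       # x is not visible inside show()

env = {}
exec(rc, env)                              # one dict serves as both namespaces
print("single namespace:", env["show"]())  # prints 1
```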
gh_patches_debug_32361
|
rasdani/github-patches
|
git_diff
|
hpcaitech__ColossalAI-3420
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `applications/Chat/coati/models/gpt/gpt_actor.py`
Content:
```
1 from typing import Optional
2
3 from transformers.models.gpt2.configuration_gpt2 import GPT2Config
4 from transformers.models.gpt2.modeling_gpt2 import GPT2LMHeadModel
5
6 from ..base import Actor
7
8
9 class GPTActor(Actor):
10 """
11 GPT Actor model.
12
13 Args:
14 pretrained (str): Pretrained model name or path.
15 config (GPT2Config): Model config.
16 checkpoint (bool): Enable gradient checkpointing.
17 lora_rank (int): Rank of the LoRa layer.
18 lora_train_bias (str): Bias training strategy for the LoRa layer.
19 """
20
21 def __init__(self,
22 pretrained: Optional[str] = None,
23 config: Optional[GPT2Config] = None,
24 checkpoint: bool = False,
25 lora_rank: int = 0,
26 lora_train_bias: str = 'none') -> None:
27 if pretrained is not None:
28 model = GPT2LMHeadModel.from_pretrained(pretrained)
29 elif config is not None:
30 model = GPT2LMHeadModel(config)
31 else:
32 model = GPT2LMHeadModel(GPT2Config())
33 if checkpoint:
34 model.gradient_checkpointing_enable()
35 super().__init__(model, lora_rank, lora_train_bias)
36
```
Path: `applications/Chat/coati/models/gpt/gpt_critic.py`
Content:
```
1 from typing import Optional
2
3 import torch.nn as nn
4 from transformers.models.gpt2.configuration_gpt2 import GPT2Config
5 from transformers.models.gpt2.modeling_gpt2 import GPT2Model
6
7 from ..base import Critic
8
9
10 class GPTCritic(Critic):
11 """
12 GPT Critic model.
13
14 Args:
15 pretrained (str): Pretrained model name or path.
16 config (GPT2Config): Model config.
17 checkpoint (bool): Enable gradient checkpointing.
18 lora_rank (int): Rank of the LO-RA decomposition.
19 lora_train_bias (str): LoRA bias training mode.
20 """
21
22 def __init__(self,
23 pretrained: Optional[str] = None,
24 config: Optional[GPT2Config] = None,
25 checkpoint: bool = False,
26 lora_rank: int = 0,
27 lora_train_bias: str = 'none') -> None:
28 if pretrained is not None:
29 model = GPT2Model.from_pretrained(pretrained)
30 elif config is not None:
31 model = GPT2Model(config)
32 else:
33 model = GPT2Model(GPT2Config())
34 if checkpoint:
35 model.gradient_checkpointing_enable()
36 value_head = nn.Linear(model.config.n_embd, 1)
37 super().__init__(model, value_head, lora_rank, lora_train_bias)
38
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/applications/Chat/coati/models/gpt/gpt_actor.py b/applications/Chat/coati/models/gpt/gpt_actor.py
--- a/applications/Chat/coati/models/gpt/gpt_actor.py
+++ b/applications/Chat/coati/models/gpt/gpt_actor.py
@@ -23,7 +23,8 @@
config: Optional[GPT2Config] = None,
checkpoint: bool = False,
lora_rank: int = 0,
- lora_train_bias: str = 'none') -> None:
+ lora_train_bias: str = 'none',
+ **kwargs) -> None:
if pretrained is not None:
model = GPT2LMHeadModel.from_pretrained(pretrained)
elif config is not None:
@@ -32,4 +33,4 @@
model = GPT2LMHeadModel(GPT2Config())
if checkpoint:
model.gradient_checkpointing_enable()
- super().__init__(model, lora_rank, lora_train_bias)
+ super().__init__(model, lora_rank, lora_train_bias, **kwargs)
diff --git a/applications/Chat/coati/models/gpt/gpt_critic.py b/applications/Chat/coati/models/gpt/gpt_critic.py
--- a/applications/Chat/coati/models/gpt/gpt_critic.py
+++ b/applications/Chat/coati/models/gpt/gpt_critic.py
@@ -24,7 +24,8 @@
config: Optional[GPT2Config] = None,
checkpoint: bool = False,
lora_rank: int = 0,
- lora_train_bias: str = 'none') -> None:
+ lora_train_bias: str = 'none',
+ **kwargs) -> None:
if pretrained is not None:
model = GPT2Model.from_pretrained(pretrained)
elif config is not None:
@@ -34,4 +35,4 @@
if checkpoint:
model.gradient_checkpointing_enable()
value_head = nn.Linear(model.config.n_embd, 1)
- super().__init__(model, value_head, lora_rank, lora_train_bias)
+ super().__init__(model, value_head, lora_rank, lora_train_bias, **kwargs)
|
{"golden_diff": "diff --git a/applications/Chat/coati/models/gpt/gpt_actor.py b/applications/Chat/coati/models/gpt/gpt_actor.py\n--- a/applications/Chat/coati/models/gpt/gpt_actor.py\n+++ b/applications/Chat/coati/models/gpt/gpt_actor.py\n@@ -23,7 +23,8 @@\n config: Optional[GPT2Config] = None,\n checkpoint: bool = False,\n lora_rank: int = 0,\n- lora_train_bias: str = 'none') -> None:\n+ lora_train_bias: str = 'none',\n+ **kwargs) -> None:\n if pretrained is not None:\n model = GPT2LMHeadModel.from_pretrained(pretrained)\n elif config is not None:\n@@ -32,4 +33,4 @@\n model = GPT2LMHeadModel(GPT2Config())\n if checkpoint:\n model.gradient_checkpointing_enable()\n- super().__init__(model, lora_rank, lora_train_bias)\n+ super().__init__(model, lora_rank, lora_train_bias, **kwargs)\ndiff --git a/applications/Chat/coati/models/gpt/gpt_critic.py b/applications/Chat/coati/models/gpt/gpt_critic.py\n--- a/applications/Chat/coati/models/gpt/gpt_critic.py\n+++ b/applications/Chat/coati/models/gpt/gpt_critic.py\n@@ -24,7 +24,8 @@\n config: Optional[GPT2Config] = None,\n checkpoint: bool = False,\n lora_rank: int = 0,\n- lora_train_bias: str = 'none') -> None:\n+ lora_train_bias: str = 'none',\n+ **kwargs) -> None:\n if pretrained is not None:\n model = GPT2Model.from_pretrained(pretrained)\n elif config is not None:\n@@ -34,4 +35,4 @@\n if checkpoint:\n model.gradient_checkpointing_enable()\n value_head = nn.Linear(model.config.n_embd, 1)\n- super().__init__(model, value_head, lora_rank, lora_train_bias)\n+ super().__init__(model, value_head, lora_rank, lora_train_bias, **kwargs)\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "from typing import Optional\n\nfrom transformers.models.gpt2.configuration_gpt2 import GPT2Config\nfrom transformers.models.gpt2.modeling_gpt2 import GPT2LMHeadModel\n\nfrom ..base import Actor\n\n\nclass GPTActor(Actor):\n \"\"\"\n GPT Actor model.\n\n Args:\n pretrained (str): Pretrained model name or path.\n config (GPT2Config): Model config.\n checkpoint (bool): Enable gradient checkpointing.\n lora_rank (int): Rank of the LoRa layer.\n lora_train_bias (str): Bias training strategy for the LoRa layer.\n \"\"\"\n\n def __init__(self,\n pretrained: Optional[str] = None,\n config: Optional[GPT2Config] = None,\n checkpoint: bool = False,\n lora_rank: int = 0,\n lora_train_bias: str = 'none') -> None:\n if pretrained is not None:\n model = GPT2LMHeadModel.from_pretrained(pretrained)\n elif config is not None:\n model = GPT2LMHeadModel(config)\n else:\n model = GPT2LMHeadModel(GPT2Config())\n if checkpoint:\n model.gradient_checkpointing_enable()\n super().__init__(model, lora_rank, lora_train_bias)\n", "path": "applications/Chat/coati/models/gpt/gpt_actor.py"}, {"content": "from typing import Optional\n\nimport torch.nn as nn\nfrom transformers.models.gpt2.configuration_gpt2 import GPT2Config\nfrom transformers.models.gpt2.modeling_gpt2 import GPT2Model\n\nfrom ..base import Critic\n\n\nclass GPTCritic(Critic):\n \"\"\"\n GPT Critic model.\n\n Args:\n pretrained (str): Pretrained model name or path.\n config (GPT2Config): Model config.\n checkpoint (bool): Enable gradient checkpointing.\n lora_rank (int): Rank of the LO-RA decomposition.\n lora_train_bias (str): LoRA bias training mode.\n \"\"\"\n\n def __init__(self,\n pretrained: Optional[str] = None,\n config: Optional[GPT2Config] = None,\n checkpoint: bool = False,\n lora_rank: int = 0,\n lora_train_bias: str = 'none') -> None:\n if 
pretrained is not None:\n model = GPT2Model.from_pretrained(pretrained)\n elif config is not None:\n model = GPT2Model(config)\n else:\n model = GPT2Model(GPT2Config())\n if checkpoint:\n model.gradient_checkpointing_enable()\n value_head = nn.Linear(model.config.n_embd, 1)\n super().__init__(model, value_head, lora_rank, lora_train_bias)\n", "path": "applications/Chat/coati/models/gpt/gpt_critic.py"}], "after_files": [{"content": "from typing import Optional\n\nfrom transformers.models.gpt2.configuration_gpt2 import GPT2Config\nfrom transformers.models.gpt2.modeling_gpt2 import GPT2LMHeadModel\n\nfrom ..base import Actor\n\n\nclass GPTActor(Actor):\n \"\"\"\n GPT Actor model.\n\n Args:\n pretrained (str): Pretrained model name or path.\n config (GPT2Config): Model config.\n checkpoint (bool): Enable gradient checkpointing.\n lora_rank (int): Rank of the LoRa layer.\n lora_train_bias (str): Bias training strategy for the LoRa layer.\n \"\"\"\n\n def __init__(self,\n pretrained: Optional[str] = None,\n config: Optional[GPT2Config] = None,\n checkpoint: bool = False,\n lora_rank: int = 0,\n lora_train_bias: str = 'none',\n **kwargs) -> None:\n if pretrained is not None:\n model = GPT2LMHeadModel.from_pretrained(pretrained)\n elif config is not None:\n model = GPT2LMHeadModel(config)\n else:\n model = GPT2LMHeadModel(GPT2Config())\n if checkpoint:\n model.gradient_checkpointing_enable()\n super().__init__(model, lora_rank, lora_train_bias, **kwargs)\n", "path": "applications/Chat/coati/models/gpt/gpt_actor.py"}, {"content": "from typing import Optional\n\nimport torch.nn as nn\nfrom transformers.models.gpt2.configuration_gpt2 import GPT2Config\nfrom transformers.models.gpt2.modeling_gpt2 import GPT2Model\n\nfrom ..base import Critic\n\n\nclass GPTCritic(Critic):\n \"\"\"\n GPT Critic model.\n\n Args:\n pretrained (str): Pretrained model name or path.\n config (GPT2Config): Model config.\n checkpoint (bool): Enable gradient checkpointing.\n lora_rank (int): Rank of the LO-RA decomposition.\n lora_train_bias (str): LoRA bias training mode.\n \"\"\"\n\n def __init__(self,\n pretrained: Optional[str] = None,\n config: Optional[GPT2Config] = None,\n checkpoint: bool = False,\n lora_rank: int = 0,\n lora_train_bias: str = 'none',\n **kwargs) -> None:\n if pretrained is not None:\n model = GPT2Model.from_pretrained(pretrained)\n elif config is not None:\n model = GPT2Model(config)\n else:\n model = GPT2Model(GPT2Config())\n if checkpoint:\n model.gradient_checkpointing_enable()\n value_head = nn.Linear(model.config.n_embd, 1)\n super().__init__(model, value_head, lora_rank, lora_train_bias, **kwargs)\n", "path": "applications/Chat/coati/models/gpt/gpt_critic.py"}]}
| 1,019 | 495 |
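The golden diff in this row only threads `**kwargs` through the GPT actor/critic constructors into their base classes. A minimal sketch of the pattern, using hypothetical stand-ins for coati's `Actor` base class (its real signature is not shown in the row):

```python
# Hypothetical stand-in classes: the point is only that a wrapper which accepts
# and forwards **kwargs keeps working when the base class later grows new
# optional constructor arguments.
class Actor:
    def __init__(self, model, lora_rank=0, lora_train_bias="none", **kwargs):
        self.model = model
        self.options = kwargs              # e.g. checkpointing or device options


class GPTActor(Actor):
    def __init__(self, model=None, lora_rank=0, lora_train_bias="none", **kwargs):
        model = model if model is not None else object()
        super().__init__(model, lora_rank, lora_train_bias, **kwargs)


actor = GPTActor(lora_rank=4, some_new_base_option=True)
print(actor.options)                       # {'some_new_base_option': True}
```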
gh_patches_debug_12471
|
rasdani/github-patches
|
git_diff
|
deis__deis-207
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update AMIs in all EC2 regions
Our images are behind on some kernel and security updates and should be re-published as v0.1.0 versions. It's also kind of a performance optimization since we do apt-get upgrade during bootstrap.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `provider/ec2.py`
Content:
```
1 """
2 Deis cloud provider implementation for Amazon EC2.
3 """
4
5 from __future__ import unicode_literals
6
7 import json
8 import time
9
10 from boto import ec2
11 from boto.exception import EC2ResponseError
12
13 # from api.ssh import connect_ssh, exec_ssh
14 from deis import settings
15
16
17 # Deis-optimized EC2 amis -- with 3.8 kernel, chef 11 deps,
18 # and large docker images (e.g. buildstep) pre-installed
19 IMAGE_MAP = {
20 'ap-northeast-1': 'ami-6da8356c',
21 'ap-southeast-1': 'ami-a66f24f4',
22 'ap-southeast-2': 'ami-d5f66bef',
23 'eu-west-1': 'ami-acbf5adb',
24 'sa-east-1': 'ami-f9fd5ae4',
25 'us-east-1': 'ami-69f3bc00',
26 'us-west-1': 'ami-f0695cb5',
27 'us-west-2': 'ami-ea1e82da',
28 }
29
30
31 def seed_flavors():
32 """Seed the database with default flavors for each EC2 region.
33
34 :rtype: list of dicts containing flavor data
35 """
36 flavors = []
37 for r in ('us-east-1', 'us-west-1', 'us-west-2', 'eu-west-1',
38 'ap-northeast-1', 'ap-southeast-1', 'ap-southeast-2',
39 'sa-east-1'):
40 flavors.append({'id': 'ec2-{}'.format(r),
41 'provider': 'ec2',
42 'params': json.dumps({
43 'region': r,
44 'image': IMAGE_MAP[r],
45 'zone': 'any',
46 'size': 'm1.medium'})})
47 return flavors
48
49
50 def build_layer(layer):
51 """
52 Build a layer.
53
54 :param layer: a dict containing formation, id, params, and creds info
55 """
56 region = layer['params'].get('region', 'us-east-1')
57 conn = _create_ec2_connection(layer['creds'], region)
58 # create a new sg and authorize all ports
59 # use iptables on the host to firewall ports
60 name = "{formation}-{id}".format(**layer)
61 sg = conn.create_security_group(name, 'Created by Deis')
62 # import a new keypair using the layer key material
63 conn.import_key_pair(name, layer['ssh_public_key'])
64 # loop until the sg is *actually* there
65 for i in xrange(10):
66 try:
67 sg.authorize(ip_protocol='tcp', from_port=1, to_port=65535,
68 cidr_ip='0.0.0.0/0')
69 break
70 except EC2ResponseError:
71 if i < 10:
72 time.sleep(1.5)
73 continue
74 else:
75 raise RuntimeError('Failed to authorize security group')
76
77
78 def destroy_layer(layer):
79 """
80 Destroy a layer.
81
82 :param layer: a dict containing formation, id, params, and creds info
83 """
84 region = layer['params'].get('region', 'us-east-1')
85 name = "{formation}-{id}".format(**layer)
86 conn = _create_ec2_connection(layer['creds'], region)
87 conn.delete_key_pair(name)
88 # there's an ec2 race condition on instances terminating
89 # successfully but still holding a lock on the security group
90 # let's take a nap
91 time.sleep(5)
92 try:
93 conn.delete_security_group(name)
94 except EC2ResponseError as e:
95 if e.code != 'InvalidGroup.NotFound':
96 raise e
97
98
99 def build_node(node):
100 """
101 Build a node.
102
103 :param node: a dict containing formation, layer, params, and creds info.
104 :rtype: a tuple of (provider_id, fully_qualified_domain_name, metadata)
105 """
106 params, creds = node['params'], node['creds']
107 region = params.setdefault('region', 'us-east-1')
108 conn = _create_ec2_connection(creds, region)
109 name = "{formation}-{layer}".format(**node)
110 params['key_name'] = name
111 sg = conn.get_all_security_groups(name)[0]
112 params.setdefault('security_groups', []).append(sg.name)
113 image_id = params.get(
114 'image', getattr(settings, 'IMAGE_MAP', IMAGE_MAP)[region])
115 images = conn.get_all_images([image_id])
116 if len(images) != 1:
117 raise LookupError('Could not find AMI: %s' % image_id)
118 image = images[0]
119 kwargs = _prepare_run_kwargs(params)
120 reservation = image.run(**kwargs)
121 instances = reservation.instances
122 boto = instances[0]
123 # sleep before tagging
124 time.sleep(10)
125 boto.update()
126 boto.add_tag('Name', node['id'])
127 # loop until running
128 while(True):
129 time.sleep(2)
130 boto.update()
131 if boto.state == 'running':
132 break
133 # prepare return values
134 provider_id = boto.id
135 fqdn = boto.public_dns_name
136 metadata = _format_metadata(boto)
137 return provider_id, fqdn, metadata
138
139
140 def destroy_node(node):
141 """
142 Destroy a node.
143
144 :param node: a dict containing a node's provider_id, params, and creds
145 """
146 provider_id = node['provider_id']
147 region = node['params'].get('region', 'us-east-1')
148 conn = _create_ec2_connection(node['creds'], region)
149 if provider_id:
150 conn.terminate_instances([provider_id])
151 i = conn.get_all_instances([provider_id])[0].instances[0]
152 while(True):
153 time.sleep(2)
154 i.update()
155 if i.state == "terminated":
156 break
157
158
159 def _create_ec2_connection(creds, region):
160 """
161 Connect to an EC2 region with the given credentials.
162
163 :param creds: a dict containing an EC2 access_key and secret_key
164 :region: the name of an EC2 region, such as "us-west-2"
165 :rtype: a connected :class:`~boto.ec2.connection.EC2Connection`
166 :raises EnvironmentError: if no credentials are provided
167 """
168 if not creds:
169 raise EnvironmentError('No credentials provided')
170 return ec2.connect_to_region(region,
171 aws_access_key_id=creds['access_key'],
172 aws_secret_access_key=creds['secret_key'])
173
174
175 def _prepare_run_kwargs(params):
176 # start with sane defaults
177 kwargs = {
178 'min_count': 1, 'max_count': 1,
179 'user_data': None, 'addressing_type': None,
180 'instance_type': None, 'placement': None,
181 'kernel_id': None, 'ramdisk_id': None,
182 'monitoring_enabled': False, 'subnet_id': None,
183 'block_device_map': None,
184 }
185 # convert zone "any" to NoneType
186 requested_zone = params.get('zone')
187 if requested_zone and requested_zone.lower() == 'any':
188 requested_zone = None
189 # lookup kwargs from params
190 param_kwargs = {
191 'instance_type': params.get('size', 'm1.medium'),
192 'security_groups': params['security_groups'],
193 'placement': requested_zone,
194 'key_name': params['key_name'],
195 'kernel_id': params.get('kernel', None),
196 }
197 # add user_data if provided in params
198 user_data = params.get('user_data')
199 if user_data:
200 kwargs.update({'user_data': user_data})
201 # params override defaults
202 kwargs.update(param_kwargs)
203 return kwargs
204
205
206 def _format_metadata(boto):
207 return {
208 'architecture': boto.architecture,
209 'block_device_mapping': {
210 k: v.volume_id for k, v in boto.block_device_mapping.items()
211 },
212 'client_token': boto.client_token,
213 'dns_name': boto.dns_name,
214 'ebs_optimized': boto.ebs_optimized,
215 'eventsSet': boto.eventsSet,
216 'group_name': boto.group_name,
217 'groups': [g.id for g in boto.groups],
218 'hypervisor': boto.hypervisor,
219 'id': boto.id,
220 'image_id': boto.image_id,
221 'instance_profile': boto.instance_profile,
222 'instance_type': boto.instance_type,
223 'interfaces': list(boto.interfaces),
224 'ip_address': boto.ip_address,
225 'kernel': boto.kernel,
226 'key_name': boto.key_name,
227 'launch_time': boto.launch_time,
228 'monitored': boto.monitored,
229 'monitoring_state': boto.monitoring_state,
230 'persistent': boto.persistent,
231 'placement': boto.placement,
232 'placement_group': boto.placement_group,
233 'placement_tenancy': boto.placement_tenancy,
234 'previous_state': boto.previous_state,
235 'private_dns_name': boto.private_dns_name,
236 'private_ip_address': boto.private_ip_address,
237 'public_dns_name': boto.public_dns_name,
238 'ramdisk': boto.ramdisk,
239 'region': boto.region.name,
240 'root_device_name': boto.root_device_name,
241 'root_device_type': boto.root_device_type,
242 'spot_instance_request_id': boto.spot_instance_request_id,
243 'state': boto.state,
244 'state_code': boto.state_code,
245 'state_reason': boto.state_reason,
246 'subnet_id': boto.subnet_id,
247 'tags': dict(boto.tags),
248 'virtualization_type': boto.virtualization_type,
249 'vpc_id': boto.vpc_id,
250 }
251
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/provider/ec2.py b/provider/ec2.py
--- a/provider/ec2.py
+++ b/provider/ec2.py
@@ -17,14 +17,14 @@
# Deis-optimized EC2 amis -- with 3.8 kernel, chef 11 deps,
# and large docker images (e.g. buildstep) pre-installed
IMAGE_MAP = {
- 'ap-northeast-1': 'ami-6da8356c',
- 'ap-southeast-1': 'ami-a66f24f4',
- 'ap-southeast-2': 'ami-d5f66bef',
- 'eu-west-1': 'ami-acbf5adb',
- 'sa-east-1': 'ami-f9fd5ae4',
- 'us-east-1': 'ami-69f3bc00',
- 'us-west-1': 'ami-f0695cb5',
- 'us-west-2': 'ami-ea1e82da',
+ 'ap-northeast-1': 'ami-d95ac4d8',
+ 'ap-southeast-1': 'ami-1823694a',
+ 'ap-southeast-2': 'ami-e56af7df',
+ 'eu-west-1': 'ami-7447a003',
+ 'sa-east-1': 'ami-334bec2e',
+ 'us-east-1': 'ami-493d6a20',
+ 'us-west-1': 'ami-0e2b1f4b',
+ 'us-west-2': 'ami-72e27c42',
}
|
{"golden_diff": "diff --git a/provider/ec2.py b/provider/ec2.py\n--- a/provider/ec2.py\n+++ b/provider/ec2.py\n@@ -17,14 +17,14 @@\n # Deis-optimized EC2 amis -- with 3.8 kernel, chef 11 deps,\n # and large docker images (e.g. buildstep) pre-installed\n IMAGE_MAP = {\n- 'ap-northeast-1': 'ami-6da8356c',\n- 'ap-southeast-1': 'ami-a66f24f4',\n- 'ap-southeast-2': 'ami-d5f66bef',\n- 'eu-west-1': 'ami-acbf5adb',\n- 'sa-east-1': 'ami-f9fd5ae4',\n- 'us-east-1': 'ami-69f3bc00',\n- 'us-west-1': 'ami-f0695cb5',\n- 'us-west-2': 'ami-ea1e82da',\n+ 'ap-northeast-1': 'ami-d95ac4d8',\n+ 'ap-southeast-1': 'ami-1823694a',\n+ 'ap-southeast-2': 'ami-e56af7df',\n+ 'eu-west-1': 'ami-7447a003',\n+ 'sa-east-1': 'ami-334bec2e',\n+ 'us-east-1': 'ami-493d6a20',\n+ 'us-west-1': 'ami-0e2b1f4b',\n+ 'us-west-2': 'ami-72e27c42',\n }\n", "issue": "Update AMIs in all EC2 regions\nOur images are behind on some kernel and security updates and should be re-published as v0.1.0 versions. It's also kind of a performance optimization since we do apt-get upgrade during bootstrap.\n\n", "before_files": [{"content": "\"\"\"\nDeis cloud provider implementation for Amazon EC2.\n\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport json\nimport time\n\nfrom boto import ec2\nfrom boto.exception import EC2ResponseError\n\n# from api.ssh import connect_ssh, exec_ssh\nfrom deis import settings\n\n\n# Deis-optimized EC2 amis -- with 3.8 kernel, chef 11 deps,\n# and large docker images (e.g. buildstep) pre-installed\nIMAGE_MAP = {\n 'ap-northeast-1': 'ami-6da8356c',\n 'ap-southeast-1': 'ami-a66f24f4',\n 'ap-southeast-2': 'ami-d5f66bef',\n 'eu-west-1': 'ami-acbf5adb',\n 'sa-east-1': 'ami-f9fd5ae4',\n 'us-east-1': 'ami-69f3bc00',\n 'us-west-1': 'ami-f0695cb5',\n 'us-west-2': 'ami-ea1e82da',\n}\n\n\ndef seed_flavors():\n \"\"\"Seed the database with default flavors for each EC2 region.\n\n :rtype: list of dicts containing flavor data\n \"\"\"\n flavors = []\n for r in ('us-east-1', 'us-west-1', 'us-west-2', 'eu-west-1',\n 'ap-northeast-1', 'ap-southeast-1', 'ap-southeast-2',\n 'sa-east-1'):\n flavors.append({'id': 'ec2-{}'.format(r),\n 'provider': 'ec2',\n 'params': json.dumps({\n 'region': r,\n 'image': IMAGE_MAP[r],\n 'zone': 'any',\n 'size': 'm1.medium'})})\n return flavors\n\n\ndef build_layer(layer):\n \"\"\"\n Build a layer.\n\n :param layer: a dict containing formation, id, params, and creds info\n \"\"\"\n region = layer['params'].get('region', 'us-east-1')\n conn = _create_ec2_connection(layer['creds'], region)\n # create a new sg and authorize all ports\n # use iptables on the host to firewall ports\n name = \"{formation}-{id}\".format(**layer)\n sg = conn.create_security_group(name, 'Created by Deis')\n # import a new keypair using the layer key material\n conn.import_key_pair(name, layer['ssh_public_key'])\n # loop until the sg is *actually* there\n for i in xrange(10):\n try:\n sg.authorize(ip_protocol='tcp', from_port=1, to_port=65535,\n cidr_ip='0.0.0.0/0')\n break\n except EC2ResponseError:\n if i < 10:\n time.sleep(1.5)\n continue\n else:\n raise RuntimeError('Failed to authorize security group')\n\n\ndef destroy_layer(layer):\n \"\"\"\n Destroy a layer.\n\n :param layer: a dict containing formation, id, params, and creds info\n \"\"\"\n region = layer['params'].get('region', 'us-east-1')\n name = \"{formation}-{id}\".format(**layer)\n conn = _create_ec2_connection(layer['creds'], region)\n conn.delete_key_pair(name)\n # there's an ec2 race condition on instances terminating\n # successfully but still holding a lock on 
the security group\n # let's take a nap\n time.sleep(5)\n try:\n conn.delete_security_group(name)\n except EC2ResponseError as e:\n if e.code != 'InvalidGroup.NotFound':\n raise e\n\n\ndef build_node(node):\n \"\"\"\n Build a node.\n\n :param node: a dict containing formation, layer, params, and creds info.\n :rtype: a tuple of (provider_id, fully_qualified_domain_name, metadata)\n \"\"\"\n params, creds = node['params'], node['creds']\n region = params.setdefault('region', 'us-east-1')\n conn = _create_ec2_connection(creds, region)\n name = \"{formation}-{layer}\".format(**node)\n params['key_name'] = name\n sg = conn.get_all_security_groups(name)[0]\n params.setdefault('security_groups', []).append(sg.name)\n image_id = params.get(\n 'image', getattr(settings, 'IMAGE_MAP', IMAGE_MAP)[region])\n images = conn.get_all_images([image_id])\n if len(images) != 1:\n raise LookupError('Could not find AMI: %s' % image_id)\n image = images[0]\n kwargs = _prepare_run_kwargs(params)\n reservation = image.run(**kwargs)\n instances = reservation.instances\n boto = instances[0]\n # sleep before tagging\n time.sleep(10)\n boto.update()\n boto.add_tag('Name', node['id'])\n # loop until running\n while(True):\n time.sleep(2)\n boto.update()\n if boto.state == 'running':\n break\n # prepare return values\n provider_id = boto.id\n fqdn = boto.public_dns_name\n metadata = _format_metadata(boto)\n return provider_id, fqdn, metadata\n\n\ndef destroy_node(node):\n \"\"\"\n Destroy a node.\n\n :param node: a dict containing a node's provider_id, params, and creds\n \"\"\"\n provider_id = node['provider_id']\n region = node['params'].get('region', 'us-east-1')\n conn = _create_ec2_connection(node['creds'], region)\n if provider_id:\n conn.terminate_instances([provider_id])\n i = conn.get_all_instances([provider_id])[0].instances[0]\n while(True):\n time.sleep(2)\n i.update()\n if i.state == \"terminated\":\n break\n\n\ndef _create_ec2_connection(creds, region):\n \"\"\"\n Connect to an EC2 region with the given credentials.\n\n :param creds: a dict containing an EC2 access_key and secret_key\n :region: the name of an EC2 region, such as \"us-west-2\"\n :rtype: a connected :class:`~boto.ec2.connection.EC2Connection`\n :raises EnvironmentError: if no credentials are provided\n \"\"\"\n if not creds:\n raise EnvironmentError('No credentials provided')\n return ec2.connect_to_region(region,\n aws_access_key_id=creds['access_key'],\n aws_secret_access_key=creds['secret_key'])\n\n\ndef _prepare_run_kwargs(params):\n # start with sane defaults\n kwargs = {\n 'min_count': 1, 'max_count': 1,\n 'user_data': None, 'addressing_type': None,\n 'instance_type': None, 'placement': None,\n 'kernel_id': None, 'ramdisk_id': None,\n 'monitoring_enabled': False, 'subnet_id': None,\n 'block_device_map': None,\n }\n # convert zone \"any\" to NoneType\n requested_zone = params.get('zone')\n if requested_zone and requested_zone.lower() == 'any':\n requested_zone = None\n # lookup kwargs from params\n param_kwargs = {\n 'instance_type': params.get('size', 'm1.medium'),\n 'security_groups': params['security_groups'],\n 'placement': requested_zone,\n 'key_name': params['key_name'],\n 'kernel_id': params.get('kernel', None),\n }\n # add user_data if provided in params\n user_data = params.get('user_data')\n if user_data:\n kwargs.update({'user_data': user_data})\n # params override defaults\n kwargs.update(param_kwargs)\n return kwargs\n\n\ndef _format_metadata(boto):\n return {\n 'architecture': boto.architecture,\n 'block_device_mapping': 
{\n k: v.volume_id for k, v in boto.block_device_mapping.items()\n },\n 'client_token': boto.client_token,\n 'dns_name': boto.dns_name,\n 'ebs_optimized': boto.ebs_optimized,\n 'eventsSet': boto.eventsSet,\n 'group_name': boto.group_name,\n 'groups': [g.id for g in boto.groups],\n 'hypervisor': boto.hypervisor,\n 'id': boto.id,\n 'image_id': boto.image_id,\n 'instance_profile': boto.instance_profile,\n 'instance_type': boto.instance_type,\n 'interfaces': list(boto.interfaces),\n 'ip_address': boto.ip_address,\n 'kernel': boto.kernel,\n 'key_name': boto.key_name,\n 'launch_time': boto.launch_time,\n 'monitored': boto.monitored,\n 'monitoring_state': boto.monitoring_state,\n 'persistent': boto.persistent,\n 'placement': boto.placement,\n 'placement_group': boto.placement_group,\n 'placement_tenancy': boto.placement_tenancy,\n 'previous_state': boto.previous_state,\n 'private_dns_name': boto.private_dns_name,\n 'private_ip_address': boto.private_ip_address,\n 'public_dns_name': boto.public_dns_name,\n 'ramdisk': boto.ramdisk,\n 'region': boto.region.name,\n 'root_device_name': boto.root_device_name,\n 'root_device_type': boto.root_device_type,\n 'spot_instance_request_id': boto.spot_instance_request_id,\n 'state': boto.state,\n 'state_code': boto.state_code,\n 'state_reason': boto.state_reason,\n 'subnet_id': boto.subnet_id,\n 'tags': dict(boto.tags),\n 'virtualization_type': boto.virtualization_type,\n 'vpc_id': boto.vpc_id,\n }\n", "path": "provider/ec2.py"}], "after_files": [{"content": "\"\"\"\nDeis cloud provider implementation for Amazon EC2.\n\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport json\nimport time\n\nfrom boto import ec2\nfrom boto.exception import EC2ResponseError\n\n# from api.ssh import connect_ssh, exec_ssh\nfrom deis import settings\n\n\n# Deis-optimized EC2 amis -- with 3.8 kernel, chef 11 deps,\n# and large docker images (e.g. 
buildstep) pre-installed\nIMAGE_MAP = {\n 'ap-northeast-1': 'ami-d95ac4d8',\n 'ap-southeast-1': 'ami-1823694a',\n 'ap-southeast-2': 'ami-e56af7df',\n 'eu-west-1': 'ami-7447a003',\n 'sa-east-1': 'ami-334bec2e',\n 'us-east-1': 'ami-493d6a20',\n 'us-west-1': 'ami-0e2b1f4b',\n 'us-west-2': 'ami-72e27c42',\n}\n\n\ndef seed_flavors():\n \"\"\"Seed the database with default flavors for each EC2 region.\n\n :rtype: list of dicts containing flavor data\n \"\"\"\n flavors = []\n for r in ('us-east-1', 'us-west-1', 'us-west-2', 'eu-west-1',\n 'ap-northeast-1', 'ap-southeast-1', 'ap-southeast-2',\n 'sa-east-1'):\n flavors.append({'id': 'ec2-{}'.format(r),\n 'provider': 'ec2',\n 'params': json.dumps({\n 'region': r,\n 'image': IMAGE_MAP[r],\n 'zone': 'any',\n 'size': 'm1.medium'})})\n return flavors\n\n\ndef build_layer(layer):\n \"\"\"\n Build a layer.\n\n :param layer: a dict containing formation, id, params, and creds info\n \"\"\"\n region = layer['params'].get('region', 'us-east-1')\n conn = _create_ec2_connection(layer['creds'], region)\n # create a new sg and authorize all ports\n # use iptables on the host to firewall ports\n name = \"{formation}-{id}\".format(**layer)\n sg = conn.create_security_group(name, 'Created by Deis')\n # import a new keypair using the layer key material\n conn.import_key_pair(name, layer['ssh_public_key'])\n # loop until the sg is *actually* there\n for i in xrange(10):\n try:\n sg.authorize(ip_protocol='tcp', from_port=1, to_port=65535,\n cidr_ip='0.0.0.0/0')\n break\n except EC2ResponseError:\n if i < 10:\n time.sleep(1.5)\n continue\n else:\n raise RuntimeError('Failed to authorize security group')\n\n\ndef destroy_layer(layer):\n \"\"\"\n Destroy a layer.\n\n :param layer: a dict containing formation, id, params, and creds info\n \"\"\"\n region = layer['params'].get('region', 'us-east-1')\n name = \"{formation}-{id}\".format(**layer)\n conn = _create_ec2_connection(layer['creds'], region)\n conn.delete_key_pair(name)\n # there's an ec2 race condition on instances terminating\n # successfully but still holding a lock on the security group\n # let's take a nap\n time.sleep(5)\n try:\n conn.delete_security_group(name)\n except EC2ResponseError as e:\n if e.code != 'InvalidGroup.NotFound':\n raise e\n\n\ndef build_node(node):\n \"\"\"\n Build a node.\n\n :param node: a dict containing formation, layer, params, and creds info.\n :rtype: a tuple of (provider_id, fully_qualified_domain_name, metadata)\n \"\"\"\n params, creds = node['params'], node['creds']\n region = params.setdefault('region', 'us-east-1')\n conn = _create_ec2_connection(creds, region)\n name = \"{formation}-{layer}\".format(**node)\n params['key_name'] = name\n sg = conn.get_all_security_groups(name)[0]\n params.setdefault('security_groups', []).append(sg.name)\n image_id = params.get(\n 'image', getattr(settings, 'IMAGE_MAP', IMAGE_MAP)[region])\n images = conn.get_all_images([image_id])\n if len(images) != 1:\n raise LookupError('Could not find AMI: %s' % image_id)\n image = images[0]\n kwargs = _prepare_run_kwargs(params)\n reservation = image.run(**kwargs)\n instances = reservation.instances\n boto = instances[0]\n # sleep before tagging\n time.sleep(10)\n boto.update()\n boto.add_tag('Name', node['id'])\n # loop until running\n while(True):\n time.sleep(2)\n boto.update()\n if boto.state == 'running':\n break\n # prepare return values\n provider_id = boto.id\n fqdn = boto.public_dns_name\n metadata = _format_metadata(boto)\n return provider_id, fqdn, metadata\n\n\ndef destroy_node(node):\n 
\"\"\"\n Destroy a node.\n\n :param node: a dict containing a node's provider_id, params, and creds\n \"\"\"\n provider_id = node['provider_id']\n region = node['params'].get('region', 'us-east-1')\n conn = _create_ec2_connection(node['creds'], region)\n if provider_id:\n conn.terminate_instances([provider_id])\n i = conn.get_all_instances([provider_id])[0].instances[0]\n while(True):\n time.sleep(2)\n i.update()\n if i.state == \"terminated\":\n break\n\n\ndef _create_ec2_connection(creds, region):\n \"\"\"\n Connect to an EC2 region with the given credentials.\n\n :param creds: a dict containing an EC2 access_key and secret_key\n :region: the name of an EC2 region, such as \"us-west-2\"\n :rtype: a connected :class:`~boto.ec2.connection.EC2Connection`\n :raises EnvironmentError: if no credentials are provided\n \"\"\"\n if not creds:\n raise EnvironmentError('No credentials provided')\n return ec2.connect_to_region(region,\n aws_access_key_id=creds['access_key'],\n aws_secret_access_key=creds['secret_key'])\n\n\ndef _prepare_run_kwargs(params):\n # start with sane defaults\n kwargs = {\n 'min_count': 1, 'max_count': 1,\n 'user_data': None, 'addressing_type': None,\n 'instance_type': None, 'placement': None,\n 'kernel_id': None, 'ramdisk_id': None,\n 'monitoring_enabled': False, 'subnet_id': None,\n 'block_device_map': None,\n }\n # convert zone \"any\" to NoneType\n requested_zone = params.get('zone')\n if requested_zone and requested_zone.lower() == 'any':\n requested_zone = None\n # lookup kwargs from params\n param_kwargs = {\n 'instance_type': params.get('size', 'm1.medium'),\n 'security_groups': params['security_groups'],\n 'placement': requested_zone,\n 'key_name': params['key_name'],\n 'kernel_id': params.get('kernel', None),\n }\n # add user_data if provided in params\n user_data = params.get('user_data')\n if user_data:\n kwargs.update({'user_data': user_data})\n # params override defaults\n kwargs.update(param_kwargs)\n return kwargs\n\n\ndef _format_metadata(boto):\n return {\n 'architecture': boto.architecture,\n 'block_device_mapping': {\n k: v.volume_id for k, v in boto.block_device_mapping.items()\n },\n 'client_token': boto.client_token,\n 'dns_name': boto.dns_name,\n 'ebs_optimized': boto.ebs_optimized,\n 'eventsSet': boto.eventsSet,\n 'group_name': boto.group_name,\n 'groups': [g.id for g in boto.groups],\n 'hypervisor': boto.hypervisor,\n 'id': boto.id,\n 'image_id': boto.image_id,\n 'instance_profile': boto.instance_profile,\n 'instance_type': boto.instance_type,\n 'interfaces': list(boto.interfaces),\n 'ip_address': boto.ip_address,\n 'kernel': boto.kernel,\n 'key_name': boto.key_name,\n 'launch_time': boto.launch_time,\n 'monitored': boto.monitored,\n 'monitoring_state': boto.monitoring_state,\n 'persistent': boto.persistent,\n 'placement': boto.placement,\n 'placement_group': boto.placement_group,\n 'placement_tenancy': boto.placement_tenancy,\n 'previous_state': boto.previous_state,\n 'private_dns_name': boto.private_dns_name,\n 'private_ip_address': boto.private_ip_address,\n 'public_dns_name': boto.public_dns_name,\n 'ramdisk': boto.ramdisk,\n 'region': boto.region.name,\n 'root_device_name': boto.root_device_name,\n 'root_device_type': boto.root_device_type,\n 'spot_instance_request_id': boto.spot_instance_request_id,\n 'state': boto.state,\n 'state_code': boto.state_code,\n 'state_reason': boto.state_reason,\n 'subnet_id': boto.subnet_id,\n 'tags': dict(boto.tags),\n 'virtualization_type': boto.virtualization_type,\n 'vpc_id': boto.vpc_id,\n }\n", "path": 
"provider/ec2.py"}]}
| 3,055 | 392 |
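The deis row above changes only data: the per-region AMI IDs in `IMAGE_MAP`, which `build_node` resolves via `params.get('image', getattr(settings, 'IMAGE_MAP', IMAGE_MAP)[region])`. A hypothetical helper (not part of the patch) that checks each replacement AMI actually resolves in its region, using the same boto calls the provider module already imports:

```python
from boto import ec2
from boto.exception import EC2ResponseError


def verify_amis(image_map, access_key, secret_key):
    """Return the {region: ami_id} entries that could not be found."""
    missing = {}
    for region, image_id in image_map.items():
        conn = ec2.connect_to_region(region,
                                     aws_access_key_id=access_key,
                                     aws_secret_access_key=secret_key)
        try:
            images = conn.get_all_images([image_id]) if conn else []
        except EC2ResponseError:
            images = []
        if len(images) != 1:
            missing[region] = image_id
    return missing

# usage sketch: verify_amis({'us-east-1': 'ami-493d6a20'}, ACCESS_KEY, SECRET_KEY)
```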
gh_patches_debug_33155
|
rasdani/github-patches
|
git_diff
|
fossasia__open-event-server-6566
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Call for Speakers Signup Form: "Don't require this email" possible for everyone
Server issue for [fossasia/open-event-frontend#3506](https://github.com/fossasia/open-event-frontend/issues/3506)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/api/speakers.py`
Content:
```
1 from flask import request
2 from flask_login import current_user
3 from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship
4 from flask_rest_jsonapi.exceptions import ObjectNotFound
5
6 from app.api.bootstrap import api
7 from app.api.helpers.db import safe_query, get_count, save_to_db
8 from app.api.helpers.exceptions import ForbiddenException
9 from app.api.helpers.permission_manager import has_access
10 from app.api.helpers.query import event_query
11 from app.api.helpers.utilities import require_relationship
12 from app.api.schema.speakers import SpeakerSchema
13 from app.models import db
14 from app.models.event import Event
15 from app.models.session import Session
16 from app.models.speaker import Speaker
17 from app.models.session_speaker_link import SessionsSpeakersLink
18 from app.models.user import User
19
20
21 class SpeakerListPost(ResourceList):
22 """
23 List and create speakers
24 """
25
26 def before_post(self, args, kwargs, data=None):
27 """
28 method to add user_id to view_kwargs before post
29 :param args:
30 :param kwargs:
31 :param data:
32 :return:
33 """
34 require_relationship(['event', 'user'], data)
35
36 if not has_access('is_coorganizer', event_id=data['event']):
37 event = db.session.query(Event).filter_by(id=data['event']).one()
38 if event.state == "draft":
39 raise ObjectNotFound({'parameter': 'event_id'},
40 "Event: {} not found".format(data['event_id']))
41
42 if get_count(db.session.query(Event).filter_by(id=int(data['event']), is_sessions_speakers_enabled=False)) > 0:
43 raise ForbiddenException({'pointer': ''}, "Speakers are disabled for this Event")
44
45 if not data.get('is_email_overridden') and \
46 get_count(db.session.query(Speaker).filter_by(event_id=int(data['event']), email=data['email'],
47 deleted_at=None)) > 0:
48 raise ForbiddenException({'pointer': ''}, 'Speaker with this Email ID already exists')
49
50 if data.get('is_email_overriden') and not has_access('is_organizer', event_id=data['event']):
51 raise ForbiddenException({'pointer': 'data/attributes/is_email_overriden'},
52 'Organizer access required to override email')
53 elif data.get('is_email_overriden') and has_access('is_organizer', event_id=data['event']) and \
54 not data.get('email'):
55 data['email'] = current_user.email
56
57 if 'sessions' in data:
58 session_ids = data['sessions']
59 for session_id in session_ids:
60 if not has_access('is_session_self_submitted', session_id=session_id):
61 raise ObjectNotFound({'parameter': 'session_id'},
62 "Session: {} not found".format(session_id))
63
64 def after_create_object(self, speaker, data, view_kwargs):
65 """
66 after create method to save resized images for speaker
67 :param speaker:
68 :param data:
69 :param view_kwargs:
70 :return:
71 """
72
73 if data.get('photo_url'):
74 start_image_resizing_tasks(speaker, data['photo_url'])
75
76 schema = SpeakerSchema
77 methods = ['POST', ]
78 data_layer = {'session': db.session,
79 'model': Speaker,
80 'methods': {
81 'after_create_object': after_create_object
82 }}
83
84
85 class SpeakerList(ResourceList):
86 """
87 List speakers based on different params from view_kwargs
88 """
89
90 def query(self, view_kwargs):
91 """
92 query method for speakers list class
93 :param view_kwargs:
94 :return:
95 """
96 query_ = self.session.query(Speaker)
97 query_ = event_query(self, query_, view_kwargs)
98
99 if view_kwargs.get('user_id'):
100 user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')
101 query_ = query_.join(User).filter(User.id == user.id)
102
103 if view_kwargs.get('session_id'):
104 session = safe_query(self, Session, 'id', view_kwargs['session_id'], 'session_id')
105 # session-speaker :: many-to-many relationship
106 query_ = Speaker.query.filter(Speaker.sessions.any(id=session.id))
107 if 'Authorization' in request.headers and not has_access('is_coorganizer', event_id=session.event_id):
108 if not has_access('is_session_self_submitted', session_id=session.id):
109 query_ = query_.filter(Session.state == "approved" or Session.state == "accepted")
110
111 return query_
112
113 view_kwargs = True
114 schema = SpeakerSchema
115 methods = ['GET', ]
116 data_layer = {'session': db.session,
117 'model': Speaker,
118 'methods': {
119 'query': query,
120 }}
121
122
123 class SpeakerDetail(ResourceDetail):
124 """
125 Speakers Detail by id
126 """
127 def before_update_object(self, speaker, data, view_kwargs):
128 """
129 method to save image urls before updating speaker object
130 :param speaker:
131 :param data:
132 :param view_kwargs:
133 :return:
134 """
135 if data.get('photo_url') and data['photo_url'] != speaker.photo_url:
136 start_image_resizing_tasks(speaker, data['photo_url'])
137
138 if data.get('is_email_overriden') and not has_access('is_organizer', event_id=speaker.event_id):
139 raise ForbiddenException({'pointer': 'data/attributes/is_email_overriden'},
140 'Organizer access required to override email')
141 elif data.get('is_email_overriden') and has_access('is_organizer', event_id=speaker.event_id) and \
142 not data.get('email'):
143 data['email'] = current_user.email
144
145 def after_patch(self, result):
146 """
147 method to create session speaker link
148 :param result:
149 """
150 # This method is executed when a new speaker is created
151 # and added to an existing session
152 speaker_id = result['data']['id']
153 speaker = Speaker.query.filter_by(id=speaker_id).first()
154 if SessionsSpeakersLink.query.filter_by(speaker_id=speaker_id).count() == 0:
155 all_sessions = Session.query.filter_by(deleted_at=None)
156 for session in all_sessions:
157 if speaker in session.speakers:
158 session_speaker_link = SessionsSpeakersLink(session_state=session.state,
159 session_id=session.id,
160 event_id=session.event.id,
161 speaker_id=speaker.id)
162 save_to_db(session_speaker_link, "Session Speaker Link Saved")
163
164 decorators = (api.has_permission('is_speaker_itself_or_admin', methods="PATCH,DELETE", fetch="event_id",
165 fetch_as="event_id", model=Speaker),
166 api.has_permission('is_coorganizer_or_user_itself', methods="PATCH,DELETE", fetch="event_id",
167 fetch_as="event_id", model=Speaker),)
168 schema = SpeakerSchema
169 data_layer = {'session': db.session,
170 'model': Speaker,
171 'methods': {
172 'before_update_object': before_update_object
173 }}
174
175
176 class SpeakerRelationshipRequired(ResourceRelationship):
177 """
178 Speaker Relationship class for required entities
179 """
180 decorators = (api.has_permission('is_coorganizer_or_user_itself', methods="PATCH,DELETE", fetch="event_id",
181 fetch_as="event_id", model=Speaker),)
182 methods = ['GET', 'PATCH']
183 schema = SpeakerSchema
184 data_layer = {'session': db.session,
185 'model': Speaker}
186
187
188 class SpeakerRelationshipOptional(ResourceRelationship):
189 """
190 Speaker Relationship class
191 """
192 decorators = (api.has_permission('is_coorganizer_or_user_itself', methods="PATCH,DELETE", fetch="event_id",
193 fetch_as="event_id", model=Speaker),)
194 schema = SpeakerSchema
195 data_layer = {'session': db.session,
196 'model': Speaker}
197
198
199 def start_image_resizing_tasks(speaker, photo_url):
200 speaker_id = str(speaker.id)
201 from .helpers.tasks import resize_speaker_images_task
202 resize_speaker_images_task.delay(speaker_id, photo_url)
203
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/app/api/speakers.py b/app/api/speakers.py
--- a/app/api/speakers.py
+++ b/app/api/speakers.py
@@ -47,10 +47,10 @@
deleted_at=None)) > 0:
raise ForbiddenException({'pointer': ''}, 'Speaker with this Email ID already exists')
- if data.get('is_email_overriden') and not has_access('is_organizer', event_id=data['event']):
- raise ForbiddenException({'pointer': 'data/attributes/is_email_overriden'},
+ if data.get('is_email_overridden') and not has_access('is_organizer', event_id=data['event']):
+ raise ForbiddenException({'pointer': 'data/attributes/is_email_overridden'},
'Organizer access required to override email')
- elif data.get('is_email_overriden') and has_access('is_organizer', event_id=data['event']) and \
+ elif data.get('is_email_overridden') and has_access('is_organizer', event_id=data['event']) and \
not data.get('email'):
data['email'] = current_user.email
@@ -135,10 +135,10 @@
if data.get('photo_url') and data['photo_url'] != speaker.photo_url:
start_image_resizing_tasks(speaker, data['photo_url'])
- if data.get('is_email_overriden') and not has_access('is_organizer', event_id=speaker.event_id):
- raise ForbiddenException({'pointer': 'data/attributes/is_email_overriden'},
+ if data.get('is_email_overridden') and not has_access('is_organizer', event_id=speaker.event_id):
+ raise ForbiddenException({'pointer': 'data/attributes/is_email_overridden'},
'Organizer access required to override email')
- elif data.get('is_email_overriden') and has_access('is_organizer', event_id=speaker.event_id) and \
+ elif data.get('is_email_overridden') and has_access('is_organizer', event_id=speaker.event_id) and \
not data.get('email'):
data['email'] = current_user.email
|
{"golden_diff": "diff --git a/app/api/speakers.py b/app/api/speakers.py\n--- a/app/api/speakers.py\n+++ b/app/api/speakers.py\n@@ -47,10 +47,10 @@\n deleted_at=None)) > 0:\n raise ForbiddenException({'pointer': ''}, 'Speaker with this Email ID already exists')\n \n- if data.get('is_email_overriden') and not has_access('is_organizer', event_id=data['event']):\n- raise ForbiddenException({'pointer': 'data/attributes/is_email_overriden'},\n+ if data.get('is_email_overridden') and not has_access('is_organizer', event_id=data['event']):\n+ raise ForbiddenException({'pointer': 'data/attributes/is_email_overridden'},\n 'Organizer access required to override email')\n- elif data.get('is_email_overriden') and has_access('is_organizer', event_id=data['event']) and \\\n+ elif data.get('is_email_overridden') and has_access('is_organizer', event_id=data['event']) and \\\n not data.get('email'):\n data['email'] = current_user.email\n \n@@ -135,10 +135,10 @@\n if data.get('photo_url') and data['photo_url'] != speaker.photo_url:\n start_image_resizing_tasks(speaker, data['photo_url'])\n \n- if data.get('is_email_overriden') and not has_access('is_organizer', event_id=speaker.event_id):\n- raise ForbiddenException({'pointer': 'data/attributes/is_email_overriden'},\n+ if data.get('is_email_overridden') and not has_access('is_organizer', event_id=speaker.event_id):\n+ raise ForbiddenException({'pointer': 'data/attributes/is_email_overridden'},\n 'Organizer access required to override email')\n- elif data.get('is_email_overriden') and has_access('is_organizer', event_id=speaker.event_id) and \\\n+ elif data.get('is_email_overridden') and has_access('is_organizer', event_id=speaker.event_id) and \\\n not data.get('email'):\n data['email'] = current_user.email\n", "issue": "Call for Speakers Signup Form: \"Don't require this email\" possible for everyone\nServer issue for [fossasia/open-event-frontend#3506](https://github.com/fossasia/open-event-frontend/issues/3506) \n", "before_files": [{"content": "from flask import request\nfrom flask_login import current_user\nfrom flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\nfrom flask_rest_jsonapi.exceptions import ObjectNotFound\n\nfrom app.api.bootstrap import api\nfrom app.api.helpers.db import safe_query, get_count, save_to_db\nfrom app.api.helpers.exceptions import ForbiddenException\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.query import event_query\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.schema.speakers import SpeakerSchema\nfrom app.models import db\nfrom app.models.event import Event\nfrom app.models.session import Session\nfrom app.models.speaker import Speaker\nfrom app.models.session_speaker_link import SessionsSpeakersLink\nfrom app.models.user import User\n\n\nclass SpeakerListPost(ResourceList):\n \"\"\"\n List and create speakers\n \"\"\"\n\n def before_post(self, args, kwargs, data=None):\n \"\"\"\n method to add user_id to view_kwargs before post\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n require_relationship(['event', 'user'], data)\n\n if not has_access('is_coorganizer', event_id=data['event']):\n event = db.session.query(Event).filter_by(id=data['event']).one()\n if event.state == \"draft\":\n raise ObjectNotFound({'parameter': 'event_id'},\n \"Event: {} not found\".format(data['event_id']))\n\n if get_count(db.session.query(Event).filter_by(id=int(data['event']), is_sessions_speakers_enabled=False)) > 0:\n raise 
ForbiddenException({'pointer': ''}, \"Speakers are disabled for this Event\")\n\n if not data.get('is_email_overridden') and \\\n get_count(db.session.query(Speaker).filter_by(event_id=int(data['event']), email=data['email'],\n deleted_at=None)) > 0:\n raise ForbiddenException({'pointer': ''}, 'Speaker with this Email ID already exists')\n\n if data.get('is_email_overriden') and not has_access('is_organizer', event_id=data['event']):\n raise ForbiddenException({'pointer': 'data/attributes/is_email_overriden'},\n 'Organizer access required to override email')\n elif data.get('is_email_overriden') and has_access('is_organizer', event_id=data['event']) and \\\n not data.get('email'):\n data['email'] = current_user.email\n\n if 'sessions' in data:\n session_ids = data['sessions']\n for session_id in session_ids:\n if not has_access('is_session_self_submitted', session_id=session_id):\n raise ObjectNotFound({'parameter': 'session_id'},\n \"Session: {} not found\".format(session_id))\n\n def after_create_object(self, speaker, data, view_kwargs):\n \"\"\"\n after create method to save resized images for speaker\n :param speaker:\n :param data:\n :param view_kwargs:\n :return:\n \"\"\"\n\n if data.get('photo_url'):\n start_image_resizing_tasks(speaker, data['photo_url'])\n\n schema = SpeakerSchema\n methods = ['POST', ]\n data_layer = {'session': db.session,\n 'model': Speaker,\n 'methods': {\n 'after_create_object': after_create_object\n }}\n\n\nclass SpeakerList(ResourceList):\n \"\"\"\n List speakers based on different params from view_kwargs\n \"\"\"\n\n def query(self, view_kwargs):\n \"\"\"\n query method for speakers list class\n :param view_kwargs:\n :return:\n \"\"\"\n query_ = self.session.query(Speaker)\n query_ = event_query(self, query_, view_kwargs)\n\n if view_kwargs.get('user_id'):\n user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')\n query_ = query_.join(User).filter(User.id == user.id)\n\n if view_kwargs.get('session_id'):\n session = safe_query(self, Session, 'id', view_kwargs['session_id'], 'session_id')\n # session-speaker :: many-to-many relationship\n query_ = Speaker.query.filter(Speaker.sessions.any(id=session.id))\n if 'Authorization' in request.headers and not has_access('is_coorganizer', event_id=session.event_id):\n if not has_access('is_session_self_submitted', session_id=session.id):\n query_ = query_.filter(Session.state == \"approved\" or Session.state == \"accepted\")\n\n return query_\n\n view_kwargs = True\n schema = SpeakerSchema\n methods = ['GET', ]\n data_layer = {'session': db.session,\n 'model': Speaker,\n 'methods': {\n 'query': query,\n }}\n\n\nclass SpeakerDetail(ResourceDetail):\n \"\"\"\n Speakers Detail by id\n \"\"\"\n def before_update_object(self, speaker, data, view_kwargs):\n \"\"\"\n method to save image urls before updating speaker object\n :param speaker:\n :param data:\n :param view_kwargs:\n :return:\n \"\"\"\n if data.get('photo_url') and data['photo_url'] != speaker.photo_url:\n start_image_resizing_tasks(speaker, data['photo_url'])\n\n if data.get('is_email_overriden') and not has_access('is_organizer', event_id=speaker.event_id):\n raise ForbiddenException({'pointer': 'data/attributes/is_email_overriden'},\n 'Organizer access required to override email')\n elif data.get('is_email_overriden') and has_access('is_organizer', event_id=speaker.event_id) and \\\n not data.get('email'):\n data['email'] = current_user.email\n\n def after_patch(self, result):\n \"\"\"\n method to create session speaker link\n :param 
result:\n \"\"\"\n # This method is executed when a new speaker is created\n # and added to an existing session\n speaker_id = result['data']['id']\n speaker = Speaker.query.filter_by(id=speaker_id).first()\n if SessionsSpeakersLink.query.filter_by(speaker_id=speaker_id).count() == 0:\n all_sessions = Session.query.filter_by(deleted_at=None)\n for session in all_sessions:\n if speaker in session.speakers:\n session_speaker_link = SessionsSpeakersLink(session_state=session.state,\n session_id=session.id,\n event_id=session.event.id,\n speaker_id=speaker.id)\n save_to_db(session_speaker_link, \"Session Speaker Link Saved\")\n\n decorators = (api.has_permission('is_speaker_itself_or_admin', methods=\"PATCH,DELETE\", fetch=\"event_id\",\n fetch_as=\"event_id\", model=Speaker),\n api.has_permission('is_coorganizer_or_user_itself', methods=\"PATCH,DELETE\", fetch=\"event_id\",\n fetch_as=\"event_id\", model=Speaker),)\n schema = SpeakerSchema\n data_layer = {'session': db.session,\n 'model': Speaker,\n 'methods': {\n 'before_update_object': before_update_object\n }}\n\n\nclass SpeakerRelationshipRequired(ResourceRelationship):\n \"\"\"\n Speaker Relationship class for required entities\n \"\"\"\n decorators = (api.has_permission('is_coorganizer_or_user_itself', methods=\"PATCH,DELETE\", fetch=\"event_id\",\n fetch_as=\"event_id\", model=Speaker),)\n methods = ['GET', 'PATCH']\n schema = SpeakerSchema\n data_layer = {'session': db.session,\n 'model': Speaker}\n\n\nclass SpeakerRelationshipOptional(ResourceRelationship):\n \"\"\"\n Speaker Relationship class\n \"\"\"\n decorators = (api.has_permission('is_coorganizer_or_user_itself', methods=\"PATCH,DELETE\", fetch=\"event_id\",\n fetch_as=\"event_id\", model=Speaker),)\n schema = SpeakerSchema\n data_layer = {'session': db.session,\n 'model': Speaker}\n\n\ndef start_image_resizing_tasks(speaker, photo_url):\n speaker_id = str(speaker.id)\n from .helpers.tasks import resize_speaker_images_task\n resize_speaker_images_task.delay(speaker_id, photo_url)\n", "path": "app/api/speakers.py"}], "after_files": [{"content": "from flask import request\nfrom flask_login import current_user\nfrom flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\nfrom flask_rest_jsonapi.exceptions import ObjectNotFound\n\nfrom app.api.bootstrap import api\nfrom app.api.helpers.db import safe_query, get_count, save_to_db\nfrom app.api.helpers.exceptions import ForbiddenException\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.query import event_query\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.schema.speakers import SpeakerSchema\nfrom app.models import db\nfrom app.models.event import Event\nfrom app.models.session import Session\nfrom app.models.speaker import Speaker\nfrom app.models.session_speaker_link import SessionsSpeakersLink\nfrom app.models.user import User\n\n\nclass SpeakerListPost(ResourceList):\n \"\"\"\n List and create speakers\n \"\"\"\n\n def before_post(self, args, kwargs, data=None):\n \"\"\"\n method to add user_id to view_kwargs before post\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n require_relationship(['event', 'user'], data)\n\n if not has_access('is_coorganizer', event_id=data['event']):\n event = db.session.query(Event).filter_by(id=data['event']).one()\n if event.state == \"draft\":\n raise ObjectNotFound({'parameter': 'event_id'},\n \"Event: {} not found\".format(data['event_id']))\n\n if 
get_count(db.session.query(Event).filter_by(id=int(data['event']), is_sessions_speakers_enabled=False)) > 0:\n raise ForbiddenException({'pointer': ''}, \"Speakers are disabled for this Event\")\n\n if not data.get('is_email_overridden') and \\\n get_count(db.session.query(Speaker).filter_by(event_id=int(data['event']), email=data['email'],\n deleted_at=None)) > 0:\n raise ForbiddenException({'pointer': ''}, 'Speaker with this Email ID already exists')\n\n if data.get('is_email_overridden') and not has_access('is_organizer', event_id=data['event']):\n raise ForbiddenException({'pointer': 'data/attributes/is_email_overridden'},\n 'Organizer access required to override email')\n elif data.get('is_email_overridden') and has_access('is_organizer', event_id=data['event']) and \\\n not data.get('email'):\n data['email'] = current_user.email\n\n if 'sessions' in data:\n session_ids = data['sessions']\n for session_id in session_ids:\n if not has_access('is_session_self_submitted', session_id=session_id):\n raise ObjectNotFound({'parameter': 'session_id'},\n \"Session: {} not found\".format(session_id))\n\n def after_create_object(self, speaker, data, view_kwargs):\n \"\"\"\n after create method to save resized images for speaker\n :param speaker:\n :param data:\n :param view_kwargs:\n :return:\n \"\"\"\n\n if data.get('photo_url'):\n start_image_resizing_tasks(speaker, data['photo_url'])\n\n schema = SpeakerSchema\n methods = ['POST', ]\n data_layer = {'session': db.session,\n 'model': Speaker,\n 'methods': {\n 'after_create_object': after_create_object\n }}\n\n\nclass SpeakerList(ResourceList):\n \"\"\"\n List speakers based on different params from view_kwargs\n \"\"\"\n\n def query(self, view_kwargs):\n \"\"\"\n query method for speakers list class\n :param view_kwargs:\n :return:\n \"\"\"\n query_ = self.session.query(Speaker)\n query_ = event_query(self, query_, view_kwargs)\n\n if view_kwargs.get('user_id'):\n user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')\n query_ = query_.join(User).filter(User.id == user.id)\n\n if view_kwargs.get('session_id'):\n session = safe_query(self, Session, 'id', view_kwargs['session_id'], 'session_id')\n # session-speaker :: many-to-many relationship\n query_ = Speaker.query.filter(Speaker.sessions.any(id=session.id))\n if 'Authorization' in request.headers and not has_access('is_coorganizer', event_id=session.event_id):\n if not has_access('is_session_self_submitted', session_id=session.id):\n query_ = query_.filter(Session.state == \"approved\" or Session.state == \"accepted\")\n\n return query_\n\n view_kwargs = True\n schema = SpeakerSchema\n methods = ['GET', ]\n data_layer = {'session': db.session,\n 'model': Speaker,\n 'methods': {\n 'query': query,\n }}\n\n\nclass SpeakerDetail(ResourceDetail):\n \"\"\"\n Speakers Detail by id\n \"\"\"\n def before_update_object(self, speaker, data, view_kwargs):\n \"\"\"\n method to save image urls before updating speaker object\n :param speaker:\n :param data:\n :param view_kwargs:\n :return:\n \"\"\"\n if data.get('photo_url') and data['photo_url'] != speaker.photo_url:\n start_image_resizing_tasks(speaker, data['photo_url'])\n\n if data.get('is_email_overridden') and not has_access('is_organizer', event_id=speaker.event_id):\n raise ForbiddenException({'pointer': 'data/attributes/is_email_overridden'},\n 'Organizer access required to override email')\n elif data.get('is_email_overridden') and has_access('is_organizer', event_id=speaker.event_id) and \\\n not data.get('email'):\n data['email'] 
= current_user.email\n\n def after_patch(self, result):\n \"\"\"\n method to create session speaker link\n :param result:\n \"\"\"\n # This method is executed when a new speaker is created\n # and added to an existing session\n speaker_id = result['data']['id']\n speaker = Speaker.query.filter_by(id=speaker_id).first()\n if SessionsSpeakersLink.query.filter_by(speaker_id=speaker_id).count() == 0:\n all_sessions = Session.query.filter_by(deleted_at=None)\n for session in all_sessions:\n if speaker in session.speakers:\n session_speaker_link = SessionsSpeakersLink(session_state=session.state,\n session_id=session.id,\n event_id=session.event.id,\n speaker_id=speaker.id)\n save_to_db(session_speaker_link, \"Session Speaker Link Saved\")\n\n decorators = (api.has_permission('is_speaker_itself_or_admin', methods=\"PATCH,DELETE\", fetch=\"event_id\",\n fetch_as=\"event_id\", model=Speaker),\n api.has_permission('is_coorganizer_or_user_itself', methods=\"PATCH,DELETE\", fetch=\"event_id\",\n fetch_as=\"event_id\", model=Speaker),)\n schema = SpeakerSchema\n data_layer = {'session': db.session,\n 'model': Speaker,\n 'methods': {\n 'before_update_object': before_update_object\n }}\n\n\nclass SpeakerRelationshipRequired(ResourceRelationship):\n \"\"\"\n Speaker Relationship class for required entities\n \"\"\"\n decorators = (api.has_permission('is_coorganizer_or_user_itself', methods=\"PATCH,DELETE\", fetch=\"event_id\",\n fetch_as=\"event_id\", model=Speaker),)\n methods = ['GET', 'PATCH']\n schema = SpeakerSchema\n data_layer = {'session': db.session,\n 'model': Speaker}\n\n\nclass SpeakerRelationshipOptional(ResourceRelationship):\n \"\"\"\n Speaker Relationship class\n \"\"\"\n decorators = (api.has_permission('is_coorganizer_or_user_itself', methods=\"PATCH,DELETE\", fetch=\"event_id\",\n fetch_as=\"event_id\", model=Speaker),)\n schema = SpeakerSchema\n data_layer = {'session': db.session,\n 'model': Speaker}\n\n\ndef start_image_resizing_tasks(speaker, photo_url):\n speaker_id = str(speaker.id)\n from .helpers.tasks import resize_speaker_images_task\n resize_speaker_images_task.delay(speaker_id, photo_url)\n", "path": "app/api/speakers.py"}]}
| 2,524 | 480 |
gh_patches_debug_57504 | rasdani/github-patches | git_diff | dotkom__onlineweb4-745 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Filtering my events doesn't work
_Actually I'm not even sure if it's just my local setup that's fucking around or not, but this doesn't seem to work at all.
I can't check with moonshine or prod because everything is down atm, so I'm just making this before I forget._
```
if filters['myevents'] == 'true':
kwargs['attendance_event__attendees'] = request.user
events = Event.objects.filter(**kwargs).order_by('event_start').prefetch_related(
'attendance_event', 'attendance_event__attendees')
```
in events/views.py _search_indexed
Comparing attendance_event__attendees (Attendee) with request.user (OnlineUser) doesn't make sense.
It should be attendance_event__attendees__user which from limited testing seems to work.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/events/views.py`
Content:
```
1 #-*- coding: utf-8 -*-
2
3 import datetime
4
5 from django.utils import timezone
6
7 from django.conf import settings
8 from django.contrib import messages
9 from django.contrib.auth.decorators import login_required, user_passes_test
10 from django.core.urlresolvers import reverse
11 from django.http import HttpResponseRedirect
12 from django.shortcuts import render, get_object_or_404, redirect
13 from django.utils.translation import ugettext as _
14
15 import watson
16
17 from apps.events.forms import CaptchaForm
18 from apps.events.models import Event, AttendanceEvent, Attendee
19 from apps.events.pdf_generator import EventPDF
20
21
22 def index(request):
23 return render(request, 'events/index.html', {})
24
25 def details(request, event_id, event_slug):
26 event = get_object_or_404(Event, pk=event_id)
27
28 is_attendance_event = False
29 user_anonymous = True
30 user_attending = False
31 place_on_wait_list = 0
32 will_be_on_wait_list = False
33 rules = []
34 user_status = False
35
36 try:
37 attendance_event = AttendanceEvent.objects.get(pk=event_id)
38 is_attendance_event = True
39 form = CaptchaForm(user=request.user)
40
41 if attendance_event.rule_bundles:
42 for rule_bundle in attendance_event.rule_bundles.all():
43 rules.append(rule_bundle.get_rule_strings)
44
45 if request.user.is_authenticated():
46 user_anonymous = False
47 if attendance_event.is_attendee(request.user):
48 user_attending = True
49
50
51 will_be_on_wait_list = attendance_event.will_i_be_on_wait_list
52
53 user_status = event.is_eligible_for_signup(request.user)
54
55 # Check if this user is on the waitlist
56 place_on_wait_list = event.what_place_is_user_on_wait_list(request.user)
57
58 except AttendanceEvent.DoesNotExist:
59 pass
60
61 if is_attendance_event:
62 context = {
63 'now': timezone.now(),
64 'event': event,
65 'attendance_event': attendance_event,
66 'user_anonymous': user_anonymous,
67 'user_attending': user_attending,
68 'will_be_on_wait_list': will_be_on_wait_list,
69 'rules': rules,
70 'user_status': user_status,
71 'place_on_wait_list': int(place_on_wait_list),
72 #'position_in_wait_list': position_in_wait_list,
73 'captcha_form': form,
74 }
75
76 return render(request, 'events/details.html', context)
77 else:
78 return render(request, 'events/details.html', {'event': event})
79
80
81 def get_attendee(attendee_id):
82 return get_object_or_404(Attendee, pk=attendee_id)
83
84 @login_required
85 def attendEvent(request, event_id):
86
87 event = get_object_or_404(Event, pk=event_id)
88
89 if not request.POST:
90 messages.error(request, _(u'Vennligst fyll ut skjemaet.'))
91 return redirect(event)
92
93 form = CaptchaForm(request.POST, user=request.user)
94
95 if not form.is_valid():
96 for field,errors in form.errors.items():
97 for error in errors:
98 messages.error(request, error)
99
100 return redirect(event)
101
102 # Check if the user is eligible to attend this event.
103 # If not, an error message will be present in the returned dict
104 attendance_event = event.attendance_event
105
106 response = event.is_eligible_for_signup(request.user);
107
108 if response['status']:
109 Attendee(event=attendance_event, user=request.user).save()
110 messages.success(request, _(u"Du er nå påmeldt på arrangementet!"))
111 return redirect(event)
112 else:
113 messages.error(request, response['message'])
114 return redirect(event)
115
116 @login_required
117 def unattendEvent(request, event_id):
118
119 event = get_object_or_404(Event, pk=event_id)
120 attendance_event = event.attendance_event
121
122 # Check if the deadline for unattending has passed
123 if attendance_event.unattend_deadline < timezone.now():
124 messages.error(request, _(u"Avmeldingsfristen for dette arrangementet har utløpt."))
125 return redirect(event)
126
127 event.notify_waiting_list(host=request.META['HTTP_HOST'], unattended_user=request.user)
128 Attendee.objects.get(event=attendance_event, user=request.user).delete()
129
130 messages.success(request, _(u"Du ble meldt av arrangementet."))
131 return redirect(event)
132
133 def search_events(request):
134 query = request.GET.get('query')
135 filters = {
136 'future' : request.GET.get('future'),
137 'myevents' : request.GET.get('myevents')
138 }
139 events = _search_indexed(request, query, filters)
140
141 return render(request, 'events/search.html', {'events': events})
142
143
144 def _search_indexed(request, query, filters):
145 results = []
146 kwargs = {}
147
148 if filters['future'] == 'true':
149 kwargs['event_start__gte'] = timezone.now()
150
151 if filters['myevents'] == 'true':
152 kwargs['attendance_event__attendees'] = request.user
153
154 events = Event.objects.filter(**kwargs).order_by('event_start').prefetch_related(
155 'attendance_event', 'attendance_event__attendees')
156
157 if query:
158 for result in watson.search(query, models=(events,)):
159 results.append(result.object)
160 return results[:10]
161
162 return events
163
164
165 @login_required()
166 @user_passes_test(lambda u: u.groups.filter(name='Komiteer').count() == 1)
167 def generate_pdf(request, event_id):
168
169 event = get_object_or_404(Event, pk=event_id)
170
171 groups = request.user.groups.all()
172 if not (groups.filter(name='dotKom').count() == 1 or groups.filter(name='Hovedstyret').count() == 1):
173 if event.event_type == 1 and not groups.filter(name='arrKom').count() == 1:
174 messages.error(request, _(u'Du har ikke tilgang til listen for dette arrangementet.'))
175 return redirect(event)
176
177 if event.event_type == 2 and not groups.filter(name='bedKom').count() == 1:
178 messages.error(request, _(u'Du har ikke tilgang til listen for dette arrangementet.'))
179 return redirect(event)
180
181 if event.event_type == 3 and not groups.filter(name='fagKom').count() == 1:
182 messages.error(request, _(u'Du har ikke tilgang til listen for dette arrangementet.'))
183 return redirect(event)
184
185 return EventPDF(event).render_pdf()
186
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/apps/events/views.py b/apps/events/views.py
--- a/apps/events/views.py
+++ b/apps/events/views.py
@@ -149,7 +149,7 @@
kwargs['event_start__gte'] = timezone.now()
if filters['myevents'] == 'true':
- kwargs['attendance_event__attendees'] = request.user
+ kwargs['attendance_event__attendees__user'] = request.user
events = Event.objects.filter(**kwargs).order_by('event_start').prefetch_related(
'attendance_event', 'attendance_event__attendees')
|
{"golden_diff": "diff --git a/apps/events/views.py b/apps/events/views.py\n--- a/apps/events/views.py\n+++ b/apps/events/views.py\n@@ -149,7 +149,7 @@\n kwargs['event_start__gte'] = timezone.now()\n \n if filters['myevents'] == 'true':\n- kwargs['attendance_event__attendees'] = request.user\n+ kwargs['attendance_event__attendees__user'] = request.user\n \n events = Event.objects.filter(**kwargs).order_by('event_start').prefetch_related(\n 'attendance_event', 'attendance_event__attendees')\n", "issue": "Filtering my events doesn't work\n_Actually I'm not even sure if it's just my local setup that's fucking around or not, but this doesn't seem to work at all.\nI can't check with moonshine or prod because everything is down atm, so I'm just making this before I forget._\n\n```\nif filters['myevents'] == 'true':\n kwargs['attendance_event__attendees'] = request.user\n\n events = Event.objects.filter(**kwargs).order_by('event_start').prefetch_related(\n 'attendance_event', 'attendance_event__attendees')\n```\n\nin events/views.py _search_indexed\n\nComparing attendance_event__attendees (Attendee) with request.user (OnlineUser) doesn't make sense. \n\nIt should be attendance_event__attendees__user which from limited testing seems to work. \n\n", "before_files": [{"content": "#-*- coding: utf-8 -*-\n\nimport datetime\n\nfrom django.utils import timezone\n\nfrom django.conf import settings\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required, user_passes_test\nfrom django.core.urlresolvers import reverse\nfrom django.http import HttpResponseRedirect\nfrom django.shortcuts import render, get_object_or_404, redirect\nfrom django.utils.translation import ugettext as _\n\nimport watson\n\nfrom apps.events.forms import CaptchaForm\nfrom apps.events.models import Event, AttendanceEvent, Attendee\nfrom apps.events.pdf_generator import EventPDF\n\n\ndef index(request):\n return render(request, 'events/index.html', {})\n\ndef details(request, event_id, event_slug):\n event = get_object_or_404(Event, pk=event_id)\n\n is_attendance_event = False\n user_anonymous = True\n user_attending = False\n place_on_wait_list = 0\n will_be_on_wait_list = False\n rules = []\n user_status = False\n\n try:\n attendance_event = AttendanceEvent.objects.get(pk=event_id)\n is_attendance_event = True\n form = CaptchaForm(user=request.user)\n\n if attendance_event.rule_bundles:\n for rule_bundle in attendance_event.rule_bundles.all():\n rules.append(rule_bundle.get_rule_strings)\n\n if request.user.is_authenticated():\n user_anonymous = False\n if attendance_event.is_attendee(request.user):\n user_attending = True\n\n \n will_be_on_wait_list = attendance_event.will_i_be_on_wait_list\n\n user_status = event.is_eligible_for_signup(request.user)\n\n # Check if this user is on the waitlist\n place_on_wait_list = event.what_place_is_user_on_wait_list(request.user)\n\n except AttendanceEvent.DoesNotExist:\n pass\n\n if is_attendance_event:\n context = {\n 'now': timezone.now(),\n 'event': event,\n 'attendance_event': attendance_event,\n 'user_anonymous': user_anonymous,\n 'user_attending': user_attending,\n 'will_be_on_wait_list': will_be_on_wait_list,\n 'rules': rules,\n 'user_status': user_status,\n 'place_on_wait_list': int(place_on_wait_list),\n #'position_in_wait_list': position_in_wait_list,\n 'captcha_form': form,\n }\n \n return render(request, 'events/details.html', context)\n else:\n return render(request, 'events/details.html', {'event': event})\n\n\ndef 
get_attendee(attendee_id):\n return get_object_or_404(Attendee, pk=attendee_id)\n\n@login_required\ndef attendEvent(request, event_id):\n \n event = get_object_or_404(Event, pk=event_id)\n\n if not request.POST:\n messages.error(request, _(u'Vennligst fyll ut skjemaet.'))\n return redirect(event)\n\n form = CaptchaForm(request.POST, user=request.user)\n\n if not form.is_valid():\n for field,errors in form.errors.items():\n for error in errors:\n messages.error(request, error)\n\n return redirect(event)\n\n # Check if the user is eligible to attend this event.\n # If not, an error message will be present in the returned dict\n attendance_event = event.attendance_event\n\n response = event.is_eligible_for_signup(request.user);\n\n if response['status']: \n Attendee(event=attendance_event, user=request.user).save()\n messages.success(request, _(u\"Du er n\u00e5 p\u00e5meldt p\u00e5 arrangementet!\"))\n return redirect(event)\n else:\n messages.error(request, response['message'])\n return redirect(event)\n\n@login_required\ndef unattendEvent(request, event_id):\n\n event = get_object_or_404(Event, pk=event_id)\n attendance_event = event.attendance_event\n\n # Check if the deadline for unattending has passed\n if attendance_event.unattend_deadline < timezone.now():\n messages.error(request, _(u\"Avmeldingsfristen for dette arrangementet har utl\u00f8pt.\"))\n return redirect(event)\n\n event.notify_waiting_list(host=request.META['HTTP_HOST'], unattended_user=request.user)\n Attendee.objects.get(event=attendance_event, user=request.user).delete()\n\n messages.success(request, _(u\"Du ble meldt av arrangementet.\"))\n return redirect(event)\n\ndef search_events(request):\n query = request.GET.get('query')\n filters = {\n 'future' : request.GET.get('future'),\n 'myevents' : request.GET.get('myevents')\n }\n events = _search_indexed(request, query, filters)\n\n return render(request, 'events/search.html', {'events': events})\n\n\ndef _search_indexed(request, query, filters):\n results = []\n kwargs = {}\n\n if filters['future'] == 'true':\n kwargs['event_start__gte'] = timezone.now()\n\n if filters['myevents'] == 'true':\n kwargs['attendance_event__attendees'] = request.user\n\n events = Event.objects.filter(**kwargs).order_by('event_start').prefetch_related(\n 'attendance_event', 'attendance_event__attendees')\n\n if query:\n for result in watson.search(query, models=(events,)):\n results.append(result.object)\n return results[:10]\n\n return events\n\n\n@login_required()\n@user_passes_test(lambda u: u.groups.filter(name='Komiteer').count() == 1)\ndef generate_pdf(request, event_id):\n\n event = get_object_or_404(Event, pk=event_id)\n\n groups = request.user.groups.all()\n if not (groups.filter(name='dotKom').count() == 1 or groups.filter(name='Hovedstyret').count() == 1):\n if event.event_type == 1 and not groups.filter(name='arrKom').count() == 1:\n messages.error(request, _(u'Du har ikke tilgang til listen for dette arrangementet.'))\n return redirect(event)\n\n if event.event_type == 2 and not groups.filter(name='bedKom').count() == 1:\n messages.error(request, _(u'Du har ikke tilgang til listen for dette arrangementet.'))\n return redirect(event)\n\n if event.event_type == 3 and not groups.filter(name='fagKom').count() == 1:\n messages.error(request, _(u'Du har ikke tilgang til listen for dette arrangementet.')) \n return redirect(event)\n\n return EventPDF(event).render_pdf()\n", "path": "apps/events/views.py"}], "after_files": [{"content": "#-*- coding: utf-8 -*-\n\nimport datetime\n\nfrom 
django.utils import timezone\n\nfrom django.conf import settings\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required, user_passes_test\nfrom django.core.urlresolvers import reverse\nfrom django.http import HttpResponseRedirect\nfrom django.shortcuts import render, get_object_or_404, redirect\nfrom django.utils.translation import ugettext as _\n\nimport watson\n\nfrom apps.events.forms import CaptchaForm\nfrom apps.events.models import Event, AttendanceEvent, Attendee\nfrom apps.events.pdf_generator import EventPDF\n\n\ndef index(request):\n return render(request, 'events/index.html', {})\n\ndef details(request, event_id, event_slug):\n event = get_object_or_404(Event, pk=event_id)\n\n is_attendance_event = False\n user_anonymous = True\n user_attending = False\n place_on_wait_list = 0\n will_be_on_wait_list = False\n rules = []\n user_status = False\n\n try:\n attendance_event = AttendanceEvent.objects.get(pk=event_id)\n is_attendance_event = True\n form = CaptchaForm(user=request.user)\n\n if attendance_event.rule_bundles:\n for rule_bundle in attendance_event.rule_bundles.all():\n rules.append(rule_bundle.get_rule_strings)\n\n if request.user.is_authenticated():\n user_anonymous = False\n if attendance_event.is_attendee(request.user):\n user_attending = True\n\n \n will_be_on_wait_list = attendance_event.will_i_be_on_wait_list\n\n user_status = event.is_eligible_for_signup(request.user)\n\n # Check if this user is on the waitlist\n place_on_wait_list = event.what_place_is_user_on_wait_list(request.user)\n\n except AttendanceEvent.DoesNotExist:\n pass\n\n if is_attendance_event:\n context = {\n 'now': timezone.now(),\n 'event': event,\n 'attendance_event': attendance_event,\n 'user_anonymous': user_anonymous,\n 'user_attending': user_attending,\n 'will_be_on_wait_list': will_be_on_wait_list,\n 'rules': rules,\n 'user_status': user_status,\n 'place_on_wait_list': int(place_on_wait_list),\n #'position_in_wait_list': position_in_wait_list,\n 'captcha_form': form,\n }\n \n return render(request, 'events/details.html', context)\n else:\n return render(request, 'events/details.html', {'event': event})\n\n\ndef get_attendee(attendee_id):\n return get_object_or_404(Attendee, pk=attendee_id)\n\n@login_required\ndef attendEvent(request, event_id):\n \n event = get_object_or_404(Event, pk=event_id)\n\n if not request.POST:\n messages.error(request, _(u'Vennligst fyll ut skjemaet.'))\n return redirect(event)\n\n form = CaptchaForm(request.POST, user=request.user)\n\n if not form.is_valid():\n for field,errors in form.errors.items():\n for error in errors:\n messages.error(request, error)\n\n return redirect(event)\n\n # Check if the user is eligible to attend this event.\n # If not, an error message will be present in the returned dict\n attendance_event = event.attendance_event\n\n response = event.is_eligible_for_signup(request.user);\n\n if response['status']: \n Attendee(event=attendance_event, user=request.user).save()\n messages.success(request, _(u\"Du er n\u00e5 p\u00e5meldt p\u00e5 arrangementet!\"))\n return redirect(event)\n else:\n messages.error(request, response['message'])\n return redirect(event)\n\n@login_required\ndef unattendEvent(request, event_id):\n\n event = get_object_or_404(Event, pk=event_id)\n attendance_event = event.attendance_event\n\n # Check if the deadline for unattending has passed\n if attendance_event.unattend_deadline < timezone.now():\n messages.error(request, _(u\"Avmeldingsfristen for dette arrangementet har 
utl\u00f8pt.\"))\n return redirect(event)\n\n event.notify_waiting_list(host=request.META['HTTP_HOST'], unattended_user=request.user)\n Attendee.objects.get(event=attendance_event, user=request.user).delete()\n\n messages.success(request, _(u\"Du ble meldt av arrangementet.\"))\n return redirect(event)\n\ndef search_events(request):\n query = request.GET.get('query')\n filters = {\n 'future' : request.GET.get('future'),\n 'myevents' : request.GET.get('myevents')\n }\n events = _search_indexed(request, query, filters)\n\n return render(request, 'events/search.html', {'events': events})\n\n\ndef _search_indexed(request, query, filters):\n results = []\n kwargs = {}\n\n if filters['future'] == 'true':\n kwargs['event_start__gte'] = timezone.now()\n\n if filters['myevents'] == 'true':\n kwargs['attendance_event__attendees__user'] = request.user\n\n events = Event.objects.filter(**kwargs).order_by('event_start').prefetch_related(\n 'attendance_event', 'attendance_event__attendees')\n\n if query:\n for result in watson.search(query, models=(events,)):\n results.append(result.object)\n return results[:10]\n\n return events\n\n\n@login_required()\n@user_passes_test(lambda u: u.groups.filter(name='Komiteer').count() == 1)\ndef generate_pdf(request, event_id):\n\n event = get_object_or_404(Event, pk=event_id)\n\n groups = request.user.groups.all()\n if not (groups.filter(name='dotKom').count() == 1 or groups.filter(name='Hovedstyret').count() == 1):\n if event.event_type == 1 and not groups.filter(name='arrKom').count() == 1:\n messages.error(request, _(u'Du har ikke tilgang til listen for dette arrangementet.'))\n return redirect(event)\n\n if event.event_type == 2 and not groups.filter(name='bedKom').count() == 1:\n messages.error(request, _(u'Du har ikke tilgang til listen for dette arrangementet.'))\n return redirect(event)\n\n if event.event_type == 3 and not groups.filter(name='fagKom').count() == 1:\n messages.error(request, _(u'Du har ikke tilgang til listen for dette arrangementet.')) \n return redirect(event)\n\n return EventPDF(event).render_pdf()\n", "path": "apps/events/views.py"}]}
| 2,300 | 127 |
gh_patches_debug_30523 | rasdani/github-patches | git_diff | meltano__meltano-6695 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use `pytest-randomly` plugin
https://github.com/pytest-dev/pytest-randomly
This plugin randomizes the order the tests are run in which prevents us from relying on their order to succeed. It also seeds various sources of randomness to improve reproducibility. For instance, if we encounter a test failure in CI due to randomness, we can run the tests locally with the `pytest --randomly-seed=<seed used in the CI test run>` to reproduce.
I've used this plugin before, and it has been helpful numerous times. I've already run into problems with Meltano's tests that are caused by their execution order. Using this plugin should force us to fix those issues, and prevent them from returning.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `noxfile.py`
Content:
```
1 """Nox configuration."""
2
3 from __future__ import annotations
4
5 import os
6 import sys
7 from pathlib import Path
8 from textwrap import dedent
9
10 try:
11 from nox_poetry import Session
12 from nox_poetry import session as nox_session
13 except ImportError:
14 message = f"""\
15 Nox failed to import the 'nox-poetry' package.
16 Please install it using the following command:
17 {sys.executable} -m pip install nox-poetry"""
18 raise SystemExit(dedent(message)) from None
19
20
21 package = "meltano"
22 python_versions = ["3.10", "3.9", "3.8", "3.7"]
23 main_python_version = "3.9"
24 locations = "src", "tests", "noxfile.py"
25
26
27 @nox_session(python=python_versions)
28 def tests(session: Session) -> None:
29 """Execute pytest tests and compute coverage.
30
31 Args:
32 session: Nox session.
33 """
34 backend_db = os.environ.get("PYTEST_BACKEND", "sqlite")
35
36 if backend_db == "mssql":
37 session.install(".[mssql]")
38 else:
39 session.install(".")
40
41 session.install(
42 "coverage[toml]",
43 "freezegun",
44 "mock",
45 "pytest",
46 "pytest-asyncio",
47 "pytest-docker",
48 "requests-mock",
49 )
50
51 try:
52 session.run(
53 "coverage",
54 "run",
55 "--parallel",
56 "-m",
57 "pytest",
58 *session.posargs,
59 env={"NOX_CURRENT_SESSION": "tests"},
60 )
61 finally:
62 if session.interactive:
63 session.notify("coverage", posargs=[])
64
65
66 @nox_session(python=main_python_version)
67 def coverage(session: Session) -> None:
68 """Upload coverage data.
69
70 Args:
71 session: Nox session.
72 """
73 args = session.posargs or ["report"]
74
75 session.install("coverage[toml]")
76
77 if not session.posargs and any(Path().glob(".coverage.*")):
78 session.run("coverage", "combine")
79
80 session.run("coverage", *args)
81
```
Path: `src/meltano/core/db.py`
Content:
```
1 """Defines helpers related to the system database."""
2
3 from __future__ import annotations
4
5 import logging
6 import time
7
8 from sqlalchemy import create_engine
9 from sqlalchemy.engine import Connection, Engine
10 from sqlalchemy.exc import OperationalError
11 from sqlalchemy.orm import sessionmaker
12 from sqlalchemy.sql import text
13
14 from meltano.core.project import Project
15
16 from .project_settings_service import ProjectSettingsService
17
18 # Keep a Project → Engine mapping to serve
19 # the same engine for the same Project
20 _engines = {}
21
22
23 def project_engine(
24 project: Project,
25 default: bool = False,
26 ) -> tuple[Engine, sessionmaker]:
27 """Create and register a SQLAlchemy engine for a Meltano project instance.
28
29 Args:
30 project: The Meltano project that the engine will be connected to.
31 default: Whether the engine created should be stored as the default
32 engine for this project.
33
34 Returns:
35 The engine, and a session maker bound to the engine.
36 """
37 existing_engine = _engines.get(project)
38 if existing_engine:
39 return existing_engine
40
41 settings = ProjectSettingsService(project)
42
43 engine_uri = settings.get("database_uri")
44 logging.debug(f"Creating engine {project}@{engine_uri}")
45 engine = create_engine(engine_uri, pool_pre_ping=True)
46
47 # Connect to the database to ensure it is available.
48 connect(
49 engine,
50 max_retries=settings.get("database_max_retries"),
51 retry_timeout=settings.get("database_retry_timeout"),
52 )
53
54 init_hook(engine)
55
56 engine_session = (engine, sessionmaker(bind=engine))
57
58 if default:
59 # register the default engine
60 _engines[project] = engine_session
61
62 return engine_session
63
64
65 def connect(
66 engine: Engine,
67 max_retries: int,
68 retry_timeout: float,
69 ) -> Connection:
70 """Connect to the database.
71
72 Args:
73 engine: The DB engine with which the check will be performed.
74 max_retries: The maximum number of retries that will be attempted.
75 retry_timeout: The number of seconds to wait between retries.
76
77 Raises:
78 OperationalError: Error during DB connection - max retries exceeded.
79
80 Returns:
81 A connection to the database.
82 """
83 attempt = 0
84 while True:
85 try:
86 return engine.connect()
87 except OperationalError:
88 if attempt >= max_retries:
89 logging.error(
90 f"Could not connect to the database after {attempt} "
91 "attempts. Max retries exceeded."
92 )
93 raise
94 attempt += 1
95 logging.info(
96 f"DB connection failed. Will retry after {retry_timeout}s. "
97 f"Attempt {attempt}/{max_retries}"
98 )
99 time.sleep(retry_timeout)
100
101
102 init_hooks = {
103 "sqlite": lambda x: x.execute("PRAGMA journal_mode=WAL"),
104 }
105
106
107 def init_hook(engine: Engine) -> None:
108 """Run the initialization hook for the provided DB engine.
109
110 The initialization hooks are taken from the `meltano.core.db.init_hooks`
111 dictionary, which maps the dialect name of the engine to a unary function
112 which will be called with the provided DB engine.
113
114 Args:
115 engine: The engine for which the init hook will be run.
116
117 Raises:
118 Exception: The init hook raised an exception.
119 """
120 try:
121 hook = init_hooks[engine.dialect.name]
122 except KeyError:
123 return
124
125 try:
126 hook(engine)
127 except Exception as ex:
128 raise Exception(f"Failed to initialize database: {ex!s}") from ex
129
130
131 def ensure_schema_exists(
132 engine: Engine,
133 schema_name: str,
134 grant_roles: tuple[str] = (),
135 ) -> None:
136 """Ensure the specified `schema_name` exists in the database.
137
138 Args:
139 engine: The DB engine to be used.
140 schema_name: The name of the schema.
141 grant_roles: Roles to grant to the specified schema.
142 """
143 schema_identifier = schema_name
144 group_identifiers = ",".join(grant_roles)
145
146 create_schema = text(f"CREATE SCHEMA IF NOT EXISTS {schema_identifier}")
147 grant_select_schema = text(
148 f"ALTER DEFAULT PRIVILEGES IN SCHEMA {schema_identifier} GRANT SELECT ON TABLES TO {group_identifiers}"
149 )
150 grant_usage_schema = text(
151 f"GRANT USAGE ON SCHEMA {schema_identifier} TO {group_identifiers}"
152 )
153
154 with engine.connect() as conn, conn.begin():
155 conn.execute(create_schema)
156 if grant_roles:
157 conn.execute(grant_select_schema)
158 conn.execute(grant_usage_schema)
159
160 logging.info(f"Schema {schema_name} has been created successfully.")
161 for role in grant_roles:
162 logging.info(f"Usage has been granted for role: {role}.")
163
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/noxfile.py b/noxfile.py
--- a/noxfile.py
+++ b/noxfile.py
@@ -5,6 +5,7 @@
import os
import sys
from pathlib import Path
+from random import randint
from textwrap import dedent
try:
@@ -45,6 +46,8 @@
"pytest",
"pytest-asyncio",
"pytest-docker",
+ "pytest-order",
+ "pytest-randomly",
"requests-mock",
)
@@ -55,6 +58,7 @@
"--parallel",
"-m",
"pytest",
+ f"--randomly-seed={randint(0, 2**32-1)}", # noqa: S311, WPS432
*session.posargs,
env={"NOX_CURRENT_SESSION": "tests"},
)
diff --git a/src/meltano/core/db.py b/src/meltano/core/db.py
--- a/src/meltano/core/db.py
+++ b/src/meltano/core/db.py
@@ -9,6 +9,7 @@
from sqlalchemy.engine import Connection, Engine
from sqlalchemy.exc import OperationalError
from sqlalchemy.orm import sessionmaker
+from sqlalchemy.pool import NullPool
from sqlalchemy.sql import text
from meltano.core.project import Project
@@ -41,8 +42,9 @@
settings = ProjectSettingsService(project)
engine_uri = settings.get("database_uri")
- logging.debug(f"Creating engine {project}@{engine_uri}")
- engine = create_engine(engine_uri, pool_pre_ping=True)
+ logging.debug(f"Creating engine '{project}@{engine_uri}'")
+
+ engine = create_engine(engine_uri, poolclass=NullPool)
# Connect to the database to ensure it is available.
connect(
|
{"golden_diff": "diff --git a/noxfile.py b/noxfile.py\n--- a/noxfile.py\n+++ b/noxfile.py\n@@ -5,6 +5,7 @@\n import os\n import sys\n from pathlib import Path\n+from random import randint\n from textwrap import dedent\n \n try:\n@@ -45,6 +46,8 @@\n \"pytest\",\n \"pytest-asyncio\",\n \"pytest-docker\",\n+ \"pytest-order\",\n+ \"pytest-randomly\",\n \"requests-mock\",\n )\n \n@@ -55,6 +58,7 @@\n \"--parallel\",\n \"-m\",\n \"pytest\",\n+ f\"--randomly-seed={randint(0, 2**32-1)}\", # noqa: S311, WPS432\n *session.posargs,\n env={\"NOX_CURRENT_SESSION\": \"tests\"},\n )\ndiff --git a/src/meltano/core/db.py b/src/meltano/core/db.py\n--- a/src/meltano/core/db.py\n+++ b/src/meltano/core/db.py\n@@ -9,6 +9,7 @@\n from sqlalchemy.engine import Connection, Engine\n from sqlalchemy.exc import OperationalError\n from sqlalchemy.orm import sessionmaker\n+from sqlalchemy.pool import NullPool\n from sqlalchemy.sql import text\n \n from meltano.core.project import Project\n@@ -41,8 +42,9 @@\n settings = ProjectSettingsService(project)\n \n engine_uri = settings.get(\"database_uri\")\n- logging.debug(f\"Creating engine {project}@{engine_uri}\")\n- engine = create_engine(engine_uri, pool_pre_ping=True)\n+ logging.debug(f\"Creating engine '{project}@{engine_uri}'\")\n+\n+ engine = create_engine(engine_uri, poolclass=NullPool)\n \n # Connect to the database to ensure it is available.\n connect(\n", "issue": "Use `pytest-randomly` plugin\nhttps://github.com/pytest-dev/pytest-randomly\r\n\r\nThis plugin randomizes the order the tests are run in which prevents us from relying on their order to succeed. It also seeds various sources of randomness to improve reproducibility. For instance, if we encounter a test failure in CI due to randomness, we can run the tests locally with the `pytest --randomly-seed=<seed used in the CI test run>` to reproduce.\r\n\r\nI've used this plugin before, and it has been helpful numerous times. I've already run into problems with Meltano's tests that are caused by their execution order. 
Using this plugin should force us to fix those issues, and prevent them from returning.\n", "before_files": [{"content": "\"\"\"Nox configuration.\"\"\"\n\nfrom __future__ import annotations\n\nimport os\nimport sys\nfrom pathlib import Path\nfrom textwrap import dedent\n\ntry:\n from nox_poetry import Session\n from nox_poetry import session as nox_session\nexcept ImportError:\n message = f\"\"\"\\\n Nox failed to import the 'nox-poetry' package.\n Please install it using the following command:\n {sys.executable} -m pip install nox-poetry\"\"\"\n raise SystemExit(dedent(message)) from None\n\n\npackage = \"meltano\"\npython_versions = [\"3.10\", \"3.9\", \"3.8\", \"3.7\"]\nmain_python_version = \"3.9\"\nlocations = \"src\", \"tests\", \"noxfile.py\"\n\n\n@nox_session(python=python_versions)\ndef tests(session: Session) -> None:\n \"\"\"Execute pytest tests and compute coverage.\n\n Args:\n session: Nox session.\n \"\"\"\n backend_db = os.environ.get(\"PYTEST_BACKEND\", \"sqlite\")\n\n if backend_db == \"mssql\":\n session.install(\".[mssql]\")\n else:\n session.install(\".\")\n\n session.install(\n \"coverage[toml]\",\n \"freezegun\",\n \"mock\",\n \"pytest\",\n \"pytest-asyncio\",\n \"pytest-docker\",\n \"requests-mock\",\n )\n\n try:\n session.run(\n \"coverage\",\n \"run\",\n \"--parallel\",\n \"-m\",\n \"pytest\",\n *session.posargs,\n env={\"NOX_CURRENT_SESSION\": \"tests\"},\n )\n finally:\n if session.interactive:\n session.notify(\"coverage\", posargs=[])\n\n\n@nox_session(python=main_python_version)\ndef coverage(session: Session) -> None:\n \"\"\"Upload coverage data.\n\n Args:\n session: Nox session.\n \"\"\"\n args = session.posargs or [\"report\"]\n\n session.install(\"coverage[toml]\")\n\n if not session.posargs and any(Path().glob(\".coverage.*\")):\n session.run(\"coverage\", \"combine\")\n\n session.run(\"coverage\", *args)\n", "path": "noxfile.py"}, {"content": "\"\"\"Defines helpers related to the system database.\"\"\"\n\nfrom __future__ import annotations\n\nimport logging\nimport time\n\nfrom sqlalchemy import create_engine\nfrom sqlalchemy.engine import Connection, Engine\nfrom sqlalchemy.exc import OperationalError\nfrom sqlalchemy.orm import sessionmaker\nfrom sqlalchemy.sql import text\n\nfrom meltano.core.project import Project\n\nfrom .project_settings_service import ProjectSettingsService\n\n# Keep a Project \u2192 Engine mapping to serve\n# the same engine for the same Project\n_engines = {}\n\n\ndef project_engine(\n project: Project,\n default: bool = False,\n) -> tuple[Engine, sessionmaker]:\n \"\"\"Create and register a SQLAlchemy engine for a Meltano project instance.\n\n Args:\n project: The Meltano project that the engine will be connected to.\n default: Whether the engine created should be stored as the default\n engine for this project.\n\n Returns:\n The engine, and a session maker bound to the engine.\n \"\"\"\n existing_engine = _engines.get(project)\n if existing_engine:\n return existing_engine\n\n settings = ProjectSettingsService(project)\n\n engine_uri = settings.get(\"database_uri\")\n logging.debug(f\"Creating engine {project}@{engine_uri}\")\n engine = create_engine(engine_uri, pool_pre_ping=True)\n\n # Connect to the database to ensure it is available.\n connect(\n engine,\n max_retries=settings.get(\"database_max_retries\"),\n retry_timeout=settings.get(\"database_retry_timeout\"),\n )\n\n init_hook(engine)\n\n engine_session = (engine, sessionmaker(bind=engine))\n\n if default:\n # register the default engine\n _engines[project] = 
engine_session\n\n return engine_session\n\n\ndef connect(\n engine: Engine,\n max_retries: int,\n retry_timeout: float,\n) -> Connection:\n \"\"\"Connect to the database.\n\n Args:\n engine: The DB engine with which the check will be performed.\n max_retries: The maximum number of retries that will be attempted.\n retry_timeout: The number of seconds to wait between retries.\n\n Raises:\n OperationalError: Error during DB connection - max retries exceeded.\n\n Returns:\n A connection to the database.\n \"\"\"\n attempt = 0\n while True:\n try:\n return engine.connect()\n except OperationalError:\n if attempt >= max_retries:\n logging.error(\n f\"Could not connect to the database after {attempt} \"\n \"attempts. Max retries exceeded.\"\n )\n raise\n attempt += 1\n logging.info(\n f\"DB connection failed. Will retry after {retry_timeout}s. \"\n f\"Attempt {attempt}/{max_retries}\"\n )\n time.sleep(retry_timeout)\n\n\ninit_hooks = {\n \"sqlite\": lambda x: x.execute(\"PRAGMA journal_mode=WAL\"),\n}\n\n\ndef init_hook(engine: Engine) -> None:\n \"\"\"Run the initialization hook for the provided DB engine.\n\n The initialization hooks are taken from the `meltano.core.db.init_hooks`\n dictionary, which maps the dialect name of the engine to a unary function\n which will be called with the provided DB engine.\n\n Args:\n engine: The engine for which the init hook will be run.\n\n Raises:\n Exception: The init hook raised an exception.\n \"\"\"\n try:\n hook = init_hooks[engine.dialect.name]\n except KeyError:\n return\n\n try:\n hook(engine)\n except Exception as ex:\n raise Exception(f\"Failed to initialize database: {ex!s}\") from ex\n\n\ndef ensure_schema_exists(\n engine: Engine,\n schema_name: str,\n grant_roles: tuple[str] = (),\n) -> None:\n \"\"\"Ensure the specified `schema_name` exists in the database.\n\n Args:\n engine: The DB engine to be used.\n schema_name: The name of the schema.\n grant_roles: Roles to grant to the specified schema.\n \"\"\"\n schema_identifier = schema_name\n group_identifiers = \",\".join(grant_roles)\n\n create_schema = text(f\"CREATE SCHEMA IF NOT EXISTS {schema_identifier}\")\n grant_select_schema = text(\n f\"ALTER DEFAULT PRIVILEGES IN SCHEMA {schema_identifier} GRANT SELECT ON TABLES TO {group_identifiers}\"\n )\n grant_usage_schema = text(\n f\"GRANT USAGE ON SCHEMA {schema_identifier} TO {group_identifiers}\"\n )\n\n with engine.connect() as conn, conn.begin():\n conn.execute(create_schema)\n if grant_roles:\n conn.execute(grant_select_schema)\n conn.execute(grant_usage_schema)\n\n logging.info(f\"Schema {schema_name} has been created successfully.\")\n for role in grant_roles:\n logging.info(f\"Usage has been granted for role: {role}.\")\n", "path": "src/meltano/core/db.py"}], "after_files": [{"content": "\"\"\"Nox configuration.\"\"\"\n\nfrom __future__ import annotations\n\nimport os\nimport sys\nfrom pathlib import Path\nfrom random import randint\nfrom textwrap import dedent\n\ntry:\n from nox_poetry import Session\n from nox_poetry import session as nox_session\nexcept ImportError:\n message = f\"\"\"\\\n Nox failed to import the 'nox-poetry' package.\n Please install it using the following command:\n {sys.executable} -m pip install nox-poetry\"\"\"\n raise SystemExit(dedent(message)) from None\n\n\npackage = \"meltano\"\npython_versions = [\"3.10\", \"3.9\", \"3.8\", \"3.7\"]\nmain_python_version = \"3.9\"\nlocations = \"src\", \"tests\", \"noxfile.py\"\n\n\n@nox_session(python=python_versions)\ndef tests(session: Session) -> None:\n 
\"\"\"Execute pytest tests and compute coverage.\n\n Args:\n session: Nox session.\n \"\"\"\n backend_db = os.environ.get(\"PYTEST_BACKEND\", \"sqlite\")\n\n if backend_db == \"mssql\":\n session.install(\".[mssql]\")\n else:\n session.install(\".\")\n\n session.install(\n \"coverage[toml]\",\n \"freezegun\",\n \"mock\",\n \"pytest\",\n \"pytest-asyncio\",\n \"pytest-docker\",\n \"pytest-order\",\n \"pytest-randomly\",\n \"requests-mock\",\n )\n\n try:\n session.run(\n \"coverage\",\n \"run\",\n \"--parallel\",\n \"-m\",\n \"pytest\",\n f\"--randomly-seed={randint(0, 2**32-1)}\", # noqa: S311, WPS432\n *session.posargs,\n env={\"NOX_CURRENT_SESSION\": \"tests\"},\n )\n finally:\n if session.interactive:\n session.notify(\"coverage\", posargs=[])\n\n\n@nox_session(python=main_python_version)\ndef coverage(session: Session) -> None:\n \"\"\"Upload coverage data.\n\n Args:\n session: Nox session.\n \"\"\"\n args = session.posargs or [\"report\"]\n\n session.install(\"coverage[toml]\")\n\n if not session.posargs and any(Path().glob(\".coverage.*\")):\n session.run(\"coverage\", \"combine\")\n\n session.run(\"coverage\", *args)\n", "path": "noxfile.py"}, {"content": "\"\"\"Defines helpers related to the system database.\"\"\"\n\nfrom __future__ import annotations\n\nimport logging\nimport time\n\nfrom sqlalchemy import create_engine\nfrom sqlalchemy.engine import Connection, Engine\nfrom sqlalchemy.exc import OperationalError\nfrom sqlalchemy.orm import sessionmaker\nfrom sqlalchemy.pool import NullPool\nfrom sqlalchemy.sql import text\n\nfrom meltano.core.project import Project\n\nfrom .project_settings_service import ProjectSettingsService\n\n# Keep a Project \u2192 Engine mapping to serve\n# the same engine for the same Project\n_engines = {}\n\n\ndef project_engine(\n project: Project,\n default: bool = False,\n) -> tuple[Engine, sessionmaker]:\n \"\"\"Create and register a SQLAlchemy engine for a Meltano project instance.\n\n Args:\n project: The Meltano project that the engine will be connected to.\n default: Whether the engine created should be stored as the default\n engine for this project.\n\n Returns:\n The engine, and a session maker bound to the engine.\n \"\"\"\n existing_engine = _engines.get(project)\n if existing_engine:\n return existing_engine\n\n settings = ProjectSettingsService(project)\n\n engine_uri = settings.get(\"database_uri\")\n logging.debug(f\"Creating engine '{project}@{engine_uri}'\")\n\n engine = create_engine(engine_uri, poolclass=NullPool)\n\n # Connect to the database to ensure it is available.\n connect(\n engine,\n max_retries=settings.get(\"database_max_retries\"),\n retry_timeout=settings.get(\"database_retry_timeout\"),\n )\n\n init_hook(engine)\n\n engine_session = (engine, sessionmaker(bind=engine))\n\n if default:\n # register the default engine\n _engines[project] = engine_session\n\n return engine_session\n\n\ndef connect(\n engine: Engine,\n max_retries: int,\n retry_timeout: float,\n) -> Connection:\n \"\"\"Connect to the database.\n\n Args:\n engine: The DB engine with which the check will be performed.\n max_retries: The maximum number of retries that will be attempted.\n retry_timeout: The number of seconds to wait between retries.\n\n Raises:\n OperationalError: Error during DB connection - max retries exceeded.\n\n Returns:\n A connection to the database.\n \"\"\"\n attempt = 0\n while True:\n try:\n return engine.connect()\n except OperationalError:\n if attempt >= max_retries:\n logging.error(\n f\"Could not connect to the database after 
{attempt} \"\n \"attempts. Max retries exceeded.\"\n )\n raise\n attempt += 1\n logging.info(\n f\"DB connection failed. Will retry after {retry_timeout}s. \"\n f\"Attempt {attempt}/{max_retries}\"\n )\n time.sleep(retry_timeout)\n\n\ninit_hooks = {\n \"sqlite\": lambda x: x.execute(\"PRAGMA journal_mode=WAL\"),\n}\n\n\ndef init_hook(engine: Engine) -> None:\n \"\"\"Run the initialization hook for the provided DB engine.\n\n The initialization hooks are taken from the `meltano.core.db.init_hooks`\n dictionary, which maps the dialect name of the engine to a unary function\n which will be called with the provided DB engine.\n\n Args:\n engine: The engine for which the init hook will be run.\n\n Raises:\n Exception: The init hook raised an exception.\n \"\"\"\n try:\n hook = init_hooks[engine.dialect.name]\n except KeyError:\n return\n\n try:\n hook(engine)\n except Exception as ex:\n raise Exception(f\"Failed to initialize database: {ex!s}\") from ex\n\n\ndef ensure_schema_exists(\n engine: Engine,\n schema_name: str,\n grant_roles: tuple[str] = (),\n) -> None:\n \"\"\"Ensure the specified `schema_name` exists in the database.\n\n Args:\n engine: The DB engine to be used.\n schema_name: The name of the schema.\n grant_roles: Roles to grant to the specified schema.\n \"\"\"\n schema_identifier = schema_name\n group_identifiers = \",\".join(grant_roles)\n\n create_schema = text(f\"CREATE SCHEMA IF NOT EXISTS {schema_identifier}\")\n grant_select_schema = text(\n f\"ALTER DEFAULT PRIVILEGES IN SCHEMA {schema_identifier} GRANT SELECT ON TABLES TO {group_identifiers}\"\n )\n grant_usage_schema = text(\n f\"GRANT USAGE ON SCHEMA {schema_identifier} TO {group_identifiers}\"\n )\n\n with engine.connect() as conn, conn.begin():\n conn.execute(create_schema)\n if grant_roles:\n conn.execute(grant_select_schema)\n conn.execute(grant_usage_schema)\n\n logging.info(f\"Schema {schema_name} has been created successfully.\")\n for role in grant_roles:\n logging.info(f\"Usage has been granted for role: {role}.\")\n", "path": "src/meltano/core/db.py"}]}
| 2,448 | 398 |
gh_patches_debug_25257 | rasdani/github-patches | git_diff | ESMCI__cime-4442 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
cs.status reset to force rebuild
I would like an additional option to cs.status or perhaps create_test that
would reset all cases in a test suite to the PEND SHAREDLIB_BUILD state so that
all tests are rebuilt before being restarted.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `CIME/cs_status.py`
Content:
```
1 """
2 Implementation of the cs.status script, which prints the status of all
3 of the tests in one or more test suites
4 """
5
6 from __future__ import print_function
7 from CIME.XML.standard_module_setup import *
8 from CIME.XML.expected_fails_file import ExpectedFailsFile
9 from CIME.test_status import TestStatus
10 import os
11 import sys
12 from collections import defaultdict
13
14
15 def cs_status(
16 test_paths,
17 summary=False,
18 fails_only=False,
19 count_fails_phase_list=None,
20 check_throughput=False,
21 check_memory=False,
22 expected_fails_filepath=None,
23 out=sys.stdout,
24 ):
25 """Print the test statuses of all tests in test_paths. The default
26 is to print to stdout, but this can be overridden with the 'out'
27 argument.
28
29 If summary is True, then only the overall status of each test is printed
30
31 If fails_only is True, then only test failures are printed (this
32 includes PENDs as well as FAILs).
33
34 If count_fails_phase_list is provided, it should be a list of phases
35 (from the phases given by test_status.ALL_PHASES). For each phase in
36 this list: do not give line-by-line output; instead, just report the
37 total number of tests that have not PASSed this phase (this includes
38 PENDs and FAILs). (This is typically used with the fails_only
39 option, but it can also be used without that option.)
40
41 If expected_fails_filepath is provided, it should be a string giving
42 the full path to a file listing expected failures for this test
43 suite. Expected failures are then labeled as such in the output.
44 """
45 expect(not (summary and fails_only), "Cannot have both summary and fails_only")
46 expect(
47 not (summary and count_fails_phase_list),
48 "Cannot have both summary and count_fails_phase_list",
49 )
50 if count_fails_phase_list is None:
51 count_fails_phase_list = []
52 non_pass_counts = dict.fromkeys(count_fails_phase_list, 0)
53 xfails = _get_xfails(expected_fails_filepath)
54 test_id_output = defaultdict(str)
55 test_id_counts = defaultdict(int)
56 for test_path in test_paths:
57 test_dir = os.path.dirname(test_path)
58 ts = TestStatus(test_dir=test_dir)
59 test_id = os.path.basename(test_dir).split(".")[-1]
60 if summary:
61 output = _overall_output(
62 ts, " {status} {test_name}\n", check_throughput, check_memory
63 )
64 else:
65 if fails_only:
66 output = ""
67 else:
68 output = _overall_output(
69 ts,
70 " {test_name} (Overall: {status}) details:\n",
71 check_throughput,
72 check_memory,
73 )
74 output += ts.phase_statuses_dump(
75 prefix=" ",
76 skip_passes=fails_only,
77 skip_phase_list=count_fails_phase_list,
78 xfails=xfails.get(ts.get_name()),
79 )
80 if count_fails_phase_list:
81 ts.increment_non_pass_counts(non_pass_counts)
82
83 test_id_output[test_id] += output
84 test_id_counts[test_id] += 1
85
86 for test_id in sorted(test_id_output):
87 count = test_id_counts[test_id]
88 print(
89 "{}: {} test{}".format(test_id, count, "s" if count > 1 else ""), file=out
90 )
91 print(test_id_output[test_id], file=out)
92 print(" ", file=out)
93
94 if count_fails_phase_list:
95 print(72 * "=", file=out)
96 print("Non-PASS results for select phases:", file=out)
97 for phase in count_fails_phase_list:
98 print("{} non-passes: {}".format(phase, non_pass_counts[phase]), file=out)
99
100
101 def _get_xfails(expected_fails_filepath):
102 """Returns a dictionary of ExpectedFails objects, where the keys are test names
103
104 expected_fails_filepath should be either a string giving the path to
105 the file containing expected failures, or None. If None, then this
106 returns an empty dictionary (as if expected_fails_filepath were
107 pointing to a file with no expected failures listed).
108 """
109 if expected_fails_filepath is not None:
110 expected_fails_file = ExpectedFailsFile(expected_fails_filepath)
111 xfails = expected_fails_file.get_expected_fails()
112 else:
113 xfails = {}
114 return xfails
115
116
117 def _overall_output(ts, format_str, check_throughput, check_memory):
118 """Returns a string giving the overall test status
119
120 Args:
121 ts: TestStatus object
122 format_str (string): string giving the format of the output; must
123 contain place-holders for status and test_name
124 """
125 test_name = ts.get_name()
126 status = ts.get_overall_test_status(
127 check_throughput=check_throughput,
128 check_memory=check_memory,
129 )[0]
130 return format_str.format(status=status, test_name=test_name)
131
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/CIME/cs_status.py b/CIME/cs_status.py
--- a/CIME/cs_status.py
+++ b/CIME/cs_status.py
@@ -6,7 +6,7 @@
from __future__ import print_function
from CIME.XML.standard_module_setup import *
from CIME.XML.expected_fails_file import ExpectedFailsFile
-from CIME.test_status import TestStatus
+from CIME.test_status import TestStatus, SHAREDLIB_BUILD_PHASE, TEST_PEND_STATUS
import os
import sys
from collections import defaultdict
@@ -20,6 +20,7 @@
check_throughput=False,
check_memory=False,
expected_fails_filepath=None,
+ force_rebuild=False,
out=sys.stdout,
):
"""Print the test statuses of all tests in test_paths. The default
@@ -56,6 +57,11 @@
for test_path in test_paths:
test_dir = os.path.dirname(test_path)
ts = TestStatus(test_dir=test_dir)
+
+ if force_rebuild:
+ with ts:
+ ts.set_status(SHAREDLIB_BUILD_PHASE, TEST_PEND_STATUS)
+
test_id = os.path.basename(test_dir).split(".")[-1]
if summary:
output = _overall_output(
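A minimal usage sketch (editor's illustration, not part of the record): with the patch above applied, a caller could force every case back to a pending shared-library build before re-running the suite. The `CIME.cs_status` import matches the file shown in the record; the glob pattern is a hypothetical placeholder for wherever the suite's `TestStatus` files live.

```python
# Hedged sketch: reset each case to PEND SHAREDLIB_BUILD before printing statuses.
from glob import glob

from CIME.cs_status import cs_status  # module patched in the diff above

test_paths = glob("/path/to/test-root/*/TestStatus")  # placeholder location
cs_status(test_paths, force_rebuild=True)  # new flag re-marks SHAREDLIB_BUILD as PEND
```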
|
{"golden_diff": "diff --git a/CIME/cs_status.py b/CIME/cs_status.py\n--- a/CIME/cs_status.py\n+++ b/CIME/cs_status.py\n@@ -6,7 +6,7 @@\n from __future__ import print_function\n from CIME.XML.standard_module_setup import *\n from CIME.XML.expected_fails_file import ExpectedFailsFile\n-from CIME.test_status import TestStatus\n+from CIME.test_status import TestStatus, SHAREDLIB_BUILD_PHASE, TEST_PEND_STATUS\n import os\n import sys\n from collections import defaultdict\n@@ -20,6 +20,7 @@\n check_throughput=False,\n check_memory=False,\n expected_fails_filepath=None,\n+ force_rebuild=False,\n out=sys.stdout,\n ):\n \"\"\"Print the test statuses of all tests in test_paths. The default\n@@ -56,6 +57,11 @@\n for test_path in test_paths:\n test_dir = os.path.dirname(test_path)\n ts = TestStatus(test_dir=test_dir)\n+\n+ if force_rebuild:\n+ with ts:\n+ ts.set_status(SHAREDLIB_BUILD_PHASE, TEST_PEND_STATUS)\n+\n test_id = os.path.basename(test_dir).split(\".\")[-1]\n if summary:\n output = _overall_output(\n", "issue": "cs.status reset to force rebuild\nI would like an additional option to cs.status or perhaps create_test that\r\nwould reset all cases in a test suite to the PEND SHAREDLIB_BUILD state so that \r\nall tests are rebuilt before being restarted. \n", "before_files": [{"content": "\"\"\"\nImplementation of the cs.status script, which prints the status of all\nof the tests in one or more test suites\n\"\"\"\n\nfrom __future__ import print_function\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.XML.expected_fails_file import ExpectedFailsFile\nfrom CIME.test_status import TestStatus\nimport os\nimport sys\nfrom collections import defaultdict\n\n\ndef cs_status(\n test_paths,\n summary=False,\n fails_only=False,\n count_fails_phase_list=None,\n check_throughput=False,\n check_memory=False,\n expected_fails_filepath=None,\n out=sys.stdout,\n):\n \"\"\"Print the test statuses of all tests in test_paths. The default\n is to print to stdout, but this can be overridden with the 'out'\n argument.\n\n If summary is True, then only the overall status of each test is printed\n\n If fails_only is True, then only test failures are printed (this\n includes PENDs as well as FAILs).\n\n If count_fails_phase_list is provided, it should be a list of phases\n (from the phases given by test_status.ALL_PHASES). For each phase in\n this list: do not give line-by-line output; instead, just report the\n total number of tests that have not PASSed this phase (this includes\n PENDs and FAILs). (This is typically used with the fails_only\n option, but it can also be used without that option.)\n\n If expected_fails_filepath is provided, it should be a string giving\n the full path to a file listing expected failures for this test\n suite. 
Expected failures are then labeled as such in the output.\n \"\"\"\n expect(not (summary and fails_only), \"Cannot have both summary and fails_only\")\n expect(\n not (summary and count_fails_phase_list),\n \"Cannot have both summary and count_fails_phase_list\",\n )\n if count_fails_phase_list is None:\n count_fails_phase_list = []\n non_pass_counts = dict.fromkeys(count_fails_phase_list, 0)\n xfails = _get_xfails(expected_fails_filepath)\n test_id_output = defaultdict(str)\n test_id_counts = defaultdict(int)\n for test_path in test_paths:\n test_dir = os.path.dirname(test_path)\n ts = TestStatus(test_dir=test_dir)\n test_id = os.path.basename(test_dir).split(\".\")[-1]\n if summary:\n output = _overall_output(\n ts, \" {status} {test_name}\\n\", check_throughput, check_memory\n )\n else:\n if fails_only:\n output = \"\"\n else:\n output = _overall_output(\n ts,\n \" {test_name} (Overall: {status}) details:\\n\",\n check_throughput,\n check_memory,\n )\n output += ts.phase_statuses_dump(\n prefix=\" \",\n skip_passes=fails_only,\n skip_phase_list=count_fails_phase_list,\n xfails=xfails.get(ts.get_name()),\n )\n if count_fails_phase_list:\n ts.increment_non_pass_counts(non_pass_counts)\n\n test_id_output[test_id] += output\n test_id_counts[test_id] += 1\n\n for test_id in sorted(test_id_output):\n count = test_id_counts[test_id]\n print(\n \"{}: {} test{}\".format(test_id, count, \"s\" if count > 1 else \"\"), file=out\n )\n print(test_id_output[test_id], file=out)\n print(\" \", file=out)\n\n if count_fails_phase_list:\n print(72 * \"=\", file=out)\n print(\"Non-PASS results for select phases:\", file=out)\n for phase in count_fails_phase_list:\n print(\"{} non-passes: {}\".format(phase, non_pass_counts[phase]), file=out)\n\n\ndef _get_xfails(expected_fails_filepath):\n \"\"\"Returns a dictionary of ExpectedFails objects, where the keys are test names\n\n expected_fails_filepath should be either a string giving the path to\n the file containing expected failures, or None. 
If None, then this\n returns an empty dictionary (as if expected_fails_filepath were\n pointing to a file with no expected failures listed).\n \"\"\"\n if expected_fails_filepath is not None:\n expected_fails_file = ExpectedFailsFile(expected_fails_filepath)\n xfails = expected_fails_file.get_expected_fails()\n else:\n xfails = {}\n return xfails\n\n\ndef _overall_output(ts, format_str, check_throughput, check_memory):\n \"\"\"Returns a string giving the overall test status\n\n Args:\n ts: TestStatus object\n format_str (string): string giving the format of the output; must\n contain place-holders for status and test_name\n \"\"\"\n test_name = ts.get_name()\n status = ts.get_overall_test_status(\n check_throughput=check_throughput,\n check_memory=check_memory,\n )[0]\n return format_str.format(status=status, test_name=test_name)\n", "path": "CIME/cs_status.py"}], "after_files": [{"content": "\"\"\"\nImplementation of the cs.status script, which prints the status of all\nof the tests in one or more test suites\n\"\"\"\n\nfrom __future__ import print_function\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.XML.expected_fails_file import ExpectedFailsFile\nfrom CIME.test_status import TestStatus, SHAREDLIB_BUILD_PHASE, TEST_PEND_STATUS\nimport os\nimport sys\nfrom collections import defaultdict\n\n\ndef cs_status(\n test_paths,\n summary=False,\n fails_only=False,\n count_fails_phase_list=None,\n check_throughput=False,\n check_memory=False,\n expected_fails_filepath=None,\n force_rebuild=False,\n out=sys.stdout,\n):\n \"\"\"Print the test statuses of all tests in test_paths. The default\n is to print to stdout, but this can be overridden with the 'out'\n argument.\n\n If summary is True, then only the overall status of each test is printed\n\n If fails_only is True, then only test failures are printed (this\n includes PENDs as well as FAILs).\n\n If count_fails_phase_list is provided, it should be a list of phases\n (from the phases given by test_status.ALL_PHASES). For each phase in\n this list: do not give line-by-line output; instead, just report the\n total number of tests that have not PASSed this phase (this includes\n PENDs and FAILs). (This is typically used with the fails_only\n option, but it can also be used without that option.)\n\n If expected_fails_filepath is provided, it should be a string giving\n the full path to a file listing expected failures for this test\n suite. 
Expected failures are then labeled as such in the output.\n \"\"\"\n expect(not (summary and fails_only), \"Cannot have both summary and fails_only\")\n expect(\n not (summary and count_fails_phase_list),\n \"Cannot have both summary and count_fails_phase_list\",\n )\n if count_fails_phase_list is None:\n count_fails_phase_list = []\n non_pass_counts = dict.fromkeys(count_fails_phase_list, 0)\n xfails = _get_xfails(expected_fails_filepath)\n test_id_output = defaultdict(str)\n test_id_counts = defaultdict(int)\n for test_path in test_paths:\n test_dir = os.path.dirname(test_path)\n ts = TestStatus(test_dir=test_dir)\n\n if force_rebuild:\n with ts:\n ts.set_status(SHAREDLIB_BUILD_PHASE, TEST_PEND_STATUS)\n\n test_id = os.path.basename(test_dir).split(\".\")[-1]\n if summary:\n output = _overall_output(\n ts, \" {status} {test_name}\\n\", check_throughput, check_memory\n )\n else:\n if fails_only:\n output = \"\"\n else:\n output = _overall_output(\n ts,\n \" {test_name} (Overall: {status}) details:\\n\",\n check_throughput,\n check_memory,\n )\n output += ts.phase_statuses_dump(\n prefix=\" \",\n skip_passes=fails_only,\n skip_phase_list=count_fails_phase_list,\n xfails=xfails.get(ts.get_name()),\n )\n if count_fails_phase_list:\n ts.increment_non_pass_counts(non_pass_counts)\n\n test_id_output[test_id] += output\n test_id_counts[test_id] += 1\n\n for test_id in sorted(test_id_output):\n count = test_id_counts[test_id]\n print(\n \"{}: {} test{}\".format(test_id, count, \"s\" if count > 1 else \"\"), file=out\n )\n print(test_id_output[test_id], file=out)\n print(\" \", file=out)\n\n if count_fails_phase_list:\n print(72 * \"=\", file=out)\n print(\"Non-PASS results for select phases:\", file=out)\n for phase in count_fails_phase_list:\n print(\"{} non-passes: {}\".format(phase, non_pass_counts[phase]), file=out)\n\n\ndef _get_xfails(expected_fails_filepath):\n \"\"\"Returns a dictionary of ExpectedFails objects, where the keys are test names\n\n expected_fails_filepath should be either a string giving the path to\n the file containing expected failures, or None. If None, then this\n returns an empty dictionary (as if expected_fails_filepath were\n pointing to a file with no expected failures listed).\n \"\"\"\n if expected_fails_filepath is not None:\n expected_fails_file = ExpectedFailsFile(expected_fails_filepath)\n xfails = expected_fails_file.get_expected_fails()\n else:\n xfails = {}\n return xfails\n\n\ndef _overall_output(ts, format_str, check_throughput, check_memory):\n \"\"\"Returns a string giving the overall test status\n\n Args:\n ts: TestStatus object\n format_str (string): string giving the format of the output; must\n contain place-holders for status and test_name\n \"\"\"\n test_name = ts.get_name()\n status = ts.get_overall_test_status(\n check_throughput=check_throughput,\n check_memory=check_memory,\n )[0]\n return format_str.format(status=status, test_name=test_name)\n", "path": "CIME/cs_status.py"}]}
| 1,671 | 272 |
gh_patches_debug_13744
|
rasdani/github-patches
|
git_diff
|
saleor__saleor-1471
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Refactor displaying success messages in the dashboard
The code responsible for displaying success messages in the dashboard lives in [_messages.html](https://github.com/mirumee/saleor/blob/master/templates/dashboard/includes/_messages.html) template and mixes Django's templating language with JS which isn't very elegant. Instead, there should be a function written entirely in JS that would take care of rendering those messages with data passed from backend through `data-*` attributes.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/dashboard/templatetags/utils.py`
Content:
```
1 from urllib.parse import urlencode
2
3 from django import forms
4 from django.template import Library
5 from django_filters.fields import RangeField
6 from versatileimagefield.widgets import VersatileImagePPOIClickWidget
7
8 from ...product.utils import get_margin_for_variant, get_variant_costs_data
9 from ..product.widgets import ImagePreviewWidget
10 from .chips import (
11 handle_default, handle_multiple_choice, handle_multiple_model_choice,
12 handle_nullboolean, handle_range, handle_single_choice,
13 handle_single_model_choice)
14
15 register = Library()
16
17
18 @register.simple_tag(takes_context=True)
19 def construct_get_query(context, **params):
20 request_get = context['request'].GET.dict()
21 if not (request_get or params):
22 return ''
23 all_params = {}
24 all_params.update(request_get)
25 all_params.update(params)
26 all_params.update(context.get('default_pagination_params', {}))
27 return '?' + urlencode(all_params)
28
29
30 @register.filter
31 def is_versatile_image_ppoi_click_widget(field):
32 '''
33 This filter checks if image field widget is used when user wants to edit
34 existing product image.
35 '''
36 return isinstance(field.field.widget, VersatileImagePPOIClickWidget)
37
38
39 @register.filter
40 def is_image_preview_widget(field):
41 '''
42 This filter checks if image field widget is used when user wants to add new
43 product image.
44 '''
45 return isinstance(field.field.widget, ImagePreviewWidget)
46
47
48 @register.inclusion_tag('dashboard/product/product_variant/_image_select.html')
49 def render_image_choice(field):
50 choices = zip(field, field.field.queryset)
51 return {'field': field, 'choices_with_images': choices}
52
53
54 @register.inclusion_tag('dashboard/includes/_pagination.html',
55 takes_context=True)
56 def paginate(context, page_obj, num_of_pages=5):
57 context['page_obj'] = page_obj
58 context['n_forward'] = num_of_pages + 1
59 context['n_backward'] = -num_of_pages - 1
60 context['next_section'] = (2 * num_of_pages) + 1
61 context['previous_section'] = (-2 * num_of_pages) - 1
62 return context
63
64
65 @register.simple_tag
66 def margin_for_variant(stock):
67 return get_margin_for_variant(stock)
68
69
70 @register.simple_tag
71 def margins_for_variant(variant):
72 margins = get_variant_costs_data(variant)['margins']
73 return margins
74
75
76 @register.inclusion_tag('dashboard/includes/_filters.html', takes_context=True)
77 def add_filters(context, filter_set, sort_by_filter_name='sort_by'):
78 chips = []
79 request_get = context['request'].GET.copy()
80 for filter_name in filter_set.form.cleaned_data.keys():
81 if filter_name == sort_by_filter_name:
82 # Skip processing of sort_by filter, as it's rendered differently
83 continue
84
85 field = filter_set.form[filter_name]
86 if field.value() not in ['', None]:
87 if isinstance(field.field, forms.NullBooleanField):
88 items = handle_nullboolean(field, request_get)
89 elif isinstance(field.field, forms.ModelMultipleChoiceField):
90 items = handle_multiple_model_choice(field, request_get)
91 elif isinstance(field.field, forms.MultipleChoiceField):
92 items = handle_multiple_choice(field, request_get)
93 elif isinstance(field.field, forms.ModelChoiceField):
94 items = handle_single_model_choice(field, request_get)
95 elif isinstance(field.field, forms.ChoiceField):
96 items = handle_single_choice(field, request_get)
97 elif isinstance(field.field, RangeField):
98 items = handle_range(field, request_get)
99 else:
100 items = handle_default(field, request_get)
101 chips.extend(items)
102 return {
103 'chips': chips, 'filter': filter_set, 'count': filter_set.qs.count(),
104 'sort_by': request_get.get(sort_by_filter_name, None)}
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/saleor/dashboard/templatetags/utils.py b/saleor/dashboard/templatetags/utils.py
--- a/saleor/dashboard/templatetags/utils.py
+++ b/saleor/dashboard/templatetags/utils.py
@@ -1,3 +1,5 @@
+from __future__ import unicode_literals
+from json import dumps
from urllib.parse import urlencode
from django import forms
@@ -102,3 +104,13 @@
return {
'chips': chips, 'filter': filter_set, 'count': filter_set.qs.count(),
'sort_by': request_get.get(sort_by_filter_name, None)}
+
+
+@register.simple_tag(takes_context=True)
+def serialize_messages(context):
+ """Serialize django.contrib.messages to JSON"""
+ messages = context.get('messages', [])
+ data = {}
+ for i, message in enumerate(messages):
+ data[i] = str(message)
+ return dumps(data)
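For illustration only (editor's sketch, not code from the repository): a self-contained mirror of the `serialize_messages` tag added above, showing the JSON payload a dashboard template could expose through a `data-*` attribute for the JS renderer the issue asks for. The sample messages are made up.

```python
# Hedged sketch mirroring the serialize_messages tag from the patch above.
from json import dumps


def serialize_messages(context):
    """Serialize django.contrib.messages-style iterables to JSON."""
    messages = context.get('messages', [])
    return dumps({i: str(message) for i, message in enumerate(messages)})


print(serialize_messages({'messages': ['Product saved.', 'Image deleted.']}))
# -> {"0": "Product saved.", "1": "Image deleted."}
```

In a template this would typically be dropped into markup such as `data-messages="{% serialize_messages %}"` and read back client-side with `JSON.parse`, which is the data-attribute hand-off the issue describes.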
|
{"golden_diff": "diff --git a/saleor/dashboard/templatetags/utils.py b/saleor/dashboard/templatetags/utils.py\n--- a/saleor/dashboard/templatetags/utils.py\n+++ b/saleor/dashboard/templatetags/utils.py\n@@ -1,3 +1,5 @@\n+from __future__ import unicode_literals\n+from json import dumps\n from urllib.parse import urlencode\n \n from django import forms\n@@ -102,3 +104,13 @@\n return {\n 'chips': chips, 'filter': filter_set, 'count': filter_set.qs.count(),\n 'sort_by': request_get.get(sort_by_filter_name, None)}\n+\n+\[email protected]_tag(takes_context=True)\n+def serialize_messages(context):\n+ \"\"\"Serialize django.contrib.messages to JSON\"\"\"\n+ messages = context.get('messages', [])\n+ data = {}\n+ for i, message in enumerate(messages):\n+ data[i] = str(message)\n+ return dumps(data)\n", "issue": "Refactor displaying success messages in the dashboard\nThe code responsible for displaying success messages in the dashboard lives in [_messages.html](https://github.com/mirumee/saleor/blob/master/templates/dashboard/includes/_messages.html) template and mixes Django's templating language with JS which isn't very elegant. Instead, there should be a function written entirely in JS that would take care of rendering those messages with data passed from backend through `data-*` attributes.\n", "before_files": [{"content": "from urllib.parse import urlencode\n\nfrom django import forms\nfrom django.template import Library\nfrom django_filters.fields import RangeField\nfrom versatileimagefield.widgets import VersatileImagePPOIClickWidget\n\nfrom ...product.utils import get_margin_for_variant, get_variant_costs_data\nfrom ..product.widgets import ImagePreviewWidget\nfrom .chips import (\n handle_default, handle_multiple_choice, handle_multiple_model_choice,\n handle_nullboolean, handle_range, handle_single_choice,\n handle_single_model_choice)\n\nregister = Library()\n\n\[email protected]_tag(takes_context=True)\ndef construct_get_query(context, **params):\n request_get = context['request'].GET.dict()\n if not (request_get or params):\n return ''\n all_params = {}\n all_params.update(request_get)\n all_params.update(params)\n all_params.update(context.get('default_pagination_params', {}))\n return '?' 
+ urlencode(all_params)\n\n\[email protected]\ndef is_versatile_image_ppoi_click_widget(field):\n '''\n This filter checks if image field widget is used when user wants to edit\n existing product image.\n '''\n return isinstance(field.field.widget, VersatileImagePPOIClickWidget)\n\n\[email protected]\ndef is_image_preview_widget(field):\n '''\n This filter checks if image field widget is used when user wants to add new\n product image.\n '''\n return isinstance(field.field.widget, ImagePreviewWidget)\n\n\[email protected]_tag('dashboard/product/product_variant/_image_select.html')\ndef render_image_choice(field):\n choices = zip(field, field.field.queryset)\n return {'field': field, 'choices_with_images': choices}\n\n\[email protected]_tag('dashboard/includes/_pagination.html',\n takes_context=True)\ndef paginate(context, page_obj, num_of_pages=5):\n context['page_obj'] = page_obj\n context['n_forward'] = num_of_pages + 1\n context['n_backward'] = -num_of_pages - 1\n context['next_section'] = (2 * num_of_pages) + 1\n context['previous_section'] = (-2 * num_of_pages) - 1\n return context\n\n\[email protected]_tag\ndef margin_for_variant(stock):\n return get_margin_for_variant(stock)\n\n\[email protected]_tag\ndef margins_for_variant(variant):\n margins = get_variant_costs_data(variant)['margins']\n return margins\n\n\[email protected]_tag('dashboard/includes/_filters.html', takes_context=True)\ndef add_filters(context, filter_set, sort_by_filter_name='sort_by'):\n chips = []\n request_get = context['request'].GET.copy()\n for filter_name in filter_set.form.cleaned_data.keys():\n if filter_name == sort_by_filter_name:\n # Skip processing of sort_by filter, as it's rendered differently\n continue\n\n field = filter_set.form[filter_name]\n if field.value() not in ['', None]:\n if isinstance(field.field, forms.NullBooleanField):\n items = handle_nullboolean(field, request_get)\n elif isinstance(field.field, forms.ModelMultipleChoiceField):\n items = handle_multiple_model_choice(field, request_get)\n elif isinstance(field.field, forms.MultipleChoiceField):\n items = handle_multiple_choice(field, request_get)\n elif isinstance(field.field, forms.ModelChoiceField):\n items = handle_single_model_choice(field, request_get)\n elif isinstance(field.field, forms.ChoiceField):\n items = handle_single_choice(field, request_get)\n elif isinstance(field.field, RangeField):\n items = handle_range(field, request_get)\n else:\n items = handle_default(field, request_get)\n chips.extend(items)\n return {\n 'chips': chips, 'filter': filter_set, 'count': filter_set.qs.count(),\n 'sort_by': request_get.get(sort_by_filter_name, None)}\n", "path": "saleor/dashboard/templatetags/utils.py"}], "after_files": [{"content": "from __future__ import unicode_literals\nfrom json import dumps\nfrom urllib.parse import urlencode\n\nfrom django import forms\nfrom django.template import Library\nfrom django_filters.fields import RangeField\nfrom versatileimagefield.widgets import VersatileImagePPOIClickWidget\n\nfrom ...product.utils import get_margin_for_variant, get_variant_costs_data\nfrom ..product.widgets import ImagePreviewWidget\nfrom .chips import (\n handle_default, handle_multiple_choice, handle_multiple_model_choice,\n handle_nullboolean, handle_range, handle_single_choice,\n handle_single_model_choice)\n\nregister = Library()\n\n\[email protected]_tag(takes_context=True)\ndef construct_get_query(context, **params):\n request_get = context['request'].GET.dict()\n if not (request_get or params):\n return ''\n all_params 
= {}\n all_params.update(request_get)\n all_params.update(params)\n all_params.update(context.get('default_pagination_params', {}))\n return '?' + urlencode(all_params)\n\n\[email protected]\ndef is_versatile_image_ppoi_click_widget(field):\n '''\n This filter checks if image field widget is used when user wants to edit\n existing product image.\n '''\n return isinstance(field.field.widget, VersatileImagePPOIClickWidget)\n\n\[email protected]\ndef is_image_preview_widget(field):\n '''\n This filter checks if image field widget is used when user wants to add new\n product image.\n '''\n return isinstance(field.field.widget, ImagePreviewWidget)\n\n\[email protected]_tag('dashboard/product/product_variant/_image_select.html')\ndef render_image_choice(field):\n choices = zip(field, field.field.queryset)\n return {'field': field, 'choices_with_images': choices}\n\n\[email protected]_tag('dashboard/includes/_pagination.html',\n takes_context=True)\ndef paginate(context, page_obj, num_of_pages=5):\n context['page_obj'] = page_obj\n context['n_forward'] = num_of_pages + 1\n context['n_backward'] = -num_of_pages - 1\n context['next_section'] = (2 * num_of_pages) + 1\n context['previous_section'] = (-2 * num_of_pages) - 1\n return context\n\n\[email protected]_tag\ndef margin_for_variant(stock):\n return get_margin_for_variant(stock)\n\n\[email protected]_tag\ndef margins_for_variant(variant):\n margins = get_variant_costs_data(variant)['margins']\n return margins\n\n\[email protected]_tag('dashboard/includes/_filters.html', takes_context=True)\ndef add_filters(context, filter_set, sort_by_filter_name='sort_by'):\n chips = []\n request_get = context['request'].GET.copy()\n for filter_name in filter_set.form.cleaned_data.keys():\n if filter_name == sort_by_filter_name:\n # Skip processing of sort_by filter, as it's rendered differently\n continue\n\n field = filter_set.form[filter_name]\n if field.value() not in ['', None]:\n if isinstance(field.field, forms.NullBooleanField):\n items = handle_nullboolean(field, request_get)\n elif isinstance(field.field, forms.ModelMultipleChoiceField):\n items = handle_multiple_model_choice(field, request_get)\n elif isinstance(field.field, forms.MultipleChoiceField):\n items = handle_multiple_choice(field, request_get)\n elif isinstance(field.field, forms.ModelChoiceField):\n items = handle_single_model_choice(field, request_get)\n elif isinstance(field.field, forms.ChoiceField):\n items = handle_single_choice(field, request_get)\n elif isinstance(field.field, RangeField):\n items = handle_range(field, request_get)\n else:\n items = handle_default(field, request_get)\n chips.extend(items)\n return {\n 'chips': chips, 'filter': filter_set, 'count': filter_set.qs.count(),\n 'sort_by': request_get.get(sort_by_filter_name, None)}\n\n\[email protected]_tag(takes_context=True)\ndef serialize_messages(context):\n \"\"\"Serialize django.contrib.messages to JSON\"\"\"\n messages = context.get('messages', [])\n data = {}\n for i, message in enumerate(messages):\n data[i] = str(message)\n return dumps(data)\n", "path": "saleor/dashboard/templatetags/utils.py"}]}
| 1,373 | 215 |
gh_patches_debug_8154
|
rasdani/github-patches
|
git_diff
|
numba__numba-814
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BUG JIT optimizer removes index-variables after the loop
It seems that the optimizer removes all variables that look like index variables after the loop is completed (or before the next loop is started).
For example, the following code:
```
from numba import jit
import numpy as np
@jit('f8[:](f8[:])')
def test(x):
res = np.zeros(len(x))
ind = 0
for ii in range(len(x)):
ind += 1
res[ind] = x[ind]
if x[ind] >= 10:
break
# ind #### Uncomment this line to get correct result
for ii in range(ind+1, len(x)):
res[ii] = 0
return res
x = np.array([1.,4,2,-3,5,2,10,5,2,6])
print test(x)
```
returns `[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]` as if before the second loop the variable had value: `ind=0`
But when the line `# ind` is not commented (i.e. when variable `ind` is read before the second loop) the result is correct: `[ 0. 4. 2. -3. 5. 2. 10. 0. 0. 0.]`
In the first case the loop is jitted and in the second it is rejected. So it seems that the access to the variable in `range(ind+1, len(x))` is not considered by the compiler as a reason to reject jitting.
Just in case:
Python 2.7.6
LLVM: 3.3
llvmpy (0.12.7)
numba (0.14.0)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `numba/looplifting.py`
Content:
```
1 from __future__ import print_function, division, absolute_import
2
3 from numba import utils
4 from numba.bytecode import ByteCodeInst, CustomByteCode
5
6
7 def lift_loop(bytecode, dispatcher_factory):
8 """Lift the top-level loops.
9
10 Returns (outer, loops)
11 ------------------------
12 * outer: ByteCode of a copy of the loop-less function.
13 * loops: a list of ByteCode of the loops.
14 """
15 outer = []
16 loops = []
17 separate_loops(bytecode, outer, loops)
18
19 # Discover variables references
20 outer_rds, outer_wrs = find_varnames_uses(bytecode, outer)
21 outer_wrs |= set(bytecode.argspec.args)
22
23 dispatchers = []
24 outerlabels = set(bytecode.labels)
25 outernames = list(bytecode.co_names)
26
27 for loop in loops:
28 args, rets = discover_args_and_returns(bytecode, loop, outer_rds,
29 outer_wrs)
30
31 disp = insert_loop_call(bytecode, loop, args,
32 outer, outerlabels, rets,
33 dispatcher_factory)
34 dispatchers.append(disp)
35
36 # Build outer bytecode
37 codetable = utils.SortedMap((i.offset, i) for i in outer)
38 outerbc = CustomByteCode(func=bytecode.func,
39 func_qualname=bytecode.func_qualname,
40 argspec=bytecode.argspec,
41 filename=bytecode.filename,
42 co_names=outernames,
43 co_varnames=bytecode.co_varnames,
44 co_consts=bytecode.co_consts,
45 co_freevars=bytecode.co_freevars,
46 table=codetable,
47 labels=outerlabels & set(codetable.keys()))
48
49 return outerbc, dispatchers
50
51 @utils.total_ordering
52 class SubOffset(object):
53 """The loop-jitting may insert bytecode between two bytecode but we
54 cannot guarantee that there is enough integral space between two offsets.
55 This class workaround the problem by introducing a fractional part to the
56 offset.
57 """
58 def __init__(self, val, sub=1):
59 assert sub > 0, "fractional part cannot be <= 0"
60 self.val = val
61 self.sub = sub
62
63 def next(self):
64 """Helper method to get the next suboffset by incrementing the
65 fractional part only
66 """
67 return SubOffset(self.val, self.sub + 1)
68
69 def __add__(self, other):
70 """Adding to a suboffset will only increment the fractional part.
71 The integral part is immutable.
72 """
73 return SubOffset(self.val, self.sub + other)
74
75 def __hash__(self):
76 return hash((self.val, self.sub))
77
78 def __lt__(self, other):
79 """Can only compare to SubOffset or int
80 """
81 if isinstance(other, SubOffset):
82 if self.val < other.val:
83 return self
84 elif self.val == other.val:
85 return self.sub < other.sub
86 else:
87 return False
88 elif isinstance(other, int):
89 return self.val < other
90 else:
91 return NotImplemented
92
93 def __eq__(self, other):
94 if isinstance(other, SubOffset):
95 return self.val == other.val and self.sub == other.sub
96 elif isinstance(other, int):
97 # Can never be equal to a integer by definition
98 return False
99 else:
100 return NotImplemented
101
102 def __repr__(self):
103 """Print like a floating-point by it is not one at all.
104 """
105 return "{0}.{1}".format(self.val, self.sub)
106
107
108 def insert_loop_call(bytecode, loop, args, outer, outerlabels, returns,
109 dispatcher_factory):
110 endloopoffset = loop[-1].next
111 # Accepted. Create a bytecode object for the loop
112 args = tuple(args)
113
114 lbc = make_loop_bytecode(bytecode, loop, args, returns)
115
116 # Generate dispatcher for this inner loop, and append it to the
117 # consts tuple.
118 disp = dispatcher_factory(lbc)
119 disp_idx = len(bytecode.co_consts)
120 bytecode.co_consts += (disp,)
121
122 # Insert jump to the end
123 insertpt = SubOffset(loop[0].next)
124 jmp = ByteCodeInst.get(loop[0].offset, 'JUMP_ABSOLUTE', insertpt)
125 jmp.lineno = loop[0].lineno
126 insert_instruction(outer, jmp)
127
128 outerlabels.add(outer[-1].next)
129
130 # Prepare arguments
131 loadfn = ByteCodeInst.get(insertpt, "LOAD_CONST", disp_idx)
132 loadfn.lineno = loop[0].lineno
133 insert_instruction(outer, loadfn)
134
135 insertpt = insertpt.next()
136 for arg in args:
137 loadarg = ByteCodeInst.get(insertpt, 'LOAD_FAST',
138 bytecode.co_varnames.index(arg))
139 loadarg.lineno = loop[0].lineno
140 insert_instruction(outer, loadarg)
141 insertpt = insertpt.next()
142
143 # Call function
144 assert len(args) < 256
145 call = ByteCodeInst.get(insertpt, "CALL_FUNCTION", len(args))
146 call.lineno = loop[0].lineno
147 insert_instruction(outer, call)
148
149 insertpt = insertpt.next()
150
151 if returns:
152 # Unpack arguments
153 unpackseq = ByteCodeInst.get(insertpt, "UNPACK_SEQUENCE",
154 len(returns))
155 unpackseq.lineno = loop[0].lineno
156 insert_instruction(outer, unpackseq)
157 insertpt = insertpt.next()
158
159 for out in returns:
160 # Store each variable
161 storefast = ByteCodeInst.get(insertpt, "STORE_FAST",
162 bytecode.co_varnames.index(out))
163 storefast.lineno = loop[0].lineno
164 insert_instruction(outer, storefast)
165 insertpt = insertpt.next()
166 else:
167 # No return value
168 poptop = ByteCodeInst.get(outer[-1].next, "POP_TOP", None)
169 poptop.lineno = loop[0].lineno
170 insert_instruction(outer, poptop)
171 insertpt = insertpt.next()
172
173 jmpback = ByteCodeInst.get(insertpt, 'JUMP_ABSOLUTE',
174 endloopoffset)
175
176 jmpback.lineno = loop[0].lineno
177 insert_instruction(outer, jmpback)
178
179 return disp
180
181
182 def insert_instruction(insts, item):
183 i = find_previous_inst(insts, item.offset)
184 insts.insert(i, item)
185
186
187 def find_previous_inst(insts, offset):
188 for i, inst in enumerate(insts):
189 if inst.offset > offset:
190 return i
191 return len(insts)
192
193
194 def make_loop_bytecode(bytecode, loop, args, returns):
195 # Add return None
196 co_consts = tuple(bytecode.co_consts)
197 if None not in co_consts:
198 co_consts += (None,)
199
200 if returns:
201 for out in returns:
202 # Load output
203 loadfast = ByteCodeInst.get(loop[-1].next, "LOAD_FAST",
204 bytecode.co_varnames.index(out))
205 loadfast.lineno = loop[-1].lineno
206 loop.append(loadfast)
207 # Build tuple
208 buildtuple = ByteCodeInst.get(loop[-1].next, "BUILD_TUPLE",
209 len(returns))
210 buildtuple.lineno = loop[-1].lineno
211 loop.append(buildtuple)
212
213 else:
214 # Load None
215 load_none = ByteCodeInst.get(loop[-1].next, "LOAD_CONST",
216 co_consts.index(None))
217 load_none.lineno = loop[-1].lineno
218 loop.append(load_none)
219
220 # Return TOS
221 return_value = ByteCodeInst.get(loop[-1].next, "RETURN_VALUE", 0)
222 return_value.lineno = loop[-1].lineno
223 loop.append(return_value)
224
225 # Function name
226 loop_qualname = bytecode.func_qualname + ".__numba__loop%d__" % loop[0].offset
227
228 # Argspec
229 argspectype = type(bytecode.argspec)
230 argspec = argspectype(args=args, varargs=(), keywords=(), defaults=())
231
232 # Code table
233 codetable = utils.SortedMap((i.offset, i) for i in loop)
234
235 # Custom bytecode object
236 lbc = CustomByteCode(func=bytecode.func,
237 func_qualname=loop_qualname,
238 argspec=argspec,
239 filename=bytecode.filename,
240 co_names=bytecode.co_names,
241 co_varnames=bytecode.co_varnames,
242 co_consts=co_consts,
243 co_freevars=bytecode.co_freevars,
244 table=codetable,
245 labels=bytecode.labels)
246
247 return lbc
248
249
250 def stitch_instructions(outer, loop):
251 begin = loop[0].offset
252 i = find_previous_inst(outer, begin)
253 return outer[:i] + loop + outer[i:]
254
255
256 def discover_args_and_returns(bytecode, insts, outer_rds, outer_wrs):
257 """
258 Basic analysis for args and returns
259 This completely ignores the ordering or the read-writes.
260 """
261 rdnames, wrnames = find_varnames_uses(bytecode, insts)
262 # Pass names that are written outside and read locally
263 args = outer_wrs & rdnames
264 # Return values that it written locally and read outside
265 rets = wrnames & outer_rds
266 return args, rets
267
268
269 def find_varnames_uses(bytecode, insts):
270 rdnames = set()
271 wrnames = set()
272 for inst in insts:
273 if inst.opname == 'LOAD_FAST':
274 rdnames.add(bytecode.co_varnames[inst.arg])
275 elif inst.opname == 'STORE_FAST':
276 wrnames.add(bytecode.co_varnames[inst.arg])
277 return rdnames, wrnames
278
279
280 def separate_loops(bytecode, outer, loops):
281 """
282 Separate top-level loops from the function
283
284 Stores loopless instructions from the original function into `outer`.
285 Stores list of loop instructions into `loops`.
286 Both `outer` and `loops` are list-like (`append(item)` defined).
287 """
288 endloop = None
289 cur = None
290 for inst in bytecode:
291 if endloop is None:
292 if inst.opname == 'SETUP_LOOP':
293 cur = [inst]
294 # Python may set the end of loop to the final jump destination
295 # when nested in a if-else. We need to scan the bytecode to
296 # find the actual end of loop
297 endloop = _scan_real_end_loop(bytecode, inst)
298 else:
299 outer.append(inst)
300 else:
301 cur.append(inst)
302 if inst.next == endloop:
303 for inst in cur:
304 if inst.opname == 'RETURN_VALUE':
305 # Reject if return inside loop
306 outer.extend(cur)
307 break
308 else:
309 loops.append(cur)
310 endloop = None
311
312
313 def _scan_real_end_loop(bytecode, setuploop_inst):
314 """Find the end of loop.
315 Return the instruction offset.
316 """
317 start = setuploop_inst.next
318 end = start + setuploop_inst.arg
319 offset = start
320 depth = 0
321 while offset < end:
322 inst = bytecode[offset]
323 depth += inst.block_effect
324 if depth < 0:
325 return inst.next
326 offset = inst.next
327
328
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/numba/looplifting.py b/numba/looplifting.py
--- a/numba/looplifting.py
+++ b/numba/looplifting.py
@@ -20,6 +20,13 @@
outer_rds, outer_wrs = find_varnames_uses(bytecode, outer)
outer_wrs |= set(bytecode.argspec.args)
+ # Find in-loop references to variables
+ for loop in loops:
+ args, rets = discover_args_and_returns(bytecode, loop, outer_rds,
+ outer_wrs)
+ outer_rds |= args
+ outer_wrs |= rets
+
dispatchers = []
outerlabels = set(bytecode.labels)
outernames = list(bytecode.co_names)
|
{"golden_diff": "diff --git a/numba/looplifting.py b/numba/looplifting.py\n--- a/numba/looplifting.py\n+++ b/numba/looplifting.py\n@@ -20,6 +20,13 @@\n outer_rds, outer_wrs = find_varnames_uses(bytecode, outer)\n outer_wrs |= set(bytecode.argspec.args)\n \n+ # Find in-loop references to variables\n+ for loop in loops:\n+ args, rets = discover_args_and_returns(bytecode, loop, outer_rds,\n+ outer_wrs)\n+ outer_rds |= args\n+ outer_wrs |= rets\n+\n dispatchers = []\n outerlabels = set(bytecode.labels)\n outernames = list(bytecode.co_names)\n", "issue": "BUG JIT optimizer removes index-variables after the loop\nIt seemes that the optimizer removes all variables that look like index variables after the loop is completed (or before the next loop is started).\n\nFor example, the following code:\n\n```\nfrom numba import jit\nimport numpy as np\n\n@jit('f8[:](f8[:])')\ndef test(x):\n res = np.zeros(len(x))\n ind = 0\n for ii in range(len(x)):\n ind += 1\n res[ind] = x[ind]\n if x[ind] >= 10:\n break\n\n # ind #### Uncomment this line to get correct result\n for ii in range(ind+1, len(x)):\n res[ii] = 0\n return res\n\nx = np.array([1.,4,2,-3,5,2,10,5,2,6])\nprint test(x)\n```\n\nreturns `[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]` as if before the second loop the variable had value: `ind=0`\n\nBut when the line `# ind` is not commented (i.e. when variable `ind` is read before the second loop) the result is correct: `[ 0. 4. 2. -3. 5. 2. 10. 0. 0. 0.]`\n\nIn first case the loop is jitted and in second is rejected. So it seemed that the access to the variable in `range(ind+1, len(x))` is not considered by the compiler as a reason to reject jitting.\n\nJust in case:\nPython 2.7.6\nLLVM: 3.3\nllvmpy (0.12.7)\nnumba (0.14.0)\n\n", "before_files": [{"content": "from __future__ import print_function, division, absolute_import\n\nfrom numba import utils\nfrom numba.bytecode import ByteCodeInst, CustomByteCode\n\n\ndef lift_loop(bytecode, dispatcher_factory):\n \"\"\"Lift the top-level loops.\n\n Returns (outer, loops)\n ------------------------\n * outer: ByteCode of a copy of the loop-less function.\n * loops: a list of ByteCode of the loops.\n \"\"\"\n outer = []\n loops = []\n separate_loops(bytecode, outer, loops)\n\n # Discover variables references\n outer_rds, outer_wrs = find_varnames_uses(bytecode, outer)\n outer_wrs |= set(bytecode.argspec.args)\n\n dispatchers = []\n outerlabels = set(bytecode.labels)\n outernames = list(bytecode.co_names)\n\n for loop in loops:\n args, rets = discover_args_and_returns(bytecode, loop, outer_rds,\n outer_wrs)\n\n disp = insert_loop_call(bytecode, loop, args,\n outer, outerlabels, rets,\n dispatcher_factory)\n dispatchers.append(disp)\n\n # Build outer bytecode\n codetable = utils.SortedMap((i.offset, i) for i in outer)\n outerbc = CustomByteCode(func=bytecode.func,\n func_qualname=bytecode.func_qualname,\n argspec=bytecode.argspec,\n filename=bytecode.filename,\n co_names=outernames,\n co_varnames=bytecode.co_varnames,\n co_consts=bytecode.co_consts,\n co_freevars=bytecode.co_freevars,\n table=codetable,\n labels=outerlabels & set(codetable.keys()))\n\n return outerbc, dispatchers\n\[email protected]_ordering\nclass SubOffset(object):\n \"\"\"The loop-jitting may insert bytecode between two bytecode but we\n cannot guarantee that there is enough integral space between two offsets.\n This class workaround the problem by introducing a fractional part to the\n offset.\n \"\"\"\n def __init__(self, val, sub=1):\n assert sub > 0, \"fractional part cannot be <= 0\"\n self.val = 
val\n self.sub = sub\n\n def next(self):\n \"\"\"Helper method to get the next suboffset by incrementing the\n fractional part only\n \"\"\"\n return SubOffset(self.val, self.sub + 1)\n\n def __add__(self, other):\n \"\"\"Adding to a suboffset will only increment the fractional part.\n The integral part is immutable.\n \"\"\"\n return SubOffset(self.val, self.sub + other)\n\n def __hash__(self):\n return hash((self.val, self.sub))\n\n def __lt__(self, other):\n \"\"\"Can only compare to SubOffset or int\n \"\"\"\n if isinstance(other, SubOffset):\n if self.val < other.val:\n return self\n elif self.val == other.val:\n return self.sub < other.sub\n else:\n return False\n elif isinstance(other, int):\n return self.val < other\n else:\n return NotImplemented\n\n def __eq__(self, other):\n if isinstance(other, SubOffset):\n return self.val == other.val and self.sub == other.sub\n elif isinstance(other, int):\n # Can never be equal to a integer by definition\n return False\n else:\n return NotImplemented\n\n def __repr__(self):\n \"\"\"Print like a floating-point by it is not one at all.\n \"\"\"\n return \"{0}.{1}\".format(self.val, self.sub)\n\n\ndef insert_loop_call(bytecode, loop, args, outer, outerlabels, returns,\n dispatcher_factory):\n endloopoffset = loop[-1].next\n # Accepted. Create a bytecode object for the loop\n args = tuple(args)\n\n lbc = make_loop_bytecode(bytecode, loop, args, returns)\n\n # Generate dispatcher for this inner loop, and append it to the\n # consts tuple.\n disp = dispatcher_factory(lbc)\n disp_idx = len(bytecode.co_consts)\n bytecode.co_consts += (disp,)\n\n # Insert jump to the end\n insertpt = SubOffset(loop[0].next)\n jmp = ByteCodeInst.get(loop[0].offset, 'JUMP_ABSOLUTE', insertpt)\n jmp.lineno = loop[0].lineno\n insert_instruction(outer, jmp)\n\n outerlabels.add(outer[-1].next)\n\n # Prepare arguments\n loadfn = ByteCodeInst.get(insertpt, \"LOAD_CONST\", disp_idx)\n loadfn.lineno = loop[0].lineno\n insert_instruction(outer, loadfn)\n\n insertpt = insertpt.next()\n for arg in args:\n loadarg = ByteCodeInst.get(insertpt, 'LOAD_FAST',\n bytecode.co_varnames.index(arg))\n loadarg.lineno = loop[0].lineno\n insert_instruction(outer, loadarg)\n insertpt = insertpt.next()\n\n # Call function\n assert len(args) < 256\n call = ByteCodeInst.get(insertpt, \"CALL_FUNCTION\", len(args))\n call.lineno = loop[0].lineno\n insert_instruction(outer, call)\n\n insertpt = insertpt.next()\n\n if returns:\n # Unpack arguments\n unpackseq = ByteCodeInst.get(insertpt, \"UNPACK_SEQUENCE\",\n len(returns))\n unpackseq.lineno = loop[0].lineno\n insert_instruction(outer, unpackseq)\n insertpt = insertpt.next()\n\n for out in returns:\n # Store each variable\n storefast = ByteCodeInst.get(insertpt, \"STORE_FAST\",\n bytecode.co_varnames.index(out))\n storefast.lineno = loop[0].lineno\n insert_instruction(outer, storefast)\n insertpt = insertpt.next()\n else:\n # No return value\n poptop = ByteCodeInst.get(outer[-1].next, \"POP_TOP\", None)\n poptop.lineno = loop[0].lineno\n insert_instruction(outer, poptop)\n insertpt = insertpt.next()\n\n jmpback = ByteCodeInst.get(insertpt, 'JUMP_ABSOLUTE',\n endloopoffset)\n\n jmpback.lineno = loop[0].lineno\n insert_instruction(outer, jmpback)\n\n return disp\n\n\ndef insert_instruction(insts, item):\n i = find_previous_inst(insts, item.offset)\n insts.insert(i, item)\n\n\ndef find_previous_inst(insts, offset):\n for i, inst in enumerate(insts):\n if inst.offset > offset:\n return i\n return len(insts)\n\n\ndef make_loop_bytecode(bytecode, loop, 
args, returns):\n # Add return None\n co_consts = tuple(bytecode.co_consts)\n if None not in co_consts:\n co_consts += (None,)\n\n if returns:\n for out in returns:\n # Load output\n loadfast = ByteCodeInst.get(loop[-1].next, \"LOAD_FAST\",\n bytecode.co_varnames.index(out))\n loadfast.lineno = loop[-1].lineno\n loop.append(loadfast)\n # Build tuple\n buildtuple = ByteCodeInst.get(loop[-1].next, \"BUILD_TUPLE\",\n len(returns))\n buildtuple.lineno = loop[-1].lineno\n loop.append(buildtuple)\n\n else:\n # Load None\n load_none = ByteCodeInst.get(loop[-1].next, \"LOAD_CONST\",\n co_consts.index(None))\n load_none.lineno = loop[-1].lineno\n loop.append(load_none)\n\n # Return TOS\n return_value = ByteCodeInst.get(loop[-1].next, \"RETURN_VALUE\", 0)\n return_value.lineno = loop[-1].lineno\n loop.append(return_value)\n\n # Function name\n loop_qualname = bytecode.func_qualname + \".__numba__loop%d__\" % loop[0].offset\n\n # Argspec\n argspectype = type(bytecode.argspec)\n argspec = argspectype(args=args, varargs=(), keywords=(), defaults=())\n\n # Code table\n codetable = utils.SortedMap((i.offset, i) for i in loop)\n\n # Custom bytecode object\n lbc = CustomByteCode(func=bytecode.func,\n func_qualname=loop_qualname,\n argspec=argspec,\n filename=bytecode.filename,\n co_names=bytecode.co_names,\n co_varnames=bytecode.co_varnames,\n co_consts=co_consts,\n co_freevars=bytecode.co_freevars,\n table=codetable,\n labels=bytecode.labels)\n\n return lbc\n\n\ndef stitch_instructions(outer, loop):\n begin = loop[0].offset\n i = find_previous_inst(outer, begin)\n return outer[:i] + loop + outer[i:]\n\n\ndef discover_args_and_returns(bytecode, insts, outer_rds, outer_wrs):\n \"\"\"\n Basic analysis for args and returns\n This completely ignores the ordering or the read-writes.\n \"\"\"\n rdnames, wrnames = find_varnames_uses(bytecode, insts)\n # Pass names that are written outside and read locally\n args = outer_wrs & rdnames\n # Return values that it written locally and read outside\n rets = wrnames & outer_rds\n return args, rets\n\n\ndef find_varnames_uses(bytecode, insts):\n rdnames = set()\n wrnames = set()\n for inst in insts:\n if inst.opname == 'LOAD_FAST':\n rdnames.add(bytecode.co_varnames[inst.arg])\n elif inst.opname == 'STORE_FAST':\n wrnames.add(bytecode.co_varnames[inst.arg])\n return rdnames, wrnames\n\n\ndef separate_loops(bytecode, outer, loops):\n \"\"\"\n Separate top-level loops from the function\n\n Stores loopless instructions from the original function into `outer`.\n Stores list of loop instructions into `loops`.\n Both `outer` and `loops` are list-like (`append(item)` defined).\n \"\"\"\n endloop = None\n cur = None\n for inst in bytecode:\n if endloop is None:\n if inst.opname == 'SETUP_LOOP':\n cur = [inst]\n # Python may set the end of loop to the final jump destination\n # when nested in a if-else. 
We need to scan the bytecode to\n # find the actual end of loop\n endloop = _scan_real_end_loop(bytecode, inst)\n else:\n outer.append(inst)\n else:\n cur.append(inst)\n if inst.next == endloop:\n for inst in cur:\n if inst.opname == 'RETURN_VALUE':\n # Reject if return inside loop\n outer.extend(cur)\n break\n else:\n loops.append(cur)\n endloop = None\n\n\ndef _scan_real_end_loop(bytecode, setuploop_inst):\n \"\"\"Find the end of loop.\n Return the instruction offset.\n \"\"\"\n start = setuploop_inst.next\n end = start + setuploop_inst.arg\n offset = start\n depth = 0\n while offset < end:\n inst = bytecode[offset]\n depth += inst.block_effect\n if depth < 0:\n return inst.next\n offset = inst.next\n\n", "path": "numba/looplifting.py"}], "after_files": [{"content": "from __future__ import print_function, division, absolute_import\n\nfrom numba import utils\nfrom numba.bytecode import ByteCodeInst, CustomByteCode\n\n\ndef lift_loop(bytecode, dispatcher_factory):\n \"\"\"Lift the top-level loops.\n\n Returns (outer, loops)\n ------------------------\n * outer: ByteCode of a copy of the loop-less function.\n * loops: a list of ByteCode of the loops.\n \"\"\"\n outer = []\n loops = []\n separate_loops(bytecode, outer, loops)\n\n # Discover variables references\n outer_rds, outer_wrs = find_varnames_uses(bytecode, outer)\n outer_wrs |= set(bytecode.argspec.args)\n\n # Find in-loop references to variables\n for loop in loops:\n args, rets = discover_args_and_returns(bytecode, loop, outer_rds,\n outer_wrs)\n outer_rds |= args\n outer_wrs |= rets\n\n dispatchers = []\n outerlabels = set(bytecode.labels)\n outernames = list(bytecode.co_names)\n\n for loop in loops:\n args, rets = discover_args_and_returns(bytecode, loop, outer_rds,\n outer_wrs)\n\n disp = insert_loop_call(bytecode, loop, args,\n outer, outerlabels, rets,\n dispatcher_factory)\n dispatchers.append(disp)\n\n # Build outer bytecode\n codetable = utils.SortedMap((i.offset, i) for i in outer)\n outerbc = CustomByteCode(func=bytecode.func,\n func_qualname=bytecode.func_qualname,\n argspec=bytecode.argspec,\n filename=bytecode.filename,\n co_names=outernames,\n co_varnames=bytecode.co_varnames,\n co_consts=bytecode.co_consts,\n co_freevars=bytecode.co_freevars,\n table=codetable,\n labels=outerlabels & set(codetable.keys()))\n\n return outerbc, dispatchers\n\[email protected]_ordering\nclass SubOffset(object):\n \"\"\"The loop-jitting may insert bytecode between two bytecode but we\n cannot guarantee that there is enough integral space between two offsets.\n This class workaround the problem by introducing a fractional part to the\n offset.\n \"\"\"\n def __init__(self, val, sub=1):\n assert sub > 0, \"fractional part cannot be <= 0\"\n self.val = val\n self.sub = sub\n\n def next(self):\n \"\"\"Helper method to get the next suboffset by incrementing the\n fractional part only\n \"\"\"\n return SubOffset(self.val, self.sub + 1)\n\n def __add__(self, other):\n \"\"\"Adding to a suboffset will only increment the fractional part.\n The integral part is immutable.\n \"\"\"\n return SubOffset(self.val, self.sub + other)\n\n def __hash__(self):\n return hash((self.val, self.sub))\n\n def __lt__(self, other):\n \"\"\"Can only compare to SubOffset or int\n \"\"\"\n if isinstance(other, SubOffset):\n if self.val < other.val:\n return self\n elif self.val == other.val:\n return self.sub < other.sub\n else:\n return False\n elif isinstance(other, int):\n return self.val < other\n else:\n return NotImplemented\n\n def __eq__(self, other):\n if 
isinstance(other, SubOffset):\n return self.val == other.val and self.sub == other.sub\n elif isinstance(other, int):\n # Can never be equal to a integer by definition\n return False\n else:\n return NotImplemented\n\n def __repr__(self):\n \"\"\"Print like a floating-point by it is not one at all.\n \"\"\"\n return \"{0}.{1}\".format(self.val, self.sub)\n\n\ndef insert_loop_call(bytecode, loop, args, outer, outerlabels, returns,\n dispatcher_factory):\n endloopoffset = loop[-1].next\n # Accepted. Create a bytecode object for the loop\n args = tuple(args)\n\n lbc = make_loop_bytecode(bytecode, loop, args, returns)\n\n # Generate dispatcher for this inner loop, and append it to the\n # consts tuple.\n disp = dispatcher_factory(lbc)\n disp_idx = len(bytecode.co_consts)\n bytecode.co_consts += (disp,)\n\n # Insert jump to the end\n insertpt = SubOffset(loop[0].next)\n jmp = ByteCodeInst.get(loop[0].offset, 'JUMP_ABSOLUTE', insertpt)\n jmp.lineno = loop[0].lineno\n insert_instruction(outer, jmp)\n\n outerlabels.add(outer[-1].next)\n\n # Prepare arguments\n loadfn = ByteCodeInst.get(insertpt, \"LOAD_CONST\", disp_idx)\n loadfn.lineno = loop[0].lineno\n insert_instruction(outer, loadfn)\n\n insertpt = insertpt.next()\n for arg in args:\n loadarg = ByteCodeInst.get(insertpt, 'LOAD_FAST',\n bytecode.co_varnames.index(arg))\n loadarg.lineno = loop[0].lineno\n insert_instruction(outer, loadarg)\n insertpt = insertpt.next()\n\n # Call function\n assert len(args) < 256\n call = ByteCodeInst.get(insertpt, \"CALL_FUNCTION\", len(args))\n call.lineno = loop[0].lineno\n insert_instruction(outer, call)\n\n insertpt = insertpt.next()\n\n if returns:\n # Unpack arguments\n unpackseq = ByteCodeInst.get(insertpt, \"UNPACK_SEQUENCE\",\n len(returns))\n unpackseq.lineno = loop[0].lineno\n insert_instruction(outer, unpackseq)\n insertpt = insertpt.next()\n\n for out in returns:\n # Store each variable\n storefast = ByteCodeInst.get(insertpt, \"STORE_FAST\",\n bytecode.co_varnames.index(out))\n storefast.lineno = loop[0].lineno\n insert_instruction(outer, storefast)\n insertpt = insertpt.next()\n else:\n # No return value\n poptop = ByteCodeInst.get(outer[-1].next, \"POP_TOP\", None)\n poptop.lineno = loop[0].lineno\n insert_instruction(outer, poptop)\n insertpt = insertpt.next()\n\n jmpback = ByteCodeInst.get(insertpt, 'JUMP_ABSOLUTE',\n endloopoffset)\n\n jmpback.lineno = loop[0].lineno\n insert_instruction(outer, jmpback)\n\n return disp\n\n\ndef insert_instruction(insts, item):\n i = find_previous_inst(insts, item.offset)\n insts.insert(i, item)\n\n\ndef find_previous_inst(insts, offset):\n for i, inst in enumerate(insts):\n if inst.offset > offset:\n return i\n return len(insts)\n\n\ndef make_loop_bytecode(bytecode, loop, args, returns):\n # Add return None\n co_consts = tuple(bytecode.co_consts)\n if None not in co_consts:\n co_consts += (None,)\n\n if returns:\n for out in returns:\n # Load output\n loadfast = ByteCodeInst.get(loop[-1].next, \"LOAD_FAST\",\n bytecode.co_varnames.index(out))\n loadfast.lineno = loop[-1].lineno\n loop.append(loadfast)\n # Build tuple\n buildtuple = ByteCodeInst.get(loop[-1].next, \"BUILD_TUPLE\",\n len(returns))\n buildtuple.lineno = loop[-1].lineno\n loop.append(buildtuple)\n\n else:\n # Load None\n load_none = ByteCodeInst.get(loop[-1].next, \"LOAD_CONST\",\n co_consts.index(None))\n load_none.lineno = loop[-1].lineno\n loop.append(load_none)\n\n # Return TOS\n return_value = ByteCodeInst.get(loop[-1].next, \"RETURN_VALUE\", 0)\n return_value.lineno = loop[-1].lineno\n 
loop.append(return_value)\n\n # Function name\n loop_qualname = bytecode.func_qualname + \".__numba__loop%d__\" % loop[0].offset\n\n # Argspec\n argspectype = type(bytecode.argspec)\n argspec = argspectype(args=args, varargs=(), keywords=(), defaults=())\n\n # Code table\n codetable = utils.SortedMap((i.offset, i) for i in loop)\n\n # Custom bytecode object\n lbc = CustomByteCode(func=bytecode.func,\n func_qualname=loop_qualname,\n argspec=argspec,\n filename=bytecode.filename,\n co_names=bytecode.co_names,\n co_varnames=bytecode.co_varnames,\n co_consts=co_consts,\n co_freevars=bytecode.co_freevars,\n table=codetable,\n labels=bytecode.labels)\n\n return lbc\n\n\ndef stitch_instructions(outer, loop):\n begin = loop[0].offset\n i = find_previous_inst(outer, begin)\n return outer[:i] + loop + outer[i:]\n\n\ndef discover_args_and_returns(bytecode, insts, outer_rds, outer_wrs):\n \"\"\"\n Basic analysis for args and returns\n This completely ignores the ordering or the read-writes.\n \"\"\"\n rdnames, wrnames = find_varnames_uses(bytecode, insts)\n # Pass names that are written outside and read locally\n args = outer_wrs & rdnames\n # Return values that it written locally and read outside\n rets = wrnames & outer_rds\n return args, rets\n\n\ndef find_varnames_uses(bytecode, insts):\n rdnames = set()\n wrnames = set()\n for inst in insts:\n if inst.opname == 'LOAD_FAST':\n rdnames.add(bytecode.co_varnames[inst.arg])\n elif inst.opname == 'STORE_FAST':\n wrnames.add(bytecode.co_varnames[inst.arg])\n return rdnames, wrnames\n\n\ndef separate_loops(bytecode, outer, loops):\n \"\"\"\n Separate top-level loops from the function\n\n Stores loopless instructions from the original function into `outer`.\n Stores list of loop instructions into `loops`.\n Both `outer` and `loops` are list-like (`append(item)` defined).\n \"\"\"\n endloop = None\n cur = None\n for inst in bytecode:\n if endloop is None:\n if inst.opname == 'SETUP_LOOP':\n cur = [inst]\n # Python may set the end of loop to the final jump destination\n # when nested in a if-else. We need to scan the bytecode to\n # find the actual end of loop\n endloop = _scan_real_end_loop(bytecode, inst)\n else:\n outer.append(inst)\n else:\n cur.append(inst)\n if inst.next == endloop:\n for inst in cur:\n if inst.opname == 'RETURN_VALUE':\n # Reject if return inside loop\n outer.extend(cur)\n break\n else:\n loops.append(cur)\n endloop = None\n\n\ndef _scan_real_end_loop(bytecode, setuploop_inst):\n \"\"\"Find the end of loop.\n Return the instruction offset.\n \"\"\"\n start = setuploop_inst.next\n end = start + setuploop_inst.arg\n offset = start\n depth = 0\n while offset < end:\n inst = bytecode[offset]\n depth += inst.block_effect\n if depth < 0:\n return inst.next\n offset = inst.next\n\n", "path": "numba/looplifting.py"}]}
| 4,013 | 171 |
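The looplifting record that closes above hinges on `_scan_real_end_loop`, which walks instructions forward from a `SETUP_LOOP`, accumulates each instruction's `block_effect`, and stops once the depth drops below zero. A minimal, self-contained sketch of that scanning idea, using hypothetical instruction dicts rather than Numba's real bytecode objects:

```python
# Sketch only: 'block_effect' is +1 for instructions that open a block,
# -1 for instructions that close one, 0 otherwise; 'next' is the following offset.
def scan_real_end_loop(instructions, start, end):
    """Return the offset just past the loop body, or None if it is never closed."""
    depth = 0
    offset = start
    while offset < end:
        inst = instructions[offset]
        depth += inst["block_effect"]
        if depth < 0:          # we just left the SETUP_LOOP block
            return inst["next"]
        offset = inst["next"]
    return None

# Toy program: a nested block opens and closes, then the loop block itself closes.
toy = {
    0: {"block_effect": 0,  "next": 1},
    1: {"block_effect": +1, "next": 2},
    2: {"block_effect": -1, "next": 3},
    3: {"block_effect": -1, "next": 4},   # loop block closes here
}
print(scan_real_end_loop(toy, 0, 5))      # -> 4
```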
| gh_patches_debug_18707 | rasdani/github-patches | git_diff | dynaconf__dynaconf-42 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ModuleNotFoundError: No module named 'flask'
Dynaconf requires Flask by default, is that by mistake or is it intentionally?
```bash
File "/app/.heroku/python/lib/python3.6/site-packages/dynaconf/__init__.py", line 5, in <module>
from dynaconf.contrib import FlaskDynaconf
File "/app/.heroku/python/lib/python3.6/site-packages/dynaconf/contrib/__init__.py", line 1, in <module>
from dynaconf.contrib.flask_dynaconf import FlaskDynaconf, DynaconfConfig # noqa
File "/app/.heroku/python/lib/python3.6/site-packages/dynaconf/contrib/flask_dynaconf.py", line 2, in <module>
from flask.config import Config
ModuleNotFoundError: No module named 'flask'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dynaconf/contrib/flask_dynaconf.py`
Content:
```
1 # coding: utf-8
2 from flask.config import Config
3 from dynaconf import LazySettings
4
5
6 class FlaskDynaconf(object):
7 """
8 The arguments are.
9 app = The created app
10 dynaconf_args = Extra args to be passed to Dynaconf (validator for example)
11
12 All other values are stored as config vars specially:
13
14 ENVVAR_FOR_DYNACONF = Name of environment variable to use if you want to
15 change the settings file from env vars
16 example:
17 export MYSITE_SETTINGS_MODULE=/tmp/settings.py
18 with the above the settings will be loaded from that
19 file
20 Dynaconf supports .py, .yml, .toml
21
22 DYNACONF_NAMESPACE = Namespace prefix for your envvars to become settings
23 example:
24 export MYSITE_SQL_PORT='@int 5445'
25
26 with that exported to env you access using:
27 app.config.SQL_PORT
28 app.config.get('SQL_PORT')
29 app.config.get('sql_port')
30 # get is case insensitive
31 app.config['SQL_PORT']
32
33 Dynaconf uses `@int, @bool, @float, @json` to cast env
34 vars
35
36 SETTINGS_MODULE_FOR_DYNACONF = The name of the module or file to use as
37 default to load settings. If nothing is passed
38 it will be `settings.py` or value found in
39 `ENVVAR_FOR_DYNACONF`
40 Dynaconf supports .py, .yml, .toml
41
42 YAML = If using YAML for settings module, you pass an extra yaml file here
43 It is general useful to have a different file to store secrets
44 example `.secrets.yml` and then values in that file will
45 override other values. And you can exclude the .secrets from your
46 public repositories.
47
48 --------------------------------------------------------------------------
49
50 ATTENTION: Take a look at `settings.yml` and `.secrets.yml` to know the
51 required settings format.
52
53 Settings load order in Dynaconf:
54 0) Load all defaults and Flask defaults
55 1) Load all passed variables when applying FlaskDynaconf
56 2) Update with data in SETTINGS_MODULE_FOR_DYNACONF
57 3) Update with data in YAML extra file if provided
58 4) Update with data in environmente vars `DYNACONF_NAMESPACE_`
59
60 YAML files are very useful to have `namespaced` settings, lets say,
61 `production` and `development`.
62
63 You can also achieve the same using multiple `.py` files naming as
64 `settings.py`, `production_settings.py` and `development_settings.py`
65 (see examples/validator)
66
67 Example::
68
69 app = Flask(__name__)
70 FlaskDynaconf(
71 app,
72 ENVVAR_FOR_DYNACONF="MYSITE_SETTINGS_MODULE",
73 DYNACONF_NAMESPACE='MYSITE',
74 SETTINGS_MODULE_FOR_DYNACONF='settings.yml',
75 YAML='.secrets.yml',
76 EXTRA_VALUE='You can add aditional config vars here'
77 )
78
79 Take a look at examples/flask in Dynaconf repository
80
81 """
82 def __init__(self, app=None, instance_relative_config=False,
83 dynaconf_instance=None, **kwargs):
84 """kwargs holds initial dynaconf configuration"""
85 self.kwargs = kwargs
86 if 'DYNACONF_NAMESPACE' not in kwargs:
87 kwargs['DYNACONF_NAMESPACE'] = 'FLASK'
88 self.dynaconf_instance = dynaconf_instance
89 self.instance_relative_config = instance_relative_config
90 if app:
91 self.init_app(app, **kwargs)
92
93 def init_app(self, app, **kwargs):
94 """kwargs holds initial dynaconf configuration"""
95 self.kwargs.update(kwargs)
96 self.settings = self.dynaconf_instance or LazySettings(**self.kwargs)
97 app.config = self.make_config(app)
98 app.dynaconf = self.settings
99
100 def make_config(self, app):
101 root_path = app.root_path
102 if self.instance_relative_config: # pragma: no cover
103 root_path = app.instance_path
104 if self.dynaconf_instance:
105 self.settings.update(self.kwargs)
106 return DynaconfConfig(
107 root_path=root_path,
108 defaults=app.config,
109 _settings=self.settings
110 )
111
112
113 class DynaconfConfig(Config):
114 """
115 Settings load order in Dynaconf
116 0) Load all defaults and Flask defaults
117 1) Load all passed variables above
118 2) Update with data in SETTINGS_MODULE_FOR_DYNACONF
119 3) Update with data in YAML
120 4) Update with data in rnvironmente vars `DYNACONF_NAMESPACE_`
121 """
122
123 def get(self, key, default=None):
124 """Gets config from dynaconf variables
125 if variables does not exists in dynaconf try getting from
126 app.config to support runtime settings."""
127 return self._settings.get(key, Config.get(self, key, default))
128
129 def __init__(self, _settings, *args, **kwargs):
130 """perform the initial load"""
131 super(DynaconfConfig, self).__init__(*args, **kwargs)
132 Config.update(self, _settings.store)
133 self._settings = _settings
134
135 def __getitem__(self, key):
136 """
137 First try to get value from dynaconf then from Flask
138 """
139 return self.get(key)
140
141 def __getattr__(self, name):
142 """
143 First try to get value from dynaconf then from Flask
144 """
145 try:
146 return getattr(self._settings, name)
147 except AttributeError:
148 return self[name]
149
150 def __call__(self, name, *args, **kwargs):
151 return self.get(name, *args, **kwargs)
152
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/dynaconf/contrib/flask_dynaconf.py b/dynaconf/contrib/flask_dynaconf.py
--- a/dynaconf/contrib/flask_dynaconf.py
+++ b/dynaconf/contrib/flask_dynaconf.py
@@ -1,5 +1,12 @@
# coding: utf-8
-from flask.config import Config
+try:
+ from flask.config import Config
+ flask_installed = True
+except ImportError:
+ flask_installed = False
+ Config = object
+
+
from dynaconf import LazySettings
@@ -82,6 +89,11 @@
def __init__(self, app=None, instance_relative_config=False,
dynaconf_instance=None, **kwargs):
"""kwargs holds initial dynaconf configuration"""
+ if not flask_installed:
+ raise RuntimeError(
+ "To use this extension Flask must be installed "
+ "install it with: pip install flask"
+ )
self.kwargs = kwargs
if 'DYNACONF_NAMESPACE' not in kwargs:
kwargs['DYNACONF_NAMESPACE'] = 'FLASK'
|
{"golden_diff": "diff --git a/dynaconf/contrib/flask_dynaconf.py b/dynaconf/contrib/flask_dynaconf.py\n--- a/dynaconf/contrib/flask_dynaconf.py\n+++ b/dynaconf/contrib/flask_dynaconf.py\n@@ -1,5 +1,12 @@\n # coding: utf-8\n-from flask.config import Config\n+try:\n+ from flask.config import Config\n+ flask_installed = True\n+except ImportError:\n+ flask_installed = False\n+ Config = object\n+\n+\n from dynaconf import LazySettings\n \n \n@@ -82,6 +89,11 @@\n def __init__(self, app=None, instance_relative_config=False,\n dynaconf_instance=None, **kwargs):\n \"\"\"kwargs holds initial dynaconf configuration\"\"\"\n+ if not flask_installed:\n+ raise RuntimeError(\n+ \"To use this extension Flask must be installed \"\n+ \"install it with: pip install flask\"\n+ )\n self.kwargs = kwargs\n if 'DYNACONF_NAMESPACE' not in kwargs:\n kwargs['DYNACONF_NAMESPACE'] = 'FLASK'\n", "issue": "ModuleNotFoundError: No module named 'flask'\nDynaconf requires Flask by default, is that by mistake or is it intentionally?\r\n\r\n```bash\r\n File \"/app/.heroku/python/lib/python3.6/site-packages/dynaconf/__init__.py\", line 5, in <module> \r\n from dynaconf.contrib import FlaskDynaconf \r\n File \"/app/.heroku/python/lib/python3.6/site-packages/dynaconf/contrib/__init__.py\", line 1, in <module> \r\n from dynaconf.contrib.flask_dynaconf import FlaskDynaconf, DynaconfConfig # noqa \r\n File \"/app/.heroku/python/lib/python3.6/site-packages/dynaconf/contrib/flask_dynaconf.py\", line 2, in <module> \r\n from flask.config import Config \r\nModuleNotFoundError: No module named 'flask'\r\n```\n", "before_files": [{"content": "# coding: utf-8\nfrom flask.config import Config\nfrom dynaconf import LazySettings\n\n\nclass FlaskDynaconf(object):\n \"\"\"\n The arguments are.\n app = The created app\n dynaconf_args = Extra args to be passed to Dynaconf (validator for example)\n\n All other values are stored as config vars specially:\n\n ENVVAR_FOR_DYNACONF = Name of environment variable to use if you want to\n change the settings file from env vars\n example:\n export MYSITE_SETTINGS_MODULE=/tmp/settings.py\n with the above the settings will be loaded from that\n file\n Dynaconf supports .py, .yml, .toml\n\n DYNACONF_NAMESPACE = Namespace prefix for your envvars to become settings\n example:\n export MYSITE_SQL_PORT='@int 5445'\n\n with that exported to env you access using:\n app.config.SQL_PORT\n app.config.get('SQL_PORT')\n app.config.get('sql_port')\n # get is case insensitive\n app.config['SQL_PORT']\n\n Dynaconf uses `@int, @bool, @float, @json` to cast env\n vars\n\n SETTINGS_MODULE_FOR_DYNACONF = The name of the module or file to use as\n default to load settings. If nothing is passed\n it will be `settings.py` or value found in\n `ENVVAR_FOR_DYNACONF`\n Dynaconf supports .py, .yml, .toml\n\n YAML = If using YAML for settings module, you pass an extra yaml file here\n It is general useful to have a different file to store secrets\n example `.secrets.yml` and then values in that file will\n override other values. 
And you can exclude the .secrets from your\n public repositories.\n\n --------------------------------------------------------------------------\n\n ATTENTION: Take a look at `settings.yml` and `.secrets.yml` to know the\n required settings format.\n\n Settings load order in Dynaconf:\n 0) Load all defaults and Flask defaults\n 1) Load all passed variables when applying FlaskDynaconf\n 2) Update with data in SETTINGS_MODULE_FOR_DYNACONF\n 3) Update with data in YAML extra file if provided\n 4) Update with data in environmente vars `DYNACONF_NAMESPACE_`\n\n YAML files are very useful to have `namespaced` settings, lets say,\n `production` and `development`.\n\n You can also achieve the same using multiple `.py` files naming as\n `settings.py`, `production_settings.py` and `development_settings.py`\n (see examples/validator)\n\n Example::\n\n app = Flask(__name__)\n FlaskDynaconf(\n app,\n ENVVAR_FOR_DYNACONF=\"MYSITE_SETTINGS_MODULE\",\n DYNACONF_NAMESPACE='MYSITE',\n SETTINGS_MODULE_FOR_DYNACONF='settings.yml',\n YAML='.secrets.yml',\n EXTRA_VALUE='You can add aditional config vars here'\n )\n\n Take a look at examples/flask in Dynaconf repository\n\n \"\"\"\n def __init__(self, app=None, instance_relative_config=False,\n dynaconf_instance=None, **kwargs):\n \"\"\"kwargs holds initial dynaconf configuration\"\"\"\n self.kwargs = kwargs\n if 'DYNACONF_NAMESPACE' not in kwargs:\n kwargs['DYNACONF_NAMESPACE'] = 'FLASK'\n self.dynaconf_instance = dynaconf_instance\n self.instance_relative_config = instance_relative_config\n if app:\n self.init_app(app, **kwargs)\n\n def init_app(self, app, **kwargs):\n \"\"\"kwargs holds initial dynaconf configuration\"\"\"\n self.kwargs.update(kwargs)\n self.settings = self.dynaconf_instance or LazySettings(**self.kwargs)\n app.config = self.make_config(app)\n app.dynaconf = self.settings\n\n def make_config(self, app):\n root_path = app.root_path\n if self.instance_relative_config: # pragma: no cover\n root_path = app.instance_path\n if self.dynaconf_instance:\n self.settings.update(self.kwargs)\n return DynaconfConfig(\n root_path=root_path,\n defaults=app.config,\n _settings=self.settings\n )\n\n\nclass DynaconfConfig(Config):\n \"\"\"\n Settings load order in Dynaconf\n 0) Load all defaults and Flask defaults\n 1) Load all passed variables above\n 2) Update with data in SETTINGS_MODULE_FOR_DYNACONF\n 3) Update with data in YAML\n 4) Update with data in rnvironmente vars `DYNACONF_NAMESPACE_`\n \"\"\"\n\n def get(self, key, default=None):\n \"\"\"Gets config from dynaconf variables\n if variables does not exists in dynaconf try getting from\n app.config to support runtime settings.\"\"\"\n return self._settings.get(key, Config.get(self, key, default))\n\n def __init__(self, _settings, *args, **kwargs):\n \"\"\"perform the initial load\"\"\"\n super(DynaconfConfig, self).__init__(*args, **kwargs)\n Config.update(self, _settings.store)\n self._settings = _settings\n\n def __getitem__(self, key):\n \"\"\"\n First try to get value from dynaconf then from Flask\n \"\"\"\n return self.get(key)\n\n def __getattr__(self, name):\n \"\"\"\n First try to get value from dynaconf then from Flask\n \"\"\"\n try:\n return getattr(self._settings, name)\n except AttributeError:\n return self[name]\n\n def __call__(self, name, *args, **kwargs):\n return self.get(name, *args, **kwargs)\n", "path": "dynaconf/contrib/flask_dynaconf.py"}], "after_files": [{"content": "# coding: utf-8\ntry:\n from flask.config import Config\n flask_installed = True\nexcept ImportError:\n 
flask_installed = False\n Config = object\n\n\nfrom dynaconf import LazySettings\n\n\nclass FlaskDynaconf(object):\n \"\"\"\n The arguments are.\n app = The created app\n dynaconf_args = Extra args to be passed to Dynaconf (validator for example)\n\n All other values are stored as config vars specially:\n\n ENVVAR_FOR_DYNACONF = Name of environment variable to use if you want to\n change the settings file from env vars\n example:\n export MYSITE_SETTINGS_MODULE=/tmp/settings.py\n with the above the settings will be loaded from that\n file\n Dynaconf supports .py, .yml, .toml\n\n DYNACONF_NAMESPACE = Namespace prefix for your envvars to become settings\n example:\n export MYSITE_SQL_PORT='@int 5445'\n\n with that exported to env you access using:\n app.config.SQL_PORT\n app.config.get('SQL_PORT')\n app.config.get('sql_port')\n # get is case insensitive\n app.config['SQL_PORT']\n\n Dynaconf uses `@int, @bool, @float, @json` to cast env\n vars\n\n SETTINGS_MODULE_FOR_DYNACONF = The name of the module or file to use as\n default to load settings. If nothing is passed\n it will be `settings.py` or value found in\n `ENVVAR_FOR_DYNACONF`\n Dynaconf supports .py, .yml, .toml\n\n YAML = If using YAML for settings module, you pass an extra yaml file here\n It is general useful to have a different file to store secrets\n example `.secrets.yml` and then values in that file will\n override other values. And you can exclude the .secrets from your\n public repositories.\n\n --------------------------------------------------------------------------\n\n ATTENTION: Take a look at `settings.yml` and `.secrets.yml` to know the\n required settings format.\n\n Settings load order in Dynaconf:\n 0) Load all defaults and Flask defaults\n 1) Load all passed variables when applying FlaskDynaconf\n 2) Update with data in SETTINGS_MODULE_FOR_DYNACONF\n 3) Update with data in YAML extra file if provided\n 4) Update with data in environmente vars `DYNACONF_NAMESPACE_`\n\n YAML files are very useful to have `namespaced` settings, lets say,\n `production` and `development`.\n\n You can also achieve the same using multiple `.py` files naming as\n `settings.py`, `production_settings.py` and `development_settings.py`\n (see examples/validator)\n\n Example::\n\n app = Flask(__name__)\n FlaskDynaconf(\n app,\n ENVVAR_FOR_DYNACONF=\"MYSITE_SETTINGS_MODULE\",\n DYNACONF_NAMESPACE='MYSITE',\n SETTINGS_MODULE_FOR_DYNACONF='settings.yml',\n YAML='.secrets.yml',\n EXTRA_VALUE='You can add aditional config vars here'\n )\n\n Take a look at examples/flask in Dynaconf repository\n\n \"\"\"\n def __init__(self, app=None, instance_relative_config=False,\n dynaconf_instance=None, **kwargs):\n \"\"\"kwargs holds initial dynaconf configuration\"\"\"\n if not flask_installed:\n raise RuntimeError(\n \"To use this extension Flask must be installed \"\n \"install it with: pip install flask\"\n )\n self.kwargs = kwargs\n if 'DYNACONF_NAMESPACE' not in kwargs:\n kwargs['DYNACONF_NAMESPACE'] = 'FLASK'\n self.dynaconf_instance = dynaconf_instance\n self.instance_relative_config = instance_relative_config\n if app:\n self.init_app(app, **kwargs)\n\n def init_app(self, app, **kwargs):\n \"\"\"kwargs holds initial dynaconf configuration\"\"\"\n self.kwargs.update(kwargs)\n self.settings = self.dynaconf_instance or LazySettings(**self.kwargs)\n app.config = self.make_config(app)\n app.dynaconf = self.settings\n\n def make_config(self, app):\n root_path = app.root_path\n if self.instance_relative_config: # pragma: no cover\n root_path = 
app.instance_path\n if self.dynaconf_instance:\n self.settings.update(self.kwargs)\n return DynaconfConfig(\n root_path=root_path,\n defaults=app.config,\n _settings=self.settings\n )\n\n\nclass DynaconfConfig(Config):\n \"\"\"\n Settings load order in Dynaconf\n 0) Load all defaults and Flask defaults\n 1) Load all passed variables above\n 2) Update with data in SETTINGS_MODULE_FOR_DYNACONF\n 3) Update with data in YAML\n 4) Update with data in rnvironmente vars `DYNACONF_NAMESPACE_`\n \"\"\"\n\n def get(self, key, default=None):\n \"\"\"Gets config from dynaconf variables\n if variables does not exists in dynaconf try getting from\n app.config to support runtime settings.\"\"\"\n return self._settings.get(key, Config.get(self, key, default))\n\n def __init__(self, _settings, *args, **kwargs):\n \"\"\"perform the initial load\"\"\"\n super(DynaconfConfig, self).__init__(*args, **kwargs)\n Config.update(self, _settings.store)\n self._settings = _settings\n\n def __getitem__(self, key):\n \"\"\"\n First try to get value from dynaconf then from Flask\n \"\"\"\n return self.get(key)\n\n def __getattr__(self, name):\n \"\"\"\n First try to get value from dynaconf then from Flask\n \"\"\"\n try:\n return getattr(self._settings, name)\n except AttributeError:\n return self[name]\n\n def __call__(self, name, *args, **kwargs):\n return self.get(name, *args, **kwargs)\n", "path": "dynaconf/contrib/flask_dynaconf.py"}]}
| 2,056 | 247 |
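The dynaconf entry above resolves the hard Flask import by falling back to a stub class and deferring the failure until the extension is actually instantiated. A stripped-down sketch of that optional-dependency pattern on its own (class and message names here are illustrative, not dynaconf's code):

```python
try:
    from flask.config import Config
    FLASK_INSTALLED = True
except ImportError:
    FLASK_INSTALLED = False
    Config = object  # stub base class so the module still imports without Flask


class OptionalFlaskExtension(object):
    """Only complains about the missing dependency when it is actually used."""

    def __init__(self, app=None, **kwargs):
        if not FLASK_INSTALLED:
            raise RuntimeError(
                "This extension needs Flask; install it with: pip install flask"
            )
        self.kwargs = kwargs
        if app is not None:
            self.init_app(app)

    def init_app(self, app):
        app.config.update(self.kwargs)
```

The point of the pattern is that importing the package never fails for users who do not touch the Flask integration, matching the behaviour introduced by the golden diff.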
| gh_patches_debug_12447 | rasdani/github-patches | git_diff | searxng__searxng-3204 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: lingva engine / redirects & Key-Errors
**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**
Repository: https://github.com/return42/searxng
Branch: darmarit.org
Version: 2024.2.3+a6f5d690
**How did you install SearXNG?**
(unmodified fork/brand) from master branch
**What happened?**
With the default config / the "official instance" we have the errors reported below:
https://github.com/searxng/searxng/blob/df1a774003c285866a96b149bf92412037b4932d/searx/settings.yml#L1037-L1041
**How To Reproduce**
```
!lingva en-de convenient
```
**Technical report**
```
Error
* Error: httpx.ReadTimeout
* Percentage: 50
* Parameters: `(None, None, 'lingva.thedaviddelta.com')`
* File name: `searx/search/processors/online.py:118`
* Function: `_send_http_request`
* Code: `response = req(params['url'], **request_args)`
```
```
Error
* Error: 1 redirects, maximum: 0
* Percentage: 50
* Parameters: `('200', 'OK', 'lingva.thedaviddelta.com')`
* File name: `searx/search/processors/online.py:127`
* Function: `_send_http_request`
* Code: `count_error(`
```
```
Error
* Error: KeyError
* Percentage: 50
* Parameters: `()`
* File name: `searx/engines/lingva.py:51`
* Function: `response`
* Code: `infobox += f"<b>{translation['type']}</b>"`
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `searx/engines/lingva.py`
Content:
```
1 # SPDX-License-Identifier: AGPL-3.0-or-later
2 # lint: pylint
3 """Lingva (alternative Google Translate frontend)"""
4
5 from json import loads
6
7 about = {
8 "website": 'https://lingva.ml',
9 "wikidata_id": None,
10 "official_api_documentation": 'https://github.com/thedaviddelta/lingva-translate#public-apis',
11 "use_official_api": True,
12 "require_api_key": False,
13 "results": 'JSON',
14 }
15
16 engine_type = 'online_dictionary'
17 categories = ['general']
18
19 url = "https://lingva.thedaviddelta.com/"
20 search_url = "{url}/api/v1/{from_lang}/{to_lang}/{query}"
21
22
23 def request(_query, params):
24 params['url'] = search_url.format(
25 url=url, from_lang=params['from_lang'][1], to_lang=params['to_lang'][1], query=params['query']
26 )
27 return params
28
29
30 def response(resp):
31 results = []
32
33 result = loads(resp.text)
34 info = result["info"]
35 from_to_prefix = "%s-%s " % (resp.search_params['from_lang'][1], resp.search_params['to_lang'][1])
36
37 if "typo" in info:
38 results.append({"suggestion": from_to_prefix + info["typo"]})
39
40 if 'definitions' in info: # pylint: disable=too-many-nested-blocks
41 for definition in info['definitions']:
42 if 'list' in definition:
43 for item in definition['list']:
44 if 'synonyms' in item:
45 for synonym in item['synonyms']:
46 results.append({"suggestion": from_to_prefix + synonym})
47
48 infobox = ""
49
50 for translation in info["extraTranslations"]:
51 infobox += f"<b>{translation['type']}</b>"
52
53 for word in translation["list"]:
54 infobox += f"<dl><dt>{word['word']}</dt>"
55
56 for meaning in word["meanings"]:
57 infobox += f"<dd>{meaning}</dd>"
58
59 infobox += "</dl>"
60
61 results.append(
62 {
63 'infobox': result["translation"],
64 'content': infobox,
65 }
66 )
67
68 return results
69
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/searx/engines/lingva.py b/searx/engines/lingva.py
--- a/searx/engines/lingva.py
+++ b/searx/engines/lingva.py
@@ -16,7 +16,7 @@
engine_type = 'online_dictionary'
categories = ['general']
-url = "https://lingva.thedaviddelta.com/"
+url = "https://lingva.thedaviddelta.com"
search_url = "{url}/api/v1/{from_lang}/{to_lang}/{query}"
@@ -48,8 +48,6 @@
infobox = ""
for translation in info["extraTranslations"]:
- infobox += f"<b>{translation['type']}</b>"
-
for word in translation["list"]:
infobox += f"<dl><dt>{word['word']}</dt>"
|
{"golden_diff": "diff --git a/searx/engines/lingva.py b/searx/engines/lingva.py\n--- a/searx/engines/lingva.py\n+++ b/searx/engines/lingva.py\n@@ -16,7 +16,7 @@\n engine_type = 'online_dictionary'\n categories = ['general']\n \n-url = \"https://lingva.thedaviddelta.com/\"\n+url = \"https://lingva.thedaviddelta.com\"\n search_url = \"{url}/api/v1/{from_lang}/{to_lang}/{query}\"\n \n \n@@ -48,8 +48,6 @@\n infobox = \"\"\n \n for translation in info[\"extraTranslations\"]:\n- infobox += f\"<b>{translation['type']}</b>\"\n-\n for word in translation[\"list\"]:\n infobox += f\"<dl><dt>{word['word']}</dt>\"\n", "issue": "Bug: lingva engine / redirects & Key-Errors\n**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**\r\n\r\nRepository: https://github.com/return42/searxng\r\nBranch: darmarit.org\r\nVersion: 2024.2.3+a6f5d690\r\n\r\n**How did you install SearXNG?**\r\n\r\n(unmodified fork/brand) from master branch\r\n\r\n**What happened?**\r\n\r\nWith the default config / the \"official instance\" we have the errors reported below:\r\n\r\nhttps://github.com/searxng/searxng/blob/df1a774003c285866a96b149bf92412037b4932d/searx/settings.yml#L1037-L1041\r\n\r\n**How To Reproduce**\r\n\r\n```\r\n!lingva en-de convenient\r\n```\r\n\r\n**Technical report**\r\n\r\n```\r\nError\r\n * Error: httpx.ReadTimeout\r\n * Percentage: 50\r\n * Parameters: `(None, None, 'lingva.thedaviddelta.com')`\r\n * File name: `searx/search/processors/online.py:118`\r\n * Function: `_send_http_request`\r\n * Code: `response = req(params['url'], **request_args)`\r\n```\r\n\r\n```\r\nError\r\n * Error: 1 redirects, maximum: 0\r\n * Percentage: 50\r\n * Parameters: `('200', 'OK', 'lingva.thedaviddelta.com')`\r\n * File name: `searx/search/processors/online.py:127`\r\n * Function: `_send_http_request`\r\n * Code: `count_error(`\r\n```\r\n\r\n```\r\nError\r\n * Error: KeyError\r\n * Percentage: 50\r\n * Parameters: `()`\r\n * File name: `searx/engines/lingva.py:51`\r\n * Function: `response`\r\n * Code: `infobox += f\"<b>{translation['type']}</b>\"`\r\n```\n", "before_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n# lint: pylint\n\"\"\"Lingva (alternative Google Translate frontend)\"\"\"\n\nfrom json import loads\n\nabout = {\n \"website\": 'https://lingva.ml',\n \"wikidata_id\": None,\n \"official_api_documentation\": 'https://github.com/thedaviddelta/lingva-translate#public-apis',\n \"use_official_api\": True,\n \"require_api_key\": False,\n \"results\": 'JSON',\n}\n\nengine_type = 'online_dictionary'\ncategories = ['general']\n\nurl = \"https://lingva.thedaviddelta.com/\"\nsearch_url = \"{url}/api/v1/{from_lang}/{to_lang}/{query}\"\n\n\ndef request(_query, params):\n params['url'] = search_url.format(\n url=url, from_lang=params['from_lang'][1], to_lang=params['to_lang'][1], query=params['query']\n )\n return params\n\n\ndef response(resp):\n results = []\n\n result = loads(resp.text)\n info = result[\"info\"]\n from_to_prefix = \"%s-%s \" % (resp.search_params['from_lang'][1], resp.search_params['to_lang'][1])\n\n if \"typo\" in info:\n results.append({\"suggestion\": from_to_prefix + info[\"typo\"]})\n\n if 'definitions' in info: # pylint: disable=too-many-nested-blocks\n for definition in info['definitions']:\n if 'list' in definition:\n for item in definition['list']:\n if 'synonyms' in item:\n for synonym in item['synonyms']:\n results.append({\"suggestion\": from_to_prefix + synonym})\n\n infobox = \"\"\n\n for translation in 
info[\"extraTranslations\"]:\n infobox += f\"<b>{translation['type']}</b>\"\n\n for word in translation[\"list\"]:\n infobox += f\"<dl><dt>{word['word']}</dt>\"\n\n for meaning in word[\"meanings\"]:\n infobox += f\"<dd>{meaning}</dd>\"\n\n infobox += \"</dl>\"\n\n results.append(\n {\n 'infobox': result[\"translation\"],\n 'content': infobox,\n }\n )\n\n return results\n", "path": "searx/engines/lingva.py"}], "after_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n# lint: pylint\n\"\"\"Lingva (alternative Google Translate frontend)\"\"\"\n\nfrom json import loads\n\nabout = {\n \"website\": 'https://lingva.ml',\n \"wikidata_id\": None,\n \"official_api_documentation\": 'https://github.com/thedaviddelta/lingva-translate#public-apis',\n \"use_official_api\": True,\n \"require_api_key\": False,\n \"results\": 'JSON',\n}\n\nengine_type = 'online_dictionary'\ncategories = ['general']\n\nurl = \"https://lingva.thedaviddelta.com\"\nsearch_url = \"{url}/api/v1/{from_lang}/{to_lang}/{query}\"\n\n\ndef request(_query, params):\n params['url'] = search_url.format(\n url=url, from_lang=params['from_lang'][1], to_lang=params['to_lang'][1], query=params['query']\n )\n return params\n\n\ndef response(resp):\n results = []\n\n result = loads(resp.text)\n info = result[\"info\"]\n from_to_prefix = \"%s-%s \" % (resp.search_params['from_lang'][1], resp.search_params['to_lang'][1])\n\n if \"typo\" in info:\n results.append({\"suggestion\": from_to_prefix + info[\"typo\"]})\n\n if 'definitions' in info: # pylint: disable=too-many-nested-blocks\n for definition in info['definitions']:\n if 'list' in definition:\n for item in definition['list']:\n if 'synonyms' in item:\n for synonym in item['synonyms']:\n results.append({\"suggestion\": from_to_prefix + synonym})\n\n infobox = \"\"\n\n for translation in info[\"extraTranslations\"]:\n for word in translation[\"list\"]:\n infobox += f\"<dl><dt>{word['word']}</dt>\"\n\n for meaning in word[\"meanings\"]:\n infobox += f\"<dd>{meaning}</dd>\"\n\n infobox += \"</dl>\"\n\n results.append(\n {\n 'infobox': result[\"translation\"],\n 'content': infobox,\n }\n )\n\n return results\n", "path": "searx/engines/lingva.py"}]}
| 1,350 | 192 |
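Two small details drive the lingva fix above: the engine URL ended with a trailing slash, so the formatted API path contained `//` and the server answered with a redirect (rejected as `1 redirects, maximum: 0`), and `translation['type']` is not always present, so the infobox loop raised `KeyError`. A quick, plain-Python illustration of the trailing-slash pitfall, with no SearXNG internals involved:

```python
template = "{url}/api/v1/{from_lang}/{to_lang}/{query}"

with_slash = template.format(
    url="https://lingva.thedaviddelta.com/", from_lang="en", to_lang="de", query="convenient"
)
without_slash = template.format(
    url="https://lingva.thedaviddelta.com", from_lang="en", to_lang="de", query="convenient"
)

print(with_slash)     # ...com//api/v1/en/de/convenient -> answered with a redirect
print(without_slash)  # ...com/api/v1/en/de/convenient  -> served directly
```

The golden diff simply drops the line that reads `translation['type']`; a defensive `translation.get('type', '')` would be another way to avoid the `KeyError`, though it is not the fix chosen there.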
| gh_patches_debug_254 | rasdani/github-patches | git_diff | mindee__doctr-123 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[docs] Enable documentation of multiple versions at once
As of now, the documentation that would be deployed publicly is only the latest version. The better alternative would be:
- having the latest version by default
- having the documentation of each release accessible as well using a displayed selector
Hugginface transformers did the following: https://github.com/huggingface/transformers/blob/master/.circleci/deploy.sh
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/source/conf.py`
Content:
```
1 # Configuration file for the Sphinx documentation builder.
2 #
3 # This file only contains a selection of the most common options. For a full
4 # list see the documentation:
5 # https://www.sphinx-doc.org/en/master/usage/configuration.html
6
7 # -- Path setup --------------------------------------------------------------
8
9 import sphinx_rtd_theme
10
11 # If extensions (or modules to document with autodoc) are in another directory,
12 # add these directories to sys.path here. If the directory is relative to the
13 # documentation root, use os.path.abspath to make it absolute, like shown here.
14 #
15 import os
16 import sys
17 sys.path.insert(0, os.path.abspath('../..'))
18 import doctr
19
20 # -- Project information -----------------------------------------------------
21
22 master_doc = 'index'
23 project = 'doctr'
24 copyright = '2021, Mindee'
25 author = 'François-Guillaume Fernandez, Charles Gaillard, Mohamed Biaz'
26
27 # The full version, including alpha/beta/rc tags
28 version = doctr.__version__
29 release = doctr.__version__ + '-git'
30
31
32 # -- General configuration ---------------------------------------------------
33
34 # Add any Sphinx extension module names here, as strings. They can be
35 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
36 # ones.
37 extensions = [
38 'sphinx.ext.autodoc',
39 'sphinx.ext.napoleon',
40 'sphinx.ext.viewcode',
41 'sphinx.ext.coverage',
42 'sphinx.ext.mathjax',
43 'sphinxemoji.sphinxemoji', # cf. https://sphinxemojicodes.readthedocs.io/en/stable/
44 'sphinx_copybutton',
45 ]
46
47 napoleon_use_ivar = True
48
49 # Add any paths that contain templates here, relative to this directory.
50 templates_path = ['_templates']
51
52 # List of patterns, relative to source directory, that match files and
53 # directories to ignore when looking for source files.
54 # This pattern also affects html_static_path and html_extra_path.
55 exclude_patterns = [u'_build', 'Thumbs.db', '.DS_Store']
56
57
58 # The name of the Pygments (syntax highlighting) style to use.
59 pygments_style = 'sphinx'
60 highlight_language = 'python3'
61
62 # -- Options for HTML output -------------------------------------------------
63
64 # The theme to use for HTML and HTML Help pages. See the documentation for
65 # a list of builtin themes.
66 #
67 html_theme = 'sphinx_rtd_theme'
68 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
69
70 # Theme options are theme-specific and customize the look and feel of a theme
71 # further. For a list of options available for each theme, see the
72 # documentation.
73 #
74 html_theme_options = {
75 'collapse_navigation': False,
76 'display_version': True,
77 'logo_only': False,
78 }
79
80 # html_logo = '_static/images/logo.png'
81
82
83 # Add any paths that contain custom static files (such as style sheets) here,
84 # relative to this directory. They are copied after the builtin static files,
85 # so a file named "default.css" will overwrite the builtin "default.css".
86 html_static_path = ['_static']
87
88 # A list of files that should not be packed into the epub file.
89 epub_exclude_files = ['search.html']
90
91 def setup(app):
92 app.add_css_file('css/mindee.css')
93 app.add_js_file('js/custom.js')
94
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -73,7 +73,7 @@
#
html_theme_options = {
'collapse_navigation': False,
- 'display_version': True,
+ 'display_version': False,
'logo_only': False,
}
|
{"golden_diff": "diff --git a/docs/source/conf.py b/docs/source/conf.py\n--- a/docs/source/conf.py\n+++ b/docs/source/conf.py\n@@ -73,7 +73,7 @@\n #\n html_theme_options = {\n 'collapse_navigation': False,\n- 'display_version': True,\n+ 'display_version': False,\n 'logo_only': False,\n }\n", "issue": "[docs] Enable documentation of multiple versions at once\nAs of now, the documentation that would be deployed publicly is only the latest version. The better alternative would be:\r\n- having the latest version by default\r\n- having the documentation of each release accessible as well using a displayed selector\r\n\r\nHugginface transformers did the following: https://github.com/huggingface/transformers/blob/master/.circleci/deploy.sh\n", "before_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\nimport sphinx_rtd_theme\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\nimport sys\nsys.path.insert(0, os.path.abspath('../..'))\nimport doctr\n\n# -- Project information -----------------------------------------------------\n\nmaster_doc = 'index'\nproject = 'doctr'\ncopyright = '2021, Mindee'\nauthor = 'Fran\u00e7ois-Guillaume Fernandez, Charles Gaillard, Mohamed Biaz'\n\n# The full version, including alpha/beta/rc tags\nversion = doctr.__version__\nrelease = doctr.__version__ + '-git'\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n\t'sphinx.ext.autodoc',\n\t'sphinx.ext.napoleon',\n\t'sphinx.ext.viewcode',\n 'sphinx.ext.coverage',\n 'sphinx.ext.mathjax',\n 'sphinxemoji.sphinxemoji', # cf. https://sphinxemojicodes.readthedocs.io/en/stable/\n 'sphinx_copybutton',\n]\n\nnapoleon_use_ivar = True\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [u'_build', 'Thumbs.db', '.DS_Store']\n\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\nhighlight_language = 'python3'\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'sphinx_rtd_theme'\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\nhtml_theme_options = {\n 'collapse_navigation': False,\n 'display_version': True,\n 'logo_only': False,\n}\n\n# html_logo = '_static/images/logo.png'\n\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# A list of files that should not be packed into the epub file.\nepub_exclude_files = ['search.html']\n\ndef setup(app):\n app.add_css_file('css/mindee.css')\n app.add_js_file('js/custom.js')\n", "path": "docs/source/conf.py"}], "after_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\nimport sphinx_rtd_theme\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\nimport sys\nsys.path.insert(0, os.path.abspath('../..'))\nimport doctr\n\n# -- Project information -----------------------------------------------------\n\nmaster_doc = 'index'\nproject = 'doctr'\ncopyright = '2021, Mindee'\nauthor = 'Fran\u00e7ois-Guillaume Fernandez, Charles Gaillard, Mohamed Biaz'\n\n# The full version, including alpha/beta/rc tags\nversion = doctr.__version__\nrelease = doctr.__version__ + '-git'\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n\t'sphinx.ext.autodoc',\n\t'sphinx.ext.napoleon',\n\t'sphinx.ext.viewcode',\n 'sphinx.ext.coverage',\n 'sphinx.ext.mathjax',\n 'sphinxemoji.sphinxemoji', # cf. https://sphinxemojicodes.readthedocs.io/en/stable/\n 'sphinx_copybutton',\n]\n\nnapoleon_use_ivar = True\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [u'_build', 'Thumbs.db', '.DS_Store']\n\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\nhighlight_language = 'python3'\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'sphinx_rtd_theme'\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\nhtml_theme_options = {\n 'collapse_navigation': False,\n 'display_version': False,\n 'logo_only': False,\n}\n\n# html_logo = '_static/images/logo.png'\n\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# A list of files that should not be packed into the epub file.\nepub_exclude_files = ['search.html']\n\ndef setup(app):\n app.add_css_file('css/mindee.css')\n app.add_js_file('js/custom.js')\n", "path": "docs/source/conf.py"}]}
| 1,229 | 77 |
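The doctr entry above only adjusts the theme options; the multi-version deployment itself (latest docs at the root, one sub-folder per release, as in the linked transformers deploy script) would live in CI. A rough Python sketch of that build loop, where the tag names, paths, and tooling calls are assumptions rather than doctr's actual pipeline:

```python
import subprocess

def build_versioned_docs(tags, source="docs/source", out_root="public"):
    """Build the current checkout at the site root and each release tag into its own folder."""
    subprocess.run(["sphinx-build", "-b", "html", source, out_root], check=True)
    for tag in tags:
        subprocess.run(["git", "checkout", tag], check=True)
        subprocess.run(
            ["sphinx-build", "-b", "html", source, "%s/%s" % (out_root, tag)],
            check=True,
        )

# e.g. build_versioned_docs(["v0.1.0", "v0.1.1"])
```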
| gh_patches_debug_49809 | rasdani/github-patches | git_diff | plotly__plotly.py-699 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
jsonschema.SchemaError when a figure is validated
Here is a minimal example that reproduces the bug: http://nbviewer.jupyter.org/gist/empet/cf922d7c7f4269d6f63432ec67a5d020
The notebook runs OK (with plotly 2.0.1) when I call `plot(fig)`. `iplot(fig)` generates the plot too, but an error box pops up whenever Jupyter tries to save the notebook. The box has the following content:
_The save operation succeeded, but the notebook does not appear to be valid. The validation error was:
Notebook Validation failed_:
`u'data': [{u'colorscale': u'Viridis', u'z': [[2, 27, 105, 100], [87, 14, 121, 102], [26, 121, 73, 34], [44, 105, 111, 127]], u'type': u'heatmap', u'zsmooth': u'best'}], u'layout': {u'width': 400, u'height': 400}}` _is not valid under any of the given schemas_:
`{
"data": [
{
"colorscale": "Viridis",
"z": [
[
2,
27,
105,
100
],
[
87,
14,
121,
102
],
[
26,
121,
73,
34
],
[
44,
105,
111,
127
]
],
"type": "heatmap",
"zsmooth": "best"
}
],
"layout": {
"width": 400,
"height": 400
}
}`
Initially I formulated this issue only for heatmaps, but meanwhile I realized that this behaviour manifests for any type of plot.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup
2
3 exec (open('plotly/version.py').read())
4
5
6 def readme():
7 with open('README.rst') as f:
8 return f.read()
9
10
11 setup(name='plotly',
12 version=__version__,
13 use_2to3=False,
14 author='Chris P',
15 author_email='[email protected]',
16 maintainer='Chris P',
17 maintainer_email='[email protected]',
18 url='https://plot.ly/python/',
19 description="Python plotting library for collaborative, "
20 "interactive, publication-quality graphs.",
21 long_description=readme(),
22 classifiers=[
23 'Development Status :: 4 - Beta',
24 'Programming Language :: Python :: 2',
25 'Programming Language :: Python :: 2.7',
26 'Programming Language :: Python :: 3',
27 'Programming Language :: Python :: 3.3',
28 'Programming Language :: Python :: 3.4',
29 'Programming Language :: Python :: 3.5',
30 'Topic :: Scientific/Engineering :: Visualization',
31 ],
32 license='MIT',
33 packages=['plotly',
34 'plotly/api',
35 'plotly/api/v1',
36 'plotly/api/v2',
37 'plotly/plotly',
38 'plotly/plotly/chunked_requests',
39 'plotly/figure_factory',
40 'plotly/graph_objs',
41 'plotly/grid_objs',
42 'plotly/widgets',
43 'plotly/offline',
44 'plotly/matplotlylib',
45 'plotly/matplotlylib/mplexporter',
46 'plotly/matplotlylib/mplexporter/renderers'],
47 package_data={'plotly': ['package_data/*']},
48 install_requires=['decorator', 'requests', 'six', 'pytz'],
49 zip_safe=False)
50
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -45,5 +45,9 @@
'plotly/matplotlylib/mplexporter',
'plotly/matplotlylib/mplexporter/renderers'],
package_data={'plotly': ['package_data/*']},
- install_requires=['decorator', 'requests', 'six', 'pytz'],
+ install_requires=['decorator',
+ 'nbformat>=4.2',
+ 'pytz',
+ 'requests',
+ 'six'],
zip_safe=False)
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -45,5 +45,9 @@\n 'plotly/matplotlylib/mplexporter',\n 'plotly/matplotlylib/mplexporter/renderers'],\n package_data={'plotly': ['package_data/*']},\n- install_requires=['decorator', 'requests', 'six', 'pytz'],\n+ install_requires=['decorator',\n+ 'nbformat>=4.2',\n+ 'pytz',\n+ 'requests',\n+ 'six'],\n zip_safe=False)\n", "issue": "jsonschema.SchemaError when a figure is validated\nHere is a minimal example that reproduces the bug: http://nbviewer.jupyter.org/gist/empet/cf922d7c7f4269d6f63432ec67a5d020\r\n\r\nThe notebook runs OK (with plotly 2.0.1) when I call `plot(fig)`. `iplot(fig)` generates the plot too, but an error box pops up whenever Jupyter tries to save the notebook. The box has the following content:\r\n\r\n_The save operation succeeded, but the notebook does not appear to be valid. The validation error was:\r\nNotebook Validation failed_:\r\n`u'data': [{u'colorscale': u'Viridis', u'z': [[2, 27, 105, 100], [87, 14, 121, 102], [26, 121, 73, 34], [44, 105, 111, 127]], u'type': u'heatmap', u'zsmooth': u'best'}], u'layout': {u'width': 400, u'height': 400}}` _is not valid under any of the given schemas_:\r\n\r\n`{\r\n \"data\": [\r\n {\r\n \"colorscale\": \"Viridis\",\r\n \"z\": [\r\n [\r\n 2,\r\n 27,\r\n 105,\r\n 100\r\n ],\r\n [\r\n 87,\r\n 14,\r\n 121,\r\n 102\r\n ],\r\n [\r\n 26,\r\n 121,\r\n 73,\r\n 34\r\n ],\r\n [\r\n 44,\r\n 105,\r\n 111,\r\n 127\r\n ]\r\n ],\r\n \"type\": \"heatmap\",\r\n \"zsmooth\": \"best\"\r\n }\r\n ],\r\n \"layout\": {\r\n \"width\": 400,\r\n \"height\": 400\r\n }\r\n}`\r\n\r\nInitially I formulated this issue only for heatmaps, but meanwhile I realized that this behaviour manifests for any type of plot.\n", "before_files": [{"content": "from setuptools import setup\n\nexec (open('plotly/version.py').read())\n\n\ndef readme():\n with open('README.rst') as f:\n return f.read()\n\n\nsetup(name='plotly',\n version=__version__,\n use_2to3=False,\n author='Chris P',\n author_email='[email protected]',\n maintainer='Chris P',\n maintainer_email='[email protected]',\n url='https://plot.ly/python/',\n description=\"Python plotting library for collaborative, \"\n \"interactive, publication-quality graphs.\",\n long_description=readme(),\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Scientific/Engineering :: Visualization',\n ],\n license='MIT',\n packages=['plotly',\n 'plotly/api',\n 'plotly/api/v1',\n 'plotly/api/v2',\n 'plotly/plotly',\n 'plotly/plotly/chunked_requests',\n 'plotly/figure_factory',\n 'plotly/graph_objs',\n 'plotly/grid_objs',\n 'plotly/widgets',\n 'plotly/offline',\n 'plotly/matplotlylib',\n 'plotly/matplotlylib/mplexporter',\n 'plotly/matplotlylib/mplexporter/renderers'],\n package_data={'plotly': ['package_data/*']},\n install_requires=['decorator', 'requests', 'six', 'pytz'],\n zip_safe=False)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup\n\nexec (open('plotly/version.py').read())\n\n\ndef readme():\n with open('README.rst') as f:\n return f.read()\n\n\nsetup(name='plotly',\n version=__version__,\n use_2to3=False,\n author='Chris P',\n author_email='[email protected]',\n maintainer='Chris P',\n maintainer_email='[email protected]',\n 
url='https://plot.ly/python/',\n description=\"Python plotting library for collaborative, \"\n \"interactive, publication-quality graphs.\",\n long_description=readme(),\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Scientific/Engineering :: Visualization',\n ],\n license='MIT',\n packages=['plotly',\n 'plotly/api',\n 'plotly/api/v1',\n 'plotly/api/v2',\n 'plotly/plotly',\n 'plotly/plotly/chunked_requests',\n 'plotly/figure_factory',\n 'plotly/graph_objs',\n 'plotly/grid_objs',\n 'plotly/widgets',\n 'plotly/offline',\n 'plotly/matplotlylib',\n 'plotly/matplotlylib/mplexporter',\n 'plotly/matplotlylib/mplexporter/renderers'],\n package_data={'plotly': ['package_data/*']},\n install_requires=['decorator',\n 'nbformat>=4.2',\n 'pytz',\n 'requests',\n 'six'],\n zip_safe=False)\n", "path": "setup.py"}]}
| 1,213 | 127 |
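The plotly entry above fixes the notebook-validation failure by declaring `nbformat>=4.2` as an installation requirement. A small sketch of the complementary runtime check one could perform before emitting notebook output (the function name and error text are illustrative, not plotly's actual code):

```python
def require_nbformat(minimum=(4, 2)):
    """Fail early if nbformat is missing or too old for notebook mime bundles."""
    try:
        import nbformat
    except ImportError:
        raise ImportError("nbformat>=4.2 is required for rendering in Jupyter")
    found = tuple(int(part) for part in nbformat.__version__.split(".")[:2])
    if found < minimum:
        raise ImportError(
            "nbformat>=4.2 is required, found %s" % nbformat.__version__
        )

require_nbformat()  # call once before producing iplot()-style output
```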
| gh_patches_debug_38019 | rasdani/github-patches | git_diff | open-mmlab__mmaction2-642 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Seed in sampler
https://github.com/open-mmlab/mmdetection/pull/4665
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mmaction/datasets/builder.py`
Content:
```
1 import platform
2 import random
3 from functools import partial
4
5 import numpy as np
6 from mmcv.parallel import collate
7 from mmcv.runner import get_dist_info
8 from mmcv.utils import build_from_cfg
9 from torch.utils.data import DataLoader
10
11 from .dataset_wrappers import RepeatDataset
12 from .registry import DATASETS
13 from .samplers import DistributedPowerSampler, DistributedSampler
14
15 if platform.system() != 'Windows':
16 # https://github.com/pytorch/pytorch/issues/973
17 import resource
18 rlimit = resource.getrlimit(resource.RLIMIT_NOFILE)
19 hard_limit = rlimit[1]
20 soft_limit = min(4096, hard_limit)
21 resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit))
22
23
24 def build_dataset(cfg, default_args=None):
25 """Build a dataset from config dict.
26
27 Args:
28 cfg (dict): Config dict. It should at least contain the key "type".
29 default_args (dict | None, optional): Default initialization arguments.
30 Default: None.
31
32 Returns:
33 Dataset: The constructed dataset.
34 """
35 if cfg['type'] == 'RepeatDataset':
36 dataset = RepeatDataset(
37 build_dataset(cfg['dataset'], default_args), cfg['times'])
38 else:
39 dataset = build_from_cfg(cfg, DATASETS, default_args)
40 return dataset
41
42
43 def build_dataloader(dataset,
44 videos_per_gpu,
45 workers_per_gpu,
46 num_gpus=1,
47 dist=True,
48 shuffle=True,
49 seed=None,
50 drop_last=False,
51 pin_memory=True,
52 **kwargs):
53 """Build PyTorch DataLoader.
54
55 In distributed training, each GPU/process has a dataloader.
56 In non-distributed training, there is only one dataloader for all GPUs.
57
58 Args:
59 dataset (:obj:`Dataset`): A PyTorch dataset.
60 videos_per_gpu (int): Number of videos on each GPU, i.e.,
61 batch size of each GPU.
62 workers_per_gpu (int): How many subprocesses to use for data
63 loading for each GPU.
64 num_gpus (int): Number of GPUs. Only used in non-distributed
65 training. Default: 1.
66 dist (bool): Distributed training/test or not. Default: True.
67 shuffle (bool): Whether to shuffle the data at every epoch.
68 Default: True.
69 seed (int | None): Seed to be used. Default: None.
70 drop_last (bool): Whether to drop the last incomplete batch in epoch.
71 Default: False
72 pin_memory (bool): Whether to use pin_memory in DataLoader.
73 Default: True
74 kwargs (dict, optional): Any keyword argument to be used to initialize
75 DataLoader.
76
77 Returns:
78 DataLoader: A PyTorch dataloader.
79 """
80 rank, world_size = get_dist_info()
81 sample_by_class = getattr(dataset, 'sample_by_class', False)
82 power = getattr(dataset, 'power', None)
83
84 if dist:
85 if sample_by_class:
86 assert power is not None
87 sampler = DistributedPowerSampler(dataset, world_size, rank, power)
88 else:
89 sampler = DistributedSampler(
90 dataset, world_size, rank, shuffle=shuffle)
91 shuffle = False
92 batch_size = videos_per_gpu
93 num_workers = workers_per_gpu
94 else:
95 sampler = None
96 batch_size = num_gpus * videos_per_gpu
97 num_workers = num_gpus * workers_per_gpu
98
99 init_fn = partial(
100 worker_init_fn, num_workers=num_workers, rank=rank,
101 seed=seed) if seed is not None else None
102
103 data_loader = DataLoader(
104 dataset,
105 batch_size=batch_size,
106 sampler=sampler,
107 num_workers=num_workers,
108 collate_fn=partial(collate, samples_per_gpu=videos_per_gpu),
109 pin_memory=pin_memory,
110 shuffle=shuffle,
111 worker_init_fn=init_fn,
112 drop_last=drop_last,
113 **kwargs)
114
115 return data_loader
116
117
118 def worker_init_fn(worker_id, num_workers, rank, seed):
119 """Init the random seed for various workers."""
120 # The seed of each worker equals to
121 # num_worker * rank + worker_id + user_seed
122 worker_seed = num_workers * rank + worker_id + seed
123 np.random.seed(worker_seed)
124 random.seed(worker_seed)
125
```
Path: `mmaction/datasets/samplers/distributed_sampler.py`
Content:
```
1 import torch
2 from torch.utils.data import DistributedSampler as _DistributedSampler
3
4
5 class DistributedSampler(_DistributedSampler):
6 """DistributedSampler inheriting from
7 ``torch.utils.data.DistributedSampler``.
8
9 In pytorch of lower versions, there is no ``shuffle`` argument. This child
10 class will port one to DistributedSampler.
11 """
12
13 def __init__(self, dataset, num_replicas=None, rank=None, shuffle=True):
14 super().__init__(dataset, num_replicas=num_replicas, rank=rank)
15 self.shuffle = shuffle
16
17 def __iter__(self):
18 # deterministically shuffle based on epoch
19 if self.shuffle:
20 g = torch.Generator()
21 g.manual_seed(self.epoch)
22 indices = torch.randperm(len(self.dataset), generator=g).tolist()
23 else:
24 indices = torch.arange(len(self.dataset)).tolist()
25
26 # add extra samples to make it evenly divisible
27 indices += indices[:(self.total_size - len(indices))]
28 assert len(indices) == self.total_size
29
30 # subsample
31 indices = indices[self.rank:self.total_size:self.num_replicas]
32 assert len(indices) == self.num_samples
33 return iter(indices)
34
35
36 class DistributedPowerSampler(_DistributedSampler):
37 """DistributedPowerSampler inheriting from
38 ``torch.utils.data.DistributedSampler``.
39
40 Samples are sampled with the probability that is proportional to the power
41 of label frequency (freq ^ power). The sampler only applies to single class
42 recognition dataset.
43
44 The default value of power is 1, which is equivalent to bootstrap sampling
45 from the entire dataset.
46 """
47
48 def __init__(self, dataset, num_replicas=None, rank=None, power=1):
49 super().__init__(dataset, num_replicas=num_replicas, rank=rank)
50 self.power = power
51
52 def __iter__(self):
53 # deterministically shuffle based on epoch
54 g = torch.Generator()
55 g.manual_seed(self.epoch)
56 video_infos_by_class = self.dataset.video_infos_by_class
57 num_classes = self.dataset.num_classes
58 # For simplicity, discontinuous labels are not permitted
59 assert set(video_infos_by_class) == set(range(num_classes))
60 counts = [len(video_infos_by_class[i]) for i in range(num_classes)]
61 counts = [cnt**self.power for cnt in counts]
62
63 indices = torch.multinomial(
64 torch.Tensor(counts),
65 self.total_size,
66 replacement=True,
67 generator=g)
68 indices = indices.data.numpy().tolist()
69 assert len(indices) == self.total_size
70
71 indices = indices[self.rank:self.total_size:self.num_replicas]
72 assert len(indices) == self.num_samples
73
74 return iter(indices)
75
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mmaction/datasets/builder.py b/mmaction/datasets/builder.py
--- a/mmaction/datasets/builder.py
+++ b/mmaction/datasets/builder.py
@@ -84,10 +84,11 @@
if dist:
if sample_by_class:
assert power is not None
- sampler = DistributedPowerSampler(dataset, world_size, rank, power)
+ sampler = DistributedPowerSampler(
+ dataset, world_size, rank, power, seed=seed)
else:
sampler = DistributedSampler(
- dataset, world_size, rank, shuffle=shuffle)
+ dataset, world_size, rank, shuffle=shuffle, seed=seed)
shuffle = False
batch_size = videos_per_gpu
num_workers = workers_per_gpu
diff --git a/mmaction/datasets/samplers/distributed_sampler.py b/mmaction/datasets/samplers/distributed_sampler.py
--- a/mmaction/datasets/samplers/distributed_sampler.py
+++ b/mmaction/datasets/samplers/distributed_sampler.py
@@ -10,15 +10,22 @@
class will port one to DistributedSampler.
"""
- def __init__(self, dataset, num_replicas=None, rank=None, shuffle=True):
- super().__init__(dataset, num_replicas=num_replicas, rank=rank)
- self.shuffle = shuffle
+ def __init__(self,
+ dataset,
+ num_replicas=None,
+ rank=None,
+ shuffle=True,
+ seed=0):
+ super().__init__(
+ dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle)
+ # for the compatibility from PyTorch 1.3+
+ self.seed = seed if seed is not None else 0
def __iter__(self):
# deterministically shuffle based on epoch
if self.shuffle:
g = torch.Generator()
- g.manual_seed(self.epoch)
+ g.manual_seed(self.epoch + self.seed)
indices = torch.randperm(len(self.dataset), generator=g).tolist()
else:
indices = torch.arange(len(self.dataset)).tolist()
@@ -45,14 +52,15 @@
from the entire dataset.
"""
- def __init__(self, dataset, num_replicas=None, rank=None, power=1):
+ def __init__(self, dataset, num_replicas=None, rank=None, power=1, seed=0):
super().__init__(dataset, num_replicas=num_replicas, rank=rank)
self.power = power
+ self.seed = seed if seed is not None else 0
def __iter__(self):
# deterministically shuffle based on epoch
g = torch.Generator()
- g.manual_seed(self.epoch)
+ g.manual_seed(self.epoch + self.seed)
video_infos_by_class = self.dataset.video_infos_by_class
num_classes = self.dataset.num_classes
# For simplicity, discontinuous labels are not permitted
|
{"golden_diff": "diff --git a/mmaction/datasets/builder.py b/mmaction/datasets/builder.py\n--- a/mmaction/datasets/builder.py\n+++ b/mmaction/datasets/builder.py\n@@ -84,10 +84,11 @@\n if dist:\n if sample_by_class:\n assert power is not None\n- sampler = DistributedPowerSampler(dataset, world_size, rank, power)\n+ sampler = DistributedPowerSampler(\n+ dataset, world_size, rank, power, seed=seed)\n else:\n sampler = DistributedSampler(\n- dataset, world_size, rank, shuffle=shuffle)\n+ dataset, world_size, rank, shuffle=shuffle, seed=seed)\n shuffle = False\n batch_size = videos_per_gpu\n num_workers = workers_per_gpu\ndiff --git a/mmaction/datasets/samplers/distributed_sampler.py b/mmaction/datasets/samplers/distributed_sampler.py\n--- a/mmaction/datasets/samplers/distributed_sampler.py\n+++ b/mmaction/datasets/samplers/distributed_sampler.py\n@@ -10,15 +10,22 @@\n class will port one to DistributedSampler.\n \"\"\"\n \n- def __init__(self, dataset, num_replicas=None, rank=None, shuffle=True):\n- super().__init__(dataset, num_replicas=num_replicas, rank=rank)\n- self.shuffle = shuffle\n+ def __init__(self,\n+ dataset,\n+ num_replicas=None,\n+ rank=None,\n+ shuffle=True,\n+ seed=0):\n+ super().__init__(\n+ dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle)\n+ # for the compatibility from PyTorch 1.3+\n+ self.seed = seed if seed is not None else 0\n \n def __iter__(self):\n # deterministically shuffle based on epoch\n if self.shuffle:\n g = torch.Generator()\n- g.manual_seed(self.epoch)\n+ g.manual_seed(self.epoch + self.seed)\n indices = torch.randperm(len(self.dataset), generator=g).tolist()\n else:\n indices = torch.arange(len(self.dataset)).tolist()\n@@ -45,14 +52,15 @@\n from the entire dataset.\n \"\"\"\n \n- def __init__(self, dataset, num_replicas=None, rank=None, power=1):\n+ def __init__(self, dataset, num_replicas=None, rank=None, power=1, seed=0):\n super().__init__(dataset, num_replicas=num_replicas, rank=rank)\n self.power = power\n+ self.seed = seed if seed is not None else 0\n \n def __iter__(self):\n # deterministically shuffle based on epoch\n g = torch.Generator()\n- g.manual_seed(self.epoch)\n+ g.manual_seed(self.epoch + self.seed)\n video_infos_by_class = self.dataset.video_infos_by_class\n num_classes = self.dataset.num_classes\n # For simplicity, discontinuous labels are not permitted\n", "issue": "Seed in sampler\nhttps://github.com/open-mmlab/mmdetection/pull/4665\n", "before_files": [{"content": "import platform\nimport random\nfrom functools import partial\n\nimport numpy as np\nfrom mmcv.parallel import collate\nfrom mmcv.runner import get_dist_info\nfrom mmcv.utils import build_from_cfg\nfrom torch.utils.data import DataLoader\n\nfrom .dataset_wrappers import RepeatDataset\nfrom .registry import DATASETS\nfrom .samplers import DistributedPowerSampler, DistributedSampler\n\nif platform.system() != 'Windows':\n # https://github.com/pytorch/pytorch/issues/973\n import resource\n rlimit = resource.getrlimit(resource.RLIMIT_NOFILE)\n hard_limit = rlimit[1]\n soft_limit = min(4096, hard_limit)\n resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit))\n\n\ndef build_dataset(cfg, default_args=None):\n \"\"\"Build a dataset from config dict.\n\n Args:\n cfg (dict): Config dict. 
It should at least contain the key \"type\".\n default_args (dict | None, optional): Default initialization arguments.\n Default: None.\n\n Returns:\n Dataset: The constructed dataset.\n \"\"\"\n if cfg['type'] == 'RepeatDataset':\n dataset = RepeatDataset(\n build_dataset(cfg['dataset'], default_args), cfg['times'])\n else:\n dataset = build_from_cfg(cfg, DATASETS, default_args)\n return dataset\n\n\ndef build_dataloader(dataset,\n videos_per_gpu,\n workers_per_gpu,\n num_gpus=1,\n dist=True,\n shuffle=True,\n seed=None,\n drop_last=False,\n pin_memory=True,\n **kwargs):\n \"\"\"Build PyTorch DataLoader.\n\n In distributed training, each GPU/process has a dataloader.\n In non-distributed training, there is only one dataloader for all GPUs.\n\n Args:\n dataset (:obj:`Dataset`): A PyTorch dataset.\n videos_per_gpu (int): Number of videos on each GPU, i.e.,\n batch size of each GPU.\n workers_per_gpu (int): How many subprocesses to use for data\n loading for each GPU.\n num_gpus (int): Number of GPUs. Only used in non-distributed\n training. Default: 1.\n dist (bool): Distributed training/test or not. Default: True.\n shuffle (bool): Whether to shuffle the data at every epoch.\n Default: True.\n seed (int | None): Seed to be used. Default: None.\n drop_last (bool): Whether to drop the last incomplete batch in epoch.\n Default: False\n pin_memory (bool): Whether to use pin_memory in DataLoader.\n Default: True\n kwargs (dict, optional): Any keyword argument to be used to initialize\n DataLoader.\n\n Returns:\n DataLoader: A PyTorch dataloader.\n \"\"\"\n rank, world_size = get_dist_info()\n sample_by_class = getattr(dataset, 'sample_by_class', False)\n power = getattr(dataset, 'power', None)\n\n if dist:\n if sample_by_class:\n assert power is not None\n sampler = DistributedPowerSampler(dataset, world_size, rank, power)\n else:\n sampler = DistributedSampler(\n dataset, world_size, rank, shuffle=shuffle)\n shuffle = False\n batch_size = videos_per_gpu\n num_workers = workers_per_gpu\n else:\n sampler = None\n batch_size = num_gpus * videos_per_gpu\n num_workers = num_gpus * workers_per_gpu\n\n init_fn = partial(\n worker_init_fn, num_workers=num_workers, rank=rank,\n seed=seed) if seed is not None else None\n\n data_loader = DataLoader(\n dataset,\n batch_size=batch_size,\n sampler=sampler,\n num_workers=num_workers,\n collate_fn=partial(collate, samples_per_gpu=videos_per_gpu),\n pin_memory=pin_memory,\n shuffle=shuffle,\n worker_init_fn=init_fn,\n drop_last=drop_last,\n **kwargs)\n\n return data_loader\n\n\ndef worker_init_fn(worker_id, num_workers, rank, seed):\n \"\"\"Init the random seed for various workers.\"\"\"\n # The seed of each worker equals to\n # num_worker * rank + worker_id + user_seed\n worker_seed = num_workers * rank + worker_id + seed\n np.random.seed(worker_seed)\n random.seed(worker_seed)\n", "path": "mmaction/datasets/builder.py"}, {"content": "import torch\nfrom torch.utils.data import DistributedSampler as _DistributedSampler\n\n\nclass DistributedSampler(_DistributedSampler):\n \"\"\"DistributedSampler inheriting from\n ``torch.utils.data.DistributedSampler``.\n\n In pytorch of lower versions, there is no ``shuffle`` argument. 
This child\n class will port one to DistributedSampler.\n \"\"\"\n\n def __init__(self, dataset, num_replicas=None, rank=None, shuffle=True):\n super().__init__(dataset, num_replicas=num_replicas, rank=rank)\n self.shuffle = shuffle\n\n def __iter__(self):\n # deterministically shuffle based on epoch\n if self.shuffle:\n g = torch.Generator()\n g.manual_seed(self.epoch)\n indices = torch.randperm(len(self.dataset), generator=g).tolist()\n else:\n indices = torch.arange(len(self.dataset)).tolist()\n\n # add extra samples to make it evenly divisible\n indices += indices[:(self.total_size - len(indices))]\n assert len(indices) == self.total_size\n\n # subsample\n indices = indices[self.rank:self.total_size:self.num_replicas]\n assert len(indices) == self.num_samples\n return iter(indices)\n\n\nclass DistributedPowerSampler(_DistributedSampler):\n \"\"\"DistributedPowerSampler inheriting from\n ``torch.utils.data.DistributedSampler``.\n\n Samples are sampled with the probability that is proportional to the power\n of label frequency (freq ^ power). The sampler only applies to single class\n recognition dataset.\n\n The default value of power is 1, which is equivalent to bootstrap sampling\n from the entire dataset.\n \"\"\"\n\n def __init__(self, dataset, num_replicas=None, rank=None, power=1):\n super().__init__(dataset, num_replicas=num_replicas, rank=rank)\n self.power = power\n\n def __iter__(self):\n # deterministically shuffle based on epoch\n g = torch.Generator()\n g.manual_seed(self.epoch)\n video_infos_by_class = self.dataset.video_infos_by_class\n num_classes = self.dataset.num_classes\n # For simplicity, discontinuous labels are not permitted\n assert set(video_infos_by_class) == set(range(num_classes))\n counts = [len(video_infos_by_class[i]) for i in range(num_classes)]\n counts = [cnt**self.power for cnt in counts]\n\n indices = torch.multinomial(\n torch.Tensor(counts),\n self.total_size,\n replacement=True,\n generator=g)\n indices = indices.data.numpy().tolist()\n assert len(indices) == self.total_size\n\n indices = indices[self.rank:self.total_size:self.num_replicas]\n assert len(indices) == self.num_samples\n\n return iter(indices)\n", "path": "mmaction/datasets/samplers/distributed_sampler.py"}], "after_files": [{"content": "import platform\nimport random\nfrom functools import partial\n\nimport numpy as np\nfrom mmcv.parallel import collate\nfrom mmcv.runner import get_dist_info\nfrom mmcv.utils import build_from_cfg\nfrom torch.utils.data import DataLoader\n\nfrom .dataset_wrappers import RepeatDataset\nfrom .registry import DATASETS\nfrom .samplers import DistributedPowerSampler, DistributedSampler\n\nif platform.system() != 'Windows':\n # https://github.com/pytorch/pytorch/issues/973\n import resource\n rlimit = resource.getrlimit(resource.RLIMIT_NOFILE)\n hard_limit = rlimit[1]\n soft_limit = min(4096, hard_limit)\n resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit))\n\n\ndef build_dataset(cfg, default_args=None):\n \"\"\"Build a dataset from config dict.\n\n Args:\n cfg (dict): Config dict. 
It should at least contain the key \"type\".\n default_args (dict | None, optional): Default initialization arguments.\n Default: None.\n\n Returns:\n Dataset: The constructed dataset.\n \"\"\"\n if cfg['type'] == 'RepeatDataset':\n dataset = RepeatDataset(\n build_dataset(cfg['dataset'], default_args), cfg['times'])\n else:\n dataset = build_from_cfg(cfg, DATASETS, default_args)\n return dataset\n\n\ndef build_dataloader(dataset,\n videos_per_gpu,\n workers_per_gpu,\n num_gpus=1,\n dist=True,\n shuffle=True,\n seed=None,\n drop_last=False,\n pin_memory=True,\n **kwargs):\n \"\"\"Build PyTorch DataLoader.\n\n In distributed training, each GPU/process has a dataloader.\n In non-distributed training, there is only one dataloader for all GPUs.\n\n Args:\n dataset (:obj:`Dataset`): A PyTorch dataset.\n videos_per_gpu (int): Number of videos on each GPU, i.e.,\n batch size of each GPU.\n workers_per_gpu (int): How many subprocesses to use for data\n loading for each GPU.\n num_gpus (int): Number of GPUs. Only used in non-distributed\n training. Default: 1.\n dist (bool): Distributed training/test or not. Default: True.\n shuffle (bool): Whether to shuffle the data at every epoch.\n Default: True.\n seed (int | None): Seed to be used. Default: None.\n drop_last (bool): Whether to drop the last incomplete batch in epoch.\n Default: False\n pin_memory (bool): Whether to use pin_memory in DataLoader.\n Default: True\n kwargs (dict, optional): Any keyword argument to be used to initialize\n DataLoader.\n\n Returns:\n DataLoader: A PyTorch dataloader.\n \"\"\"\n rank, world_size = get_dist_info()\n sample_by_class = getattr(dataset, 'sample_by_class', False)\n power = getattr(dataset, 'power', None)\n\n if dist:\n if sample_by_class:\n assert power is not None\n sampler = DistributedPowerSampler(\n dataset, world_size, rank, power, seed=seed)\n else:\n sampler = DistributedSampler(\n dataset, world_size, rank, shuffle=shuffle, seed=seed)\n shuffle = False\n batch_size = videos_per_gpu\n num_workers = workers_per_gpu\n else:\n sampler = None\n batch_size = num_gpus * videos_per_gpu\n num_workers = num_gpus * workers_per_gpu\n\n init_fn = partial(\n worker_init_fn, num_workers=num_workers, rank=rank,\n seed=seed) if seed is not None else None\n\n data_loader = DataLoader(\n dataset,\n batch_size=batch_size,\n sampler=sampler,\n num_workers=num_workers,\n collate_fn=partial(collate, samples_per_gpu=videos_per_gpu),\n pin_memory=pin_memory,\n shuffle=shuffle,\n worker_init_fn=init_fn,\n drop_last=drop_last,\n **kwargs)\n\n return data_loader\n\n\ndef worker_init_fn(worker_id, num_workers, rank, seed):\n \"\"\"Init the random seed for various workers.\"\"\"\n # The seed of each worker equals to\n # num_worker * rank + worker_id + user_seed\n worker_seed = num_workers * rank + worker_id + seed\n np.random.seed(worker_seed)\n random.seed(worker_seed)\n", "path": "mmaction/datasets/builder.py"}, {"content": "import torch\nfrom torch.utils.data import DistributedSampler as _DistributedSampler\n\n\nclass DistributedSampler(_DistributedSampler):\n \"\"\"DistributedSampler inheriting from\n ``torch.utils.data.DistributedSampler``.\n\n In pytorch of lower versions, there is no ``shuffle`` argument. 
This child\n class will port one to DistributedSampler.\n \"\"\"\n\n def __init__(self,\n dataset,\n num_replicas=None,\n rank=None,\n shuffle=True,\n seed=0):\n super().__init__(\n dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle)\n # for the compatibility from PyTorch 1.3+\n self.seed = seed if seed is not None else 0\n\n def __iter__(self):\n # deterministically shuffle based on epoch\n if self.shuffle:\n g = torch.Generator()\n g.manual_seed(self.epoch + self.seed)\n indices = torch.randperm(len(self.dataset), generator=g).tolist()\n else:\n indices = torch.arange(len(self.dataset)).tolist()\n\n # add extra samples to make it evenly divisible\n indices += indices[:(self.total_size - len(indices))]\n assert len(indices) == self.total_size\n\n # subsample\n indices = indices[self.rank:self.total_size:self.num_replicas]\n assert len(indices) == self.num_samples\n return iter(indices)\n\n\nclass DistributedPowerSampler(_DistributedSampler):\n \"\"\"DistributedPowerSampler inheriting from\n ``torch.utils.data.DistributedSampler``.\n\n Samples are sampled with the probability that is proportional to the power\n of label frequency (freq ^ power). The sampler only applies to single class\n recognition dataset.\n\n The default value of power is 1, which is equivalent to bootstrap sampling\n from the entire dataset.\n \"\"\"\n\n def __init__(self, dataset, num_replicas=None, rank=None, power=1, seed=0):\n super().__init__(dataset, num_replicas=num_replicas, rank=rank)\n self.power = power\n self.seed = seed if seed is not None else 0\n\n def __iter__(self):\n # deterministically shuffle based on epoch\n g = torch.Generator()\n g.manual_seed(self.epoch + self.seed)\n video_infos_by_class = self.dataset.video_infos_by_class\n num_classes = self.dataset.num_classes\n # For simplicity, discontinuous labels are not permitted\n assert set(video_infos_by_class) == set(range(num_classes))\n counts = [len(video_infos_by_class[i]) for i in range(num_classes)]\n counts = [cnt**self.power for cnt in counts]\n\n indices = torch.multinomial(\n torch.Tensor(counts),\n self.total_size,\n replacement=True,\n generator=g)\n indices = indices.data.numpy().tolist()\n assert len(indices) == self.total_size\n\n indices = indices[self.rank:self.total_size:self.num_replicas]\n assert len(indices) == self.num_samples\n\n return iter(indices)\n", "path": "mmaction/datasets/samplers/distributed_sampler.py"}]}
| 2,230 | 651 |
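A minimal sketch of the behaviour the patch in this record encodes: seeding the shuffle generator with `epoch + seed` keeps a run reproducible for a fixed seed while each epoch still gets a fresh ordering. This is an illustration only (it assumes nothing beyond `torch` being installed; the helper name is made up and is not part of mmaction):

```python
import torch


def epoch_permutation(num_samples, epoch, seed=0):
    # Mirrors the patched samplers: a fresh generator seeded with epoch + seed.
    g = torch.Generator()
    g.manual_seed(epoch + seed)
    return torch.randperm(num_samples, generator=g).tolist()


# The same (epoch, seed) pair always yields the same ordering, so restarts are reproducible.
assert epoch_permutation(10, epoch=3, seed=42) == epoch_permutation(10, epoch=3, seed=42)

# A different epoch reseeds the generator, so the ordering still changes over training.
print(epoch_permutation(10, epoch=3, seed=42))
print(epoch_permutation(10, epoch=4, seed=42))
```

The per-worker `numpy`/`random` seeding in `worker_init_fn` is unchanged by the patch; the sampler seed only fixes the order in which dataset indices are drawn.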
gh_patches_debug_12617
|
rasdani/github-patches
|
git_diff
|
learningequality__kolibri-3529
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Can change username with duplicates
<!--
Instructions:
* Fill out the sections below, replace …'s with information about your issue
* Use the 'preview' function above this text box to verify formatting before submitting
-->
### Observed behavior
<!--
Description of the behavior that was observed, including screenshots or other references when applicable
-->
I was sure I fixed this issue before. It's now recurring, as seen below:


### Expected behavior
<!--
Description of what behavior was expected but did not occur
-->
Should throw an error that says username already exists
### User-facing consequences
<!--
Implications and real-world consequences for learners, coaches, admins, and other users of the application
-->
Can't login?
### Errors and logs
<!--
Relevant logs from:
* the command line
* ~/.kolibri/kolibri.log
* the browser console
Please wrap errors in triple backticks for clean formatting like this:
```
01:10 info: something happened
01:12 error: something bad happened
```
-->
### Steps to reproduce
<!--
Precise steps that someone else can follow in order to see this behavior
-->
### Context
<!--
Tell us about your environment, including:
* Kolibri version
* Operating system
* Browser
-->
Kolibri version: develop
Same username suggested while signing in (case sensitivity not accounted for) after the user edits a name using the edit-username feature.
### Observed behavior
When the user edits the username from his/her profile and then tries to log in again, the sign-in screen can suggest two versions of the same username that differ only in case, e.g. sahilm and sahilM.
### Expected behavior
The username must not be allowed to duplicate an existing one (as mentioned in #3458), and the uniqueness check must not be case-sensitive; e.g. sahilm and sahilM cannot exist simultaneously.
### User-facing consequences
It would create confusion in the classroom if students are shown the same username.
### Errors and logs


### Steps to reproduce
1. Log in as admin and give users permission to edit their username.
2. Log in as a user and change the username to one that matches an existing username except for case, e.g. abc1 vs. ABC1.
3. Try to sign in as that user: the suggestions list the same username twice, differing only in case.
### Context
Kolibri version: kolibri 0.9.0
Operating system: ubuntu 14.04
Browser: Chrome
### Screenshot



--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kolibri/auth/serializers.py`
Content:
```
1 from __future__ import absolute_import
2 from __future__ import print_function
3 from __future__ import unicode_literals
4
5 from django.utils.translation import ugettext_lazy as _
6 from rest_framework import serializers
7 from rest_framework.validators import UniqueTogetherValidator
8
9 from .models import Classroom
10 from .models import Facility
11 from .models import FacilityDataset
12 from .models import FacilityUser
13 from .models import LearnerGroup
14 from .models import Membership
15 from .models import Role
16
17
18 class RoleSerializer(serializers.ModelSerializer):
19 collection_parent = serializers.SerializerMethodField()
20
21 class Meta:
22 model = Role
23 fields = ('id', 'kind', 'collection', 'user', 'collection_parent',)
24
25 def get_collection_parent(self, instance):
26 if instance.collection.parent is not None:
27 return instance.collection.parent.id
28 else:
29 return None
30
31
32 class FacilityUserSerializer(serializers.ModelSerializer):
33 roles = RoleSerializer(many=True, read_only=True)
34
35 class Meta:
36 model = FacilityUser
37 extra_kwargs = {'password': {'write_only': True}}
38 fields = ('id', 'username', 'full_name', 'password', 'facility', 'roles', 'is_superuser')
39
40 def create(self, validated_data):
41 if FacilityUser.objects.filter(username__iexact=validated_data['username']).exists():
42 raise serializers.ValidationError(_('An account with that username already exists'))
43 return super(FacilityUserSerializer, self).create(validated_data)
44
45
46 class FacilityUserSignupSerializer(FacilityUserSerializer):
47
48 def validate_username(self, value):
49 if FacilityUser.objects.filter(username__iexact=value).exists():
50 raise serializers.ValidationError(_('An account with that username already exists'))
51 return value
52
53
54 class FacilityUsernameSerializer(serializers.ModelSerializer):
55
56 class Meta:
57 model = FacilityUser
58 fields = ('username', )
59
60
61 class MembershipSerializer(serializers.ModelSerializer):
62
63 class Meta:
64 model = Membership
65 fields = ('id', 'collection', 'user')
66
67
68 class FacilityDatasetSerializer(serializers.ModelSerializer):
69
70 class Meta:
71 model = FacilityDataset
72 fields = ('id', 'learner_can_edit_username', 'learner_can_edit_name', 'learner_can_edit_password',
73 'learner_can_sign_up', 'learner_can_delete_account', 'learner_can_login_with_no_password',
74 'show_download_button_in_learn', 'description', 'location')
75
76
77 class FacilitySerializer(serializers.ModelSerializer):
78 dataset = FacilityDatasetSerializer(read_only=True)
79
80 class Meta:
81 model = Facility
82 extra_kwargs = {'id': {'read_only': True}, 'dataset': {'read_only': True}}
83 fields = ('id', 'name', 'dataset')
84
85
86 class PublicFacilitySerializer(serializers.ModelSerializer):
87
88 class Meta:
89 model = Facility
90 fields = ('dataset', 'name')
91
92
93 class ClassroomSerializer(serializers.ModelSerializer):
94 learner_count = serializers.SerializerMethodField()
95 coaches = serializers.SerializerMethodField()
96
97 def get_learner_count(self, instance):
98 return instance.get_members().count()
99
100 def get_coaches(self, instance):
101 return FacilityUserSerializer(instance.get_coaches(), many=True).data
102
103 class Meta:
104 model = Classroom
105 fields = (
106 'id',
107 'name',
108 'parent',
109 'learner_count',
110 'coaches',
111 )
112
113 validators = [
114 UniqueTogetherValidator(
115 queryset=Classroom.objects.all(),
116 fields=('parent', 'name')
117 )
118 ]
119
120
121 class LearnerGroupSerializer(serializers.ModelSerializer):
122
123 user_ids = serializers.SerializerMethodField()
124
125 def get_user_ids(self, group):
126 return [str(user_id['id']) for user_id in group.get_members().values('id')]
127
128 class Meta:
129 model = LearnerGroup
130 fields = ('id', 'name', 'parent', 'user_ids')
131
132 validators = [
133 UniqueTogetherValidator(
134 queryset=Classroom.objects.all(),
135 fields=('parent', 'name')
136 )
137 ]
138
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kolibri/auth/serializers.py b/kolibri/auth/serializers.py
--- a/kolibri/auth/serializers.py
+++ b/kolibri/auth/serializers.py
@@ -42,6 +42,11 @@
raise serializers.ValidationError(_('An account with that username already exists'))
return super(FacilityUserSerializer, self).create(validated_data)
+ def update(self, instance, validated_data):
+ if validated_data.get('username') and FacilityUser.objects.exclude(id__exact=instance.id).filter(username__iexact=validated_data['username']).exists():
+ raise serializers.ValidationError(_('An account with that username already exists'))
+ return super(FacilityUserSerializer, self).update(instance, validated_data)
+
class FacilityUserSignupSerializer(FacilityUserSerializer):
|
{"golden_diff": "diff --git a/kolibri/auth/serializers.py b/kolibri/auth/serializers.py\n--- a/kolibri/auth/serializers.py\n+++ b/kolibri/auth/serializers.py\n@@ -42,6 +42,11 @@\n raise serializers.ValidationError(_('An account with that username already exists'))\n return super(FacilityUserSerializer, self).create(validated_data)\n \n+ def update(self, instance, validated_data):\n+ if validated_data.get('username') and FacilityUser.objects.exclude(id__exact=instance.id).filter(username__iexact=validated_data['username']).exists():\n+ raise serializers.ValidationError(_('An account with that username already exists'))\n+ return super(FacilityUserSerializer, self).update(instance, validated_data)\n+\n \n class FacilityUserSignupSerializer(FacilityUserSerializer):\n", "issue": "Can change username with duplicates\n<!--\r\nInstructions:\r\n * Fill out the sections below, replace \u2026's with information about your issue\r\n * Use the 'preview' function above this text box to verify formatting before submitting\r\n-->\r\n\r\n### Observed behavior\r\n<!--\r\nDescription of the behavior that was observed, including screenshots or other references when applicable\r\n-->\r\n\r\nI was sure I fixed the issue before. It's now reoccurring as seen below:\r\n\r\n\r\n\r\n \r\n\r\n\r\n\r\n### Expected behavior\r\n<!--\r\nDescription of what behavior was expected but did not occur\r\n-->\r\n\r\nShould throw an error that says username already exists\r\n\r\n### User-facing consequences\r\n<!--\r\nImplications and real-world consequences for learners, coaches, admins, and other users of the application\r\n-->\r\n\r\nCan't login?\r\n\r\n### Errors and logs\r\n<!--\r\nRelevant logs from:\r\n * the command line\r\n * ~/.kolibri/kolibri.log\r\n * the browser console\r\n\r\nPlease wrap errors in triple backticks for clean formatting like this:\r\n```\r\n01:10 info: something happened\r\n01:12 error: something bad happened\r\n```\r\n-->\r\n\r\n### Steps to reproduce\r\n<!--\r\nPrecise steps that someone else can follow in order to see this behavior\r\n-->\r\n\r\n\r\n### Context\r\n<!--\r\nTell us about your environment, including:\r\n * Kolibri version\r\n * Operating system\r\n * Browser\r\n-->\r\n\r\nKolibri version: develop\r\n\nSame username suggestion while signing-in (case-sensitive feature not accounted for) after the user edits a name using the edit username feature.\n### Observed behavior\r\nWhen the user edits the username from his/her profile and then again tries to login, there can be 2 suggestions of same username based on case-sensitive nature, for eg sahilm, sahilM\r\n\r\n### Expected behavior\r\n\r\nThe username must not be allowed to be the same (as mentioned in #3458) and suggestions must not be case-sensitive nature, for eg sahilm and sahilM cannot exist simultaneously.\r\n\r\n### User-facing consequences\r\nIt would create confusion in the classroom if the students are shown same username.\r\n\r\n### Errors and logs\r\n\r\n\r\n\r\n\r\n\r\n\r\n### Steps to reproduce\r\n\r\n1. Login as admin and give permission to the users to edit the username.\r\n2. Login as a user and edit the username which is same as existing but has different cases, like abc1, ABC1.\r\n3. 
Try to sign in as user and look for suggestions which will be same with different cases.\r\n\r\n### Context\r\n\r\nKolibri version : kolibri 0.9.0\r\nOperating system : ubuntu 14.04\r\nBrowser : Chrome\r\n\r\n### Screenshot\r\n\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nfrom django.utils.translation import ugettext_lazy as _\nfrom rest_framework import serializers\nfrom rest_framework.validators import UniqueTogetherValidator\n\nfrom .models import Classroom\nfrom .models import Facility\nfrom .models import FacilityDataset\nfrom .models import FacilityUser\nfrom .models import LearnerGroup\nfrom .models import Membership\nfrom .models import Role\n\n\nclass RoleSerializer(serializers.ModelSerializer):\n collection_parent = serializers.SerializerMethodField()\n\n class Meta:\n model = Role\n fields = ('id', 'kind', 'collection', 'user', 'collection_parent',)\n\n def get_collection_parent(self, instance):\n if instance.collection.parent is not None:\n return instance.collection.parent.id\n else:\n return None\n\n\nclass FacilityUserSerializer(serializers.ModelSerializer):\n roles = RoleSerializer(many=True, read_only=True)\n\n class Meta:\n model = FacilityUser\n extra_kwargs = {'password': {'write_only': True}}\n fields = ('id', 'username', 'full_name', 'password', 'facility', 'roles', 'is_superuser')\n\n def create(self, validated_data):\n if FacilityUser.objects.filter(username__iexact=validated_data['username']).exists():\n raise serializers.ValidationError(_('An account with that username already exists'))\n return super(FacilityUserSerializer, self).create(validated_data)\n\n\nclass FacilityUserSignupSerializer(FacilityUserSerializer):\n\n def validate_username(self, value):\n if FacilityUser.objects.filter(username__iexact=value).exists():\n raise serializers.ValidationError(_('An account with that username already exists'))\n return value\n\n\nclass FacilityUsernameSerializer(serializers.ModelSerializer):\n\n class Meta:\n model = FacilityUser\n fields = ('username', )\n\n\nclass MembershipSerializer(serializers.ModelSerializer):\n\n class Meta:\n model = Membership\n fields = ('id', 'collection', 'user')\n\n\nclass FacilityDatasetSerializer(serializers.ModelSerializer):\n\n class Meta:\n model = FacilityDataset\n fields = ('id', 'learner_can_edit_username', 'learner_can_edit_name', 'learner_can_edit_password',\n 'learner_can_sign_up', 'learner_can_delete_account', 'learner_can_login_with_no_password',\n 'show_download_button_in_learn', 'description', 'location')\n\n\nclass FacilitySerializer(serializers.ModelSerializer):\n dataset = FacilityDatasetSerializer(read_only=True)\n\n class Meta:\n model = Facility\n extra_kwargs = {'id': {'read_only': True}, 'dataset': {'read_only': True}}\n fields = ('id', 'name', 'dataset')\n\n\nclass PublicFacilitySerializer(serializers.ModelSerializer):\n\n class Meta:\n model = Facility\n fields = ('dataset', 'name')\n\n\nclass ClassroomSerializer(serializers.ModelSerializer):\n learner_count = serializers.SerializerMethodField()\n coaches = serializers.SerializerMethodField()\n\n def get_learner_count(self, instance):\n return instance.get_members().count()\n\n def get_coaches(self, instance):\n return FacilityUserSerializer(instance.get_coaches(), many=True).data\n\n class Meta:\n model = Classroom\n fields = (\n 'id',\n 'name',\n 'parent',\n 'learner_count',\n 'coaches',\n )\n\n validators = [\n 
UniqueTogetherValidator(\n queryset=Classroom.objects.all(),\n fields=('parent', 'name')\n )\n ]\n\n\nclass LearnerGroupSerializer(serializers.ModelSerializer):\n\n user_ids = serializers.SerializerMethodField()\n\n def get_user_ids(self, group):\n return [str(user_id['id']) for user_id in group.get_members().values('id')]\n\n class Meta:\n model = LearnerGroup\n fields = ('id', 'name', 'parent', 'user_ids')\n\n validators = [\n UniqueTogetherValidator(\n queryset=Classroom.objects.all(),\n fields=('parent', 'name')\n )\n ]\n", "path": "kolibri/auth/serializers.py"}], "after_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nfrom django.utils.translation import ugettext_lazy as _\nfrom rest_framework import serializers\nfrom rest_framework.validators import UniqueTogetherValidator\n\nfrom .models import Classroom\nfrom .models import Facility\nfrom .models import FacilityDataset\nfrom .models import FacilityUser\nfrom .models import LearnerGroup\nfrom .models import Membership\nfrom .models import Role\n\n\nclass RoleSerializer(serializers.ModelSerializer):\n collection_parent = serializers.SerializerMethodField()\n\n class Meta:\n model = Role\n fields = ('id', 'kind', 'collection', 'user', 'collection_parent',)\n\n def get_collection_parent(self, instance):\n if instance.collection.parent is not None:\n return instance.collection.parent.id\n else:\n return None\n\n\nclass FacilityUserSerializer(serializers.ModelSerializer):\n roles = RoleSerializer(many=True, read_only=True)\n\n class Meta:\n model = FacilityUser\n extra_kwargs = {'password': {'write_only': True}}\n fields = ('id', 'username', 'full_name', 'password', 'facility', 'roles', 'is_superuser')\n\n def create(self, validated_data):\n if FacilityUser.objects.filter(username__iexact=validated_data['username']).exists():\n raise serializers.ValidationError(_('An account with that username already exists'))\n return super(FacilityUserSerializer, self).create(validated_data)\n\n def update(self, instance, validated_data):\n if validated_data.get('username') and FacilityUser.objects.exclude(id__exact=instance.id).filter(username__iexact=validated_data['username']).exists():\n raise serializers.ValidationError(_('An account with that username already exists'))\n return super(FacilityUserSerializer, self).update(instance, validated_data)\n\n\nclass FacilityUserSignupSerializer(FacilityUserSerializer):\n\n def validate_username(self, value):\n if FacilityUser.objects.filter(username__iexact=value).exists():\n raise serializers.ValidationError(_('An account with that username already exists'))\n return value\n\n\nclass FacilityUsernameSerializer(serializers.ModelSerializer):\n\n class Meta:\n model = FacilityUser\n fields = ('username', )\n\n\nclass MembershipSerializer(serializers.ModelSerializer):\n\n class Meta:\n model = Membership\n fields = ('id', 'collection', 'user')\n\n\nclass FacilityDatasetSerializer(serializers.ModelSerializer):\n\n class Meta:\n model = FacilityDataset\n fields = ('id', 'learner_can_edit_username', 'learner_can_edit_name', 'learner_can_edit_password',\n 'learner_can_sign_up', 'learner_can_delete_account', 'learner_can_login_with_no_password',\n 'show_download_button_in_learn', 'description', 'location')\n\n\nclass FacilitySerializer(serializers.ModelSerializer):\n dataset = FacilityDatasetSerializer(read_only=True)\n\n class Meta:\n model = Facility\n extra_kwargs = {'id': {'read_only': True}, 'dataset': 
{'read_only': True}}\n fields = ('id', 'name', 'dataset')\n\n\nclass PublicFacilitySerializer(serializers.ModelSerializer):\n\n class Meta:\n model = Facility\n fields = ('dataset', 'name')\n\n\nclass ClassroomSerializer(serializers.ModelSerializer):\n learner_count = serializers.SerializerMethodField()\n coaches = serializers.SerializerMethodField()\n\n def get_learner_count(self, instance):\n return instance.get_members().count()\n\n def get_coaches(self, instance):\n return FacilityUserSerializer(instance.get_coaches(), many=True).data\n\n class Meta:\n model = Classroom\n fields = (\n 'id',\n 'name',\n 'parent',\n 'learner_count',\n 'coaches',\n )\n\n validators = [\n UniqueTogetherValidator(\n queryset=Classroom.objects.all(),\n fields=('parent', 'name')\n )\n ]\n\n\nclass LearnerGroupSerializer(serializers.ModelSerializer):\n\n user_ids = serializers.SerializerMethodField()\n\n def get_user_ids(self, group):\n return [str(user_id['id']) for user_id in group.get_members().values('id')]\n\n class Meta:\n model = LearnerGroup\n fields = ('id', 'name', 'parent', 'user_ids')\n\n validators = [\n UniqueTogetherValidator(\n queryset=Classroom.objects.all(),\n fields=('parent', 'name')\n )\n ]\n", "path": "kolibri/auth/serializers.py"}]}
| 2,499 | 177 |
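The fix in this record comes down to one rule: a username may not collide, case-insensitively, with any other account's username, while re-saving your own name stays allowed. A framework-free sketch of that rule (plain Python with a hypothetical helper; the real check uses Django's `__iexact` lookup as shown in the diff above):

```python
def username_taken(existing_users, candidate, current_user_id=None):
    """Return True if `candidate` collides case-insensitively with another account."""
    return any(
        user["id"] != current_user_id
        and user["username"].lower() == candidate.lower()
        for user in existing_users
    )


users = [{"id": 1, "username": "sahilm"}, {"id": 2, "username": "abc1"}]

assert username_taken(users, "SahilM")                         # new signup collides
assert username_taken(users, "ABC1", current_user_id=1)        # renaming onto someone else collides
assert not username_taken(users, "SAHILM", current_user_id=1)  # re-saving your own name is fine
```

Excluding the instance being edited (`exclude(id__exact=instance.id)` in the patched `update()`) is what keeps the last case from being rejected.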
gh_patches_debug_57139
|
rasdani/github-patches
|
git_diff
|
wemake-services__wemake-python-styleguide-343
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Replace `sphinxcontrib-napoleon`
It is now bundled with `sphinx` as `sphinx.ext.napoleon`.
So, we need to remove this dependency from both:
- `pyproject.toml`
- `docs/requirements.txt`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Configuration file for the Sphinx documentation builder.
4 #
5 # This file does only contain a selection of the most common options. For a
6 # full list see the documentation:
7 # http://www.sphinx-doc.org/en/master/config
8
9 # -- Path setup --------------------------------------------------------------
10
11 # If extensions (or modules to document with autodoc) are in another directory,
12 # add these directories to sys.path here. If the directory is relative to the
13 # documentation root, use os.path.abspath to make it absolute, like shown here.
14
15 import os
16 import sys
17 sys.path.insert(0, os.path.abspath('..'))
18
19
20 # -- Project information -----------------------------------------------------
21
22 def _get_project_meta():
23 import tomlkit
24
25 with open('../pyproject.toml') as pyproject:
26 contents = pyproject.read()
27
28 return tomlkit.parse(contents)['tool']['poetry']
29
30
31 pkg_meta = _get_project_meta()
32 project = pkg_meta['name']
33 copyright = '2018, wemake.services'
34 author = 'wemake.services'
35
36 # The short X.Y version
37 version = pkg_meta['version']
38 # The full version, including alpha/beta/rc tags
39 release = version
40
41
42 # -- General configuration ---------------------------------------------------
43
44 # If your documentation needs a minimal Sphinx version, state it here.
45 #
46 # needs_sphinx = '1.0'
47
48 # Add any Sphinx extension module names here, as strings. They can be
49 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
50 # ones.
51 extensions = [
52 'sphinx.ext.autodoc',
53 'sphinx.ext.doctest',
54 'sphinx.ext.todo',
55 'sphinx.ext.coverage',
56 'sphinx.ext.viewcode',
57 'sphinx.ext.autosummary',
58
59 # Used to include .md files:
60 'm2r',
61
62 # Used to write python docstrings in a readable way:
63 'sphinxcontrib.napoleon',
64
65 # Used to insert typehints into the final docs:
66 'sphinx_autodoc_typehints',
67
68 # Used to embed values from the source code into the docs:
69 'added_value',
70 ]
71
72 autoclass_content = 'class'
73 autodoc_member_order = 'bysource'
74
75 autodoc_mock_imports = [
76 'attr',
77 ]
78
79 autodoc_member_order = 'bysource'
80 autodoc_default_flags = {
81 'members': '',
82 'undoc-members': 'code,error_template',
83 'exclude-members': '__dict__,__weakref__',
84 }
85
86 # Add any paths that contain templates here, relative to this directory.
87 templates_path = ['_templates']
88
89 # The suffix(es) of source filenames.
90 # You can specify multiple suffix as a list of string:
91
92 source_suffix = ['.rst', '.md']
93
94 # The master toctree document.
95 master_doc = 'index'
96
97 # The language for content autogenerated by Sphinx. Refer to documentation
98 # for a list of supported languages.
99 #
100 # This is also used if you do content translation via gettext catalogs.
101 # Usually you set "language" from the command line for these cases.
102 language = None
103
104 # List of patterns, relative to source directory, that match files and
105 # directories to ignore when looking for source files.
106 # This pattern also affects html_static_path and html_extra_path .
107 exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
108
109 # The name of the Pygments (syntax highlighting) style to use.
110 pygments_style = 'sphinx'
111
112 add_module_names = False
113
114 autodoc_default_options = {
115 'show-inheritance': True,
116 }
117
118
119 # -- Options for HTML output -------------------------------------------------
120
121 # The theme to use for HTML and HTML Help pages. See the documentation for
122 # a list of builtin themes.
123 #
124 html_theme = 'alabaster'
125
126 # Theme options are theme-specific and customize the look and feel of a theme
127 # further. For a list of options available for each theme, see the
128 # documentation.
129 html_theme_options = {
130 'sidebar_collapse': False,
131 'show_powered_by': False,
132 }
133
134 # Add any paths that contain custom static files (such as style sheets) here,
135 # relative to this directory. They are copied after the builtin static files,
136 # so a file named "default.css" will overwrite the builtin "default.css".
137 html_static_path = ['_static']
138
139 # Custom sidebar templates, must be a dictionary that maps document names
140 # to template names.
141 #
142 # This is required for the alabaster theme
143 # refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars
144 html_sidebars = {
145 '**': [
146 'about.html',
147 'navigation.html',
148 'moreinfo.html',
149 'github.html',
150 'searchbox.html',
151 ]
152 }
153
154
155 # -- Options for HTMLHelp output ---------------------------------------------
156
157 # Output file base name for HTML help builder.
158 htmlhelp_basename = 'wemake-python-styleguidedoc'
159
160
161 # -- Options for LaTeX output ------------------------------------------------
162
163 latex_elements = {
164 # The paper size ('letterpaper' or 'a4paper').
165 #
166 # 'papersize': 'letterpaper',
167
168 # The font size ('10pt', '11pt' or '12pt').
169 #
170 # 'pointsize': '10pt',
171
172 # Additional stuff for the LaTeX preamble.
173 #
174 # 'preamble': '',
175
176 # Latex figure (float) alignment
177 #
178 # 'figure_align': 'htbp',
179 }
180
181 # Grouping the document tree into LaTeX files. List of tuples
182 # (source start file, target name, title,
183 # author, documentclass [howto, manual, or own class]).
184 latex_documents = [
185 (
186 master_doc,
187 'wemake-python-styleguide.tex',
188 'wemake-python-styleguide Documentation',
189 'wemake.services',
190 'manual',
191 ),
192 ]
193
194
195 # -- Options for manual page output ------------------------------------------
196
197 # One entry per manual page. List of tuples
198 # (source start file, name, description, authors, manual section).
199 man_pages = [
200 (
201 master_doc,
202 'wemake-python-styleguide',
203 'wemake-python-styleguide Documentation',
204 [author],
205 1,
206 )
207 ]
208
209
210 # -- Options for Texinfo output ----------------------------------------------
211
212 # Grouping the document tree into Texinfo files. List of tuples
213 # (source start file, target name, title, author,
214 # dir menu entry, description, category)
215 texinfo_documents = [
216 (
217 master_doc,
218 'wemake-python-styleguide',
219 'wemake-python-styleguide Documentation',
220 author,
221 'wemake-python-styleguide',
222 'One line description of project.',
223 'Miscellaneous',
224 ),
225 ]
226
227
228 # -- Extension configuration -------------------------------------------------
229
230 napoleon_numpy_docstring = False
231
232 # -- Options for todo extension ----------------------------------------------
233
234 # If true, `todo` and `todoList` produce output, else they produce nothing.
235 todo_include_todos = True
236
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -55,13 +55,11 @@
'sphinx.ext.coverage',
'sphinx.ext.viewcode',
'sphinx.ext.autosummary',
+ 'sphinx.ext.napoleon',
# Used to include .md files:
'm2r',
- # Used to write python docstrings in a readable way:
- 'sphinxcontrib.napoleon',
-
# Used to insert typehints into the final docs:
'sphinx_autodoc_typehints',
|
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -55,13 +55,11 @@\n 'sphinx.ext.coverage',\n 'sphinx.ext.viewcode',\n 'sphinx.ext.autosummary',\n+ 'sphinx.ext.napoleon',\n \n # Used to include .md files:\n 'm2r',\n \n- # Used to write python docstrings in a readable way:\n- 'sphinxcontrib.napoleon',\n-\n # Used to insert typehints into the final docs:\n 'sphinx_autodoc_typehints',\n", "issue": "Replace `sphinxcontrib-napoleon`\nIt is now bundled with `sphinx` as `sphinx.ext.napoleon`.\r\n\r\nSo, we need to remove this dependency from both:\r\n- `pyproject.toml`\r\n- `docs/requirements.txt`\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Configuration file for the Sphinx documentation builder.\n#\n# This file does only contain a selection of the most common options. For a\n# full list see the documentation:\n# http://www.sphinx-doc.org/en/master/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n\nimport os\nimport sys\nsys.path.insert(0, os.path.abspath('..'))\n\n\n# -- Project information -----------------------------------------------------\n\ndef _get_project_meta():\n import tomlkit\n\n with open('../pyproject.toml') as pyproject:\n contents = pyproject.read()\n\n return tomlkit.parse(contents)['tool']['poetry']\n\n\npkg_meta = _get_project_meta()\nproject = pkg_meta['name']\ncopyright = '2018, wemake.services'\nauthor = 'wemake.services'\n\n# The short X.Y version\nversion = pkg_meta['version']\n# The full version, including alpha/beta/rc tags\nrelease = version\n\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.doctest',\n 'sphinx.ext.todo',\n 'sphinx.ext.coverage',\n 'sphinx.ext.viewcode',\n 'sphinx.ext.autosummary',\n\n # Used to include .md files:\n 'm2r',\n\n # Used to write python docstrings in a readable way:\n 'sphinxcontrib.napoleon',\n\n # Used to insert typehints into the final docs:\n 'sphinx_autodoc_typehints',\n\n # Used to embed values from the source code into the docs:\n 'added_value',\n]\n\nautoclass_content = 'class'\nautodoc_member_order = 'bysource'\n\nautodoc_mock_imports = [\n 'attr',\n]\n\nautodoc_member_order = 'bysource'\nautodoc_default_flags = {\n 'members': '',\n 'undoc-members': 'code,error_template',\n 'exclude-members': '__dict__,__weakref__',\n}\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n\nsource_suffix = ['.rst', '.md']\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path .\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\nadd_module_names = False\n\nautodoc_default_options = {\n 'show-inheritance': True,\n}\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'alabaster'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\nhtml_theme_options = {\n 'sidebar_collapse': False,\n 'show_powered_by': False,\n}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# This is required for the alabaster theme\n# refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars\nhtml_sidebars = {\n '**': [\n 'about.html',\n 'navigation.html',\n 'moreinfo.html',\n 'github.html',\n 'searchbox.html',\n ]\n}\n\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'wemake-python-styleguidedoc'\n\n\n# -- Options for LaTeX output ------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (\n master_doc,\n 'wemake-python-styleguide.tex',\n 'wemake-python-styleguide Documentation',\n 'wemake.services',\n 'manual',\n ),\n]\n\n\n# -- Options for manual page output ------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (\n master_doc,\n 'wemake-python-styleguide',\n 'wemake-python-styleguide Documentation',\n [author],\n 1,\n )\n]\n\n\n# -- Options for Texinfo output ----------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n 'wemake-python-styleguide',\n 'wemake-python-styleguide Documentation',\n author,\n 'wemake-python-styleguide',\n 'One line description of project.',\n 'Miscellaneous',\n ),\n]\n\n\n# -- Extension configuration -------------------------------------------------\n\nnapoleon_numpy_docstring = False\n\n# -- Options for todo extension ----------------------------------------------\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n", "path": "docs/conf.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Configuration file for the Sphinx documentation builder.\n#\n# This file does only contain a selection of the most common options. For a\n# full list see the documentation:\n# http://www.sphinx-doc.org/en/master/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n\nimport os\nimport sys\nsys.path.insert(0, os.path.abspath('..'))\n\n\n# -- Project information -----------------------------------------------------\n\ndef _get_project_meta():\n import tomlkit\n\n with open('../pyproject.toml') as pyproject:\n contents = pyproject.read()\n\n return tomlkit.parse(contents)['tool']['poetry']\n\n\npkg_meta = _get_project_meta()\nproject = pkg_meta['name']\ncopyright = '2018, wemake.services'\nauthor = 'wemake.services'\n\n# The short X.Y version\nversion = pkg_meta['version']\n# The full version, including alpha/beta/rc tags\nrelease = version\n\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.doctest',\n 'sphinx.ext.todo',\n 'sphinx.ext.coverage',\n 'sphinx.ext.viewcode',\n 'sphinx.ext.autosummary',\n 'sphinx.ext.napoleon',\n\n # Used to include .md files:\n 'm2r',\n\n # Used to insert typehints into the final docs:\n 'sphinx_autodoc_typehints',\n\n # Used to embed values from the source code into the docs:\n 'added_value',\n]\n\nautoclass_content = 'class'\nautodoc_member_order = 'bysource'\n\nautodoc_mock_imports = [\n 'attr',\n]\n\nautodoc_member_order = 'bysource'\nautodoc_default_flags = {\n 'members': '',\n 'undoc-members': 'code,error_template',\n 'exclude-members': '__dict__,__weakref__',\n}\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n\nsource_suffix = ['.rst', '.md']\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path .\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\nadd_module_names = False\n\nautodoc_default_options = {\n 'show-inheritance': True,\n}\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'alabaster'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\nhtml_theme_options = {\n 'sidebar_collapse': False,\n 'show_powered_by': False,\n}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# This is required for the alabaster theme\n# refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars\nhtml_sidebars = {\n '**': [\n 'about.html',\n 'navigation.html',\n 'moreinfo.html',\n 'github.html',\n 'searchbox.html',\n ]\n}\n\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'wemake-python-styleguidedoc'\n\n\n# -- Options for LaTeX output ------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (\n master_doc,\n 'wemake-python-styleguide.tex',\n 'wemake-python-styleguide Documentation',\n 'wemake.services',\n 'manual',\n ),\n]\n\n\n# -- Options for manual page output ------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (\n master_doc,\n 'wemake-python-styleguide',\n 'wemake-python-styleguide Documentation',\n [author],\n 1,\n )\n]\n\n\n# -- Options for Texinfo output ----------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n 'wemake-python-styleguide',\n 'wemake-python-styleguide Documentation',\n author,\n 'wemake-python-styleguide',\n 'One line description of project.',\n 'Miscellaneous',\n ),\n]\n\n\n# -- Extension configuration -------------------------------------------------\n\nnapoleon_numpy_docstring = False\n\n# -- Options for todo extension ----------------------------------------------\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n", "path": "docs/conf.py"}]}
| 2,421 | 138 |
gh_patches_debug_61912 | rasdani/github-patches | git_diff | ray-project__ray-8491 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Time to initialize a policy grows linearly with the number of agents
<!--
General questions should be asked on the mailing list [email protected].
Questions about how to use Ray should be asked on
[StackOverflow](https://stackoverflow.com/questions/tagged/ray).
Before submitting an issue, please fill out the following form.
-->
### System information
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Linux Ubuntu 18.04
- **Ray installed from (source or binary)**: Binary
- **Ray version**: 0.7.4
- **Python version**: 3.7.4
- **Exact command to reproduce**: N/A
<!--
You can obtain the Ray version with
python -c "import ray; print(ray.__version__)"
-->
### Describe the problem
<!-- Describe the problem clearly here. -->
I noticed that in multi agent settings, the time to initialize a policy per agent increases as more agents are initialized. In the sample output I provided below, you can see that the time to initialize a single DynamicTFPolicy grows from 4.6 seconds to 15.3 seconds from the first agent to the tenth agent created. Line 291 of `rllib/policy/dynamic_tf_policy.py` is
```python
self._sess.run(tf.global_variables_initializer())
```
which I believe will run one time for each agent initialized. If I'm not mistaken, this means that every variable in the computation graph is being initialized each time that we initialize a DynamicTFPolicy. If initializing a DynamicTFPolicy adds new variables to the computation graph (as I believe it does), this would explain why the time to initialize a DynamicTFPolicy grows over time: We are initializing every variable in the computation graph, and the computation graph is growing. My question is, why does line 291 run a global variables initializer? Is there a reason for this that I can't see inside this method? How hard would it be to modify this to only initialize variables in the individual policy that we care to initialize?
I'm asking this because, as detailed in #5753, I'm trying to modify rllib to allow initialization and removal of policies during training. The overhead incurred by this initialization quickly slows the training script down enough to make it useless. Also, if anyone knows what the resource bottleneck is for policy initialization, that would be very helpful to know when we're picking new hardware. Does it need a ton of cores to run in parallel, or more memory, or a bigger GPU or more GPUs or something? Thanks.
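As a minimal sketch of the scoped-initialization idea raised above, assuming TF1-style graph mode to match the quoted `tf.global_variables_initializer()` call; here `sess` and `scope_name` are stand-ins rather than RLlib internals, and this is not the patch that ultimately landed:

```python
import tensorflow as tf

def initialize_policy_variables(sess, scope_name):
    # Collect only the variables created under this policy's variable scope,
    # so initialization cost stays roughly constant per policy instead of
    # growing with the total size of the shared graph.
    policy_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=scope_name)
    sess.run(tf.variables_initializer(policy_vars))
```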
### Source code / logs
<!-- Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem. -->
```
(pytorch) root@e3a955e42cae:~/bees/bees# python trainer.py settings/settings.json
2019-10-23 12:38:04,168 WARNING worker.py:1426 -- WARNING: Not updating worker name since `setproctitle` is not installed. Install this with `pip install setproctitle` (or ray[debug]) to enable monitoring of worker processes.
2019-10-23 12:38:04,169 INFO resource_spec.py:205 -- Starting Ray with 3.52 GiB memory available for workers and up to 1.78 GiB for objects. You can adjust these settings with ray.init(memory=<bytes>, object_store_memory=<bytes>).
2019-10-23 12:38:05,354 INFO trainer.py:344 -- Tip: set 'eager': true or the --eager flag to enable TensorFlow eager execution
2019-10-23 12:38:05,754 WARNING ppo.py:149 -- Using the simple minibatch optimizer. This will significantly reduce performance, consider simple_optimizer=False.
DTFP: 4.604122s
DTFP: 4.856234s
DTFP: 5.630484s
DTFP: 6.850456s
DTFP: 7.856700s
DTFP: 9.624164s
DTFP: 10.894944s
DTFP: 12.129192s
DTFP: 14.210247s
DTFP: 15.342738s
```
Line 130 in `tf_policy_template.py` (modified to print debug output above)
```python
t = time.time()
DynamicTFPolicy.__init__(
self,
obs_space,
action_space,
config,
loss_fn,
stats_fn=stats_fn,
grad_stats_fn=grad_stats_fn,
before_loss_init=before_loss_init_wrapper,
make_model=make_model,
action_sampler_fn=action_sampler_fn,
existing_model=existing_model,
existing_inputs=existing_inputs,
get_batch_divisibility_req=get_batch_divisibility_req,
obs_include_prev_action_reward=obs_include_prev_action_reward)
print("DTFP: %fs" % (time.time() - t))
```
Snippet of trainer script used.
```python
# pylint: disable=invalid-name
if __name__ == "__main__":
ray.init()
# Get ``settings`` file for now.
settings_file = sys.argv[1]
with open(settings_file, "r") as f:
settings = json.load(f)
env_config = settings["env"]
time_steps = env_config["time_steps"]
space_env = create_env(settings)
env = create_env(settings)
# Register environment
register_env("world", lambda _: env)
# Build environment instance to get ``obs_space``.
obs_space = space_env.observation_space
act_space = space_env.action_space
# You can also have multiple policies per trainer, but here we just
# show one each for PPO and DQN.
policies: Dict[str, Tuple[Any, gym.Space, gym.Space, Dict[Any, Any]]] = {
"0": (PPOTFPolicy, obs_space, act_space, {}),
"1": (PPOTFPolicy, obs_space, act_space, {}),
"2": (PPOTFPolicy, obs_space, act_space, {}),
"3": (PPOTFPolicy, obs_space, act_space, {}),
"4": (PPOTFPolicy, obs_space, act_space, {}),
"5": (PPOTFPolicy, obs_space, act_space, {}),
"6": (PPOTFPolicy, obs_space, act_space, {}),
"7": (PPOTFPolicy, obs_space, act_space, {}),
"8": (PPOTFPolicy, obs_space, act_space, {}),
"9": (PPOTFPolicy, obs_space, act_space, {}),
}
def policy_mapping_fn(agent_id: int) -> str:
""" Returns the given agent's policy identifier. """
return str(agent_id)
ppo_trainer = PPOTrainer(
env="bee_world",
config={
"multiagent": {
"policies": policies,
"policy_mapping_fn": policy_mapping_fn,
"policies_to_train": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"],
},
"simple_optimizer": True,
# Disable filters, otherwise we would need to synchronize those
# as well to the DQN agent.
"observation_filter": "NoFilter",
"num_workers": 2,
"num_gpus": 1,
"train_batch_size": 2,
"sample_batch_size": 1,
"sgd_minibatch_size": 2,
},
)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/ray/experimental/tf_utils.py`
Content:
```
1 from collections import deque, OrderedDict
2 import numpy as np
3
4 from ray.rllib.utils import try_import_tf
5
6 tf = try_import_tf()
7
8
9 def unflatten(vector, shapes):
10 i = 0
11 arrays = []
12 for shape in shapes:
13 size = np.prod(shape, dtype=np.int)
14 array = vector[i:(i + size)].reshape(shape)
15 arrays.append(array)
16 i += size
17 assert len(vector) == i, "Passed weight does not have the correct shape."
18 return arrays
19
20
21 class TensorFlowVariables:
22 """A class used to set and get weights for Tensorflow networks.
23
24 Attributes:
25 sess (tf.Session): The tensorflow session used to run assignment.
26 variables (Dict[str, tf.Variable]): Extracted variables from the loss
27 or additional variables that are passed in.
28 placeholders (Dict[str, tf.placeholders]): Placeholders for weights.
29 assignment_nodes (Dict[str, tf.Tensor]): Nodes that assign weights.
30 """
31
32 def __init__(self, output, sess=None, input_variables=None):
33 """Creates TensorFlowVariables containing extracted variables.
34
35 The variables are extracted by performing a BFS search on the
36 dependency graph with loss as the root node. After the tree is
37 traversed and those variables are collected, we append input_variables
38 to the collected variables. For each variable in the list, the
39 variable has a placeholder and assignment operation created for it.
40
41 Args:
42 output (tf.Operation, List[tf.Operation]): The tensorflow
43 operation to extract all variables from.
44 sess (tf.Session): Session used for running the get and set
45 methods.
46 input_variables (List[tf.Variables]): Variables to include in the
47 list.
48 """
49 self.sess = sess
50 if not isinstance(output, (list, tuple)):
51 output = [output]
52 queue = deque(output)
53 variable_names = []
54 explored_inputs = set(output)
55
56 # We do a BFS on the dependency graph of the input function to find
57 # the variables.
58 while len(queue) != 0:
59 tf_obj = queue.popleft()
60 if tf_obj is None:
61 continue
62 # The object put into the queue is not necessarily an operation,
63 # so we want the op attribute to get the operation underlying the
64 # object. Only operations contain the inputs that we can explore.
65 if hasattr(tf_obj, "op"):
66 tf_obj = tf_obj.op
67 for input_op in tf_obj.inputs:
68 if input_op not in explored_inputs:
69 queue.append(input_op)
70 explored_inputs.add(input_op)
71 # Tensorflow control inputs can be circular, so we keep track of
72 # explored operations.
73 for control in tf_obj.control_inputs:
74 if control not in explored_inputs:
75 queue.append(control)
76 explored_inputs.add(control)
77 if ("Variable" in tf_obj.node_def.op
78 or "VarHandle" in tf_obj.node_def.op):
79 variable_names.append(tf_obj.node_def.name)
80 self.variables = OrderedDict()
81 variable_list = [
82 v for v in tf.global_variables()
83 if v.op.node_def.name in variable_names
84 ]
85 if input_variables is not None:
86 variable_list += input_variables
87 for v in variable_list:
88 self.variables[v.op.node_def.name] = v
89
90 self.placeholders = {}
91 self.assignment_nodes = {}
92
93 # Create new placeholders to put in custom weights.
94 for k, var in self.variables.items():
95 self.placeholders[k] = tf.placeholder(
96 var.value().dtype,
97 var.get_shape().as_list(),
98 name="Placeholder_" + k)
99 self.assignment_nodes[k] = var.assign(self.placeholders[k])
100
101 def set_session(self, sess):
102 """Sets the current session used by the class.
103
104 Args:
105 sess (tf.Session): Session to set the attribute with.
106 """
107 self.sess = sess
108
109 def get_flat_size(self):
110 """Returns the total length of all of the flattened variables.
111
112 Returns:
113 The length of all flattened variables concatenated.
114 """
115 return sum(
116 np.prod(v.get_shape().as_list()) for v in self.variables.values())
117
118 def _check_sess(self):
119 """Checks if the session is set, and if not throw an error message."""
120 assert self.sess is not None, ("The session is not set. Set the "
121 "session either by passing it into the "
122 "TensorFlowVariables constructor or by "
123 "calling set_session(sess).")
124
125 def get_flat(self):
126 """Gets the weights and returns them as a flat array.
127
128 Returns:
129 1D Array containing the flattened weights.
130 """
131 self._check_sess()
132 return np.concatenate([
133 v.eval(session=self.sess).flatten()
134 for v in self.variables.values()
135 ])
136
137 def set_flat(self, new_weights):
138 """Sets the weights to new_weights, converting from a flat array.
139
140 Note:
141 You can only set all weights in the network using this function,
142 i.e., the length of the array must match get_flat_size.
143
144 Args:
145 new_weights (np.ndarray): Flat array containing weights.
146 """
147 self._check_sess()
148 shapes = [v.get_shape().as_list() for v in self.variables.values()]
149 arrays = unflatten(new_weights, shapes)
150 placeholders = [
151 self.placeholders[k] for k, v in self.variables.items()
152 ]
153 self.sess.run(
154 list(self.assignment_nodes.values()),
155 feed_dict=dict(zip(placeholders, arrays)))
156
157 def get_weights(self):
158 """Returns a dictionary containing the weights of the network.
159
160 Returns:
161 Dictionary mapping variable names to their weights.
162 """
163 self._check_sess()
164 return {
165 k: v.eval(session=self.sess)
166 for k, v in self.variables.items()
167 }
168
169 def set_weights(self, new_weights):
170 """Sets the weights to new_weights.
171
172 Note:
173 Can set subsets of variables as well, by only passing in the
174 variables you want to be set.
175
176 Args:
177 new_weights (Dict): Dictionary mapping variable names to their
178 weights.
179 """
180 self._check_sess()
181 assign_list = [
182 self.assignment_nodes[name] for name in new_weights.keys()
183 if name in self.assignment_nodes
184 ]
185 assert assign_list, ("No variables in the input matched those in the "
186 "network. Possible cause: Two networks were "
187 "defined in the same TensorFlow graph. To fix "
188 "this, place each network definition in its own "
189 "tf.Graph.")
190 self.sess.run(
191 assign_list,
192 feed_dict={
193 self.placeholders[name]: value
194 for (name, value) in new_weights.items()
195 if name in self.placeholders
196 })
197
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/python/ray/experimental/tf_utils.py b/python/ray/experimental/tf_utils.py
--- a/python/ray/experimental/tf_utils.py
+++ b/python/ray/experimental/tf_utils.py
@@ -161,10 +161,7 @@
Dictionary mapping variable names to their weights.
"""
self._check_sess()
- return {
- k: v.eval(session=self.sess)
- for k, v in self.variables.items()
- }
+ return self.sess.run(self.variables)
def set_weights(self, new_weights):
"""Sets the weights to new_weights.
|
{"golden_diff": "diff --git a/python/ray/experimental/tf_utils.py b/python/ray/experimental/tf_utils.py\n--- a/python/ray/experimental/tf_utils.py\n+++ b/python/ray/experimental/tf_utils.py\n@@ -161,10 +161,7 @@\n Dictionary mapping variable names to their weights.\n \"\"\"\n self._check_sess()\n- return {\n- k: v.eval(session=self.sess)\n- for k, v in self.variables.items()\n- }\n+ return self.sess.run(self.variables)\n \n def set_weights(self, new_weights):\n \"\"\"Sets the weights to new_weights.\n", "issue": "Time to initialize a policy grows linearly with the number of agents\n<!--\r\nGeneral questions should be asked on the mailing list [email protected].\r\nQuestions about how to use Ray should be asked on\r\n[StackOverflow](https://stackoverflow.com/questions/tagged/ray).\r\n\r\nBefore submitting an issue, please fill out the following form.\r\n-->\r\n\r\n### System information\r\n- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Linux Ubuntu 18.04\r\n- **Ray installed from (source or binary)**: Binary\r\n- **Ray version**: 0.7.4\r\n- **Python version**: 3.7.4\r\n- **Exact command to reproduce**: N/A\r\n\r\n<!--\r\nYou can obtain the Ray version with\r\n\r\npython -c \"import ray; print(ray.__version__)\"\r\n-->\r\n\r\n### Describe the problem\r\n<!-- Describe the problem clearly here. -->\r\nI noticed that in multi agent settings, the time to initialize a policy per agent increases as more agents are initialized. In the sample output I provided below, you can see that the time to initialize a single DynamicTFPolicy grows from 4.6 seconds to 15.3 seconds from the first agent to the tenth agent created. Line 291 of `rllib/policy/dynamic_tf_policy.py` is\r\n```python\r\nself._sess.run(tf.global_variables_initializer())\r\n```\r\nwhich I believe will run one time for each agent initialized. If I'm not mistaken, this means that every variable in the computation graph is being initialized each time that we initialize a DynamicTFPolicy. If initializing a DynamicTFPolicy adds new variables to the computation graph (as I believe it does), this would explain why the time to initialize a DynamicTFPolicy grows over time: We are initializing every variable in the computation graph, and the computation graph is growing. My question is, why does line 291 run a global variables initializer? Is there a reason for this that I can't see inside this method? How hard would it be to modify this to only initialize variables in the individual policy that we care to initialize?\r\n\r\nI'm asking this because as detailed in #5753, I'm trying to modify rllib to allow initialization and removal of policies during training. The overhead incurred by this initialization quickly slows the training script down enough to be useless. Also, if anyone knows what the resource bottleneck is for policy initialization, that would be very helpful to know for when we're picking new hardware. Does it need a ton of cores to run in parallel, or more memory, or a bigger GPU or more GPUs or something? Thanks.\r\n\r\n\r\n### Source code / logs\r\n<!-- Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem. 
-->\r\n\r\n```\r\n(pytorch) root@e3a955e42cae:~/bees/bees# python trainer.py settings/settings.json\r\n2019-10-23 12:38:04,168 WARNING worker.py:1426 -- WARNING: Not updating worker name since `setproctitle` is not installed. Install this with `pip install setproctitle` (or ray[debug]) to enable monitoring of worker processes.\r\n2019-10-23 12:38:04,169 INFO resource_spec.py:205 -- Starting Ray with 3.52 GiB memory available for workers and up to 1.78 GiB for objects. You can adjust these settings with ray.init(memory=<bytes>, object_store_memory=<bytes>).\r\n2019-10-23 12:38:05,354 INFO trainer.py:344 -- Tip: set 'eager': true or the --eager flag to enable TensorFlow eager execution\r\n2019-10-23 12:38:05,754 WARNING ppo.py:149 -- Using the simple minibatch optimizer. This will significantly reduce performance, consider simple_optimizer=False.\r\nDTFP: 4.604122s\r\nDTFP: 4.856234s\r\nDTFP: 5.630484s\r\nDTFP: 6.850456s\r\nDTFP: 7.856700s\r\nDTFP: 9.624164s\r\nDTFP: 10.894944s\r\nDTFP: 12.129192s\r\nDTFP: 14.210247s\r\nDTFP: 15.342738s\r\n```\r\n\r\nLine 130 in `tf_policy_template.py` (modified to print debug output above)\r\n```python\r\n t = time.time()\r\n\r\n DynamicTFPolicy.__init__(\r\n self,\r\n obs_space,\r\n action_space,\r\n config,\r\n loss_fn,\r\n stats_fn=stats_fn,\r\n grad_stats_fn=grad_stats_fn,\r\n before_loss_init=before_loss_init_wrapper,\r\n make_model=make_model,\r\n action_sampler_fn=action_sampler_fn,\r\n existing_model=existing_model,\r\n existing_inputs=existing_inputs,\r\n get_batch_divisibility_req=get_batch_divisibility_req,\r\n obs_include_prev_action_reward=obs_include_prev_action_reward)\r\n\r\n print(\"DTFP: %fs\" % (time.time() - t))\r\n```\r\n\r\nSnippet of trainer script used.\r\n```python\r\n# pylint: disable=invalid-name\r\nif __name__ == \"__main__\":\r\n ray.init()\r\n\r\n # Get ``settings`` file for now.\r\n settings_file = sys.argv[1]\r\n with open(settings_file, \"r\") as f:\r\n settings = json.load(f)\r\n\r\n env_config = settings[\"env\"]\r\n time_steps = env_config[\"time_steps\"]\r\n\r\n space_env = create_env(settings)\r\n env = create_env(settings)\r\n\r\n # Register environment\r\n register_env(\"world\", lambda _: env)\r\n\r\n # Build environment instance to get ``obs_space``.\r\n obs_space = space_env.observation_space\r\n act_space = space_env.action_space\r\n\r\n # You can also have multiple policies per trainer, but here we just\r\n # show one each for PPO and DQN.\r\n policies: Dict[str, Tuple[Any, gym.Space, gym.Space, Dict[Any, Any]]] = {\r\n \"0\": (PPOTFPolicy, obs_space, act_space, {}),\r\n \"1\": (PPOTFPolicy, obs_space, act_space, {}),\r\n \"2\": (PPOTFPolicy, obs_space, act_space, {}),\r\n \"3\": (PPOTFPolicy, obs_space, act_space, {}),\r\n \"4\": (PPOTFPolicy, obs_space, act_space, {}),\r\n \"5\": (PPOTFPolicy, obs_space, act_space, {}),\r\n \"6\": (PPOTFPolicy, obs_space, act_space, {}),\r\n \"7\": (PPOTFPolicy, obs_space, act_space, {}),\r\n \"8\": (PPOTFPolicy, obs_space, act_space, {}),\r\n \"9\": (PPOTFPolicy, obs_space, act_space, {}),\r\n }\r\n\r\n def policy_mapping_fn(agent_id: int) -> str:\r\n \"\"\" Returns the given agent's policy identifier. 
\"\"\"\r\n return str(agent_id)\r\n\r\n ppo_trainer = PPOTrainer(\r\n env=\"bee_world\",\r\n config={\r\n \"multiagent\": {\r\n \"policies\": policies,\r\n \"policy_mapping_fn\": policy_mapping_fn,\r\n \"policies_to_train\": [\"0\", \"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\"],\r\n },\r\n \"simple_optimizer\": True,\r\n # Disable filters, otherwise we would need to synchronize those\r\n # as well to the DQN agent.\r\n \"observation_filter\": \"NoFilter\",\r\n \"num_workers\": 2,\r\n \"num_gpus\": 1,\r\n \"train_batch_size\": 2,\r\n \"sample_batch_size\": 1,\r\n \"sgd_minibatch_size\": 2,\r\n },\r\n )\r\n```\r\n\r\n\r\n\n", "before_files": [{"content": "from collections import deque, OrderedDict\nimport numpy as np\n\nfrom ray.rllib.utils import try_import_tf\n\ntf = try_import_tf()\n\n\ndef unflatten(vector, shapes):\n i = 0\n arrays = []\n for shape in shapes:\n size = np.prod(shape, dtype=np.int)\n array = vector[i:(i + size)].reshape(shape)\n arrays.append(array)\n i += size\n assert len(vector) == i, \"Passed weight does not have the correct shape.\"\n return arrays\n\n\nclass TensorFlowVariables:\n \"\"\"A class used to set and get weights for Tensorflow networks.\n\n Attributes:\n sess (tf.Session): The tensorflow session used to run assignment.\n variables (Dict[str, tf.Variable]): Extracted variables from the loss\n or additional variables that are passed in.\n placeholders (Dict[str, tf.placeholders]): Placeholders for weights.\n assignment_nodes (Dict[str, tf.Tensor]): Nodes that assign weights.\n \"\"\"\n\n def __init__(self, output, sess=None, input_variables=None):\n \"\"\"Creates TensorFlowVariables containing extracted variables.\n\n The variables are extracted by performing a BFS search on the\n dependency graph with loss as the root node. After the tree is\n traversed and those variables are collected, we append input_variables\n to the collected variables. For each variable in the list, the\n variable has a placeholder and assignment operation created for it.\n\n Args:\n output (tf.Operation, List[tf.Operation]): The tensorflow\n operation to extract all variables from.\n sess (tf.Session): Session used for running the get and set\n methods.\n input_variables (List[tf.Variables]): Variables to include in the\n list.\n \"\"\"\n self.sess = sess\n if not isinstance(output, (list, tuple)):\n output = [output]\n queue = deque(output)\n variable_names = []\n explored_inputs = set(output)\n\n # We do a BFS on the dependency graph of the input function to find\n # the variables.\n while len(queue) != 0:\n tf_obj = queue.popleft()\n if tf_obj is None:\n continue\n # The object put into the queue is not necessarily an operation,\n # so we want the op attribute to get the operation underlying the\n # object. 
Only operations contain the inputs that we can explore.\n if hasattr(tf_obj, \"op\"):\n tf_obj = tf_obj.op\n for input_op in tf_obj.inputs:\n if input_op not in explored_inputs:\n queue.append(input_op)\n explored_inputs.add(input_op)\n # Tensorflow control inputs can be circular, so we keep track of\n # explored operations.\n for control in tf_obj.control_inputs:\n if control not in explored_inputs:\n queue.append(control)\n explored_inputs.add(control)\n if (\"Variable\" in tf_obj.node_def.op\n or \"VarHandle\" in tf_obj.node_def.op):\n variable_names.append(tf_obj.node_def.name)\n self.variables = OrderedDict()\n variable_list = [\n v for v in tf.global_variables()\n if v.op.node_def.name in variable_names\n ]\n if input_variables is not None:\n variable_list += input_variables\n for v in variable_list:\n self.variables[v.op.node_def.name] = v\n\n self.placeholders = {}\n self.assignment_nodes = {}\n\n # Create new placeholders to put in custom weights.\n for k, var in self.variables.items():\n self.placeholders[k] = tf.placeholder(\n var.value().dtype,\n var.get_shape().as_list(),\n name=\"Placeholder_\" + k)\n self.assignment_nodes[k] = var.assign(self.placeholders[k])\n\n def set_session(self, sess):\n \"\"\"Sets the current session used by the class.\n\n Args:\n sess (tf.Session): Session to set the attribute with.\n \"\"\"\n self.sess = sess\n\n def get_flat_size(self):\n \"\"\"Returns the total length of all of the flattened variables.\n\n Returns:\n The length of all flattened variables concatenated.\n \"\"\"\n return sum(\n np.prod(v.get_shape().as_list()) for v in self.variables.values())\n\n def _check_sess(self):\n \"\"\"Checks if the session is set, and if not throw an error message.\"\"\"\n assert self.sess is not None, (\"The session is not set. 
Set the \"\n \"session either by passing it into the \"\n \"TensorFlowVariables constructor or by \"\n \"calling set_session(sess).\")\n\n def get_flat(self):\n \"\"\"Gets the weights and returns them as a flat array.\n\n Returns:\n 1D Array containing the flattened weights.\n \"\"\"\n self._check_sess()\n return np.concatenate([\n v.eval(session=self.sess).flatten()\n for v in self.variables.values()\n ])\n\n def set_flat(self, new_weights):\n \"\"\"Sets the weights to new_weights, converting from a flat array.\n\n Note:\n You can only set all weights in the network using this function,\n i.e., the length of the array must match get_flat_size.\n\n Args:\n new_weights (np.ndarray): Flat array containing weights.\n \"\"\"\n self._check_sess()\n shapes = [v.get_shape().as_list() for v in self.variables.values()]\n arrays = unflatten(new_weights, shapes)\n placeholders = [\n self.placeholders[k] for k, v in self.variables.items()\n ]\n self.sess.run(\n list(self.assignment_nodes.values()),\n feed_dict=dict(zip(placeholders, arrays)))\n\n def get_weights(self):\n \"\"\"Returns a dictionary containing the weights of the network.\n\n Returns:\n Dictionary mapping variable names to their weights.\n \"\"\"\n self._check_sess()\n return {\n k: v.eval(session=self.sess)\n for k, v in self.variables.items()\n }\n\n def set_weights(self, new_weights):\n \"\"\"Sets the weights to new_weights.\n\n Note:\n Can set subsets of variables as well, by only passing in the\n variables you want to be set.\n\n Args:\n new_weights (Dict): Dictionary mapping variable names to their\n weights.\n \"\"\"\n self._check_sess()\n assign_list = [\n self.assignment_nodes[name] for name in new_weights.keys()\n if name in self.assignment_nodes\n ]\n assert assign_list, (\"No variables in the input matched those in the \"\n \"network. Possible cause: Two networks were \"\n \"defined in the same TensorFlow graph. To fix \"\n \"this, place each network definition in its own \"\n \"tf.Graph.\")\n self.sess.run(\n assign_list,\n feed_dict={\n self.placeholders[name]: value\n for (name, value) in new_weights.items()\n if name in self.placeholders\n })\n", "path": "python/ray/experimental/tf_utils.py"}], "after_files": [{"content": "from collections import deque, OrderedDict\nimport numpy as np\n\nfrom ray.rllib.utils import try_import_tf\n\ntf = try_import_tf()\n\n\ndef unflatten(vector, shapes):\n i = 0\n arrays = []\n for shape in shapes:\n size = np.prod(shape, dtype=np.int)\n array = vector[i:(i + size)].reshape(shape)\n arrays.append(array)\n i += size\n assert len(vector) == i, \"Passed weight does not have the correct shape.\"\n return arrays\n\n\nclass TensorFlowVariables:\n \"\"\"A class used to set and get weights for Tensorflow networks.\n\n Attributes:\n sess (tf.Session): The tensorflow session used to run assignment.\n variables (Dict[str, tf.Variable]): Extracted variables from the loss\n or additional variables that are passed in.\n placeholders (Dict[str, tf.placeholders]): Placeholders for weights.\n assignment_nodes (Dict[str, tf.Tensor]): Nodes that assign weights.\n \"\"\"\n\n def __init__(self, output, sess=None, input_variables=None):\n \"\"\"Creates TensorFlowVariables containing extracted variables.\n\n The variables are extracted by performing a BFS search on the\n dependency graph with loss as the root node. After the tree is\n traversed and those variables are collected, we append input_variables\n to the collected variables. 
For each variable in the list, the\n variable has a placeholder and assignment operation created for it.\n\n Args:\n output (tf.Operation, List[tf.Operation]): The tensorflow\n operation to extract all variables from.\n sess (tf.Session): Session used for running the get and set\n methods.\n input_variables (List[tf.Variables]): Variables to include in the\n list.\n \"\"\"\n self.sess = sess\n if not isinstance(output, (list, tuple)):\n output = [output]\n queue = deque(output)\n variable_names = []\n explored_inputs = set(output)\n\n # We do a BFS on the dependency graph of the input function to find\n # the variables.\n while len(queue) != 0:\n tf_obj = queue.popleft()\n if tf_obj is None:\n continue\n # The object put into the queue is not necessarily an operation,\n # so we want the op attribute to get the operation underlying the\n # object. Only operations contain the inputs that we can explore.\n if hasattr(tf_obj, \"op\"):\n tf_obj = tf_obj.op\n for input_op in tf_obj.inputs:\n if input_op not in explored_inputs:\n queue.append(input_op)\n explored_inputs.add(input_op)\n # Tensorflow control inputs can be circular, so we keep track of\n # explored operations.\n for control in tf_obj.control_inputs:\n if control not in explored_inputs:\n queue.append(control)\n explored_inputs.add(control)\n if (\"Variable\" in tf_obj.node_def.op\n or \"VarHandle\" in tf_obj.node_def.op):\n variable_names.append(tf_obj.node_def.name)\n self.variables = OrderedDict()\n variable_list = [\n v for v in tf.global_variables()\n if v.op.node_def.name in variable_names\n ]\n if input_variables is not None:\n variable_list += input_variables\n for v in variable_list:\n self.variables[v.op.node_def.name] = v\n\n self.placeholders = {}\n self.assignment_nodes = {}\n\n # Create new placeholders to put in custom weights.\n for k, var in self.variables.items():\n self.placeholders[k] = tf.placeholder(\n var.value().dtype,\n var.get_shape().as_list(),\n name=\"Placeholder_\" + k)\n self.assignment_nodes[k] = var.assign(self.placeholders[k])\n\n def set_session(self, sess):\n \"\"\"Sets the current session used by the class.\n\n Args:\n sess (tf.Session): Session to set the attribute with.\n \"\"\"\n self.sess = sess\n\n def get_flat_size(self):\n \"\"\"Returns the total length of all of the flattened variables.\n\n Returns:\n The length of all flattened variables concatenated.\n \"\"\"\n return sum(\n np.prod(v.get_shape().as_list()) for v in self.variables.values())\n\n def _check_sess(self):\n \"\"\"Checks if the session is set, and if not throw an error message.\"\"\"\n assert self.sess is not None, (\"The session is not set. 
Set the \"\n \"session either by passing it into the \"\n \"TensorFlowVariables constructor or by \"\n \"calling set_session(sess).\")\n\n def get_flat(self):\n \"\"\"Gets the weights and returns them as a flat array.\n\n Returns:\n 1D Array containing the flattened weights.\n \"\"\"\n self._check_sess()\n return np.concatenate([\n v.eval(session=self.sess).flatten()\n for v in self.variables.values()\n ])\n\n def set_flat(self, new_weights):\n \"\"\"Sets the weights to new_weights, converting from a flat array.\n\n Note:\n You can only set all weights in the network using this function,\n i.e., the length of the array must match get_flat_size.\n\n Args:\n new_weights (np.ndarray): Flat array containing weights.\n \"\"\"\n self._check_sess()\n shapes = [v.get_shape().as_list() for v in self.variables.values()]\n arrays = unflatten(new_weights, shapes)\n placeholders = [\n self.placeholders[k] for k, v in self.variables.items()\n ]\n self.sess.run(\n list(self.assignment_nodes.values()),\n feed_dict=dict(zip(placeholders, arrays)))\n\n def get_weights(self):\n \"\"\"Returns a dictionary containing the weights of the network.\n\n Returns:\n Dictionary mapping variable names to their weights.\n \"\"\"\n self._check_sess()\n return self.sess.run(self.variables)\n\n def set_weights(self, new_weights):\n \"\"\"Sets the weights to new_weights.\n\n Note:\n Can set subsets of variables as well, by only passing in the\n variables you want to be set.\n\n Args:\n new_weights (Dict): Dictionary mapping variable names to their\n weights.\n \"\"\"\n self._check_sess()\n assign_list = [\n self.assignment_nodes[name] for name in new_weights.keys()\n if name in self.assignment_nodes\n ]\n assert assign_list, (\"No variables in the input matched those in the \"\n \"network. Possible cause: Two networks were \"\n \"defined in the same TensorFlow graph. To fix \"\n \"this, place each network definition in its own \"\n \"tf.Graph.\")\n self.sess.run(\n assign_list,\n feed_dict={\n self.placeholders[name]: value\n for (name, value) in new_weights.items()\n if name in self.placeholders\n })\n", "path": "python/ray/experimental/tf_utils.py"}]}
| 3,966 | 136 |
gh_patches_debug_20709 | rasdani/github-patches | git_diff | keras-team__keras-nlp-131 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error with tf.data and MLMMaskGenerator for dense inputs
When using the MLMMaskGenerator to map over dense, batched inputs in a tf.data.Dataset, we get the following error...
`TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'`
This colab has a reproduction https://colab.research.google.com/gist/mattdangerw/4596df85105ff6e6731128fc79d16bf3/mlmmaskgenerator-bug.ipynb
tf.data + dense, batched inputs might be the most common use case for this layer, so this is an important one to fix.
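A minimal sketch of the failing pattern, reusing the constructor arguments from the layer's own docstring; the exact values used in the linked colab may differ:

```python
import tensorflow as tf
import keras_nlp

masker = keras_nlp.layers.preprocessing.MLMMaskGenerator(
    vocabulary_size=10, mask_selection_rate=0.5, mask_token_id=0,
    mask_selection_length=5)

# Dense, batched integer inputs mapped through tf.data hit the error.
ds = tf.data.Dataset.from_tensor_slices(tf.ones((8, 4), dtype=tf.int64))
ds = ds.batch(2).map(masker)  # raises the TypeError above before the fix
```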
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `keras_nlp/layers/preprocessing/mlm_mask_generator.py`
Content:
```
1 # Copyright 2022 The KerasNLP Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import tensorflow as tf
16 import tensorflow_text as tf_text
17 from tensorflow import keras
18
19
20 class MLMMaskGenerator(keras.layers.Layer):
21 """Layer that applies language model masking.
22
23 This layer is useful for preparing inputs for masked languaged modeling
24 (MLM) tasks. It follows the masking strategy described in the [original BERT
25 paper](https://arxiv.org/abs/1810.04805). Given tokenized text,
26 it randomly selects certain number of tokens for masking. Then for each
27 selected token, it has a chance (configurable) to be replaced by
28 "mask token" or random token, or stay unchanged.
29
30 Users should use this layer with `tf.data` to generate masks.
31
32 Args:
33 vocabulary_size: int, the size of the vocabulary.
34 mask_selection_rate: float, the probability of a token is selected for
35 masking.
36 mask_token_id: int. The id of mask token.
37 mask_selection_length: int, defaults to None. Maximum number of tokens
38 selected for masking in each sequence. If set, the output
39 `mask_positions`, `mask_ids` and `mask_weights` will be padded
40 to dense tensors of length `mask_selection_length`,
41 otherwise the output will be a RaggedTensor.
42 unselectable_token_ids: A list of tokens, defaults to [0] (the default
43 `padding_token_id`).
44 mask_token_rate: float, defaults to 0.8. `mask_token_rate` must be
45 between 0 and 1 which indicates how often the mask_token is
46 substituted for tokens selected for masking.
47 random_token_rate: float, defaults to 0.1. `random_token_rate` must be
48 between 0 and 1 which indicates how often a random token is
49 substituted for tokens selected for masking. Default is 0.1.
50 Note: mask_token_rate + random_token_rate <= 1, and for
51 (1 - mask_token_rate - random_token_rate), the token will not be
52 changed.
53
54 Input:
55 A 1D integer tensor of shape [sequence_length] or a 2D integer tensor
56 of shape [batch_size, sequence_length], or a 2D integer RaggedTensor.
57 Represents the sequence to mask.
58
59 Returns:
60 A Dict with 4 keys:
61 tokens: Tensor or RaggedTensor, has the same type and shape of
62 input. Sequence after getting masked.
63 mask_positions: Tensor, or RaggedTensor if `mask_selection_length`
64 is None. The positions of tokens getting masked.
65 mask_ids: Tensor, or RaggedTensor if `mask_selection_length` is
66 None. The original token ids at masked positions.
67 mask_weights: Tensor, or RaggedTensor if `mask_selection_length` is
68 None. `mask_weights` has the same shape as `mask_positions` and
69 `mask_ids`. Each element in `mask_weights` should be 0 or 1,
70 1 means the corresponding position in `mask_positions` is an
71 actual mask, 0 means it is a pad.
72
73 Examples:
74
75 Basic usage.
76 >>> masker = keras_nlp.layers.preprocessing.MLMMaskGenerator( \
77 vocabulary_size=10, mask_selection_rate=0.2, mask_token_id=0, \
78 mask_selection_length=5)
79 >>> masker(tf.constant([1, 2, 3, 4, 5]))
80
81 Ragged Input:
82 >>> masker = keras_nlp.layers.preprocessing.MLMMaskGenerator( \
83 vocabulary_size=10, mask_selection_rate=0.5, mask_token_id=0, \
84 mask_selection_length=5)
85 >>> masker(tf.ragged.constant([[1, 2], [1, 2, 3, 4]]))
86 """
87
88 def __init__(
89 self,
90 vocabulary_size,
91 mask_selection_rate,
92 mask_token_id,
93 mask_selection_length=None,
94 unselectable_token_ids=[0],
95 mask_token_rate=0.8,
96 random_token_rate=0.1,
97 **kwargs,
98 ):
99 super().__init__(**kwargs)
100 self.vocabulary_size = vocabulary_size
101 self.unselectable_token_ids = unselectable_token_ids
102 self.mask_selection_rate = mask_selection_rate
103 self.mask_selection_length = mask_selection_length
104 self.mask_token_rate = mask_token_rate
105 self.random_token_rate = random_token_rate
106
107 if mask_token_id >= vocabulary_size:
108 raise ValueError(
109 f"Mask token id should be in range [0, vocabulary_size - 1], "
110 f"but received mask_token_id={mask_token_id}."
111 )
112 self.mask_token_id = mask_token_id
113
114 max_selections = self.mask_selection_length
115 if max_selections is None:
116 # Set a large number to remove the `max_selections_per_batch` cap.
117 max_selections = 2**31 - 1
118 self._random_selector = tf_text.RandomItemSelector(
119 max_selections_per_batch=max_selections,
120 selection_rate=self.mask_selection_rate,
121 unselectable_ids=self.unselectable_token_ids,
122 )
123 self._mask_values_chooser = tf_text.MaskValuesChooser(
124 self.vocabulary_size,
125 self.mask_token_id,
126 mask_token_rate=self.mask_token_rate,
127 random_token_rate=self.random_token_rate,
128 )
129
130 def call(self, inputs):
131 input_is_ragged = isinstance(inputs, tf.RaggedTensor)
132 input_is_1d = tf.rank(inputs) == 1
133 if input_is_1d:
134 # If inputs is of rank 1, we manually add the batch axis.
135 inputs = inputs[tf.newaxis, :]
136 if not input_is_ragged:
137 # `tf_text.mask_language_model` requires a ragged tensor, so
138 # convert dense to ragged.
139 inputs = tf.RaggedTensor.from_tensor(inputs)
140 (tokens, mask_positions, mask_ids,) = tf_text.mask_language_model(
141 inputs,
142 item_selector=self._random_selector,
143 mask_values_chooser=self._mask_values_chooser,
144 )
145
146 if not input_is_ragged:
147 # If we converted the input from dense to ragged, convert back.
148 tokens = tokens.to_tensor()
149
150 mask_weights = tf.ones_like(mask_positions, self.compute_dtype)
151 # If mask_selection_length is set, covert to raggeds to dense.
152 if self.mask_selection_length:
153 target_shape = tf.cast([-1, self.mask_selection_length], tf.int64)
154 mask_positions = mask_positions.to_tensor(shape=target_shape)
155 mask_ids = mask_ids.to_tensor(shape=target_shape)
156 mask_weights = mask_weights.to_tensor(shape=target_shape)
157
158 if input_is_1d:
159 # If inputs is 1D, we format the output to be 1D as well.
160 tokens = tf.squeeze(tokens, axis=0)
161 mask_positions = tf.squeeze(mask_positions, axis=0)
162 mask_ids = tf.squeeze(mask_ids, axis=0)
163 mask_weights = tf.squeeze(mask_weights, axis=0)
164
165 output_dict = {
166 "tokens": tokens,
167 "mask_positions": mask_positions,
168 "mask_ids": mask_ids,
169 "mask_weights": mask_weights,
170 }
171 return output_dict
172
173 def get_config(self):
174 config = super().get_config()
175 config.update(
176 {
177 "vocabulary_size": self.vocabulary_size,
178 "mask_selection_rate": self.mask_selection_rate,
179 "mask_selection_length": self.mask_selection_length,
180 "unselectable_token_ids": self.unselectable_token_ids,
181 "mask_token_id": self.mask_token_id,
182 "mask_token_rate": self.mask_token_rate,
183 "random_token_rate": self.random_token_rate,
184 }
185 )
186 return config
187
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/keras_nlp/layers/preprocessing/mlm_mask_generator.py b/keras_nlp/layers/preprocessing/mlm_mask_generator.py
--- a/keras_nlp/layers/preprocessing/mlm_mask_generator.py
+++ b/keras_nlp/layers/preprocessing/mlm_mask_generator.py
@@ -129,7 +129,7 @@
def call(self, inputs):
input_is_ragged = isinstance(inputs, tf.RaggedTensor)
- input_is_1d = tf.rank(inputs) == 1
+ input_is_1d = inputs.shape.rank == 1
if input_is_1d:
# If inputs is of rank 1, we manually add the batch axis.
inputs = inputs[tf.newaxis, :]
@@ -137,6 +137,7 @@
# `tf_text.mask_language_model` requires a ragged tensor, so
# convert dense to ragged.
inputs = tf.RaggedTensor.from_tensor(inputs)
+
(tokens, mask_positions, mask_ids,) = tf_text.mask_language_model(
inputs,
item_selector=self._random_selector,
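The patch swaps the dynamic rank check for a static one. A small standalone illustration of the difference (plain TensorFlow, not layer code): `Tensor.shape.rank` is a plain Python value available while `tf.data` traces the mapped function, whereas `tf.rank` produces a tensor there.

```python
import tensorflow as tf

x = tf.ones((2, 4))
print(x.shape.rank)  # 2, a plain Python int, safe to branch on during tracing
print(tf.rank(x))    # tf.Tensor(2, shape=(), dtype=int32), symbolic inside tf.function/tf.data
```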
|
{"golden_diff": "diff --git a/keras_nlp/layers/preprocessing/mlm_mask_generator.py b/keras_nlp/layers/preprocessing/mlm_mask_generator.py\n--- a/keras_nlp/layers/preprocessing/mlm_mask_generator.py\n+++ b/keras_nlp/layers/preprocessing/mlm_mask_generator.py\n@@ -129,7 +129,7 @@\n \n def call(self, inputs):\n input_is_ragged = isinstance(inputs, tf.RaggedTensor)\n- input_is_1d = tf.rank(inputs) == 1\n+ input_is_1d = inputs.shape.rank == 1\n if input_is_1d:\n # If inputs is of rank 1, we manually add the batch axis.\n inputs = inputs[tf.newaxis, :]\n@@ -137,6 +137,7 @@\n # `tf_text.mask_language_model` requires a ragged tensor, so\n # convert dense to ragged.\n inputs = tf.RaggedTensor.from_tensor(inputs)\n+\n (tokens, mask_positions, mask_ids,) = tf_text.mask_language_model(\n inputs,\n item_selector=self._random_selector,\n", "issue": "Error with tf.data and MLMMaskGenerator for dense inputs\nWhen using the MLMMaskGenerator for to map over dense, batched inputs in a tf.data.Dataset, we get the following error...\r\n\r\n`TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'`\r\n\r\nThis colab has a reproduction https://colab.research.google.com/gist/mattdangerw/4596df85105ff6e6731128fc79d16bf3/mlmmaskgenerator-bug.ipynb\r\n\r\ntf.data + dense, batched inputs might be the most common use case for this layer, so this is an important one to fix.\n", "before_files": [{"content": "# Copyright 2022 The KerasNLP Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport tensorflow as tf\nimport tensorflow_text as tf_text\nfrom tensorflow import keras\n\n\nclass MLMMaskGenerator(keras.layers.Layer):\n \"\"\"Layer that applies language model masking.\n\n This layer is useful for preparing inputs for masked languaged modeling\n (MLM) tasks. It follows the masking strategy described in the [original BERT\n paper](https://arxiv.org/abs/1810.04805). Given tokenized text,\n it randomly selects certain number of tokens for masking. Then for each\n selected token, it has a chance (configurable) to be replaced by\n \"mask token\" or random token, or stay unchanged.\n\n Users should use this layer with `tf.data` to generate masks.\n\n Args:\n vocabulary_size: int, the size of the vocabulary.\n mask_selection_rate: float, the probability of a token is selected for\n masking.\n mask_token_id: int. The id of mask token.\n mask_selection_length: int, defaults to None. Maximum number of tokens\n selected for masking in each sequence. If set, the output\n `mask_positions`, `mask_ids` and `mask_weights` will be padded\n to dense tensors of length `mask_selection_length`,\n otherwise the output will be a RaggedTensor.\n unselectable_token_ids: A list of tokens, defaults to [0] (the default\n `padding_token_id`).\n mask_token_rate: float, defaults to 0.8. `mask_token_rate` must be\n between 0 and 1 which indicates how often the mask_token is\n substituted for tokens selected for masking.\n random_token_rate: float, defaults to 0.1. 
`random_token_rate` must be\n between 0 and 1 which indicates how often a random token is\n substituted for tokens selected for masking. Default is 0.1.\n Note: mask_token_rate + random_token_rate <= 1, and for\n (1 - mask_token_rate - random_token_rate), the token will not be\n changed.\n\n Input:\n A 1D integer tensor of shape [sequence_length] or a 2D integer tensor\n of shape [batch_size, sequence_length], or a 2D integer RaggedTensor.\n Represents the sequence to mask.\n\n Returns:\n A Dict with 4 keys:\n tokens: Tensor or RaggedTensor, has the same type and shape of\n input. Sequence after getting masked.\n mask_positions: Tensor, or RaggedTensor if `mask_selection_length`\n is None. The positions of tokens getting masked.\n mask_ids: Tensor, or RaggedTensor if `mask_selection_length` is\n None. The original token ids at masked positions.\n mask_weights: Tensor, or RaggedTensor if `mask_selection_length` is\n None. `mask_weights` has the same shape as `mask_positions` and\n `mask_ids`. Each element in `mask_weights` should be 0 or 1,\n 1 means the corresponding position in `mask_positions` is an\n actual mask, 0 means it is a pad.\n\n Examples:\n\n Basic usage.\n >>> masker = keras_nlp.layers.preprocessing.MLMMaskGenerator( \\\n vocabulary_size=10, mask_selection_rate=0.2, mask_token_id=0, \\\n mask_selection_length=5)\n >>> masker(tf.constant([1, 2, 3, 4, 5]))\n\n Ragged Input:\n >>> masker = keras_nlp.layers.preprocessing.MLMMaskGenerator( \\\n vocabulary_size=10, mask_selection_rate=0.5, mask_token_id=0, \\\n mask_selection_length=5)\n >>> masker(tf.ragged.constant([[1, 2], [1, 2, 3, 4]]))\n \"\"\"\n\n def __init__(\n self,\n vocabulary_size,\n mask_selection_rate,\n mask_token_id,\n mask_selection_length=None,\n unselectable_token_ids=[0],\n mask_token_rate=0.8,\n random_token_rate=0.1,\n **kwargs,\n ):\n super().__init__(**kwargs)\n self.vocabulary_size = vocabulary_size\n self.unselectable_token_ids = unselectable_token_ids\n self.mask_selection_rate = mask_selection_rate\n self.mask_selection_length = mask_selection_length\n self.mask_token_rate = mask_token_rate\n self.random_token_rate = random_token_rate\n\n if mask_token_id >= vocabulary_size:\n raise ValueError(\n f\"Mask token id should be in range [0, vocabulary_size - 1], \"\n f\"but received mask_token_id={mask_token_id}.\"\n )\n self.mask_token_id = mask_token_id\n\n max_selections = self.mask_selection_length\n if max_selections is None:\n # Set a large number to remove the `max_selections_per_batch` cap.\n max_selections = 2**31 - 1\n self._random_selector = tf_text.RandomItemSelector(\n max_selections_per_batch=max_selections,\n selection_rate=self.mask_selection_rate,\n unselectable_ids=self.unselectable_token_ids,\n )\n self._mask_values_chooser = tf_text.MaskValuesChooser(\n self.vocabulary_size,\n self.mask_token_id,\n mask_token_rate=self.mask_token_rate,\n random_token_rate=self.random_token_rate,\n )\n\n def call(self, inputs):\n input_is_ragged = isinstance(inputs, tf.RaggedTensor)\n input_is_1d = tf.rank(inputs) == 1\n if input_is_1d:\n # If inputs is of rank 1, we manually add the batch axis.\n inputs = inputs[tf.newaxis, :]\n if not input_is_ragged:\n # `tf_text.mask_language_model` requires a ragged tensor, so\n # convert dense to ragged.\n inputs = tf.RaggedTensor.from_tensor(inputs)\n (tokens, mask_positions, mask_ids,) = tf_text.mask_language_model(\n inputs,\n item_selector=self._random_selector,\n mask_values_chooser=self._mask_values_chooser,\n )\n\n if not input_is_ragged:\n # If we converted 
the input from dense to ragged, convert back.\n tokens = tokens.to_tensor()\n\n mask_weights = tf.ones_like(mask_positions, self.compute_dtype)\n # If mask_selection_length is set, covert to raggeds to dense.\n if self.mask_selection_length:\n target_shape = tf.cast([-1, self.mask_selection_length], tf.int64)\n mask_positions = mask_positions.to_tensor(shape=target_shape)\n mask_ids = mask_ids.to_tensor(shape=target_shape)\n mask_weights = mask_weights.to_tensor(shape=target_shape)\n\n if input_is_1d:\n # If inputs is 1D, we format the output to be 1D as well.\n tokens = tf.squeeze(tokens, axis=0)\n mask_positions = tf.squeeze(mask_positions, axis=0)\n mask_ids = tf.squeeze(mask_ids, axis=0)\n mask_weights = tf.squeeze(mask_weights, axis=0)\n\n output_dict = {\n \"tokens\": tokens,\n \"mask_positions\": mask_positions,\n \"mask_ids\": mask_ids,\n \"mask_weights\": mask_weights,\n }\n return output_dict\n\n def get_config(self):\n config = super().get_config()\n config.update(\n {\n \"vocabulary_size\": self.vocabulary_size,\n \"mask_selection_rate\": self.mask_selection_rate,\n \"mask_selection_length\": self.mask_selection_length,\n \"unselectable_token_ids\": self.unselectable_token_ids,\n \"mask_token_id\": self.mask_token_id,\n \"mask_token_rate\": self.mask_token_rate,\n \"random_token_rate\": self.random_token_rate,\n }\n )\n return config\n", "path": "keras_nlp/layers/preprocessing/mlm_mask_generator.py"}], "after_files": [{"content": "# Copyright 2022 The KerasNLP Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport tensorflow as tf\nimport tensorflow_text as tf_text\nfrom tensorflow import keras\n\n\nclass MLMMaskGenerator(keras.layers.Layer):\n \"\"\"Layer that applies language model masking.\n\n This layer is useful for preparing inputs for masked languaged modeling\n (MLM) tasks. It follows the masking strategy described in the [original BERT\n paper](https://arxiv.org/abs/1810.04805). Given tokenized text,\n it randomly selects certain number of tokens for masking. Then for each\n selected token, it has a chance (configurable) to be replaced by\n \"mask token\" or random token, or stay unchanged.\n\n Users should use this layer with `tf.data` to generate masks.\n\n Args:\n vocabulary_size: int, the size of the vocabulary.\n mask_selection_rate: float, the probability of a token is selected for\n masking.\n mask_token_id: int. The id of mask token.\n mask_selection_length: int, defaults to None. Maximum number of tokens\n selected for masking in each sequence. If set, the output\n `mask_positions`, `mask_ids` and `mask_weights` will be padded\n to dense tensors of length `mask_selection_length`,\n otherwise the output will be a RaggedTensor.\n unselectable_token_ids: A list of tokens, defaults to [0] (the default\n `padding_token_id`).\n mask_token_rate: float, defaults to 0.8. `mask_token_rate` must be\n between 0 and 1 which indicates how often the mask_token is\n substituted for tokens selected for masking.\n random_token_rate: float, defaults to 0.1. 
`random_token_rate` must be\n between 0 and 1 which indicates how often a random token is\n substituted for tokens selected for masking. Default is 0.1.\n Note: mask_token_rate + random_token_rate <= 1, and for\n (1 - mask_token_rate - random_token_rate), the token will not be\n changed.\n\n Input:\n A 1D integer tensor of shape [sequence_length] or a 2D integer tensor\n of shape [batch_size, sequence_length], or a 2D integer RaggedTensor.\n Represents the sequence to mask.\n\n Returns:\n A Dict with 4 keys:\n tokens: Tensor or RaggedTensor, has the same type and shape of\n input. Sequence after getting masked.\n mask_positions: Tensor, or RaggedTensor if `mask_selection_length`\n is None. The positions of tokens getting masked.\n mask_ids: Tensor, or RaggedTensor if `mask_selection_length` is\n None. The original token ids at masked positions.\n mask_weights: Tensor, or RaggedTensor if `mask_selection_length` is\n None. `mask_weights` has the same shape as `mask_positions` and\n `mask_ids`. Each element in `mask_weights` should be 0 or 1,\n 1 means the corresponding position in `mask_positions` is an\n actual mask, 0 means it is a pad.\n\n Examples:\n\n Basic usage.\n >>> masker = keras_nlp.layers.preprocessing.MLMMaskGenerator( \\\n vocabulary_size=10, mask_selection_rate=0.2, mask_token_id=0, \\\n mask_selection_length=5)\n >>> masker(tf.constant([1, 2, 3, 4, 5]))\n\n Ragged Input:\n >>> masker = keras_nlp.layers.preprocessing.MLMMaskGenerator( \\\n vocabulary_size=10, mask_selection_rate=0.5, mask_token_id=0, \\\n mask_selection_length=5)\n >>> masker(tf.ragged.constant([[1, 2], [1, 2, 3, 4]]))\n \"\"\"\n\n def __init__(\n self,\n vocabulary_size,\n mask_selection_rate,\n mask_token_id,\n mask_selection_length=None,\n unselectable_token_ids=[0],\n mask_token_rate=0.8,\n random_token_rate=0.1,\n **kwargs,\n ):\n super().__init__(**kwargs)\n self.vocabulary_size = vocabulary_size\n self.unselectable_token_ids = unselectable_token_ids\n self.mask_selection_rate = mask_selection_rate\n self.mask_selection_length = mask_selection_length\n self.mask_token_rate = mask_token_rate\n self.random_token_rate = random_token_rate\n\n if mask_token_id >= vocabulary_size:\n raise ValueError(\n f\"Mask token id should be in range [0, vocabulary_size - 1], \"\n f\"but received mask_token_id={mask_token_id}.\"\n )\n self.mask_token_id = mask_token_id\n\n max_selections = self.mask_selection_length\n if max_selections is None:\n # Set a large number to remove the `max_selections_per_batch` cap.\n max_selections = 2**31 - 1\n self._random_selector = tf_text.RandomItemSelector(\n max_selections_per_batch=max_selections,\n selection_rate=self.mask_selection_rate,\n unselectable_ids=self.unselectable_token_ids,\n )\n self._mask_values_chooser = tf_text.MaskValuesChooser(\n self.vocabulary_size,\n self.mask_token_id,\n mask_token_rate=self.mask_token_rate,\n random_token_rate=self.random_token_rate,\n )\n\n def call(self, inputs):\n input_is_ragged = isinstance(inputs, tf.RaggedTensor)\n input_is_1d = inputs.shape.rank == 1\n if input_is_1d:\n # If inputs is of rank 1, we manually add the batch axis.\n inputs = inputs[tf.newaxis, :]\n if not input_is_ragged:\n # `tf_text.mask_language_model` requires a ragged tensor, so\n # convert dense to ragged.\n inputs = tf.RaggedTensor.from_tensor(inputs)\n\n (tokens, mask_positions, mask_ids,) = tf_text.mask_language_model(\n inputs,\n item_selector=self._random_selector,\n mask_values_chooser=self._mask_values_chooser,\n )\n\n if not input_is_ragged:\n # If we 
converted the input from dense to ragged, convert back.\n tokens = tokens.to_tensor()\n\n mask_weights = tf.ones_like(mask_positions, self.compute_dtype)\n # If mask_selection_length is set, covert to raggeds to dense.\n if self.mask_selection_length:\n target_shape = tf.cast([-1, self.mask_selection_length], tf.int64)\n mask_positions = mask_positions.to_tensor(shape=target_shape)\n mask_ids = mask_ids.to_tensor(shape=target_shape)\n mask_weights = mask_weights.to_tensor(shape=target_shape)\n\n if input_is_1d:\n # If inputs is 1D, we format the output to be 1D as well.\n tokens = tf.squeeze(tokens, axis=0)\n mask_positions = tf.squeeze(mask_positions, axis=0)\n mask_ids = tf.squeeze(mask_ids, axis=0)\n mask_weights = tf.squeeze(mask_weights, axis=0)\n\n output_dict = {\n \"tokens\": tokens,\n \"mask_positions\": mask_positions,\n \"mask_ids\": mask_ids,\n \"mask_weights\": mask_weights,\n }\n return output_dict\n\n def get_config(self):\n config = super().get_config()\n config.update(\n {\n \"vocabulary_size\": self.vocabulary_size,\n \"mask_selection_rate\": self.mask_selection_rate,\n \"mask_selection_length\": self.mask_selection_length,\n \"unselectable_token_ids\": self.unselectable_token_ids,\n \"mask_token_id\": self.mask_token_id,\n \"mask_token_rate\": self.mask_token_rate,\n \"random_token_rate\": self.random_token_rate,\n }\n )\n return config\n", "path": "keras_nlp/layers/preprocessing/mlm_mask_generator.py"}]}
| 2,687 | 248 |
gh_patches_debug_5672
|
rasdani/github-patches
|
git_diff
|
sosreport__sos-471
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[block] Don't use parted human readable output - rhbz #1183770
Changed the parted command to return data in sector units
instead of human-readable form.
Signed-off-by: Shane Bradley [email protected]
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sos/plugins/block.py`
Content:
```
1 # This program is free software; you can redistribute it and/or modify
2 # it under the terms of the GNU General Public License as published by
3 # the Free Software Foundation; either version 2 of the License, or
4 # (at your option) any later version.
5
6 # This program is distributed in the hope that it will be useful,
7 # but WITHOUT ANY WARRANTY; without even the implied warranty of
8 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
9 # GNU General Public License for more details.
10
11 # You should have received a copy of the GNU General Public License
12 # along with this program; if not, write to the Free Software
13 # Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
14
15 import os
16 from sos.plugins import Plugin, RedHatPlugin, DebianPlugin, UbuntuPlugin
17
18
19 class Block(Plugin, RedHatPlugin, DebianPlugin, UbuntuPlugin):
20 """Block device information
21 """
22
23 plugin_name = 'block'
24 profiles = ('storage', 'hardware')
25
26 def setup(self):
27 self.add_cmd_output([
28 "lsblk",
29 "blkid -c /dev/null",
30 "ls -lanR /dev",
31 "ls -lanR /sys/block"
32 ])
33
34 # legacy location for non-/run distributions
35 self.add_copy_spec([
36 "/etc/blkid.tab",
37 "/run/blkid/blkid.tab",
38 "/proc/partitions",
39 "/proc/diskstats"
40 ])
41
42 if os.path.isdir("/sys/block"):
43 for disk in os.listdir("/sys/block"):
44 if disk in [".", ".."] or disk.startswith("ram"):
45 continue
46 disk_path = os.path.join('/dev/', disk)
47 self.add_cmd_output([
48 "udevadm info -ap /sys/block/%s" % (disk),
49 "parted -s %s print" % (disk_path),
50 "fdisk -l %s" % disk_path
51 ])
52
53 # vim: et ts=4 sw=4
54
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sos/plugins/block.py b/sos/plugins/block.py
--- a/sos/plugins/block.py
+++ b/sos/plugins/block.py
@@ -46,7 +46,7 @@
disk_path = os.path.join('/dev/', disk)
self.add_cmd_output([
"udevadm info -ap /sys/block/%s" % (disk),
- "parted -s %s print" % (disk_path),
+ "parted -s %s unit s print" % (disk_path),
"fdisk -l %s" % disk_path
])
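
As a rough illustration of this one-line change (the `sda` name below is only a placeholder), the patched `setup()` would collect a sector-unit partition listing for every disk found under `/sys/block`:

```python
# Hypothetical sketch of the per-disk command list after the change; only the
# parted invocation differs from the original plugin code, and "sda" is a stand-in.
disk = "sda"
disk_path = "/dev/" + disk
commands = [
    "udevadm info -ap /sys/block/%s" % disk,
    "parted -s %s unit s print" % disk_path,  # sizes in sectors, not human-readable units
    "fdisk -l %s" % disk_path,
]
print(commands)
```

Keeping parted's output in fixed sector units avoids the rounded, human-readable sizes that the issue asks to drop.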
|
{"golden_diff": "diff --git a/sos/plugins/block.py b/sos/plugins/block.py\n--- a/sos/plugins/block.py\n+++ b/sos/plugins/block.py\n@@ -46,7 +46,7 @@\n disk_path = os.path.join('/dev/', disk)\n self.add_cmd_output([\n \"udevadm info -ap /sys/block/%s\" % (disk),\n- \"parted -s %s print\" % (disk_path),\n+ \"parted -s %s unit s print\" % (disk_path),\n \"fdisk -l %s\" % disk_path\n ])\n", "issue": "[block] Don't use parted human readable output - rhbz #1183770\nChanged the parted command to return data in sectors units\ninstead of human readable form.\n\nSigned-off-by: Shane Bradley [email protected]\n\n", "before_files": [{"content": "# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, write to the Free Software\n# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.\n\nimport os\nfrom sos.plugins import Plugin, RedHatPlugin, DebianPlugin, UbuntuPlugin\n\n\nclass Block(Plugin, RedHatPlugin, DebianPlugin, UbuntuPlugin):\n \"\"\"Block device information\n \"\"\"\n\n plugin_name = 'block'\n profiles = ('storage', 'hardware')\n\n def setup(self):\n self.add_cmd_output([\n \"lsblk\",\n \"blkid -c /dev/null\",\n \"ls -lanR /dev\",\n \"ls -lanR /sys/block\"\n ])\n\n # legacy location for non-/run distributions\n self.add_copy_spec([\n \"/etc/blkid.tab\",\n \"/run/blkid/blkid.tab\",\n \"/proc/partitions\",\n \"/proc/diskstats\"\n ])\n\n if os.path.isdir(\"/sys/block\"):\n for disk in os.listdir(\"/sys/block\"):\n if disk in [\".\", \"..\"] or disk.startswith(\"ram\"):\n continue\n disk_path = os.path.join('/dev/', disk)\n self.add_cmd_output([\n \"udevadm info -ap /sys/block/%s\" % (disk),\n \"parted -s %s print\" % (disk_path),\n \"fdisk -l %s\" % disk_path\n ])\n\n# vim: et ts=4 sw=4\n", "path": "sos/plugins/block.py"}], "after_files": [{"content": "# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n# GNU General Public License for more details.\n\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, write to the Free Software\n# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.\n\nimport os\nfrom sos.plugins import Plugin, RedHatPlugin, DebianPlugin, UbuntuPlugin\n\n\nclass Block(Plugin, RedHatPlugin, DebianPlugin, UbuntuPlugin):\n \"\"\"Block device information\n \"\"\"\n\n plugin_name = 'block'\n profiles = ('storage', 'hardware')\n\n def setup(self):\n self.add_cmd_output([\n \"lsblk\",\n \"blkid -c /dev/null\",\n \"ls -lanR /dev\",\n \"ls -lanR /sys/block\"\n ])\n\n # legacy location for non-/run distributions\n self.add_copy_spec([\n \"/etc/blkid.tab\",\n \"/run/blkid/blkid.tab\",\n \"/proc/partitions\",\n \"/proc/diskstats\"\n ])\n\n if os.path.isdir(\"/sys/block\"):\n for disk in os.listdir(\"/sys/block\"):\n if disk in [\".\", \"..\"] or disk.startswith(\"ram\"):\n continue\n disk_path = os.path.join('/dev/', disk)\n self.add_cmd_output([\n \"udevadm info -ap /sys/block/%s\" % (disk),\n \"parted -s %s unit s print\" % (disk_path),\n \"fdisk -l %s\" % disk_path\n ])\n\n# vim: et ts=4 sw=4\n", "path": "sos/plugins/block.py"}]}
| 848 | 128 |
gh_patches_debug_6080
|
rasdani/github-patches
|
git_diff
|
frappe__frappe-8232
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
bug(API): TypeError, cannot create document
```bash
curl -X POST https://my.erpnext.com/api/resource/Lead \
-H 'Accept: application/json' \
-H 'Authorization: Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX==' \
-H 'Content-Type: application/json' \
-d '{"lead_name": "Jon Doe"}'
```
Returns
```json
{"exc":"[\"Traceback (most recent call last):\\n File \\\"/home/frappe/frappe-bench/apps/frappe/frappe/app.py\\\", line 60, in application\\n response = frappe.api.handle()\\n File \\\"/home/frappe/frappe-bench/apps/frappe/frappe/api.py\\\", line 116, in handle\\n data = json.loads(frappe.local.form_dict.data)\\n File \\\"/usr/lib64/python2.7/json/__init__.py\\\", line 338, in loads\\n return _default_decoder.decode(s)\\n File \\\"/usr/lib64/python2.7/json/decoder.py\\\", line 366, in decode\\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\\nTypeError: expected string or buffer\\n\"]"}
```
Cleaned up stack trace:
```
Traceback (most recent call last):
File "/home/frappe/frappe-bench/apps/frappe/frappe/app.py", line 60, in application
response = frappe.api.handle()
File "/home/frappe/frappe-bench/apps/frappe/frappe/api.py", line 116, in handle
data = json.loads(frappe.local.form_dict.data)
File "/usr/lib64/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
TypeError: expected string or buffer
```
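
Put differently, `form_dict.data` arrives as `None` for a JSON-encoded POST, and `json.loads` rejects it. A minimal reproduction of the last frame, independent of Frappe:

```python
import json

# json.loads() raises TypeError when handed None, which is what the trace above
# reduces to once form_dict.data is missing for an application/json body.
try:
    json.loads(None)
except TypeError as exc:
    print(exc)  # "expected string or buffer" on Python 2.7; a similar message on Python 3
```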
This seems to apply to any DocType – it's not possible to create documents this way.
@netchampfaris may [this](https://github.com/frappe/frappe/commit/f63ad574e580360996807931f9c9cfa363385c3d#diff-d65e8dff7122e8822cd2009d1ef1a963) be the cause?
### Versions
ERPNext: v12.0.6 (version-12)
Frappe Framework: v12.0.6 (version-12)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `frappe/api.py`
Content:
```
1 # Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors
2 # MIT License. See license.txt
3 from __future__ import unicode_literals
4
5 import json
6 import frappe
7 import frappe.handler
8 import frappe.client
9 from frappe.utils.response import build_response
10 from frappe import _
11 from six.moves.urllib.parse import urlparse, urlencode
12 import base64
13
14 def handle():
15 """
16 Handler for `/api` methods
17
18 ### Examples:
19
20 `/api/method/{methodname}` will call a whitelisted method
21
22 `/api/resource/{doctype}` will query a table
23 examples:
24 - `?fields=["name", "owner"]`
25 - `?filters=[["Task", "name", "like", "%005"]]`
26 - `?limit_start=0`
27 - `?limit_page_length=20`
28
29 `/api/resource/{doctype}/{name}` will point to a resource
30 `GET` will return doclist
31 `POST` will insert
32 `PUT` will update
33 `DELETE` will delete
34
35 `/api/resource/{doctype}/{name}?run_method={method}` will run a whitelisted controller method
36 """
37
38 validate_oauth()
39 validate_auth_via_api_keys()
40
41 parts = frappe.request.path[1:].split("/",3)
42 call = doctype = name = None
43
44 if len(parts) > 1:
45 call = parts[1]
46
47 if len(parts) > 2:
48 doctype = parts[2]
49
50 if len(parts) > 3:
51 name = parts[3]
52
53 if call=="method":
54 frappe.local.form_dict.cmd = doctype
55 return frappe.handler.handle()
56
57 elif call=="resource":
58 if "run_method" in frappe.local.form_dict:
59 method = frappe.local.form_dict.pop("run_method")
60 doc = frappe.get_doc(doctype, name)
61 doc.is_whitelisted(method)
62
63 if frappe.local.request.method=="GET":
64 if not doc.has_permission("read"):
65 frappe.throw(_("Not permitted"), frappe.PermissionError)
66 frappe.local.response.update({"data": doc.run_method(method, **frappe.local.form_dict)})
67
68 if frappe.local.request.method=="POST":
69 if not doc.has_permission("write"):
70 frappe.throw(_("Not permitted"), frappe.PermissionError)
71
72 frappe.local.response.update({"data": doc.run_method(method, **frappe.local.form_dict)})
73 frappe.db.commit()
74
75 else:
76 if name:
77 if frappe.local.request.method=="GET":
78 doc = frappe.get_doc(doctype, name)
79 if not doc.has_permission("read"):
80 raise frappe.PermissionError
81 frappe.local.response.update({"data": doc})
82
83 if frappe.local.request.method=="PUT":
84 data = json.loads(frappe.local.form_dict.data)
85 doc = frappe.get_doc(doctype, name)
86
87 if "flags" in data:
88 del data["flags"]
89
90 # Not checking permissions here because it's checked in doc.save
91 doc.update(data)
92
93 frappe.local.response.update({
94 "data": doc.save().as_dict()
95 })
96 frappe.db.commit()
97
98 if frappe.local.request.method=="DELETE":
99 # Not checking permissions here because it's checked in delete_doc
100 frappe.delete_doc(doctype, name, ignore_missing=False)
101 frappe.local.response.http_status_code = 202
102 frappe.local.response.message = "ok"
103 frappe.db.commit()
104
105
106 elif doctype:
107 if frappe.local.request.method=="GET":
108 if frappe.local.form_dict.get('fields'):
109 frappe.local.form_dict['fields'] = json.loads(frappe.local.form_dict['fields'])
110 frappe.local.form_dict.setdefault('limit_page_length', 20)
111 frappe.local.response.update({
112 "data": frappe.call(frappe.client.get_list,
113 doctype, **frappe.local.form_dict)})
114
115 if frappe.local.request.method=="POST":
116 data = json.loads(frappe.local.form_dict.data)
117 data.update({
118 "doctype": doctype
119 })
120 frappe.local.response.update({
121 "data": frappe.get_doc(data).insert().as_dict()
122 })
123 frappe.db.commit()
124 else:
125 raise frappe.DoesNotExistError
126
127 else:
128 raise frappe.DoesNotExistError
129
130 return build_response("json")
131
132 def validate_oauth():
133 from frappe.oauth import get_url_delimiter
134 form_dict = frappe.local.form_dict
135 authorization_header = frappe.get_request_header("Authorization").split(" ") if frappe.get_request_header("Authorization") else None
136 if authorization_header and authorization_header[0].lower() == "bearer":
137 from frappe.integrations.oauth2 import get_oauth_server
138 token = authorization_header[1]
139 r = frappe.request
140 parsed_url = urlparse(r.url)
141 access_token = { "access_token": token}
142 uri = parsed_url.scheme + "://" + parsed_url.netloc + parsed_url.path + "?" + urlencode(access_token)
143 http_method = r.method
144 body = r.get_data()
145 headers = r.headers
146
147 required_scopes = frappe.db.get_value("OAuth Bearer Token", token, "scopes").split(get_url_delimiter())
148
149 valid, oauthlib_request = get_oauth_server().verify_request(uri, http_method, body, headers, required_scopes)
150
151 if valid:
152 frappe.set_user(frappe.db.get_value("OAuth Bearer Token", token, "user"))
153 frappe.local.form_dict = form_dict
154
155
156 def validate_auth_via_api_keys():
157 """
158 authentication using api key and api secret
159
160 set user
161 """
162 try:
163 authorization_header = frappe.get_request_header("Authorization", None).split(" ") if frappe.get_request_header("Authorization") else None
164 if authorization_header and authorization_header[0] == 'Basic':
165 token = frappe.safe_decode(base64.b64decode(authorization_header[1])).split(":")
166 validate_api_key_secret(token[0], token[1])
167 elif authorization_header and authorization_header[0] == 'token':
168 token = authorization_header[1].split(":")
169 validate_api_key_secret(token[0], token[1])
170 except Exception as e:
171 raise e
172
173 def validate_api_key_secret(api_key, api_secret):
174 user = frappe.db.get_value(
175 doctype="User",
176 filters={"api_key": api_key},
177 fieldname=['name']
178 )
179 form_dict = frappe.local.form_dict
180 user_secret = frappe.utils.password.get_decrypted_password ("User", user, fieldname='api_secret')
181 if api_secret == user_secret:
182 frappe.set_user(user)
183 frappe.local.form_dict = form_dict
184
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/frappe/api.py b/frappe/api.py
--- a/frappe/api.py
+++ b/frappe/api.py
@@ -113,7 +113,10 @@
doctype, **frappe.local.form_dict)})
if frappe.local.request.method=="POST":
- data = json.loads(frappe.local.form_dict.data)
+ if frappe.local.form_dict.data is None:
+ data = json.loads(frappe.local.request.get_data())
+ else:
+ data = json.loads(frappe.local.form_dict.data)
data.update({
"doctype": doctype
})
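
A self-contained sketch of the fallback this patch introduces: when the form dict carries no `data` (the case for a raw `application/json` body such as the curl call in the issue), the raw request body is parsed instead. The helper name and arguments here are illustrative, not part of the Frappe API:

```python
import json

def load_post_payload(form_dict_data, raw_request_body):
    # Mirrors the patched branch: prefer form_dict.data when present,
    # otherwise fall back to the raw body sent by the client.
    if form_dict_data is None:
        return json.loads(raw_request_body)
    return json.loads(form_dict_data)

# The failing curl example from the issue would now be handled via the fallback.
doc = load_post_payload(None, '{"lead_name": "Jon Doe"}')
doc.update({"doctype": "Lead"})
print(doc)
```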
|
{"golden_diff": "diff --git a/frappe/api.py b/frappe/api.py\n--- a/frappe/api.py\n+++ b/frappe/api.py\n@@ -113,7 +113,10 @@\n \t\t\t\t\t\t\tdoctype, **frappe.local.form_dict)})\n \n \t\t\t\tif frappe.local.request.method==\"POST\":\n-\t\t\t\t\tdata = json.loads(frappe.local.form_dict.data)\n+\t\t\t\t\tif frappe.local.form_dict.data is None:\n+\t\t\t\t\t\tdata = json.loads(frappe.local.request.get_data())\n+\t\t\t\t\telse:\n+\t\t\t\t\t\tdata = json.loads(frappe.local.form_dict.data)\n \t\t\t\t\tdata.update({\n \t\t\t\t\t\t\"doctype\": doctype\n \t\t\t\t\t})\n", "issue": "bug(API): TypeError, cannot create document \n\r\n```bash\r\ncurl -X POST https://my.erpnext.com/api/resource/Lead \\\r\n -H 'Accept: application/json' \\\r\n -H 'Authorization: Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX==' \\\r\n -H 'Content-Type: application/json' \\\r\n -d '{\"lead_name\": \"Jon Doe\"}'\r\n```\r\n\r\nReturns\r\n\r\n```json\r\n{\"exc\":\"[\\\"Traceback (most recent call last):\\\\n File \\\\\\\"/home/frappe/frappe-bench/apps/frappe/frappe/app.py\\\\\\\", line 60, in application\\\\n response = frappe.api.handle()\\\\n File \\\\\\\"/home/frappe/frappe-bench/apps/frappe/frappe/api.py\\\\\\\", line 116, in handle\\\\n data = json.loads(frappe.local.form_dict.data)\\\\n File \\\\\\\"/usr/lib64/python2.7/json/__init__.py\\\\\\\", line 338, in loads\\\\n return _default_decoder.decode(s)\\\\n File \\\\\\\"/usr/lib64/python2.7/json/decoder.py\\\\\\\", line 366, in decode\\\\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\\\\nTypeError: expected string or buffer\\\\n\\\"]\"}\r\n```\r\n\r\nCleaned up stack trace:\r\n\r\n```\r\nTraceback (most recent call last):\r\nFile \"/home/frappe/frappe-bench/apps/frappe/frappe/app.py\", line 60, in application\r\n\tresponse = frappe.api.handle()\r\nFile \"/home/frappe/frappe-bench/apps/frappe/frappe/api.py\", line 116, in handle\r\n\tdata = json.loads(frappe.local.form_dict.data)\r\nFile \"/usr/lib64/python2.7/json/__init__.py\", line 338, in loads\r\n\treturn _default_decoder.decode(s)\r\nFile \"/usr/lib64/python2.7/json/decoder.py\", line 366, in decode\r\n\tobj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\nTypeError: expected string or buffer\r\n```\r\n\r\nThis seems to apply to any DocType \u2013 it's not possible to create documents this way.\r\n\r\n@netchampfaris may [this](https://github.com/frappe/frappe/commit/f63ad574e580360996807931f9c9cfa363385c3d#diff-d65e8dff7122e8822cd2009d1ef1a963) be the cause?\r\n\r\n### Versions\r\nERPNext: v12.0.6 (version-12)\r\nFrappe Framework: v12.0.6 (version-12)\n", "before_files": [{"content": "# Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors\n# MIT License. 
See license.txt\nfrom __future__ import unicode_literals\n\nimport json\nimport frappe\nimport frappe.handler\nimport frappe.client\nfrom frappe.utils.response import build_response\nfrom frappe import _\nfrom six.moves.urllib.parse import urlparse, urlencode\nimport base64\n\ndef handle():\n\t\"\"\"\n\tHandler for `/api` methods\n\n\t### Examples:\n\n\t`/api/method/{methodname}` will call a whitelisted method\n\n\t`/api/resource/{doctype}` will query a table\n\t\texamples:\n\t\t- `?fields=[\"name\", \"owner\"]`\n\t\t- `?filters=[[\"Task\", \"name\", \"like\", \"%005\"]]`\n\t\t- `?limit_start=0`\n\t\t- `?limit_page_length=20`\n\n\t`/api/resource/{doctype}/{name}` will point to a resource\n\t\t`GET` will return doclist\n\t\t`POST` will insert\n\t\t`PUT` will update\n\t\t`DELETE` will delete\n\n\t`/api/resource/{doctype}/{name}?run_method={method}` will run a whitelisted controller method\n\t\"\"\"\n\n\tvalidate_oauth()\n\tvalidate_auth_via_api_keys()\n\n\tparts = frappe.request.path[1:].split(\"/\",3)\n\tcall = doctype = name = None\n\n\tif len(parts) > 1:\n\t\tcall = parts[1]\n\n\tif len(parts) > 2:\n\t\tdoctype = parts[2]\n\n\tif len(parts) > 3:\n\t\tname = parts[3]\n\n\tif call==\"method\":\n\t\tfrappe.local.form_dict.cmd = doctype\n\t\treturn frappe.handler.handle()\n\n\telif call==\"resource\":\n\t\tif \"run_method\" in frappe.local.form_dict:\n\t\t\tmethod = frappe.local.form_dict.pop(\"run_method\")\n\t\t\tdoc = frappe.get_doc(doctype, name)\n\t\t\tdoc.is_whitelisted(method)\n\n\t\t\tif frappe.local.request.method==\"GET\":\n\t\t\t\tif not doc.has_permission(\"read\"):\n\t\t\t\t\tfrappe.throw(_(\"Not permitted\"), frappe.PermissionError)\n\t\t\t\tfrappe.local.response.update({\"data\": doc.run_method(method, **frappe.local.form_dict)})\n\n\t\t\tif frappe.local.request.method==\"POST\":\n\t\t\t\tif not doc.has_permission(\"write\"):\n\t\t\t\t\tfrappe.throw(_(\"Not permitted\"), frappe.PermissionError)\n\n\t\t\t\tfrappe.local.response.update({\"data\": doc.run_method(method, **frappe.local.form_dict)})\n\t\t\t\tfrappe.db.commit()\n\n\t\telse:\n\t\t\tif name:\n\t\t\t\tif frappe.local.request.method==\"GET\":\n\t\t\t\t\tdoc = frappe.get_doc(doctype, name)\n\t\t\t\t\tif not doc.has_permission(\"read\"):\n\t\t\t\t\t\traise frappe.PermissionError\n\t\t\t\t\tfrappe.local.response.update({\"data\": doc})\n\n\t\t\t\tif frappe.local.request.method==\"PUT\":\n\t\t\t\t\tdata = json.loads(frappe.local.form_dict.data)\n\t\t\t\t\tdoc = frappe.get_doc(doctype, name)\n\n\t\t\t\t\tif \"flags\" in data:\n\t\t\t\t\t\tdel data[\"flags\"]\n\n\t\t\t\t\t# Not checking permissions here because it's checked in doc.save\n\t\t\t\t\tdoc.update(data)\n\n\t\t\t\t\tfrappe.local.response.update({\n\t\t\t\t\t\t\"data\": doc.save().as_dict()\n\t\t\t\t\t})\n\t\t\t\t\tfrappe.db.commit()\n\n\t\t\t\tif frappe.local.request.method==\"DELETE\":\n\t\t\t\t\t# Not checking permissions here because it's checked in delete_doc\n\t\t\t\t\tfrappe.delete_doc(doctype, name, ignore_missing=False)\n\t\t\t\t\tfrappe.local.response.http_status_code = 202\n\t\t\t\t\tfrappe.local.response.message = \"ok\"\n\t\t\t\t\tfrappe.db.commit()\n\n\n\t\t\telif doctype:\n\t\t\t\tif frappe.local.request.method==\"GET\":\n\t\t\t\t\tif frappe.local.form_dict.get('fields'):\n\t\t\t\t\t\tfrappe.local.form_dict['fields'] = json.loads(frappe.local.form_dict['fields'])\n\t\t\t\t\tfrappe.local.form_dict.setdefault('limit_page_length', 20)\n\t\t\t\t\tfrappe.local.response.update({\n\t\t\t\t\t\t\"data\": 
frappe.call(frappe.client.get_list,\n\t\t\t\t\t\t\tdoctype, **frappe.local.form_dict)})\n\n\t\t\t\tif frappe.local.request.method==\"POST\":\n\t\t\t\t\tdata = json.loads(frappe.local.form_dict.data)\n\t\t\t\t\tdata.update({\n\t\t\t\t\t\t\"doctype\": doctype\n\t\t\t\t\t})\n\t\t\t\t\tfrappe.local.response.update({\n\t\t\t\t\t\t\"data\": frappe.get_doc(data).insert().as_dict()\n\t\t\t\t\t})\n\t\t\t\t\tfrappe.db.commit()\n\t\t\telse:\n\t\t\t\traise frappe.DoesNotExistError\n\n\telse:\n\t\traise frappe.DoesNotExistError\n\n\treturn build_response(\"json\")\n\ndef validate_oauth():\n\tfrom frappe.oauth import get_url_delimiter\n\tform_dict = frappe.local.form_dict\n\tauthorization_header = frappe.get_request_header(\"Authorization\").split(\" \") if frappe.get_request_header(\"Authorization\") else None\n\tif authorization_header and authorization_header[0].lower() == \"bearer\":\n\t\tfrom frappe.integrations.oauth2 import get_oauth_server\n\t\ttoken = authorization_header[1]\n\t\tr = frappe.request\n\t\tparsed_url = urlparse(r.url)\n\t\taccess_token = { \"access_token\": token}\n\t\turi = parsed_url.scheme + \"://\" + parsed_url.netloc + parsed_url.path + \"?\" + urlencode(access_token)\n\t\thttp_method = r.method\n\t\tbody = r.get_data()\n\t\theaders = r.headers\n\n\t\trequired_scopes = frappe.db.get_value(\"OAuth Bearer Token\", token, \"scopes\").split(get_url_delimiter())\n\n\t\tvalid, oauthlib_request = get_oauth_server().verify_request(uri, http_method, body, headers, required_scopes)\n\n\t\tif valid:\n\t\t\tfrappe.set_user(frappe.db.get_value(\"OAuth Bearer Token\", token, \"user\"))\n\t\t\tfrappe.local.form_dict = form_dict\n\n\ndef validate_auth_via_api_keys():\n\t\"\"\"\n\tauthentication using api key and api secret\n\n\tset user\n\t\"\"\"\n\ttry:\n\t\tauthorization_header = frappe.get_request_header(\"Authorization\", None).split(\" \") if frappe.get_request_header(\"Authorization\") else None\n\t\tif authorization_header and authorization_header[0] == 'Basic':\n\t\t\ttoken = frappe.safe_decode(base64.b64decode(authorization_header[1])).split(\":\")\n\t\t\tvalidate_api_key_secret(token[0], token[1])\n\t\telif authorization_header and authorization_header[0] == 'token':\n\t\t\ttoken = authorization_header[1].split(\":\")\n\t\t\tvalidate_api_key_secret(token[0], token[1])\n\texcept Exception as e:\n\t\traise e\n\ndef validate_api_key_secret(api_key, api_secret):\n\tuser = frappe.db.get_value(\n\t\tdoctype=\"User\",\n\t\tfilters={\"api_key\": api_key},\n\t\tfieldname=['name']\n\t)\n\tform_dict = frappe.local.form_dict\n\tuser_secret = frappe.utils.password.get_decrypted_password (\"User\", user, fieldname='api_secret')\n\tif api_secret == user_secret:\n\t\tfrappe.set_user(user)\n\t\tfrappe.local.form_dict = form_dict\n", "path": "frappe/api.py"}], "after_files": [{"content": "# Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors\n# MIT License. 
See license.txt\nfrom __future__ import unicode_literals\n\nimport json\nimport frappe\nimport frappe.handler\nimport frappe.client\nfrom frappe.utils.response import build_response\nfrom frappe import _\nfrom six.moves.urllib.parse import urlparse, urlencode\nimport base64\n\ndef handle():\n\t\"\"\"\n\tHandler for `/api` methods\n\n\t### Examples:\n\n\t`/api/method/{methodname}` will call a whitelisted method\n\n\t`/api/resource/{doctype}` will query a table\n\t\texamples:\n\t\t- `?fields=[\"name\", \"owner\"]`\n\t\t- `?filters=[[\"Task\", \"name\", \"like\", \"%005\"]]`\n\t\t- `?limit_start=0`\n\t\t- `?limit_page_length=20`\n\n\t`/api/resource/{doctype}/{name}` will point to a resource\n\t\t`GET` will return doclist\n\t\t`POST` will insert\n\t\t`PUT` will update\n\t\t`DELETE` will delete\n\n\t`/api/resource/{doctype}/{name}?run_method={method}` will run a whitelisted controller method\n\t\"\"\"\n\n\tvalidate_oauth()\n\tvalidate_auth_via_api_keys()\n\n\tparts = frappe.request.path[1:].split(\"/\",3)\n\tcall = doctype = name = None\n\n\tif len(parts) > 1:\n\t\tcall = parts[1]\n\n\tif len(parts) > 2:\n\t\tdoctype = parts[2]\n\n\tif len(parts) > 3:\n\t\tname = parts[3]\n\n\tif call==\"method\":\n\t\tfrappe.local.form_dict.cmd = doctype\n\t\treturn frappe.handler.handle()\n\n\telif call==\"resource\":\n\t\tif \"run_method\" in frappe.local.form_dict:\n\t\t\tmethod = frappe.local.form_dict.pop(\"run_method\")\n\t\t\tdoc = frappe.get_doc(doctype, name)\n\t\t\tdoc.is_whitelisted(method)\n\n\t\t\tif frappe.local.request.method==\"GET\":\n\t\t\t\tif not doc.has_permission(\"read\"):\n\t\t\t\t\tfrappe.throw(_(\"Not permitted\"), frappe.PermissionError)\n\t\t\t\tfrappe.local.response.update({\"data\": doc.run_method(method, **frappe.local.form_dict)})\n\n\t\t\tif frappe.local.request.method==\"POST\":\n\t\t\t\tif not doc.has_permission(\"write\"):\n\t\t\t\t\tfrappe.throw(_(\"Not permitted\"), frappe.PermissionError)\n\n\t\t\t\tfrappe.local.response.update({\"data\": doc.run_method(method, **frappe.local.form_dict)})\n\t\t\t\tfrappe.db.commit()\n\n\t\telse:\n\t\t\tif name:\n\t\t\t\tif frappe.local.request.method==\"GET\":\n\t\t\t\t\tdoc = frappe.get_doc(doctype, name)\n\t\t\t\t\tif not doc.has_permission(\"read\"):\n\t\t\t\t\t\traise frappe.PermissionError\n\t\t\t\t\tfrappe.local.response.update({\"data\": doc})\n\n\t\t\t\tif frappe.local.request.method==\"PUT\":\n\t\t\t\t\tdata = json.loads(frappe.local.form_dict.data)\n\t\t\t\t\tdoc = frappe.get_doc(doctype, name)\n\n\t\t\t\t\tif \"flags\" in data:\n\t\t\t\t\t\tdel data[\"flags\"]\n\n\t\t\t\t\t# Not checking permissions here because it's checked in doc.save\n\t\t\t\t\tdoc.update(data)\n\n\t\t\t\t\tfrappe.local.response.update({\n\t\t\t\t\t\t\"data\": doc.save().as_dict()\n\t\t\t\t\t})\n\t\t\t\t\tfrappe.db.commit()\n\n\t\t\t\tif frappe.local.request.method==\"DELETE\":\n\t\t\t\t\t# Not checking permissions here because it's checked in delete_doc\n\t\t\t\t\tfrappe.delete_doc(doctype, name, ignore_missing=False)\n\t\t\t\t\tfrappe.local.response.http_status_code = 202\n\t\t\t\t\tfrappe.local.response.message = \"ok\"\n\t\t\t\t\tfrappe.db.commit()\n\n\n\t\t\telif doctype:\n\t\t\t\tif frappe.local.request.method==\"GET\":\n\t\t\t\t\tif frappe.local.form_dict.get('fields'):\n\t\t\t\t\t\tfrappe.local.form_dict['fields'] = json.loads(frappe.local.form_dict['fields'])\n\t\t\t\t\tfrappe.local.form_dict.setdefault('limit_page_length', 20)\n\t\t\t\t\tfrappe.local.response.update({\n\t\t\t\t\t\t\"data\": 
frappe.call(frappe.client.get_list,\n\t\t\t\t\t\t\tdoctype, **frappe.local.form_dict)})\n\n\t\t\t\tif frappe.local.request.method==\"POST\":\n\t\t\t\t\tif frappe.local.form_dict.data is None:\n\t\t\t\t\t\tdata = json.loads(frappe.local.request.get_data())\n\t\t\t\t\telse:\n\t\t\t\t\t\tdata = json.loads(frappe.local.form_dict.data)\n\t\t\t\t\tdata.update({\n\t\t\t\t\t\t\"doctype\": doctype\n\t\t\t\t\t})\n\t\t\t\t\tfrappe.local.response.update({\n\t\t\t\t\t\t\"data\": frappe.get_doc(data).insert().as_dict()\n\t\t\t\t\t})\n\t\t\t\t\tfrappe.db.commit()\n\t\t\telse:\n\t\t\t\traise frappe.DoesNotExistError\n\n\telse:\n\t\traise frappe.DoesNotExistError\n\n\treturn build_response(\"json\")\n\ndef validate_oauth():\n\tfrom frappe.oauth import get_url_delimiter\n\tform_dict = frappe.local.form_dict\n\tauthorization_header = frappe.get_request_header(\"Authorization\").split(\" \") if frappe.get_request_header(\"Authorization\") else None\n\tif authorization_header and authorization_header[0].lower() == \"bearer\":\n\t\tfrom frappe.integrations.oauth2 import get_oauth_server\n\t\ttoken = authorization_header[1]\n\t\tr = frappe.request\n\t\tparsed_url = urlparse(r.url)\n\t\taccess_token = { \"access_token\": token}\n\t\turi = parsed_url.scheme + \"://\" + parsed_url.netloc + parsed_url.path + \"?\" + urlencode(access_token)\n\t\thttp_method = r.method\n\t\tbody = r.get_data()\n\t\theaders = r.headers\n\n\t\trequired_scopes = frappe.db.get_value(\"OAuth Bearer Token\", token, \"scopes\").split(get_url_delimiter())\n\n\t\tvalid, oauthlib_request = get_oauth_server().verify_request(uri, http_method, body, headers, required_scopes)\n\n\t\tif valid:\n\t\t\tfrappe.set_user(frappe.db.get_value(\"OAuth Bearer Token\", token, \"user\"))\n\t\t\tfrappe.local.form_dict = form_dict\n\n\ndef validate_auth_via_api_keys():\n\t\"\"\"\n\tauthentication using api key and api secret\n\n\tset user\n\t\"\"\"\n\ttry:\n\t\tauthorization_header = frappe.get_request_header(\"Authorization\", None).split(\" \") if frappe.get_request_header(\"Authorization\") else None\n\t\tif authorization_header and authorization_header[0] == 'Basic':\n\t\t\ttoken = frappe.safe_decode(base64.b64decode(authorization_header[1])).split(\":\")\n\t\t\tvalidate_api_key_secret(token[0], token[1])\n\t\telif authorization_header and authorization_header[0] == 'token':\n\t\t\ttoken = authorization_header[1].split(\":\")\n\t\t\tvalidate_api_key_secret(token[0], token[1])\n\texcept Exception as e:\n\t\traise e\n\ndef validate_api_key_secret(api_key, api_secret):\n\tuser = frappe.db.get_value(\n\t\tdoctype=\"User\",\n\t\tfilters={\"api_key\": api_key},\n\t\tfieldname=['name']\n\t)\n\tform_dict = frappe.local.form_dict\n\tuser_secret = frappe.utils.password.get_decrypted_password (\"User\", user, fieldname='api_secret')\n\tif api_secret == user_secret:\n\t\tfrappe.set_user(user)\n\t\tfrappe.local.form_dict = form_dict\n", "path": "frappe/api.py"}]}
| 2,850 | 138 |
gh_patches_debug_16546
|
rasdani/github-patches
|
git_diff
|
Azure__azure-cli-extensions-1444
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Typo in Documentation
### Description of issue (in as much detail as possible)
The delete command's description says "Create" instead of "Delete".
az monitor app-insights component delete | Create a new Application Insights resource.
-----
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/application-insights/setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 # --------------------------------------------------------------------------------------------
4 # Copyright (c) Microsoft Corporation. All rights reserved.
5 # Licensed under the MIT License. See License.txt in the project root for license information.
6 # --------------------------------------------------------------------------------------------
7
8 from codecs import open
9 from setuptools import setup, find_packages
10
11 VERSION = "0.1.5"
12
13 CLASSIFIERS = [
14 'Development Status :: 4 - Beta',
15 'Intended Audience :: Developers',
16 'Intended Audience :: System Administrators',
17 'Programming Language :: Python',
18 'Programming Language :: Python :: 2',
19 'Programming Language :: Python :: 2.7',
20 'Programming Language :: Python :: 3',
21 'Programming Language :: Python :: 3.4',
22 'Programming Language :: Python :: 3.5',
23 'Programming Language :: Python :: 3.6',
24 'License :: OSI Approved :: MIT License',
25 ]
26
27 DEPENDENCIES = []
28
29 with open('README.rst', 'r', encoding='utf-8') as f:
30 README = f.read()
31 with open('HISTORY.rst', 'r', encoding='utf-8') as f:
32 HISTORY = f.read()
33
34 setup(
35 name='application-insights',
36 version=VERSION,
37 description='Support for managing Application Insights components and querying metrics, events, and logs from such components.',
38 long_description=README + '\n\n' + HISTORY,
39 license='MIT',
40 author='Ace Eldeib',
41 author_email='[email protected]',
42 url='https://github.com/Azure/azure-cli-extensions/tree/master/src/application-insights',
43 classifiers=CLASSIFIERS,
44 packages=find_packages(exclude=["tests"]),
45 package_data={'azext_applicationinsights': ['azext_metadata.json']},
46 install_requires=DEPENDENCIES
47 )
48
```
Path: `src/application-insights/azext_applicationinsights/_help.py`
Content:
```
1 # --------------------------------------------------------------------------------------------
2 # Copyright (c) Microsoft Corporation. All rights reserved.
3 # Licensed under the MIT License. See License.txt in the project root for license information.
4 # --------------------------------------------------------------------------------------------
5
6 from knack.help_files import helps
7
8 # pylint: disable=line-too-long
9
10 helps['monitor app-insights'] = """
11 type: group
12 short-summary: Commands for querying data in Application Insights applications.
13 parameters:
14 - name: --offset
15 short-summary: >
16 Time offset of the query range, in ##d##h format.
17 long-summary: >
18 Can be used with either --start-time or --end-time. If used with --start-time, then
19 the end time will be calculated by adding the offset. If used with --end-time (default), then
20 the start time will be calculated by subtracting the offset. If --start-time and --end-time are
21 provided, then --offset will be ignored.
22 """
23
24 helps['monitor app-insights component'] = """
25 type: group
26 short-summary: Manage an Application Insights component or its subcomponents.
27 """
28
29 helps['monitor app-insights component create'] = """
30 type: command
31 short-summary: Create a new Application Insights resource.
32 parameters:
33 - name: --application-type
34 type: string
35 short-summary: Type of application being monitored. Possible values include 'web', 'other'. Default value is'web' .
36 - name: --kind -k
37 type: string
38 short-summary: The kind of application that this component refers to, used to customize UI. This value is a freeform string, values should typically be one of web, ios, other, store, java, phone.
39 examples:
40 - name: Create a component with kind web and location.
41 text: |
42 az monitor app-insights component create --app demoApp --location westus2 --kind web -g demoRg --application-type web
43 """
44
45 helps['monitor app-insights component update'] = """
46 type: command
47 short-summary: Update properties on an existing Application Insights resource. The primary value which can be updated is kind, which customizes the UI experience.
48 parameters:
49 - name: --kind -k
50 type: string
51 short-summary: The kind of application that this component refers to, used to customize UI. This value is a freeform string, values should typically be one of web, ios, other, store, java, phone.
52 examples:
53 - name: Update a component with kind web.
54 text: |
55 az monitor app-insights component update --app demoApp -k web -g demoRg
56 """
57
58 helps['monitor app-insights component update-tags'] = """
59 type: command
60 short-summary: Update tags on an existing Application Insights resource.
61 examples:
62 - name: Update the tag 'name' to equal 'value'.
63 text: |
64 az monitor app-insights component update-tags --app demoApp --tags name=value -g demoRg
65 """
66
67 helps['monitor app-insights component show'] = """
68 type: command
69 short-summary: Get an Application Insights resource.
70 examples:
71 - name: Get a component by name.
72 text: |
73 az monitor app-insights component show --app demoApp -g demoRg
74 - name: List components in a resource group.
75 text: |
76 az monitor app-insights component show -g demoRg
77 - name: List components in the currently selected subscription.
78 text: |
79 az monitor app-insights component show
80 """
81
82 helps['monitor app-insights component delete'] = """
83 type: command
84 short-summary: Create a new Application Insights resource.
85 examples:
86 - name: Create a component with kind web and location.
87 text: |
88 az monitor app-insights component delete --app demoApp -g demoRg
89 """
90
91 helps['monitor app-insights component billing'] = """
92 type: group
93 short-summary: Manage an Application Insights component billing features.
94 """
95
96 helps['monitor app-insights component billing show'] = """
97 type: command
98 short-summary: Show the billing features of an Application Insights resource.
99 examples:
100 - name: Show the billing features of an application insights component
101 text: |
102 az monitor app-insights component billing show --app demoApp -g demoRg
103 """
104
105 helps['monitor app-insights component billing update'] = """
106 type: command
107 short-summary: Update the billing features of an Application Insights resource.
108 examples:
109 - name: Update the daily cap of the billing features
110 text: |
111 az monitor app-insights component billing update --app demoApp -g demoRg --cap 200 --stop
112 """
113
114 helps['monitor app-insights api-key'] = """
115 type: group
116 short-summary: Operations on API keys associated with an Application Insights component.
117 """
118
119 helps['monitor app-insights api-key show'] = """
120 type: command
121 short-summary: Get all keys or a specific API key associated with an Application Insights resource.
122 parameters:
123 - name: --api-key
124 type: string
125 short-summary: name of the API key to fetch. Can be found using `api-keys show`.
126 examples:
127 - name: Fetch API Key.
128 text: |
129 az monitor app-insights api-key show --app demoApp -g demoRg --api-key demo-key
130 - name: Fetch API Keys.
131 text: |
132 az monitor app-insights api-key show --app demoApp -g demoRg
133 """
134
135 helps['monitor app-insights api-key delete'] = """
136 type: command
137 short-summary: Delete an API key from an Application Insights resource.
138 parameters:
139 - name: --api-key
140 type: string
141 short-summary: Name of the API key to delete. Can be found using `api-keys show`.
142 examples:
143 - name: Delete API Key.
144 text: |
145 az monitor app-insights api-key delete --app demoApp -g demoRg --api-key demo-key
146 """
147
148 helps['monitor app-insights api-key create'] = """
149 type: command
150 short-summary: Create a new API key for use with an Application Insights resource.
151 parameters:
152 - name: --api-key
153 type: string
154 short-summary: Name of the API key to create.
155 - name: --read-properties
156 type: list
157 short-summary: A space seperated list of names of read Roles for this API key to inherit. Possible values include ReadTelemetry and AuthenticateSDKControlChannel.
158 - name: --write-properties
159 type: list
160 short-summary: A space seperated list of names of write Roles for this API key to inherit. Possible values include WriteAnnotations.
161 examples:
162 - name: Create a component with kind web and location.
163 text: |
164 az monitor app-insights api-key create --api-key cli-demo --read-properties ReadTelemetry -g demoRg --app testApp
165 """
166
167 helps['monitor app-insights metrics'] = """
168 type: group
169 short-summary: Retrieve metrics from an application.
170 """
171
172 helps['monitor app-insights events'] = """
173 type: group
174 short-summary: Retrieve events from an application.
175 """
176
177 helps['monitor app-insights query'] = """
178 type: command
179 short-summary: Execute a query over data in your application.
180 parameters:
181 - name: --offset
182 short-summary: >
183 Time offset of the query range, in ##d##h format.
184 long-summary: >
185 Can be used with either --start-time or --end-time. If used with --start-time, then
186 the end time will be calculated by adding the offset. If used with --end-time (default), then
187 the start time will be calculated by subtracting the offset. If --start-time and --end-time are
188 provided, then --offset will be ignored.
189 examples:
190 - name: Execute a simple query over past 1 hour and 30 minutes.
191 text: |
192 az monitor app-insights query --app e292531c-eb03-4079-9bb0-fe6b56b99f8b --analytics-query 'requests | summarize count() by bin(timestamp, 1h)' --offset 1h30m
193 """
194
195 helps['monitor app-insights metrics show'] = """
196 type: command
197 short-summary: View the value of a single metric.
198 parameters:
199 - name: --interval
200 short-summary: >
201 The interval over which to aggregate metrics, in ##h##m format.
202 - name: --offset
203 short-summary: >
204 Time offset of the query range, in ##d##h format.
205 long-summary: >
206 Can be used with either --start-time or --end-time. If used with --start-time, then
207 the end time will be calculated by adding the offset. If used with --end-time (default), then
208 the start time will be calculated by subtracting the offset. If --start-time and --end-time are
209 provided, then --offset will be ignored.
210 examples:
211 - name: View the count of availabilityResults events.
212 text: |
213 az monitor app-insights metrics show --app e292531c-eb03-4079-9bb0-fe6b56b99f8b --metric availabilityResults/count
214 """
215
216 helps['monitor app-insights metrics get-metadata'] = """
217 type: command
218 short-summary: Get the metadata for metrics on a particular application.
219 examples:
220 - name: Views the metadata for the provided app.
221 text: |
222 az monitor app-insights metrics get-metadata --app e292531c-eb03-4079-9bb0-fe6b56b99f8b
223 """
224
225 helps['monitor app-insights events show'] = """
226 type: command
227 short-summary: List events by type or view a single event from an application, specified by type and ID.
228 parameters:
229 - name: --offset
230 short-summary: >
231 Time offset of the query range, in ##d##h format.
232 long-summary: >
233 Can be used with either --start-time or --end-time. If used with --start-time, then
234 the end time will be calculated by adding the offset. If used with --end-time (default), then
235 the start time will be calculated by subtracting the offset. If --start-time and --end-time are
236 provided, then --offset will be ignored.
237 examples:
238 - name: Get an availability result by ID.
239 text: |
240 az monitor app-insights events show --app 578f0e27-12e9-4631-bc02-50b965da2633 --type availabilityResults --event b2cf08df-bf42-4278-8d2c-5b55f85901fe
241 - name: List availability results from the last 24 hours.
242 text: |
243 az monitor app-insights events show --app 578f0e27-12e9-4631-bc02-50b965da2633 --type availabilityResults --offset 24h
244 """
245
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/application-insights/azext_applicationinsights/_help.py b/src/application-insights/azext_applicationinsights/_help.py
--- a/src/application-insights/azext_applicationinsights/_help.py
+++ b/src/application-insights/azext_applicationinsights/_help.py
@@ -81,9 +81,9 @@
helps['monitor app-insights component delete'] = """
type: command
- short-summary: Create a new Application Insights resource.
+ short-summary: Delete a new Application Insights resource.
examples:
- - name: Create a component with kind web and location.
+ - name: Delete a component with kind web and location.
text: |
az monitor app-insights component delete --app demoApp -g demoRg
"""
diff --git a/src/application-insights/setup.py b/src/application-insights/setup.py
--- a/src/application-insights/setup.py
+++ b/src/application-insights/setup.py
@@ -8,7 +8,7 @@
from codecs import open
from setuptools import setup, find_packages
-VERSION = "0.1.5"
+VERSION = "0.1.6"
CLASSIFIERS = [
'Development Status :: 4 - Beta',
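
For reference, a sketch of the corrected help entry once both the summary and the example name are updated (structure borrowed from the `helps` dictionary in `_help.py`, wording taken from the patch above):

```python
# Illustrative only: the delete entry with "Delete" wording, matching the patch above.
helps = {}
helps['monitor app-insights component delete'] = """
    type: command
    short-summary: Delete a new Application Insights resource.
    examples:
      - name: Delete a component with kind web and location.
        text: |
          az monitor app-insights component delete --app demoApp -g demoRg
"""
print(helps['monitor app-insights component delete'])
```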
|
{"golden_diff": "diff --git a/src/application-insights/azext_applicationinsights/_help.py b/src/application-insights/azext_applicationinsights/_help.py\n--- a/src/application-insights/azext_applicationinsights/_help.py\n+++ b/src/application-insights/azext_applicationinsights/_help.py\n@@ -81,9 +81,9 @@\n \n helps['monitor app-insights component delete'] = \"\"\"\n type: command\n- short-summary: Create a new Application Insights resource.\n+ short-summary: Delete a new Application Insights resource.\n examples:\n- - name: Create a component with kind web and location.\n+ - name: Delete a component with kind web and location.\n text: |\n az monitor app-insights component delete --app demoApp -g demoRg\n \"\"\"\ndiff --git a/src/application-insights/setup.py b/src/application-insights/setup.py\n--- a/src/application-insights/setup.py\n+++ b/src/application-insights/setup.py\n@@ -8,7 +8,7 @@\n from codecs import open\n from setuptools import setup, find_packages\n \n-VERSION = \"0.1.5\"\n+VERSION = \"0.1.6\"\n \n CLASSIFIERS = [\n 'Development Status :: 4 - Beta',\n", "issue": "Typo in Documentation\n### Description of issue (in as much detail as possible)\r\n\r\nThe delete command's description says Create instead of delete. \r\n\r\naz monitor app-insights component delete | Create a new Application Insights resource.\r\n\r\n\r\n\r\n-----\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# --------------------------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See License.txt in the project root for license information.\n# --------------------------------------------------------------------------------------------\n\nfrom codecs import open\nfrom setuptools import setup, find_packages\n\nVERSION = \"0.1.5\"\n\nCLASSIFIERS = [\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'License :: OSI Approved :: MIT License',\n]\n\nDEPENDENCIES = []\n\nwith open('README.rst', 'r', encoding='utf-8') as f:\n README = f.read()\nwith open('HISTORY.rst', 'r', encoding='utf-8') as f:\n HISTORY = f.read()\n\nsetup(\n name='application-insights',\n version=VERSION,\n description='Support for managing Application Insights components and querying metrics, events, and logs from such components.',\n long_description=README + '\\n\\n' + HISTORY,\n license='MIT',\n author='Ace Eldeib',\n author_email='[email protected]',\n url='https://github.com/Azure/azure-cli-extensions/tree/master/src/application-insights',\n classifiers=CLASSIFIERS,\n packages=find_packages(exclude=[\"tests\"]),\n package_data={'azext_applicationinsights': ['azext_metadata.json']},\n install_requires=DEPENDENCIES\n)\n", "path": "src/application-insights/setup.py"}, {"content": "# --------------------------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. 
See License.txt in the project root for license information.\n# --------------------------------------------------------------------------------------------\n\nfrom knack.help_files import helps\n\n# pylint: disable=line-too-long\n\nhelps['monitor app-insights'] = \"\"\"\n type: group\n short-summary: Commands for querying data in Application Insights applications.\n parameters:\n - name: --offset\n short-summary: >\n Time offset of the query range, in ##d##h format.\n long-summary: >\n Can be used with either --start-time or --end-time. If used with --start-time, then\n the end time will be calculated by adding the offset. If used with --end-time (default), then\n the start time will be calculated by subtracting the offset. If --start-time and --end-time are\n provided, then --offset will be ignored.\n\"\"\"\n\nhelps['monitor app-insights component'] = \"\"\"\n type: group\n short-summary: Manage an Application Insights component or its subcomponents.\n\"\"\"\n\nhelps['monitor app-insights component create'] = \"\"\"\n type: command\n short-summary: Create a new Application Insights resource.\n parameters:\n - name: --application-type\n type: string\n short-summary: Type of application being monitored. Possible values include 'web', 'other'. Default value is'web' .\n - name: --kind -k\n type: string\n short-summary: The kind of application that this component refers to, used to customize UI. This value is a freeform string, values should typically be one of web, ios, other, store, java, phone.\n examples:\n - name: Create a component with kind web and location.\n text: |\n az monitor app-insights component create --app demoApp --location westus2 --kind web -g demoRg --application-type web\n\"\"\"\n\nhelps['monitor app-insights component update'] = \"\"\"\n type: command\n short-summary: Update properties on an existing Application Insights resource. The primary value which can be updated is kind, which customizes the UI experience.\n parameters:\n - name: --kind -k\n type: string\n short-summary: The kind of application that this component refers to, used to customize UI. 
This value is a freeform string, values should typically be one of web, ios, other, store, java, phone.\n examples:\n - name: Update a component with kind web.\n text: |\n az monitor app-insights component update --app demoApp -k web -g demoRg\n\"\"\"\n\nhelps['monitor app-insights component update-tags'] = \"\"\"\n type: command\n short-summary: Update tags on an existing Application Insights resource.\n examples:\n - name: Update the tag 'name' to equal 'value'.\n text: |\n az monitor app-insights component update-tags --app demoApp --tags name=value -g demoRg\n\"\"\"\n\nhelps['monitor app-insights component show'] = \"\"\"\n type: command\n short-summary: Get an Application Insights resource.\n examples:\n - name: Get a component by name.\n text: |\n az monitor app-insights component show --app demoApp -g demoRg\n - name: List components in a resource group.\n text: |\n az monitor app-insights component show -g demoRg\n - name: List components in the currently selected subscription.\n text: |\n az monitor app-insights component show\n\"\"\"\n\nhelps['monitor app-insights component delete'] = \"\"\"\n type: command\n short-summary: Create a new Application Insights resource.\n examples:\n - name: Create a component with kind web and location.\n text: |\n az monitor app-insights component delete --app demoApp -g demoRg\n\"\"\"\n\nhelps['monitor app-insights component billing'] = \"\"\"\n type: group\n short-summary: Manage an Application Insights component billing features.\n\"\"\"\n\nhelps['monitor app-insights component billing show'] = \"\"\"\n type: command\n short-summary: Show the billing features of an Application Insights resource.\n examples:\n - name: Show the billing features of an application insights component\n text: |\n az monitor app-insights component billing show --app demoApp -g demoRg\n\"\"\"\n\nhelps['monitor app-insights component billing update'] = \"\"\"\n type: command\n short-summary: Update the billing features of an Application Insights resource.\n examples:\n - name: Update the daily cap of the billing features\n text: |\n az monitor app-insights component billing update --app demoApp -g demoRg --cap 200 --stop\n\"\"\"\n\nhelps['monitor app-insights api-key'] = \"\"\"\n type: group\n short-summary: Operations on API keys associated with an Application Insights component.\n\"\"\"\n\nhelps['monitor app-insights api-key show'] = \"\"\"\n type: command\n short-summary: Get all keys or a specific API key associated with an Application Insights resource.\n parameters:\n - name: --api-key\n type: string\n short-summary: name of the API key to fetch. Can be found using `api-keys show`.\n examples:\n - name: Fetch API Key.\n text: |\n az monitor app-insights api-key show --app demoApp -g demoRg --api-key demo-key\n - name: Fetch API Keys.\n text: |\n az monitor app-insights api-key show --app demoApp -g demoRg\n\"\"\"\n\nhelps['monitor app-insights api-key delete'] = \"\"\"\n type: command\n short-summary: Delete an API key from an Application Insights resource.\n parameters:\n - name: --api-key\n type: string\n short-summary: Name of the API key to delete. 
Can be found using `api-keys show`.\n examples:\n - name: Delete API Key.\n text: |\n az monitor app-insights api-key delete --app demoApp -g demoRg --api-key demo-key\n\"\"\"\n\nhelps['monitor app-insights api-key create'] = \"\"\"\n type: command\n short-summary: Create a new API key for use with an Application Insights resource.\n parameters:\n - name: --api-key\n type: string\n short-summary: Name of the API key to create.\n - name: --read-properties\n type: list\n short-summary: A space seperated list of names of read Roles for this API key to inherit. Possible values include ReadTelemetry and AuthenticateSDKControlChannel.\n - name: --write-properties\n type: list\n short-summary: A space seperated list of names of write Roles for this API key to inherit. Possible values include WriteAnnotations.\n examples:\n - name: Create a component with kind web and location.\n text: |\n az monitor app-insights api-key create --api-key cli-demo --read-properties ReadTelemetry -g demoRg --app testApp\n\"\"\"\n\nhelps['monitor app-insights metrics'] = \"\"\"\n type: group\n short-summary: Retrieve metrics from an application.\n\"\"\"\n\nhelps['monitor app-insights events'] = \"\"\"\n type: group\n short-summary: Retrieve events from an application.\n\"\"\"\n\nhelps['monitor app-insights query'] = \"\"\"\n type: command\n short-summary: Execute a query over data in your application.\n parameters:\n - name: --offset\n short-summary: >\n Time offset of the query range, in ##d##h format.\n long-summary: >\n Can be used with either --start-time or --end-time. If used with --start-time, then\n the end time will be calculated by adding the offset. If used with --end-time (default), then\n the start time will be calculated by subtracting the offset. If --start-time and --end-time are\n provided, then --offset will be ignored.\n examples:\n - name: Execute a simple query over past 1 hour and 30 minutes.\n text: |\n az monitor app-insights query --app e292531c-eb03-4079-9bb0-fe6b56b99f8b --analytics-query 'requests | summarize count() by bin(timestamp, 1h)' --offset 1h30m\n\"\"\"\n\nhelps['monitor app-insights metrics show'] = \"\"\"\n type: command\n short-summary: View the value of a single metric.\n parameters:\n - name: --interval\n short-summary: >\n The interval over which to aggregate metrics, in ##h##m format.\n - name: --offset\n short-summary: >\n Time offset of the query range, in ##d##h format.\n long-summary: >\n Can be used with either --start-time or --end-time. If used with --start-time, then\n the end time will be calculated by adding the offset. If used with --end-time (default), then\n the start time will be calculated by subtracting the offset. 
If --start-time and --end-time are\n provided, then --offset will be ignored.\n examples:\n - name: View the count of availabilityResults events.\n text: |\n az monitor app-insights metrics show --app e292531c-eb03-4079-9bb0-fe6b56b99f8b --metric availabilityResults/count\n\"\"\"\n\nhelps['monitor app-insights metrics get-metadata'] = \"\"\"\n type: command\n short-summary: Get the metadata for metrics on a particular application.\n examples:\n - name: Views the metadata for the provided app.\n text: |\n az monitor app-insights metrics get-metadata --app e292531c-eb03-4079-9bb0-fe6b56b99f8b\n\"\"\"\n\nhelps['monitor app-insights events show'] = \"\"\"\n type: command\n short-summary: List events by type or view a single event from an application, specified by type and ID.\n parameters:\n - name: --offset\n short-summary: >\n Time offset of the query range, in ##d##h format.\n long-summary: >\n Can be used with either --start-time or --end-time. If used with --start-time, then\n the end time will be calculated by adding the offset. If used with --end-time (default), then\n the start time will be calculated by subtracting the offset. If --start-time and --end-time are\n provided, then --offset will be ignored.\n examples:\n - name: Get an availability result by ID.\n text: |\n az monitor app-insights events show --app 578f0e27-12e9-4631-bc02-50b965da2633 --type availabilityResults --event b2cf08df-bf42-4278-8d2c-5b55f85901fe\n - name: List availability results from the last 24 hours.\n text: |\n az monitor app-insights events show --app 578f0e27-12e9-4631-bc02-50b965da2633 --type availabilityResults --offset 24h\n\"\"\"\n", "path": "src/application-insights/azext_applicationinsights/_help.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n# --------------------------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. 
See License.txt in the project root for license information.\n# --------------------------------------------------------------------------------------------\n\nfrom codecs import open\nfrom setuptools import setup, find_packages\n\nVERSION = \"0.1.6\"\n\nCLASSIFIERS = [\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'License :: OSI Approved :: MIT License',\n]\n\nDEPENDENCIES = []\n\nwith open('README.rst', 'r', encoding='utf-8') as f:\n README = f.read()\nwith open('HISTORY.rst', 'r', encoding='utf-8') as f:\n HISTORY = f.read()\n\nsetup(\n name='application-insights',\n version=VERSION,\n description='Support for managing Application Insights components and querying metrics, events, and logs from such components.',\n long_description=README + '\\n\\n' + HISTORY,\n license='MIT',\n author='Ace Eldeib',\n author_email='[email protected]',\n url='https://github.com/Azure/azure-cli-extensions/tree/master/src/application-insights',\n classifiers=CLASSIFIERS,\n packages=find_packages(exclude=[\"tests\"]),\n package_data={'azext_applicationinsights': ['azext_metadata.json']},\n install_requires=DEPENDENCIES\n)\n", "path": "src/application-insights/setup.py"}, {"content": "# --------------------------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See License.txt in the project root for license information.\n# --------------------------------------------------------------------------------------------\n\nfrom knack.help_files import helps\n\n# pylint: disable=line-too-long\n\nhelps['monitor app-insights'] = \"\"\"\n type: group\n short-summary: Commands for querying data in Application Insights applications.\n parameters:\n - name: --offset\n short-summary: >\n Time offset of the query range, in ##d##h format.\n long-summary: >\n Can be used with either --start-time or --end-time. If used with --start-time, then\n the end time will be calculated by adding the offset. If used with --end-time (default), then\n the start time will be calculated by subtracting the offset. If --start-time and --end-time are\n provided, then --offset will be ignored.\n\"\"\"\n\nhelps['monitor app-insights component'] = \"\"\"\n type: group\n short-summary: Manage an Application Insights component or its subcomponents.\n\"\"\"\n\nhelps['monitor app-insights component create'] = \"\"\"\n type: command\n short-summary: Create a new Application Insights resource.\n parameters:\n - name: --application-type\n type: string\n short-summary: Type of application being monitored. Possible values include 'web', 'other'. Default value is'web' .\n - name: --kind -k\n type: string\n short-summary: The kind of application that this component refers to, used to customize UI. 
This value is a freeform string, values should typically be one of web, ios, other, store, java, phone.\n examples:\n - name: Create a component with kind web and location.\n text: |\n az monitor app-insights component create --app demoApp --location westus2 --kind web -g demoRg --application-type web\n\"\"\"\n\nhelps['monitor app-insights component update'] = \"\"\"\n type: command\n short-summary: Update properties on an existing Application Insights resource. The primary value which can be updated is kind, which customizes the UI experience.\n parameters:\n - name: --kind -k\n type: string\n short-summary: The kind of application that this component refers to, used to customize UI. This value is a freeform string, values should typically be one of web, ios, other, store, java, phone.\n examples:\n - name: Update a component with kind web.\n text: |\n az monitor app-insights component update --app demoApp -k web -g demoRg\n\"\"\"\n\nhelps['monitor app-insights component update-tags'] = \"\"\"\n type: command\n short-summary: Update tags on an existing Application Insights resource.\n examples:\n - name: Update the tag 'name' to equal 'value'.\n text: |\n az monitor app-insights component update-tags --app demoApp --tags name=value -g demoRg\n\"\"\"\n\nhelps['monitor app-insights component show'] = \"\"\"\n type: command\n short-summary: Get an Application Insights resource.\n examples:\n - name: Get a component by name.\n text: |\n az monitor app-insights component show --app demoApp -g demoRg\n - name: List components in a resource group.\n text: |\n az monitor app-insights component show -g demoRg\n - name: List components in the currently selected subscription.\n text: |\n az monitor app-insights component show\n\"\"\"\n\nhelps['monitor app-insights component delete'] = \"\"\"\n type: command\n short-summary: Delete a new Application Insights resource.\n examples:\n - name: Delete a component with kind web and location.\n text: |\n az monitor app-insights component delete --app demoApp -g demoRg\n\"\"\"\n\nhelps['monitor app-insights component billing'] = \"\"\"\n type: group\n short-summary: Manage an Application Insights component billing features.\n\"\"\"\n\nhelps['monitor app-insights component billing show'] = \"\"\"\n type: command\n short-summary: Show the billing features of an Application Insights resource.\n examples:\n - name: Show the billing features of an application insights component\n text: |\n az monitor app-insights component billing show --app demoApp -g demoRg\n\"\"\"\n\nhelps['monitor app-insights component billing update'] = \"\"\"\n type: command\n short-summary: Update the billing features of an Application Insights resource.\n examples:\n - name: Update the daily cap of the billing features\n text: |\n az monitor app-insights component billing update --app demoApp -g demoRg --cap 200 --stop\n\"\"\"\n\nhelps['monitor app-insights api-key'] = \"\"\"\n type: group\n short-summary: Operations on API keys associated with an Application Insights component.\n\"\"\"\n\nhelps['monitor app-insights api-key show'] = \"\"\"\n type: command\n short-summary: Get all keys or a specific API key associated with an Application Insights resource.\n parameters:\n - name: --api-key\n type: string\n short-summary: name of the API key to fetch. 
Can be found using `api-keys show`.\n examples:\n - name: Fetch API Key.\n text: |\n az monitor app-insights api-key show --app demoApp -g demoRg --api-key demo-key\n - name: Fetch API Keys.\n text: |\n az monitor app-insights api-key show --app demoApp -g demoRg\n\"\"\"\n\nhelps['monitor app-insights api-key delete'] = \"\"\"\n type: command\n short-summary: Delete an API key from an Application Insights resource.\n parameters:\n - name: --api-key\n type: string\n short-summary: Name of the API key to delete. Can be found using `api-keys show`.\n examples:\n - name: Delete API Key.\n text: |\n az monitor app-insights api-key delete --app demoApp -g demoRg --api-key demo-key\n\"\"\"\n\nhelps['monitor app-insights api-key create'] = \"\"\"\n type: command\n short-summary: Create a new API key for use with an Application Insights resource.\n parameters:\n - name: --api-key\n type: string\n short-summary: Name of the API key to create.\n - name: --read-properties\n type: list\n short-summary: A space seperated list of names of read Roles for this API key to inherit. Possible values include ReadTelemetry and AuthenticateSDKControlChannel.\n - name: --write-properties\n type: list\n short-summary: A space seperated list of names of write Roles for this API key to inherit. Possible values include WriteAnnotations.\n examples:\n - name: Create a component with kind web and location.\n text: |\n az monitor app-insights api-key create --api-key cli-demo --read-properties ReadTelemetry -g demoRg --app testApp\n\"\"\"\n\nhelps['monitor app-insights metrics'] = \"\"\"\n type: group\n short-summary: Retrieve metrics from an application.\n\"\"\"\n\nhelps['monitor app-insights events'] = \"\"\"\n type: group\n short-summary: Retrieve events from an application.\n\"\"\"\n\nhelps['monitor app-insights query'] = \"\"\"\n type: command\n short-summary: Execute a query over data in your application.\n parameters:\n - name: --offset\n short-summary: >\n Time offset of the query range, in ##d##h format.\n long-summary: >\n Can be used with either --start-time or --end-time. If used with --start-time, then\n the end time will be calculated by adding the offset. If used with --end-time (default), then\n the start time will be calculated by subtracting the offset. If --start-time and --end-time are\n provided, then --offset will be ignored.\n examples:\n - name: Execute a simple query over past 1 hour and 30 minutes.\n text: |\n az monitor app-insights query --app e292531c-eb03-4079-9bb0-fe6b56b99f8b --analytics-query 'requests | summarize count() by bin(timestamp, 1h)' --offset 1h30m\n\"\"\"\n\nhelps['monitor app-insights metrics show'] = \"\"\"\n type: command\n short-summary: View the value of a single metric.\n parameters:\n - name: --interval\n short-summary: >\n The interval over which to aggregate metrics, in ##h##m format.\n - name: --offset\n short-summary: >\n Time offset of the query range, in ##d##h format.\n long-summary: >\n Can be used with either --start-time or --end-time. If used with --start-time, then\n the end time will be calculated by adding the offset. If used with --end-time (default), then\n the start time will be calculated by subtracting the offset. 
If --start-time and --end-time are\n provided, then --offset will be ignored.\n examples:\n - name: View the count of availabilityResults events.\n text: |\n az monitor app-insights metrics show --app e292531c-eb03-4079-9bb0-fe6b56b99f8b --metric availabilityResults/count\n\"\"\"\n\nhelps['monitor app-insights metrics get-metadata'] = \"\"\"\n type: command\n short-summary: Get the metadata for metrics on a particular application.\n examples:\n - name: Views the metadata for the provided app.\n text: |\n az monitor app-insights metrics get-metadata --app e292531c-eb03-4079-9bb0-fe6b56b99f8b\n\"\"\"\n\nhelps['monitor app-insights events show'] = \"\"\"\n type: command\n short-summary: List events by type or view a single event from an application, specified by type and ID.\n parameters:\n - name: --offset\n short-summary: >\n Time offset of the query range, in ##d##h format.\n long-summary: >\n Can be used with either --start-time or --end-time. If used with --start-time, then\n the end time will be calculated by adding the offset. If used with --end-time (default), then\n the start time will be calculated by subtracting the offset. If --start-time and --end-time are\n provided, then --offset will be ignored.\n examples:\n - name: Get an availability result by ID.\n text: |\n az monitor app-insights events show --app 578f0e27-12e9-4631-bc02-50b965da2633 --type availabilityResults --event b2cf08df-bf42-4278-8d2c-5b55f85901fe\n - name: List availability results from the last 24 hours.\n text: |\n az monitor app-insights events show --app 578f0e27-12e9-4631-bc02-50b965da2633 --type availabilityResults --offset 24h\n\"\"\"\n", "path": "src/application-insights/azext_applicationinsights/_help.py"}]}
| 3,877 | 266 |
gh_patches_debug_50224 | rasdani/github-patches | git_diff | pex-tool__pex-1692 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.74
On the docket:
+ [x] Add support for locking VCS requirements. (#1687)
+ [x] Fix `--lock` for multiplatform via sdists. (#1689)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.73"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.73"
+__version__ = "2.1.74"
|
{"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.73\"\n+__version__ = \"2.1.74\"\n", "issue": "Release 2.1.74\nOn the docket:\r\n+ [x] Add support for locking VCS requirements. (#1687)\r\n+ [x] Fix `--lock` for multiplatform via sdists. (#1689)\r\n\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.73\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.74\"\n", "path": "pex/version.py"}]}
| 363 | 96 |
gh_patches_debug_33963 | rasdani/github-patches | git_diff | learningequality__kolibri-10461 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Removing last user from "on my own facility" does not remove the facility from the device
## Observed behavior
Reported by @rtibbles in the alpha9 bug bash
When the last user is migrated out of an on my own facility, that facility is not removed from the device
## Expected behavior
The facility should be removed from the device
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kolibri/plugins/user_profile/tasks.py`
Content:
```
1 import requests
2 from django.core.management import call_command
3 from morango.errors import MorangoError
4 from rest_framework import serializers
5 from rest_framework.exceptions import AuthenticationFailed
6 from rest_framework.status import HTTP_201_CREATED
7
8 from .utils import TokenGenerator
9 from kolibri.core.auth.constants import role_kinds
10 from kolibri.core.auth.models import FacilityUser
11 from kolibri.core.auth.tasks import PeerImportSingleSyncJobValidator
12 from kolibri.core.auth.utils.migrate import merge_users
13 from kolibri.core.device.models import DevicePermissions
14 from kolibri.core.device.utils import set_device_settings
15 from kolibri.core.tasks.decorators import register_task
16 from kolibri.core.tasks.job import JobStatus
17 from kolibri.core.tasks.job import Priority
18 from kolibri.core.tasks.permissions import IsFacilityAdmin
19 from kolibri.core.tasks.permissions import IsSelf
20 from kolibri.core.tasks.permissions import IsSuperAdmin
21 from kolibri.core.tasks.permissions import PermissionsFromAny
22 from kolibri.core.tasks.utils import get_current_job
23 from kolibri.core.utils.urls import reverse_remote
24 from kolibri.utils.translation import ugettext as _
25
26
27 class MergeUserValidator(PeerImportSingleSyncJobValidator):
28 local_user_id = serializers.PrimaryKeyRelatedField(
29 queryset=FacilityUser.objects.all()
30 )
31 new_superuser_id = serializers.PrimaryKeyRelatedField(
32 queryset=FacilityUser.objects.all(), required=False
33 )
34 facility_name = serializers.CharField(default="")
35
36 def validate(self, data):
37 try:
38 job_data = super(MergeUserValidator, self).validate(data)
39 except AuthenticationFailed:
40 self.create_remote_user(data)
41 job_data = super(MergeUserValidator, self).validate(data)
42
43 job_data["kwargs"]["local_user_id"] = data["local_user_id"].id
44 job_data["extra_metadata"].update(user_fullname=data["local_user_id"].full_name)
45 if data.get("new_superuser_id"):
46 job_data["kwargs"]["new_superuser_id"] = data["new_superuser_id"].id
47
48 return job_data
49
50 def create_remote_user(self, data):
51 baseurl = data["baseurl"]
52 facility = data["facility"]
53 user_data = {
54 "username": data["username"],
55 "password": data["password"],
56 "facility": facility,
57 }
58 for f in ["gender", "birth_year", "id_number", "full_name"]:
59 if getattr(data["local_user_id"], f, "NOT_SPECIFIED") != "NOT_SPECIFIED":
60 user_data[f] = getattr(data["local_user_id"], f, None)
61 public_signup_url = reverse_remote(baseurl, "kolibri:core:publicsignup-list")
62 response = requests.post(public_signup_url, data=user_data)
63 if response.status_code != HTTP_201_CREATED:
64 raise serializers.ValidationError(response.json()[0]["id"])
65
66
67 def status_fn(job):
68 # Translators: A notification title shown to users when their learner account is joining a new learning facility.
69 account_transfer_in_progress = _("Account transfer in progress")
70 # Translators: Notification text shown to users when their learner account is joining a new learning facility.
71 notification_text = _(
72 "Moving {learner_name} to learning facility {facility_name}"
73 ).format(
74 learner_name=job.extra_metadata["user_fullname"],
75 facility_name=job.extra_metadata["facility_name"],
76 )
77 return JobStatus(account_transfer_in_progress, notification_text)
78
79
80 @register_task(
81 queue="soud",
82 validator=MergeUserValidator,
83 priority=Priority.HIGH,
84 cancellable=False,
85 track_progress=True,
86 permission_classes=[
87 PermissionsFromAny(IsSelf(), IsSuperAdmin(), IsFacilityAdmin())
88 ],
89 status_fn=status_fn,
90 )
91 def mergeuser(command, **kwargs):
92 """
93 This is an example of the POST payload to create this task:
94 {
95 "type": "kolibri.plugins.user_profile.tasks.mergeuser",
96 "baseurl": "http://192.168.0.201:80/",
97 "facility": "41d0e8bb1600347f17ab3d9172fff87a",
98 "username": "uno",
99 "local_user_id": "05685392311d1d259fe01c65c7a6c28e"
100 }
101 being baseurl, facility and username all parameters of the remote server.
102 If the remote server requires password to authenticate user,
103 a "password" parameter must be added, otherwise it's not needed.
104
105 If the username/password does not exist in the remote server,
106 this task will try to create the user.
107 """
108
109 local_user_id = kwargs.pop("local_user_id")
110 local_user = FacilityUser.objects.get(id=local_user_id)
111 job = get_current_job()
112
113 # Sync with the server to get the remote user:
114 kwargs["no_push"] = True
115 try:
116 call_command(command, **kwargs)
117 except MorangoError:
118 # error syncing with the server, probably a networking issue
119 raise
120
121 remote_user = FacilityUser.objects.get(id=kwargs["user"])
122 merge_users(local_user, remote_user)
123 set_device_settings(subset_of_users_device=True)
124
125 # Resync with the server to update the merged records
126 del kwargs["no_push"]
127
128 try:
129 call_command(command, **kwargs)
130 except MorangoError:
131 # error syncing with the server, probably a networking issue
132 # syncing will happen later in scheduled syncs
133 from kolibri.core.auth.tasks import begin_request_soud_sync
134
135 begin_request_soud_sync(kwargs["baseurl"], remote_user.id)
136
137 new_superuser_id = kwargs.get("new_superuser_id")
138 if new_superuser_id:
139 new_superuser = FacilityUser.objects.get(id=new_superuser_id)
140 # make the user a new super user for this device:
141 new_superuser.facility.add_role(new_superuser, role_kinds.ADMIN)
142 DevicePermissions.objects.create(
143 user=new_superuser, is_superuser=True, can_manage_content=True
144 )
145
146 # create token to validate user in the new facility
147 # after it's deleted in the current facility:
148 remote_user_pk = job.kwargs["user"]
149 remote_user = FacilityUser.objects.get(pk=remote_user_pk)
150 token = TokenGenerator().make_token(remote_user)
151 job.extra_metadata["token"] = token
152 job.extra_metadata["remote_user_pk"] = remote_user_pk
153 job.save_meta()
154 job.update_progress(1.0, 1.0)
155 local_user.delete()
156
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kolibri/plugins/user_profile/tasks.py b/kolibri/plugins/user_profile/tasks.py
--- a/kolibri/plugins/user_profile/tasks.py
+++ b/kolibri/plugins/user_profile/tasks.py
@@ -9,6 +9,7 @@
from kolibri.core.auth.constants import role_kinds
from kolibri.core.auth.models import FacilityUser
from kolibri.core.auth.tasks import PeerImportSingleSyncJobValidator
+from kolibri.core.auth.utils.delete import delete_facility
from kolibri.core.auth.utils.migrate import merge_users
from kolibri.core.device.models import DevicePermissions
from kolibri.core.device.utils import set_device_settings
@@ -32,6 +33,7 @@
queryset=FacilityUser.objects.all(), required=False
)
facility_name = serializers.CharField(default="")
+ set_as_super_user = serializers.BooleanField(required=False)
def validate(self, data):
try:
@@ -44,6 +46,8 @@
job_data["extra_metadata"].update(user_fullname=data["local_user_id"].full_name)
if data.get("new_superuser_id"):
job_data["kwargs"]["new_superuser_id"] = data["new_superuser_id"].id
+ if data.get("set_as_super_user"):
+ job_data["kwargs"]["set_as_super_user"] = data["set_as_super_user"]
return job_data
@@ -152,4 +156,14 @@
job.extra_metadata["remote_user_pk"] = remote_user_pk
job.save_meta()
job.update_progress(1.0, 1.0)
- local_user.delete()
+
+ # check if current user should be set as superuser:
+ set_as_super_user = kwargs.get("set_as_super_user")
+ if set_as_super_user:
+ DevicePermissions.objects.create(
+ user=remote_user, is_superuser=True, can_manage_content=True
+ )
+ delete_facility(local_user.facility)
+ set_device_settings(default_facility=remote_user.facility)
+ else:
+ local_user.delete()
|
{"golden_diff": "diff --git a/kolibri/plugins/user_profile/tasks.py b/kolibri/plugins/user_profile/tasks.py\n--- a/kolibri/plugins/user_profile/tasks.py\n+++ b/kolibri/plugins/user_profile/tasks.py\n@@ -9,6 +9,7 @@\n from kolibri.core.auth.constants import role_kinds\n from kolibri.core.auth.models import FacilityUser\n from kolibri.core.auth.tasks import PeerImportSingleSyncJobValidator\n+from kolibri.core.auth.utils.delete import delete_facility\n from kolibri.core.auth.utils.migrate import merge_users\n from kolibri.core.device.models import DevicePermissions\n from kolibri.core.device.utils import set_device_settings\n@@ -32,6 +33,7 @@\n queryset=FacilityUser.objects.all(), required=False\n )\n facility_name = serializers.CharField(default=\"\")\n+ set_as_super_user = serializers.BooleanField(required=False)\n \n def validate(self, data):\n try:\n@@ -44,6 +46,8 @@\n job_data[\"extra_metadata\"].update(user_fullname=data[\"local_user_id\"].full_name)\n if data.get(\"new_superuser_id\"):\n job_data[\"kwargs\"][\"new_superuser_id\"] = data[\"new_superuser_id\"].id\n+ if data.get(\"set_as_super_user\"):\n+ job_data[\"kwargs\"][\"set_as_super_user\"] = data[\"set_as_super_user\"]\n \n return job_data\n \n@@ -152,4 +156,14 @@\n job.extra_metadata[\"remote_user_pk\"] = remote_user_pk\n job.save_meta()\n job.update_progress(1.0, 1.0)\n- local_user.delete()\n+\n+ # check if current user should be set as superuser:\n+ set_as_super_user = kwargs.get(\"set_as_super_user\")\n+ if set_as_super_user:\n+ DevicePermissions.objects.create(\n+ user=remote_user, is_superuser=True, can_manage_content=True\n+ )\n+ delete_facility(local_user.facility)\n+ set_device_settings(default_facility=remote_user.facility)\n+ else:\n+ local_user.delete()\n", "issue": "Removing last user from \"on my own facility\" does not remove the facility from the device\n\r\n## Observed behavior\r\nReported by @rtibbles in the alpha9 bug bash \r\n\r\nWhen the last user is migrated out of an on my own facility, that facility is not removed from the device\r\n\r\n## Expected behavior\r\nThe facility should be removed from the device\r\n\n", "before_files": [{"content": "import requests\nfrom django.core.management import call_command\nfrom morango.errors import MorangoError\nfrom rest_framework import serializers\nfrom rest_framework.exceptions import AuthenticationFailed\nfrom rest_framework.status import HTTP_201_CREATED\n\nfrom .utils import TokenGenerator\nfrom kolibri.core.auth.constants import role_kinds\nfrom kolibri.core.auth.models import FacilityUser\nfrom kolibri.core.auth.tasks import PeerImportSingleSyncJobValidator\nfrom kolibri.core.auth.utils.migrate import merge_users\nfrom kolibri.core.device.models import DevicePermissions\nfrom kolibri.core.device.utils import set_device_settings\nfrom kolibri.core.tasks.decorators import register_task\nfrom kolibri.core.tasks.job import JobStatus\nfrom kolibri.core.tasks.job import Priority\nfrom kolibri.core.tasks.permissions import IsFacilityAdmin\nfrom kolibri.core.tasks.permissions import IsSelf\nfrom kolibri.core.tasks.permissions import IsSuperAdmin\nfrom kolibri.core.tasks.permissions import PermissionsFromAny\nfrom kolibri.core.tasks.utils import get_current_job\nfrom kolibri.core.utils.urls import reverse_remote\nfrom kolibri.utils.translation import ugettext as _\n\n\nclass MergeUserValidator(PeerImportSingleSyncJobValidator):\n local_user_id = serializers.PrimaryKeyRelatedField(\n queryset=FacilityUser.objects.all()\n )\n new_superuser_id = 
serializers.PrimaryKeyRelatedField(\n queryset=FacilityUser.objects.all(), required=False\n )\n facility_name = serializers.CharField(default=\"\")\n\n def validate(self, data):\n try:\n job_data = super(MergeUserValidator, self).validate(data)\n except AuthenticationFailed:\n self.create_remote_user(data)\n job_data = super(MergeUserValidator, self).validate(data)\n\n job_data[\"kwargs\"][\"local_user_id\"] = data[\"local_user_id\"].id\n job_data[\"extra_metadata\"].update(user_fullname=data[\"local_user_id\"].full_name)\n if data.get(\"new_superuser_id\"):\n job_data[\"kwargs\"][\"new_superuser_id\"] = data[\"new_superuser_id\"].id\n\n return job_data\n\n def create_remote_user(self, data):\n baseurl = data[\"baseurl\"]\n facility = data[\"facility\"]\n user_data = {\n \"username\": data[\"username\"],\n \"password\": data[\"password\"],\n \"facility\": facility,\n }\n for f in [\"gender\", \"birth_year\", \"id_number\", \"full_name\"]:\n if getattr(data[\"local_user_id\"], f, \"NOT_SPECIFIED\") != \"NOT_SPECIFIED\":\n user_data[f] = getattr(data[\"local_user_id\"], f, None)\n public_signup_url = reverse_remote(baseurl, \"kolibri:core:publicsignup-list\")\n response = requests.post(public_signup_url, data=user_data)\n if response.status_code != HTTP_201_CREATED:\n raise serializers.ValidationError(response.json()[0][\"id\"])\n\n\ndef status_fn(job):\n # Translators: A notification title shown to users when their learner account is joining a new learning facility.\n account_transfer_in_progress = _(\"Account transfer in progress\")\n # Translators: Notification text shown to users when their learner account is joining a new learning facility.\n notification_text = _(\n \"Moving {learner_name} to learning facility {facility_name}\"\n ).format(\n learner_name=job.extra_metadata[\"user_fullname\"],\n facility_name=job.extra_metadata[\"facility_name\"],\n )\n return JobStatus(account_transfer_in_progress, notification_text)\n\n\n@register_task(\n queue=\"soud\",\n validator=MergeUserValidator,\n priority=Priority.HIGH,\n cancellable=False,\n track_progress=True,\n permission_classes=[\n PermissionsFromAny(IsSelf(), IsSuperAdmin(), IsFacilityAdmin())\n ],\n status_fn=status_fn,\n)\ndef mergeuser(command, **kwargs):\n \"\"\"\n This is an example of the POST payload to create this task:\n {\n \"type\": \"kolibri.plugins.user_profile.tasks.mergeuser\",\n \"baseurl\": \"http://192.168.0.201:80/\",\n \"facility\": \"41d0e8bb1600347f17ab3d9172fff87a\",\n \"username\": \"uno\",\n \"local_user_id\": \"05685392311d1d259fe01c65c7a6c28e\"\n }\n being baseurl, facility and username all parameters of the remote server.\n If the remote server requires password to authenticate user,\n a \"password\" parameter must be added, otherwise it's not needed.\n\n If the username/password does not exist in the remote server,\n this task will try to create the user.\n \"\"\"\n\n local_user_id = kwargs.pop(\"local_user_id\")\n local_user = FacilityUser.objects.get(id=local_user_id)\n job = get_current_job()\n\n # Sync with the server to get the remote user:\n kwargs[\"no_push\"] = True\n try:\n call_command(command, **kwargs)\n except MorangoError:\n # error syncing with the server, probably a networking issue\n raise\n\n remote_user = FacilityUser.objects.get(id=kwargs[\"user\"])\n merge_users(local_user, remote_user)\n set_device_settings(subset_of_users_device=True)\n\n # Resync with the server to update the merged records\n del kwargs[\"no_push\"]\n\n try:\n call_command(command, **kwargs)\n except MorangoError:\n # 
error syncing with the server, probably a networking issue\n # syncing will happen later in scheduled syncs\n from kolibri.core.auth.tasks import begin_request_soud_sync\n\n begin_request_soud_sync(kwargs[\"baseurl\"], remote_user.id)\n\n new_superuser_id = kwargs.get(\"new_superuser_id\")\n if new_superuser_id:\n new_superuser = FacilityUser.objects.get(id=new_superuser_id)\n # make the user a new super user for this device:\n new_superuser.facility.add_role(new_superuser, role_kinds.ADMIN)\n DevicePermissions.objects.create(\n user=new_superuser, is_superuser=True, can_manage_content=True\n )\n\n # create token to validate user in the new facility\n # after it's deleted in the current facility:\n remote_user_pk = job.kwargs[\"user\"]\n remote_user = FacilityUser.objects.get(pk=remote_user_pk)\n token = TokenGenerator().make_token(remote_user)\n job.extra_metadata[\"token\"] = token\n job.extra_metadata[\"remote_user_pk\"] = remote_user_pk\n job.save_meta()\n job.update_progress(1.0, 1.0)\n local_user.delete()\n", "path": "kolibri/plugins/user_profile/tasks.py"}], "after_files": [{"content": "import requests\nfrom django.core.management import call_command\nfrom morango.errors import MorangoError\nfrom rest_framework import serializers\nfrom rest_framework.exceptions import AuthenticationFailed\nfrom rest_framework.status import HTTP_201_CREATED\n\nfrom .utils import TokenGenerator\nfrom kolibri.core.auth.constants import role_kinds\nfrom kolibri.core.auth.models import FacilityUser\nfrom kolibri.core.auth.tasks import PeerImportSingleSyncJobValidator\nfrom kolibri.core.auth.utils.delete import delete_facility\nfrom kolibri.core.auth.utils.migrate import merge_users\nfrom kolibri.core.device.models import DevicePermissions\nfrom kolibri.core.device.utils import set_device_settings\nfrom kolibri.core.tasks.decorators import register_task\nfrom kolibri.core.tasks.job import JobStatus\nfrom kolibri.core.tasks.job import Priority\nfrom kolibri.core.tasks.permissions import IsFacilityAdmin\nfrom kolibri.core.tasks.permissions import IsSelf\nfrom kolibri.core.tasks.permissions import IsSuperAdmin\nfrom kolibri.core.tasks.permissions import PermissionsFromAny\nfrom kolibri.core.tasks.utils import get_current_job\nfrom kolibri.core.utils.urls import reverse_remote\nfrom kolibri.utils.translation import ugettext as _\n\n\nclass MergeUserValidator(PeerImportSingleSyncJobValidator):\n local_user_id = serializers.PrimaryKeyRelatedField(\n queryset=FacilityUser.objects.all()\n )\n new_superuser_id = serializers.PrimaryKeyRelatedField(\n queryset=FacilityUser.objects.all(), required=False\n )\n facility_name = serializers.CharField(default=\"\")\n set_as_super_user = serializers.BooleanField(required=False)\n\n def validate(self, data):\n try:\n job_data = super(MergeUserValidator, self).validate(data)\n except AuthenticationFailed:\n self.create_remote_user(data)\n job_data = super(MergeUserValidator, self).validate(data)\n\n job_data[\"kwargs\"][\"local_user_id\"] = data[\"local_user_id\"].id\n job_data[\"extra_metadata\"].update(user_fullname=data[\"local_user_id\"].full_name)\n if data.get(\"new_superuser_id\"):\n job_data[\"kwargs\"][\"new_superuser_id\"] = data[\"new_superuser_id\"].id\n if data.get(\"set_as_super_user\"):\n job_data[\"kwargs\"][\"set_as_super_user\"] = data[\"set_as_super_user\"]\n\n return job_data\n\n def create_remote_user(self, data):\n baseurl = data[\"baseurl\"]\n facility = data[\"facility\"]\n user_data = {\n \"username\": data[\"username\"],\n \"password\": 
data[\"password\"],\n \"facility\": facility,\n }\n for f in [\"gender\", \"birth_year\", \"id_number\", \"full_name\"]:\n if getattr(data[\"local_user_id\"], f, \"NOT_SPECIFIED\") != \"NOT_SPECIFIED\":\n user_data[f] = getattr(data[\"local_user_id\"], f, None)\n public_signup_url = reverse_remote(baseurl, \"kolibri:core:publicsignup-list\")\n response = requests.post(public_signup_url, data=user_data)\n if response.status_code != HTTP_201_CREATED:\n raise serializers.ValidationError(response.json()[0][\"id\"])\n\n\ndef status_fn(job):\n # Translators: A notification title shown to users when their learner account is joining a new learning facility.\n account_transfer_in_progress = _(\"Account transfer in progress\")\n # Translators: Notification text shown to users when their learner account is joining a new learning facility.\n notification_text = _(\n \"Moving {learner_name} to learning facility {facility_name}\"\n ).format(\n learner_name=job.extra_metadata[\"user_fullname\"],\n facility_name=job.extra_metadata[\"facility_name\"],\n )\n return JobStatus(account_transfer_in_progress, notification_text)\n\n\n@register_task(\n queue=\"soud\",\n validator=MergeUserValidator,\n priority=Priority.HIGH,\n cancellable=False,\n track_progress=True,\n permission_classes=[\n PermissionsFromAny(IsSelf(), IsSuperAdmin(), IsFacilityAdmin())\n ],\n status_fn=status_fn,\n)\ndef mergeuser(command, **kwargs):\n \"\"\"\n This is an example of the POST payload to create this task:\n {\n \"type\": \"kolibri.plugins.user_profile.tasks.mergeuser\",\n \"baseurl\": \"http://192.168.0.201:80/\",\n \"facility\": \"41d0e8bb1600347f17ab3d9172fff87a\",\n \"username\": \"uno\",\n \"local_user_id\": \"05685392311d1d259fe01c65c7a6c28e\"\n }\n being baseurl, facility and username all parameters of the remote server.\n If the remote server requires password to authenticate user,\n a \"password\" parameter must be added, otherwise it's not needed.\n\n If the username/password does not exist in the remote server,\n this task will try to create the user.\n \"\"\"\n\n local_user_id = kwargs.pop(\"local_user_id\")\n local_user = FacilityUser.objects.get(id=local_user_id)\n job = get_current_job()\n\n # Sync with the server to get the remote user:\n kwargs[\"no_push\"] = True\n try:\n call_command(command, **kwargs)\n except MorangoError:\n # error syncing with the server, probably a networking issue\n raise\n\n remote_user = FacilityUser.objects.get(id=kwargs[\"user\"])\n merge_users(local_user, remote_user)\n set_device_settings(subset_of_users_device=True)\n\n # Resync with the server to update the merged records\n del kwargs[\"no_push\"]\n\n try:\n call_command(command, **kwargs)\n except MorangoError:\n # error syncing with the server, probably a networking issue\n # syncing will happen later in scheduled syncs\n from kolibri.core.auth.tasks import begin_request_soud_sync\n\n begin_request_soud_sync(kwargs[\"baseurl\"], remote_user.id)\n\n new_superuser_id = kwargs.get(\"new_superuser_id\")\n if new_superuser_id:\n new_superuser = FacilityUser.objects.get(id=new_superuser_id)\n # make the user a new super user for this device:\n new_superuser.facility.add_role(new_superuser, role_kinds.ADMIN)\n DevicePermissions.objects.create(\n user=new_superuser, is_superuser=True, can_manage_content=True\n )\n\n # create token to validate user in the new facility\n # after it's deleted in the current facility:\n remote_user_pk = job.kwargs[\"user\"]\n remote_user = FacilityUser.objects.get(pk=remote_user_pk)\n token = 
TokenGenerator().make_token(remote_user)\n job.extra_metadata[\"token\"] = token\n job.extra_metadata[\"remote_user_pk\"] = remote_user_pk\n job.save_meta()\n job.update_progress(1.0, 1.0)\n\n # check if current user should be set as superuser:\n set_as_super_user = kwargs.get(\"set_as_super_user\")\n if set_as_super_user:\n DevicePermissions.objects.create(\n user=remote_user, is_superuser=True, can_manage_content=True\n )\n delete_facility(local_user.facility)\n set_device_settings(default_facility=remote_user.facility)\n else:\n local_user.delete()\n", "path": "kolibri/plugins/user_profile/tasks.py"}]}
| 2,098 | 441 |
gh_patches_debug_19495 | rasdani/github-patches | git_diff | Pyomo__pyomo-1273 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PyNumero support on Windows
We need to make sure PyNumero installs/runs on Windows and make sure any extra installation steps are well documented. Some discussion on this was started in #1253. When this is resolved we also need to make sure to enable the tests currently being skipped on Windows.
PyNumero support on Windows
We need to make sure PyNumero installs/runs on Windows and make sure any extra installation steps are well documented. Some discussion on this was started in #1253. When this is resolved we also need to make sure to enable the tests currently being skipped on Windows.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyomo/contrib/pynumero/extensions/utils.py`
Content:
```
1 # ___________________________________________________________________________
2 #
3 # Pyomo: Python Optimization Modeling Objects
4 # Copyright 2017 National Technology and Engineering Solutions of Sandia, LLC
5 # Under the terms of Contract DE-NA0003525 with National Technology and
6 # Engineering Solutions of Sandia, LLC, the U.S. Government retains certain
7 # rights in this software.
8 # This software is distributed under the 3-clause BSD License.
9 # ___________________________________________________________________________
10 from ctypes.util import find_library
11 import sys
12 import os
13
14
15 def find_pynumero_library(library_name):
16
17 asl_path = find_library(library_name)
18 if asl_path is not None:
19 return asl_path
20 else:
21 # try looking into extensions directory now
22 file_path = os.path.abspath(__file__)
23 dir_path = os.path.dirname(file_path)
24
25 if os.name in ['nt', 'dos']:
26 libname = 'lib/Windows/lib{}.dll'.format(library_name)
27 elif sys.platform in ['darwin']:
28 libname = 'lib/Darwin/lib{}.dylib'.format(library_name)
29 else:
30 libname = 'lib/Linux/lib{}.so'.format(library_name)
31
32 asl_lib_path = os.path.join(dir_path, libname)
33
34 if os.path.exists(asl_lib_path):
35 return asl_lib_path
36 return None
37
38
39 def found_pynumero_libraries():
40
41 p1 = find_pynumero_library('pynumero_ASL')
42 p2 = find_pynumero_library('pynumero_SPARSE')
43
44 if p1 is not None and p2 is not None:
45 return True
46 return False
47
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pyomo/contrib/pynumero/extensions/utils.py b/pyomo/contrib/pynumero/extensions/utils.py
--- a/pyomo/contrib/pynumero/extensions/utils.py
+++ b/pyomo/contrib/pynumero/extensions/utils.py
@@ -14,9 +14,14 @@
def find_pynumero_library(library_name):
- asl_path = find_library(library_name)
- if asl_path is not None:
- return asl_path
+ lib_path = find_library(library_name)
+ if lib_path is not None:
+ return lib_path
+
+ # On windows the library is prefixed with 'lib'
+ lib_path = find_library('lib'+library_name)
+ if lib_path is not None:
+ return lib_path
else:
# try looking into extensions directory now
file_path = os.path.abspath(__file__)
@@ -29,10 +34,10 @@
else:
libname = 'lib/Linux/lib{}.so'.format(library_name)
- asl_lib_path = os.path.join(dir_path, libname)
+ lib_path = os.path.join(dir_path, libname)
- if os.path.exists(asl_lib_path):
- return asl_lib_path
+ if os.path.exists(lib_path):
+ return lib_path
return None
|
{"golden_diff": "diff --git a/pyomo/contrib/pynumero/extensions/utils.py b/pyomo/contrib/pynumero/extensions/utils.py\n--- a/pyomo/contrib/pynumero/extensions/utils.py\n+++ b/pyomo/contrib/pynumero/extensions/utils.py\n@@ -14,9 +14,14 @@\n \n def find_pynumero_library(library_name):\n \n- asl_path = find_library(library_name)\n- if asl_path is not None:\n- return asl_path\n+ lib_path = find_library(library_name)\n+ if lib_path is not None:\n+ return lib_path\n+\n+ # On windows the library is prefixed with 'lib'\n+ lib_path = find_library('lib'+library_name)\n+ if lib_path is not None:\n+ return lib_path\n else:\n # try looking into extensions directory now\n file_path = os.path.abspath(__file__)\n@@ -29,10 +34,10 @@\n else:\n libname = 'lib/Linux/lib{}.so'.format(library_name)\n \n- asl_lib_path = os.path.join(dir_path, libname)\n+ lib_path = os.path.join(dir_path, libname)\n \n- if os.path.exists(asl_lib_path):\n- return asl_lib_path\n+ if os.path.exists(lib_path):\n+ return lib_path\n return None\n", "issue": "PyNumero support on Windows\nWe need to make sure PyNumero installs/runs on Windows and make sure any extra installation steps are well documented. Some discussion on this was started in #1253. When this is resolved we also need to make sure to enable the tests currently being skipped on Windows.\nPyNumero support on Windows\nWe need to make sure PyNumero installs/runs on Windows and make sure any extra installation steps are well documented. Some discussion on this was started in #1253. When this is resolved we also need to make sure to enable the tests currently being skipped on Windows.\n", "before_files": [{"content": "# ___________________________________________________________________________\n#\n# Pyomo: Python Optimization Modeling Objects\n# Copyright 2017 National Technology and Engineering Solutions of Sandia, LLC\n# Under the terms of Contract DE-NA0003525 with National Technology and\n# Engineering Solutions of Sandia, LLC, the U.S. Government retains certain\n# rights in this software.\n# This software is distributed under the 3-clause BSD License.\n# ___________________________________________________________________________\nfrom ctypes.util import find_library\nimport sys\nimport os\n\n\ndef find_pynumero_library(library_name):\n\n asl_path = find_library(library_name)\n if asl_path is not None:\n return asl_path\n else:\n # try looking into extensions directory now\n file_path = os.path.abspath(__file__)\n dir_path = os.path.dirname(file_path)\n\n if os.name in ['nt', 'dos']:\n libname = 'lib/Windows/lib{}.dll'.format(library_name)\n elif sys.platform in ['darwin']:\n libname = 'lib/Darwin/lib{}.dylib'.format(library_name)\n else:\n libname = 'lib/Linux/lib{}.so'.format(library_name)\n\n asl_lib_path = os.path.join(dir_path, libname)\n\n if os.path.exists(asl_lib_path):\n return asl_lib_path\n return None\n\n\ndef found_pynumero_libraries():\n\n p1 = find_pynumero_library('pynumero_ASL')\n p2 = find_pynumero_library('pynumero_SPARSE')\n\n if p1 is not None and p2 is not None:\n return True\n return False\n", "path": "pyomo/contrib/pynumero/extensions/utils.py"}], "after_files": [{"content": "# ___________________________________________________________________________\n#\n# Pyomo: Python Optimization Modeling Objects\n# Copyright 2017 National Technology and Engineering Solutions of Sandia, LLC\n# Under the terms of Contract DE-NA0003525 with National Technology and\n# Engineering Solutions of Sandia, LLC, the U.S. 
Government retains certain\n# rights in this software.\n# This software is distributed under the 3-clause BSD License.\n# ___________________________________________________________________________\nfrom ctypes.util import find_library\nimport sys\nimport os\n\n\ndef find_pynumero_library(library_name):\n\n lib_path = find_library(library_name)\n if lib_path is not None:\n return lib_path\n\n # On windows the library is prefixed with 'lib'\n lib_path = find_library('lib'+library_name)\n if lib_path is not None:\n return lib_path\n else:\n # try looking into extensions directory now\n file_path = os.path.abspath(__file__)\n dir_path = os.path.dirname(file_path)\n\n if os.name in ['nt', 'dos']:\n libname = 'lib/Windows/lib{}.dll'.format(library_name)\n elif sys.platform in ['darwin']:\n libname = 'lib/Darwin/lib{}.dylib'.format(library_name)\n else:\n libname = 'lib/Linux/lib{}.so'.format(library_name)\n\n lib_path = os.path.join(dir_path, libname)\n\n if os.path.exists(lib_path):\n return lib_path\n return None\n\n\ndef found_pynumero_libraries():\n\n p1 = find_pynumero_library('pynumero_ASL')\n p2 = find_pynumero_library('pynumero_SPARSE')\n\n if p1 is not None and p2 is not None:\n return True\n return False\n", "path": "pyomo/contrib/pynumero/extensions/utils.py"}]}
| 842 | 300 |
gh_patches_debug_3268 | rasdani/github-patches | git_diff | electricitymaps__electricitymaps-contrib-1158 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AU battery returning type error
```
fetch_production("AUS-SA") ->
Traceback (most recent call last):
File "AU.py", line 558, in <module>
print(fetch_production('AUS-SA'))
File "AU.py", line 422, in fetch_production
data['storage']['battery'] = AU_battery.fetch_SA_battery()
File "/home/chris/electricitymap/parsers/lib/AU_battery.py", line 30, in fetch_SA_battery
latest = json.loads(data[-1])
File "/usr/lib/python3.5/json/__init__.py", line 312, in loads
s.__class__.__name__))
TypeError: the JSON object must be str, not 'bytes'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parsers/lib/AU_battery.py`
Content:
```
1 #!/usr/bin/env python3
2
3 """Parser for South Australia's 129MWh battery built by Tesla."""
4 import arrow
5 import json
6 import requests
7
8 # nemlog_url gets generation status in 5 min intervals.
9
10
11 def fetch_SA_battery(session=None):
12 """
13 Makes a request to the nemlog api for South Australia battery data.
14 Returns a float or None.
15 """
16
17 today = arrow.now('Australia/Adelaide')
18 current = today.format('YYYYMMDD')
19 old = today.shift(days=-2).format('YYYYMMDD')
20 nemlog_url = 'http://nemlog.com.au/api/unit/HPRL1/{}/{}/json'.format(old, current)
21
22 s = session or requests.Session()
23 req = s.get(nemlog_url)
24
25 data = []
26 for line in req.iter_lines():
27 data.append(line)
28
29 try:
30 latest = json.loads(data[-1])
31 except IndexError:
32 # No data available.
33 return None
34
35 state = float(latest["SCADAVALUE"])
36
37 # Source classifies charge/discharge opposite to EM.
38 battery_status = -1 * state
39
40 return battery_status
41
42
43 if __name__ == '__main__':
44 print('fetch_SA_battery() ->')
45 print(fetch_SA_battery())
46
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/parsers/lib/AU_battery.py b/parsers/lib/AU_battery.py
--- a/parsers/lib/AU_battery.py
+++ b/parsers/lib/AU_battery.py
@@ -21,11 +21,9 @@
s = session or requests.Session()
req = s.get(nemlog_url)
-
data = []
- for line in req.iter_lines():
+ for line in req.iter_lines(decode_unicode=True):
data.append(line)
-
try:
latest = json.loads(data[-1])
except IndexError:
|
{"golden_diff": "diff --git a/parsers/lib/AU_battery.py b/parsers/lib/AU_battery.py\n--- a/parsers/lib/AU_battery.py\n+++ b/parsers/lib/AU_battery.py\n@@ -21,11 +21,9 @@\n \n s = session or requests.Session()\n req = s.get(nemlog_url)\n-\n data = []\n- for line in req.iter_lines():\n+ for line in req.iter_lines(decode_unicode=True):\n data.append(line)\n-\n try:\n latest = json.loads(data[-1])\n except IndexError:\n", "issue": "AU battery returning type error\n```\r\nfetch_production(\"AUS-SA\") ->\r\nTraceback (most recent call last):\r\n File \"AU.py\", line 558, in <module>\r\n print(fetch_production('AUS-SA'))\r\n File \"AU.py\", line 422, in fetch_production\r\n data['storage']['battery'] = AU_battery.fetch_SA_battery()\r\n File \"/home/chris/electricitymap/parsers/lib/AU_battery.py\", line 30, in fetch_SA_battery\r\n latest = json.loads(data[-1])\r\n File \"/usr/lib/python3.5/json/__init__.py\", line 312, in loads\r\n s.__class__.__name__))\r\nTypeError: the JSON object must be str, not 'bytes'\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n\"\"\"Parser for South Australia's 129MWh battery built by Tesla.\"\"\"\nimport arrow\nimport json\nimport requests\n\n# nemlog_url gets generation status in 5 min intervals.\n\n\ndef fetch_SA_battery(session=None):\n \"\"\"\n Makes a request to the nemlog api for South Australia battery data.\n Returns a float or None.\n \"\"\"\n\n today = arrow.now('Australia/Adelaide')\n current = today.format('YYYYMMDD')\n old = today.shift(days=-2).format('YYYYMMDD')\n nemlog_url = 'http://nemlog.com.au/api/unit/HPRL1/{}/{}/json'.format(old, current)\n\n s = session or requests.Session()\n req = s.get(nemlog_url)\n\n data = []\n for line in req.iter_lines():\n data.append(line)\n\n try:\n latest = json.loads(data[-1])\n except IndexError:\n # No data available.\n return None\n\n state = float(latest[\"SCADAVALUE\"])\n\n # Source classifies charge/discharge opposite to EM.\n battery_status = -1 * state\n\n return battery_status\n\n\nif __name__ == '__main__':\n print('fetch_SA_battery() ->')\n print(fetch_SA_battery())\n", "path": "parsers/lib/AU_battery.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\n\"\"\"Parser for South Australia's 129MWh battery built by Tesla.\"\"\"\nimport arrow\nimport json\nimport requests\n\n# nemlog_url gets generation status in 5 min intervals.\n\n\ndef fetch_SA_battery(session=None):\n \"\"\"\n Makes a request to the nemlog api for South Australia battery data.\n Returns a float or None.\n \"\"\"\n\n today = arrow.now('Australia/Adelaide')\n current = today.format('YYYYMMDD')\n old = today.shift(days=-2).format('YYYYMMDD')\n nemlog_url = 'http://nemlog.com.au/api/unit/HPRL1/{}/{}/json'.format(old, current)\n\n s = session or requests.Session()\n req = s.get(nemlog_url)\n data = []\n for line in req.iter_lines(decode_unicode=True):\n data.append(line)\n try:\n latest = json.loads(data[-1])\n except IndexError:\n # No data available.\n return None\n\n state = float(latest[\"SCADAVALUE\"])\n\n # Source classifies charge/discharge opposite to EM.\n battery_status = -1 * state\n\n return battery_status\n\n\nif __name__ == '__main__':\n print('fetch_SA_battery() ->')\n print(fetch_SA_battery())\n", "path": "parsers/lib/AU_battery.py"}]}
| 786 | 121 |
gh_patches_debug_16023 | rasdani/github-patches | git_diff | databricks__koalas-161 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Show pandas style Table of Contents on the left side in docs
Right now our docs show a weird Table of Contents for only a section, rather than the entire doc. Can we fix it so it shows the Table of Contents of the entire docs, e.g. start from the top level?
<img width="647" alt="Screen Shot 2019-04-23 at 4 40 38 PM" src="https://user-images.githubusercontent.com/323388/56622865-9351b600-65e6-11e9-98b3-7930660b1c93.png">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/source/conf.py`
Content:
```
1 # Configuration file for the Sphinx documentation builder.
2 #
3 # This file only contains a selection of the most common options. For a full
4 # list see the documentation:
5 # http://www.sphinx-doc.org/en/master/config
6
7 # -- Path setup --------------------------------------------------------------
8
9 # If extensions (or modules to document with autodoc) are in another directory,
10 # add these directories to sys.path here. If the directory is relative to the
11 # documentation root, use os.path.abspath to make it absolute, like shown here.
12
13 import os
14 import sys
15 from databricks import koalas
16 sys.path.insert(0, os.path.abspath('.'))
17
18
19 # -- Project information -----------------------------------------------------
20
21 project = 'Koalas'
22 copyright = '2019, Databricks'
23 author = 'The Koalas Team'
24
25 # The full version, including alpha/beta/rc tags
26 release = os.environ.get('RELEASE_VERSION', koalas.__version__)
27
28
29 # -- General configuration ---------------------------------------------------
30
31 # If your documentation needs a minimal Sphinx version, state it here.
32 needs_sphinx = '1.2'
33
34 # Add any Sphinx extension module names here, as strings. They can be
35 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
36 # ones.
37 extensions = [
38 'sphinx.ext.autodoc',
39 'sphinx.ext.viewcode',
40 'numpydoc', # handle NumPy documentation formatted docstrings. Needs to install
41 'nbsphinx', # Jupyter Notebook. Needs to install
42 ]
43
44 # Add any paths that contain templates here, relative to this directory.
45 templates_path = ['_templates']
46
47 # List of patterns, relative to source directory, that match files and
48 # directories to ignore when looking for source files.
49 # This pattern also affects html_static_path and html_extra_path.
50 exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
51
52 # The name of the Pygments (syntax highlighting) style to use.
53 pygments_style = 'sphinx'
54
55 # The master toctree document.
56 master_doc = 'index'
57
58 numpydoc_show_class_members = False
59
60 # -- Options for auto output -------------------------------------------------
61
62 autoclass_content = 'both'
63 autosummary_generate = True
64
65
66 # -- Options for HTML output -------------------------------------------------
67
68 # The theme to use for HTML and HTML Help pages. See the documentation for
69 # a list of builtin themes.
70 #
71 html_theme = 'nature'
72
73 # Add any paths that contain custom static files (such as style sheets) here,
74 # relative to this directory. They are copied after the builtin static files,
75 # so a file named "default.css" will overwrite the builtin "default.css".
76 html_static_path = ['_static']
77
78 # If false, no index is generated.
79 html_use_index = False
80
81 # If false, no module index is generated.
82 html_domain_indices = False
83
84
85 # -- Options for manual page output ---------------------------------------
86
87 # One entry per manual page. List of tuples
88 # (source start file, name, description, authors, manual section).
89 man_pages = [
90 ('index', 'databricks.koalas', u'databricks.koalas Documentation',
91 [u'Author'], 1)
92 ]
93
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -68,13 +68,16 @@
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
-html_theme = 'nature'
+html_theme = 'nature_with_gtoc'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
+# Add any paths that contain custom themes here, relative to this directory.
+html_theme_path = ['themes']
+
# If false, no index is generated.
html_use_index = False
|
{"golden_diff": "diff --git a/docs/source/conf.py b/docs/source/conf.py\n--- a/docs/source/conf.py\n+++ b/docs/source/conf.py\n@@ -68,13 +68,16 @@\n # The theme to use for HTML and HTML Help pages. See the documentation for\n # a list of builtin themes.\n #\n-html_theme = 'nature'\n+html_theme = 'nature_with_gtoc'\n \n # Add any paths that contain custom static files (such as style sheets) here,\n # relative to this directory. They are copied after the builtin static files,\n # so a file named \"default.css\" will overwrite the builtin \"default.css\".\n html_static_path = ['_static']\n \n+# Add any paths that contain custom themes here, relative to this directory.\n+html_theme_path = ['themes']\n+\n # If false, no index is generated.\n html_use_index = False\n", "issue": "Show pandas style Table of Contents on the left side in docs\nRight now our docs show a weird Table of Contents for only a section, rather than the entire doc. Can we fix it so it shows the Table of Contents of the entire docs, e.g. start from the top level?\r\n\r\n<img width=\"647\" alt=\"Screen Shot 2019-04-23 at 4 40 38 PM\" src=\"https://user-images.githubusercontent.com/323388/56622865-9351b600-65e6-11e9-98b3-7930660b1c93.png\">\r\n\r\n\n", "before_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# http://www.sphinx-doc.org/en/master/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n\nimport os\nimport sys\nfrom databricks import koalas\nsys.path.insert(0, os.path.abspath('.'))\n\n\n# -- Project information -----------------------------------------------------\n\nproject = 'Koalas'\ncopyright = '2019, Databricks'\nauthor = 'The Koalas Team'\n\n# The full version, including alpha/beta/rc tags\nrelease = os.environ.get('RELEASE_VERSION', koalas.__version__)\n\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\nneeds_sphinx = '1.2'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.viewcode',\n 'numpydoc', # handle NumPy documentation formatted docstrings. Needs to install\n 'nbsphinx', # Jupyter Notebook. Needs to install\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# The master toctree document.\nmaster_doc = 'index'\n\nnumpydoc_show_class_members = False\n\n# -- Options for auto output -------------------------------------------------\n\nautoclass_content = 'both'\nautosummary_generate = True\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. 
See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'nature'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# If false, no index is generated.\nhtml_use_index = False\n\n# If false, no module index is generated.\nhtml_domain_indices = False\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n ('index', 'databricks.koalas', u'databricks.koalas Documentation',\n [u'Author'], 1)\n]\n", "path": "docs/source/conf.py"}], "after_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# http://www.sphinx-doc.org/en/master/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n\nimport os\nimport sys\nfrom databricks import koalas\nsys.path.insert(0, os.path.abspath('.'))\n\n\n# -- Project information -----------------------------------------------------\n\nproject = 'Koalas'\ncopyright = '2019, Databricks'\nauthor = 'The Koalas Team'\n\n# The full version, including alpha/beta/rc tags\nrelease = os.environ.get('RELEASE_VERSION', koalas.__version__)\n\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\nneeds_sphinx = '1.2'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.viewcode',\n 'numpydoc', # handle NumPy documentation formatted docstrings. Needs to install\n 'nbsphinx', # Jupyter Notebook. Needs to install\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# The master toctree document.\nmaster_doc = 'index'\n\nnumpydoc_show_class_members = False\n\n# -- Options for auto output -------------------------------------------------\n\nautoclass_content = 'both'\nautosummary_generate = True\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'nature_with_gtoc'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Add any paths that contain custom themes here, relative to this directory.\nhtml_theme_path = ['themes']\n\n# If false, no index is generated.\nhtml_use_index = False\n\n# If false, no module index is generated.\nhtml_domain_indices = False\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n ('index', 'databricks.koalas', u'databricks.koalas Documentation',\n [u'Author'], 1)\n]\n", "path": "docs/source/conf.py"}]}
| 1,273 | 182 |
gh_patches_debug_23082 | rasdani/github-patches | git_diff | microsoft__playwright-python-401 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Interactive mode (REPL) Error !!!
**pip install playwright==0.162.2**
from playwright import sync_playwright
**playwright = sync_playwright().start()**
Traceback (most recent call last):
File "<pyshell#1>", line 1, in
playwright = sync_playwright().start()
File "C:\Python37\lib\site-packages\playwright_init_.py", line 34, in sync_playwright
return SyncPlaywrightContextManager()
File "C:\Python37\lib\site-packages\playwright\main.py", line 81, in init
self._connection = run_driver()
File "C:\Python37\lib\site-packages\playwright\main.py", line 76, in run_driver
return loop.run_until_complete(run_driver_async())
File "C:\Python37\lib\asyncio\base_events.py", line 587, in run_until_complete
return future.result()
File "C:\Python37\lib\site-packages\playwright\main.py", line 61, in run_driver_async
stderr=_get_stderr_fileno(),
File "C:\Python37\lib\site-packages\playwright\main.py", line 54, in _get_stderr_fileno
return sys.stderr.fileno()
**AttributeError: 'NoneType' object has no attribute 'fileno'**
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `playwright/_impl/_transport.py`
Content:
```
1 # Copyright (c) Microsoft Corporation.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import asyncio
16 import json
17 import os
18 import sys
19 from pathlib import Path
20 from typing import Dict
21
22
23 class Transport:
24 def __init__(self, driver_executable: Path) -> None:
25 super().__init__()
26 self.on_message = lambda _: None
27 self._stopped = False
28 self._driver_executable = driver_executable
29 self._loop: asyncio.AbstractEventLoop
30
31 def stop(self) -> None:
32 self._stopped = True
33 self._output.close()
34
35 async def run(self) -> None:
36 self._loop = asyncio.get_running_loop()
37 driver_executable = self._driver_executable
38
39 proc = await asyncio.create_subprocess_exec(
40 str(driver_executable),
41 "run-driver",
42 stdin=asyncio.subprocess.PIPE,
43 stdout=asyncio.subprocess.PIPE,
44 stderr=sys.stderr,
45 limit=32768,
46 )
47 assert proc.stdout
48 assert proc.stdin
49 self._output = proc.stdin
50
51 while not self._stopped:
52 try:
53 buffer = await proc.stdout.readexactly(4)
54 length = int.from_bytes(buffer, byteorder="little", signed=False)
55 buffer = bytes(0)
56 while length:
57 to_read = min(length, 32768)
58 data = await proc.stdout.readexactly(to_read)
59 length -= to_read
60 if len(buffer):
61 buffer = buffer + data
62 else:
63 buffer = data
64 obj = json.loads(buffer)
65
66 if "DEBUGP" in os.environ: # pragma: no cover
67 print("\x1b[33mRECV>\x1b[0m", json.dumps(obj, indent=2))
68 self.on_message(obj)
69 except asyncio.IncompleteReadError:
70 break
71 await asyncio.sleep(0)
72
73 def send(self, message: Dict) -> None:
74 msg = json.dumps(message)
75 if "DEBUGP" in os.environ: # pragma: no cover
76 print("\x1b[32mSEND>\x1b[0m", json.dumps(message, indent=2))
77 data = msg.encode()
78 self._output.write(
79 len(data).to_bytes(4, byteorder="little", signed=False) + data
80 )
81
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/playwright/_impl/_transport.py b/playwright/_impl/_transport.py
--- a/playwright/_impl/_transport.py
+++ b/playwright/_impl/_transport.py
@@ -13,11 +13,25 @@
# limitations under the License.
import asyncio
+import io
import json
import os
import sys
from pathlib import Path
-from typing import Dict
+from typing import Dict, Optional
+
+
+# Sourced from: https://github.com/pytest-dev/pytest/blob/da01ee0a4bb0af780167ecd228ab3ad249511302/src/_pytest/faulthandler.py#L69-L77
+def _get_stderr_fileno() -> Optional[int]:
+ try:
+ return sys.stderr.fileno()
+ except (AttributeError, io.UnsupportedOperation):
+ # pytest-xdist monkeypatches sys.stderr with an object that is not an actual file.
+ # https://docs.python.org/3/library/faulthandler.html#issue-with-file-descriptors
+ # This is potentially dangerous, but the best we can do.
+ if not hasattr(sys, "__stderr__"):
+ return None
+ return sys.__stderr__.fileno()
class Transport:
@@ -41,7 +55,7 @@
"run-driver",
stdin=asyncio.subprocess.PIPE,
stdout=asyncio.subprocess.PIPE,
- stderr=sys.stderr,
+ stderr=_get_stderr_fileno(),
limit=32768,
)
assert proc.stdout
|
{"golden_diff": "diff --git a/playwright/_impl/_transport.py b/playwright/_impl/_transport.py\n--- a/playwright/_impl/_transport.py\n+++ b/playwright/_impl/_transport.py\n@@ -13,11 +13,25 @@\n # limitations under the License.\n \n import asyncio\n+import io\n import json\n import os\n import sys\n from pathlib import Path\n-from typing import Dict\n+from typing import Dict, Optional\n+\n+\n+# Sourced from: https://github.com/pytest-dev/pytest/blob/da01ee0a4bb0af780167ecd228ab3ad249511302/src/_pytest/faulthandler.py#L69-L77\n+def _get_stderr_fileno() -> Optional[int]:\n+ try:\n+ return sys.stderr.fileno()\n+ except (AttributeError, io.UnsupportedOperation):\n+ # pytest-xdist monkeypatches sys.stderr with an object that is not an actual file.\n+ # https://docs.python.org/3/library/faulthandler.html#issue-with-file-descriptors\n+ # This is potentially dangerous, but the best we can do.\n+ if not hasattr(sys, \"__stderr__\"):\n+ return None\n+ return sys.__stderr__.fileno()\n \n \n class Transport:\n@@ -41,7 +55,7 @@\n \"run-driver\",\n stdin=asyncio.subprocess.PIPE,\n stdout=asyncio.subprocess.PIPE,\n- stderr=sys.stderr,\n+ stderr=_get_stderr_fileno(),\n limit=32768,\n )\n assert proc.stdout\n", "issue": "Interactive mode (REPL) Error !!!\n**pip install playwright==0.162.2**\r\n\r\nfrom playwright import sync_playwright\r\n**playwright = sync_playwright().start()**\r\n\r\nTraceback (most recent call last):\r\nFile \"<pyshell#1>\", line 1, in\r\nplaywright = sync_playwright().start()\r\nFile \"C:\\Python37\\lib\\site-packages\\playwright_init_.py\", line 34, in sync_playwright\r\nreturn SyncPlaywrightContextManager()\r\nFile \"C:\\Python37\\lib\\site-packages\\playwright\\main.py\", line 81, in init\r\nself._connection = run_driver()\r\nFile \"C:\\Python37\\lib\\site-packages\\playwright\\main.py\", line 76, in run_driver\r\nreturn loop.run_until_complete(run_driver_async())\r\nFile \"C:\\Python37\\lib\\asyncio\\base_events.py\", line 587, in run_until_complete\r\nreturn future.result()\r\nFile \"C:\\Python37\\lib\\site-packages\\playwright\\main.py\", line 61, in run_driver_async\r\nstderr=_get_stderr_fileno(),\r\nFile \"C:\\Python37\\lib\\site-packages\\playwright\\main.py\", line 54, in _get_stderr_fileno\r\nreturn sys.stderr.fileno()\r\n**AttributeError: 'NoneType' object has no attribute 'fileno'**\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport asyncio\nimport json\nimport os\nimport sys\nfrom pathlib import Path\nfrom typing import Dict\n\n\nclass Transport:\n def __init__(self, driver_executable: Path) -> None:\n super().__init__()\n self.on_message = lambda _: None\n self._stopped = False\n self._driver_executable = driver_executable\n self._loop: asyncio.AbstractEventLoop\n\n def stop(self) -> None:\n self._stopped = True\n self._output.close()\n\n async def run(self) -> None:\n self._loop = asyncio.get_running_loop()\n driver_executable = self._driver_executable\n\n proc = await 
asyncio.create_subprocess_exec(\n str(driver_executable),\n \"run-driver\",\n stdin=asyncio.subprocess.PIPE,\n stdout=asyncio.subprocess.PIPE,\n stderr=sys.stderr,\n limit=32768,\n )\n assert proc.stdout\n assert proc.stdin\n self._output = proc.stdin\n\n while not self._stopped:\n try:\n buffer = await proc.stdout.readexactly(4)\n length = int.from_bytes(buffer, byteorder=\"little\", signed=False)\n buffer = bytes(0)\n while length:\n to_read = min(length, 32768)\n data = await proc.stdout.readexactly(to_read)\n length -= to_read\n if len(buffer):\n buffer = buffer + data\n else:\n buffer = data\n obj = json.loads(buffer)\n\n if \"DEBUGP\" in os.environ: # pragma: no cover\n print(\"\\x1b[33mRECV>\\x1b[0m\", json.dumps(obj, indent=2))\n self.on_message(obj)\n except asyncio.IncompleteReadError:\n break\n await asyncio.sleep(0)\n\n def send(self, message: Dict) -> None:\n msg = json.dumps(message)\n if \"DEBUGP\" in os.environ: # pragma: no cover\n print(\"\\x1b[32mSEND>\\x1b[0m\", json.dumps(message, indent=2))\n data = msg.encode()\n self._output.write(\n len(data).to_bytes(4, byteorder=\"little\", signed=False) + data\n )\n", "path": "playwright/_impl/_transport.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport asyncio\nimport io\nimport json\nimport os\nimport sys\nfrom pathlib import Path\nfrom typing import Dict, Optional\n\n\n# Sourced from: https://github.com/pytest-dev/pytest/blob/da01ee0a4bb0af780167ecd228ab3ad249511302/src/_pytest/faulthandler.py#L69-L77\ndef _get_stderr_fileno() -> Optional[int]:\n try:\n return sys.stderr.fileno()\n except (AttributeError, io.UnsupportedOperation):\n # pytest-xdist monkeypatches sys.stderr with an object that is not an actual file.\n # https://docs.python.org/3/library/faulthandler.html#issue-with-file-descriptors\n # This is potentially dangerous, but the best we can do.\n if not hasattr(sys, \"__stderr__\"):\n return None\n return sys.__stderr__.fileno()\n\n\nclass Transport:\n def __init__(self, driver_executable: Path) -> None:\n super().__init__()\n self.on_message = lambda _: None\n self._stopped = False\n self._driver_executable = driver_executable\n self._loop: asyncio.AbstractEventLoop\n\n def stop(self) -> None:\n self._stopped = True\n self._output.close()\n\n async def run(self) -> None:\n self._loop = asyncio.get_running_loop()\n driver_executable = self._driver_executable\n\n proc = await asyncio.create_subprocess_exec(\n str(driver_executable),\n \"run-driver\",\n stdin=asyncio.subprocess.PIPE,\n stdout=asyncio.subprocess.PIPE,\n stderr=_get_stderr_fileno(),\n limit=32768,\n )\n assert proc.stdout\n assert proc.stdin\n self._output = proc.stdin\n\n while not self._stopped:\n try:\n buffer = await proc.stdout.readexactly(4)\n length = int.from_bytes(buffer, byteorder=\"little\", signed=False)\n buffer = bytes(0)\n while length:\n to_read = min(length, 32768)\n data = await proc.stdout.readexactly(to_read)\n length -= to_read\n if len(buffer):\n 
buffer = buffer + data\n else:\n buffer = data\n obj = json.loads(buffer)\n\n if \"DEBUGP\" in os.environ: # pragma: no cover\n print(\"\\x1b[33mRECV>\\x1b[0m\", json.dumps(obj, indent=2))\n self.on_message(obj)\n except asyncio.IncompleteReadError:\n break\n await asyncio.sleep(0)\n\n def send(self, message: Dict) -> None:\n msg = json.dumps(message)\n if \"DEBUGP\" in os.environ: # pragma: no cover\n print(\"\\x1b[32mSEND>\\x1b[0m\", json.dumps(message, indent=2))\n data = msg.encode()\n self._output.write(\n len(data).to_bytes(4, byteorder=\"little\", signed=False) + data\n )\n", "path": "playwright/_impl/_transport.py"}]}
| 1,331 | 350 |
gh_patches_debug_16461 | rasdani/github-patches | git_diff | conda__conda-build-3212 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
allow overriding the .so check for noarch: python?
`compas` is a pure-python package that could be made `noarch: python` except that it [ships some `.so` and `.dll` files](https://github.com/compas-dev/compas/tree/master/src/compas/numerical/fd/__fd_cpp) (that it doesn't compile, so they're the same across platforms). That's triggering [this check](https://github.com/conda/conda-build/blob/19f5d7847ee8a4c5b979680595103fa6cc21e4b1/conda_build/noarch_python.py#L60-L66) and killing the build ([PR](https://github.com/conda-forge/compas-feedstock/pull/6#issuecomment-429024394), [failed build](https://circleci.com/gh/conda-forge/compas-feedstock/37?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link)).
The check definitely makes sense in general. But maybe there should be a way to override it for cases like this?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conda_build/noarch_python.py`
Content:
```
1 import io
2 import json
3 import locale
4 import logging
5 import os
6 from os.path import basename, dirname, isdir, join, isfile
7 import shutil
8 import sys
9
10 ISWIN = sys.platform.startswith('win')
11
12
13 def _force_dir(dirname):
14 if not isdir(dirname):
15 os.makedirs(dirname)
16
17
18 def _error_exit(exit_message):
19 sys.exit("[noarch_python] %s" % exit_message)
20
21
22 def rewrite_script(fn, prefix):
23 """Take a file from the bin directory and rewrite it into the python-scripts
24 directory with the same permissions after it passes some sanity checks for
25 noarch pacakges"""
26
27 # Load and check the source file for not being a binary
28 src = join(prefix, 'Scripts' if ISWIN else 'bin', fn)
29 with io.open(src, encoding=locale.getpreferredencoding()) as fi:
30 try:
31 data = fi.read()
32 except UnicodeDecodeError: # file is binary
33 _error_exit("Noarch package contains binary script: %s" % fn)
34 src_mode = os.stat(src).st_mode
35 os.unlink(src)
36
37 # Get rid of '-script.py' suffix on Windows
38 if ISWIN and fn.endswith('-script.py'):
39 fn = fn[:-10]
40
41 # Rewrite the file to the python-scripts directory
42 dst_dir = join(prefix, 'python-scripts')
43 _force_dir(dst_dir)
44 dst = join(dst_dir, fn)
45 with open(dst, 'w') as fo:
46 fo.write(data)
47 os.chmod(dst, src_mode)
48 return fn
49
50
51 def handle_file(f, d, prefix):
52 """Process a file for inclusion in a noarch python package.
53 """
54 path = join(prefix, f)
55
56 # Ignore egg-info and pyc files.
57 if f.endswith(('.egg-info', '.pyc', '.pyo')):
58 os.unlink(path)
59
60 # The presence of .so indicated this is not a noarch package
61 elif f.endswith(('.so', '.dll', '.pyd', '.exe', '.dylib')):
62 if f.endswith('.exe') and (isfile(os.path.join(prefix, f[:-4] + '-script.py')) or
63 basename(f[:-4]) in d['python-scripts']):
64 os.unlink(path) # this is an entry point with a matching xx-script.py
65 return
66 _error_exit("Error: Binary library or executable found: %s" % f)
67
68 elif 'site-packages' in f:
69 nsp = join(prefix, 'site-packages')
70 _force_dir(nsp)
71
72 g = f[f.find('site-packages'):]
73 dst = join(prefix, g)
74 dst_dir = dirname(dst)
75 _force_dir(dst_dir)
76 shutil.move(path, dst)
77 d['site-packages'].append(g[14:])
78
79 # Treat scripts specially with the logic from above
80 elif f.startswith(('bin/', 'Scripts')):
81 fn = basename(path)
82 fn = rewrite_script(fn, prefix)
83 d['python-scripts'].append(fn)
84
85 # Include examples in the metadata doc
86 elif f.startswith(('Examples/', 'Examples\\')):
87 d['Examples'].append(f[9:])
88 # No special treatment for other files
89 # leave them as-is
90 else:
91 # this should be the built-in logging module, not conda-build's stuff, because this file is standalone.
92 log = logging.getLogger(__name__)
93 log.debug("Don't know how to handle file: %s. Including it as-is." % f)
94
95
96 def populate_files(m, files, prefix, entry_point_scripts=None):
97 d = {'dist': m.dist(),
98 'site-packages': [],
99 'python-scripts': [],
100 'Examples': []}
101
102 # Populate site-package, python-scripts, and Examples into above
103 for f in files:
104 handle_file(f, d, prefix)
105
106 # Windows path conversion
107 if ISWIN:
108 for fns in (d['site-packages'], d['Examples']):
109 for i, fn in enumerate(fns):
110 fns[i] = fn.replace('\\', '/')
111
112 if entry_point_scripts:
113 for entry_point in entry_point_scripts:
114 src = join(prefix, entry_point)
115 if os.path.isfile(src):
116 os.unlink(src)
117
118 return d
119
120
121 def transform(m, files, prefix):
122 bin_dir = join(prefix, 'bin')
123 _force_dir(bin_dir)
124
125 scripts_dir = join(prefix, 'Scripts')
126 _force_dir(scripts_dir)
127
128 name = m.name()
129
130 # Create *nix prelink script
131 # Note: it's important to use LF newlines or it wont work if we build on Win
132 with open(join(bin_dir, '.%s-pre-link.sh' % name), 'wb') as fo:
133 fo.write('''\
134 #!/bin/bash
135 $PREFIX/bin/python $SOURCE_DIR/link.py
136 '''.encode('utf-8'))
137
138 # Create windows prelink script (be nice and use Windows newlines)
139 with open(join(scripts_dir, '.%s-pre-link.bat' % name), 'wb') as fo:
140 fo.write('''\
141 @echo off
142 "%PREFIX%\\python.exe" "%SOURCE_DIR%\\link.py"
143 '''.replace('\n', '\r\n').encode('utf-8'))
144
145 d = populate_files(m, files, prefix)
146
147 # Find our way to this directory
148 this_dir = dirname(__file__)
149
150 # copy in windows exe shims if there are any python-scripts
151 if d['python-scripts']:
152 for fn in 'cli-32.exe', 'cli-64.exe':
153 shutil.copyfile(join(this_dir, fn), join(prefix, fn))
154
155 # Read the local _link.py
156 with open(join(this_dir, '_link.py')) as fi:
157 link_code = fi.read()
158
159 # Write the package metadata, and bumper with code for linking
160 with open(join(prefix, 'link.py'), 'w') as fo:
161 fo.write('DATA = ')
162 json.dump(d, fo, indent=2, sort_keys=True)
163 fo.write('\n## END DATA\n\n')
164 fo.write(link_code)
165
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/conda_build/noarch_python.py b/conda_build/noarch_python.py
--- a/conda_build/noarch_python.py
+++ b/conda_build/noarch_python.py
@@ -57,13 +57,10 @@
if f.endswith(('.egg-info', '.pyc', '.pyo')):
os.unlink(path)
- # The presence of .so indicated this is not a noarch package
- elif f.endswith(('.so', '.dll', '.pyd', '.exe', '.dylib')):
- if f.endswith('.exe') and (isfile(os.path.join(prefix, f[:-4] + '-script.py')) or
- basename(f[:-4]) in d['python-scripts']):
- os.unlink(path) # this is an entry point with a matching xx-script.py
- return
- _error_exit("Error: Binary library or executable found: %s" % f)
+ if f.endswith('.exe') and (isfile(os.path.join(prefix, f[:-4] + '-script.py')) or
+ basename(f[:-4]) in d['python-scripts']):
+ os.unlink(path) # this is an entry point with a matching xx-script.py
+ return
elif 'site-packages' in f:
nsp = join(prefix, 'site-packages')
|
{"golden_diff": "diff --git a/conda_build/noarch_python.py b/conda_build/noarch_python.py\n--- a/conda_build/noarch_python.py\n+++ b/conda_build/noarch_python.py\n@@ -57,13 +57,10 @@\n if f.endswith(('.egg-info', '.pyc', '.pyo')):\n os.unlink(path)\n \n- # The presence of .so indicated this is not a noarch package\n- elif f.endswith(('.so', '.dll', '.pyd', '.exe', '.dylib')):\n- if f.endswith('.exe') and (isfile(os.path.join(prefix, f[:-4] + '-script.py')) or\n- basename(f[:-4]) in d['python-scripts']):\n- os.unlink(path) # this is an entry point with a matching xx-script.py\n- return\n- _error_exit(\"Error: Binary library or executable found: %s\" % f)\n+ if f.endswith('.exe') and (isfile(os.path.join(prefix, f[:-4] + '-script.py')) or\n+ basename(f[:-4]) in d['python-scripts']):\n+ os.unlink(path) # this is an entry point with a matching xx-script.py\n+ return\n \n elif 'site-packages' in f:\n nsp = join(prefix, 'site-packages')\n", "issue": "allow overriding the .so check for noarch: python?\n`compas` is a pure-python package that could be made `noarch: python` except that it [ships some `.so` and `.dll` files](https://github.com/compas-dev/compas/tree/master/src/compas/numerical/fd/__fd_cpp) (that it doesn't compile, so they're the same across platforms). That's triggering [this check](https://github.com/conda/conda-build/blob/19f5d7847ee8a4c5b979680595103fa6cc21e4b1/conda_build/noarch_python.py#L60-L66) and killing the build ([PR](https://github.com/conda-forge/compas-feedstock/pull/6#issuecomment-429024394), [failed build](https://circleci.com/gh/conda-forge/compas-feedstock/37?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link)).\r\n\r\nThe check definitely makes sense in general. But maybe there should be a way to override it for cases like this?\n", "before_files": [{"content": "import io\nimport json\nimport locale\nimport logging\nimport os\nfrom os.path import basename, dirname, isdir, join, isfile\nimport shutil\nimport sys\n\nISWIN = sys.platform.startswith('win')\n\n\ndef _force_dir(dirname):\n if not isdir(dirname):\n os.makedirs(dirname)\n\n\ndef _error_exit(exit_message):\n sys.exit(\"[noarch_python] %s\" % exit_message)\n\n\ndef rewrite_script(fn, prefix):\n \"\"\"Take a file from the bin directory and rewrite it into the python-scripts\n directory with the same permissions after it passes some sanity checks for\n noarch pacakges\"\"\"\n\n # Load and check the source file for not being a binary\n src = join(prefix, 'Scripts' if ISWIN else 'bin', fn)\n with io.open(src, encoding=locale.getpreferredencoding()) as fi:\n try:\n data = fi.read()\n except UnicodeDecodeError: # file is binary\n _error_exit(\"Noarch package contains binary script: %s\" % fn)\n src_mode = os.stat(src).st_mode\n os.unlink(src)\n\n # Get rid of '-script.py' suffix on Windows\n if ISWIN and fn.endswith('-script.py'):\n fn = fn[:-10]\n\n # Rewrite the file to the python-scripts directory\n dst_dir = join(prefix, 'python-scripts')\n _force_dir(dst_dir)\n dst = join(dst_dir, fn)\n with open(dst, 'w') as fo:\n fo.write(data)\n os.chmod(dst, src_mode)\n return fn\n\n\ndef handle_file(f, d, prefix):\n \"\"\"Process a file for inclusion in a noarch python package.\n \"\"\"\n path = join(prefix, f)\n\n # Ignore egg-info and pyc files.\n if f.endswith(('.egg-info', '.pyc', '.pyo')):\n os.unlink(path)\n\n # The presence of .so indicated this is not a noarch package\n elif f.endswith(('.so', '.dll', '.pyd', '.exe', '.dylib')):\n if f.endswith('.exe') and 
(isfile(os.path.join(prefix, f[:-4] + '-script.py')) or\n basename(f[:-4]) in d['python-scripts']):\n os.unlink(path) # this is an entry point with a matching xx-script.py\n return\n _error_exit(\"Error: Binary library or executable found: %s\" % f)\n\n elif 'site-packages' in f:\n nsp = join(prefix, 'site-packages')\n _force_dir(nsp)\n\n g = f[f.find('site-packages'):]\n dst = join(prefix, g)\n dst_dir = dirname(dst)\n _force_dir(dst_dir)\n shutil.move(path, dst)\n d['site-packages'].append(g[14:])\n\n # Treat scripts specially with the logic from above\n elif f.startswith(('bin/', 'Scripts')):\n fn = basename(path)\n fn = rewrite_script(fn, prefix)\n d['python-scripts'].append(fn)\n\n # Include examples in the metadata doc\n elif f.startswith(('Examples/', 'Examples\\\\')):\n d['Examples'].append(f[9:])\n # No special treatment for other files\n # leave them as-is\n else:\n # this should be the built-in logging module, not conda-build's stuff, because this file is standalone.\n log = logging.getLogger(__name__)\n log.debug(\"Don't know how to handle file: %s. Including it as-is.\" % f)\n\n\ndef populate_files(m, files, prefix, entry_point_scripts=None):\n d = {'dist': m.dist(),\n 'site-packages': [],\n 'python-scripts': [],\n 'Examples': []}\n\n # Populate site-package, python-scripts, and Examples into above\n for f in files:\n handle_file(f, d, prefix)\n\n # Windows path conversion\n if ISWIN:\n for fns in (d['site-packages'], d['Examples']):\n for i, fn in enumerate(fns):\n fns[i] = fn.replace('\\\\', '/')\n\n if entry_point_scripts:\n for entry_point in entry_point_scripts:\n src = join(prefix, entry_point)\n if os.path.isfile(src):\n os.unlink(src)\n\n return d\n\n\ndef transform(m, files, prefix):\n bin_dir = join(prefix, 'bin')\n _force_dir(bin_dir)\n\n scripts_dir = join(prefix, 'Scripts')\n _force_dir(scripts_dir)\n\n name = m.name()\n\n # Create *nix prelink script\n # Note: it's important to use LF newlines or it wont work if we build on Win\n with open(join(bin_dir, '.%s-pre-link.sh' % name), 'wb') as fo:\n fo.write('''\\\n #!/bin/bash\n $PREFIX/bin/python $SOURCE_DIR/link.py\n '''.encode('utf-8'))\n\n # Create windows prelink script (be nice and use Windows newlines)\n with open(join(scripts_dir, '.%s-pre-link.bat' % name), 'wb') as fo:\n fo.write('''\\\n @echo off\n \"%PREFIX%\\\\python.exe\" \"%SOURCE_DIR%\\\\link.py\"\n '''.replace('\\n', '\\r\\n').encode('utf-8'))\n\n d = populate_files(m, files, prefix)\n\n # Find our way to this directory\n this_dir = dirname(__file__)\n\n # copy in windows exe shims if there are any python-scripts\n if d['python-scripts']:\n for fn in 'cli-32.exe', 'cli-64.exe':\n shutil.copyfile(join(this_dir, fn), join(prefix, fn))\n\n # Read the local _link.py\n with open(join(this_dir, '_link.py')) as fi:\n link_code = fi.read()\n\n # Write the package metadata, and bumper with code for linking\n with open(join(prefix, 'link.py'), 'w') as fo:\n fo.write('DATA = ')\n json.dump(d, fo, indent=2, sort_keys=True)\n fo.write('\\n## END DATA\\n\\n')\n fo.write(link_code)\n", "path": "conda_build/noarch_python.py"}], "after_files": [{"content": "import io\nimport json\nimport locale\nimport logging\nimport os\nfrom os.path import basename, dirname, isdir, join, isfile\nimport shutil\nimport sys\n\nISWIN = sys.platform.startswith('win')\n\n\ndef _force_dir(dirname):\n if not isdir(dirname):\n os.makedirs(dirname)\n\n\ndef _error_exit(exit_message):\n sys.exit(\"[noarch_python] %s\" % exit_message)\n\n\ndef rewrite_script(fn, prefix):\n \"\"\"Take a file 
from the bin directory and rewrite it into the python-scripts\n directory with the same permissions after it passes some sanity checks for\n noarch pacakges\"\"\"\n\n # Load and check the source file for not being a binary\n src = join(prefix, 'Scripts' if ISWIN else 'bin', fn)\n with io.open(src, encoding=locale.getpreferredencoding()) as fi:\n try:\n data = fi.read()\n except UnicodeDecodeError: # file is binary\n _error_exit(\"Noarch package contains binary script: %s\" % fn)\n src_mode = os.stat(src).st_mode\n os.unlink(src)\n\n # Get rid of '-script.py' suffix on Windows\n if ISWIN and fn.endswith('-script.py'):\n fn = fn[:-10]\n\n # Rewrite the file to the python-scripts directory\n dst_dir = join(prefix, 'python-scripts')\n _force_dir(dst_dir)\n dst = join(dst_dir, fn)\n with open(dst, 'w') as fo:\n fo.write(data)\n os.chmod(dst, src_mode)\n return fn\n\n\ndef handle_file(f, d, prefix):\n \"\"\"Process a file for inclusion in a noarch python package.\n \"\"\"\n path = join(prefix, f)\n\n # Ignore egg-info and pyc files.\n if f.endswith(('.egg-info', '.pyc', '.pyo')):\n os.unlink(path)\n\n if f.endswith('.exe') and (isfile(os.path.join(prefix, f[:-4] + '-script.py')) or\n basename(f[:-4]) in d['python-scripts']):\n os.unlink(path) # this is an entry point with a matching xx-script.py\n return\n\n elif 'site-packages' in f:\n nsp = join(prefix, 'site-packages')\n _force_dir(nsp)\n\n g = f[f.find('site-packages'):]\n dst = join(prefix, g)\n dst_dir = dirname(dst)\n _force_dir(dst_dir)\n shutil.move(path, dst)\n d['site-packages'].append(g[14:])\n\n # Treat scripts specially with the logic from above\n elif f.startswith(('bin/', 'Scripts')):\n fn = basename(path)\n fn = rewrite_script(fn, prefix)\n d['python-scripts'].append(fn)\n\n # Include examples in the metadata doc\n elif f.startswith(('Examples/', 'Examples\\\\')):\n d['Examples'].append(f[9:])\n # No special treatment for other files\n # leave them as-is\n else:\n # this should be the built-in logging module, not conda-build's stuff, because this file is standalone.\n log = logging.getLogger(__name__)\n log.debug(\"Don't know how to handle file: %s. 
Including it as-is.\" % f)\n\n\ndef populate_files(m, files, prefix, entry_point_scripts=None):\n d = {'dist': m.dist(),\n 'site-packages': [],\n 'python-scripts': [],\n 'Examples': []}\n\n # Populate site-package, python-scripts, and Examples into above\n for f in files:\n handle_file(f, d, prefix)\n\n # Windows path conversion\n if ISWIN:\n for fns in (d['site-packages'], d['Examples']):\n for i, fn in enumerate(fns):\n fns[i] = fn.replace('\\\\', '/')\n\n if entry_point_scripts:\n for entry_point in entry_point_scripts:\n src = join(prefix, entry_point)\n if os.path.isfile(src):\n os.unlink(src)\n\n return d\n\n\ndef transform(m, files, prefix):\n bin_dir = join(prefix, 'bin')\n _force_dir(bin_dir)\n\n scripts_dir = join(prefix, 'Scripts')\n _force_dir(scripts_dir)\n\n name = m.name()\n\n # Create *nix prelink script\n # Note: it's important to use LF newlines or it wont work if we build on Win\n with open(join(bin_dir, '.%s-pre-link.sh' % name), 'wb') as fo:\n fo.write('''\\\n #!/bin/bash\n $PREFIX/bin/python $SOURCE_DIR/link.py\n '''.encode('utf-8'))\n\n # Create windows prelink script (be nice and use Windows newlines)\n with open(join(scripts_dir, '.%s-pre-link.bat' % name), 'wb') as fo:\n fo.write('''\\\n @echo off\n \"%PREFIX%\\\\python.exe\" \"%SOURCE_DIR%\\\\link.py\"\n '''.replace('\\n', '\\r\\n').encode('utf-8'))\n\n d = populate_files(m, files, prefix)\n\n # Find our way to this directory\n this_dir = dirname(__file__)\n\n # copy in windows exe shims if there are any python-scripts\n if d['python-scripts']:\n for fn in 'cli-32.exe', 'cli-64.exe':\n shutil.copyfile(join(this_dir, fn), join(prefix, fn))\n\n # Read the local _link.py\n with open(join(this_dir, '_link.py')) as fi:\n link_code = fi.read()\n\n # Write the package metadata, and bumper with code for linking\n with open(join(prefix, 'link.py'), 'w') as fo:\n fo.write('DATA = ')\n json.dump(d, fo, indent=2, sort_keys=True)\n fo.write('\\n## END DATA\\n\\n')\n fo.write(link_code)\n", "path": "conda_build/noarch_python.py"}]}
| 2,246 | 287 |
gh_patches_debug_9046 | rasdani/github-patches | git_diff | ray-project__ray-5687 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[rllib] Integer entropy coeff cannot be passed in
<!--
General questions should be asked on the mailing list [email protected].
Questions about how to use Ray should be asked on
[StackOverflow](https://stackoverflow.com/questions/tagged/ray).
Before submitting an issue, please fill out the following form.
-->
### System information
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Steropes
- **Ray installed from (source or binary)**: pip install -U <latest whl>
- **Ray version**: nightly
- **Python version**: 3.7
- **Exact command to reproduce**: Pass integer value of entropy_coeff into run() with PPO
<!--
You can obtain the Ray version with
python -c "import ray; print(ray.__version__)"
-->
### Describe the problem
<!-- Describe the problem clearly here. -->
### Source code / logs
<!-- Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem. -->
```
2019-09-11 00:11:50,889 ERROR trial_runner.py:552 -- Error processing event.
Traceback (most recent call last):
File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/tune/trial_runner.py", line 498, in _process_trial
result = self.trial_executor.fetch_result(trial)
File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/tune/ray_trial_executor.py", line 347, in fetch_result
result = ray.get(trial_future[0])
File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/worker.py", line 2340, in get
raise value
ray.exceptions.RayTaskError: ray_PPO:train() (pid=11050, host=steropes)
File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py", line 527, in _apply_op_helper
preferred_dtype=default_dtype)
File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1224, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1018, in _TensorTensorConversionFunction
(dtype.name, t.dtype.name, str(t)))
ValueError: Tensor conversion requested dtype int32 for Tensor with dtype float32: 'Tensor("default_policy/Sum_5:0", shape=(?,), dtype=float32)'
During handling of the above exception, another exception occurred:
ray_PPO:train() (pid=11050, host=steropes)
File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
Trainer.__init__(self, config, env, logger_creator)
File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/agents/trainer.py", line 366, in __init__
Trainable.__init__(self, config, logger_creator)
File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/tune/trainable.py", line 99, in __init__
self._setup(copy.deepcopy(self.config))
File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/agents/trainer.py", line 486, in _setup
self._init(self.config, self.env_creator)
File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/agents/trainer_template.py", line 109, in _init
self.config["num_workers"])
File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/agents/trainer.py", line 531, in _make_workers
logdir=self.logdir)
File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/evaluation/worker_set.py", line 64, in __init__
RolloutWorker, env_creator, policy, 0, self._local_config)
File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/evaluation/worker_set.py", line 220, in _make_worker
_fake_sampler=config.get("_fake_sampler", False))
File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/evaluation/rollout_worker.py", line 348, in __init__
self._build_policy_map(policy_dict, policy_config)
File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/evaluation/rollout_worker.py", line 762, in _build_policy_map
policy_map[name] = cls(obs_space, act_space, merged_conf)
File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/policy/tf_policy_template.py", line 143, in __init__
obs_include_prev_action_reward=obs_include_prev_action_reward)
File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/policy/dynamic_tf_policy.py", line 196, in __init__
self._initialize_loss()
File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/policy/dynamic_tf_policy.py", line 337, in _initialize_loss
loss = self._do_loss_init(train_batch)
File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/policy/dynamic_tf_policy.py", line 349, in _do_loss_init
loss = self._loss_fn(self, self.model, self._dist_class, train_batch)
File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/agents/ppo/ppo_policy.py", line 146, in ppo_surrogate_loss
model_config=policy.config["model"])
File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/agents/ppo/ppo_policy.py", line 106, in __init__
vf_loss_coeff * vf_loss - entropy_coeff * curr_entropy)
File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/variables.py", line 1045, in _run_op
return tensor_oper(a.value(), *args, **kwargs)
File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/math_ops.py", line 884, in binary_op_wrapper
return func(x, y, name=name)
File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/math_ops.py", line 1180, in _mul_dispatch
return gen_math_ops.mul(x, y, name=name)
File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/gen_math_ops.py", line 6490, in mul
"Mul", x=x, y=y, name=name)
File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py", line 563, in _apply_op_helper
inferred_from[input_arg.type_attr]))
TypeError: Input 'y' of 'Mul' Op has type float32 that does not match type int32 of argument 'x'.
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rllib/agents/ppo/ppo.py`
Content:
```
1 from __future__ import absolute_import
2 from __future__ import division
3 from __future__ import print_function
4
5 import logging
6
7 from ray.rllib.agents import with_common_config
8 from ray.rllib.agents.ppo.ppo_policy import PPOTFPolicy
9 from ray.rllib.agents.trainer_template import build_trainer
10 from ray.rllib.optimizers import SyncSamplesOptimizer, LocalMultiGPUOptimizer
11 from ray.rllib.utils import try_import_tf
12
13 tf = try_import_tf()
14 logger = logging.getLogger(__name__)
15
16 # yapf: disable
17 # __sphinx_doc_begin__
18 DEFAULT_CONFIG = with_common_config({
19 # If true, use the Generalized Advantage Estimator (GAE)
20 # with a value function, see https://arxiv.org/pdf/1506.02438.pdf.
21 "use_gae": True,
22 # GAE(lambda) parameter
23 "lambda": 1.0,
24 # Initial coefficient for KL divergence
25 "kl_coeff": 0.2,
26 # Size of batches collected from each worker
27 "sample_batch_size": 200,
28 # Number of timesteps collected for each SGD round
29 "train_batch_size": 4000,
30 # Total SGD batch size across all devices for SGD
31 "sgd_minibatch_size": 128,
32 # Whether to shuffle sequences in the batch when training (recommended)
33 "shuffle_sequences": True,
34 # Number of SGD iterations in each outer loop
35 "num_sgd_iter": 30,
36 # Stepsize of SGD
37 "lr": 5e-5,
38 # Learning rate schedule
39 "lr_schedule": None,
40 # Share layers for value function. If you set this to True, it's important
41 # to tune vf_loss_coeff.
42 "vf_share_layers": False,
43 # Coefficient of the value function loss. It's important to tune this if
44 # you set vf_share_layers: True
45 "vf_loss_coeff": 1.0,
46 # Coefficient of the entropy regularizer
47 "entropy_coeff": 0.0,
48 # Decay schedule for the entropy regularizer
49 "entropy_coeff_schedule": None,
50 # PPO clip parameter
51 "clip_param": 0.3,
52 # Clip param for the value function. Note that this is sensitive to the
53 # scale of the rewards. If your expected V is large, increase this.
54 "vf_clip_param": 10.0,
55 # If specified, clip the global norm of gradients by this amount
56 "grad_clip": None,
57 # Target value for KL divergence
58 "kl_target": 0.01,
59 # Whether to rollout "complete_episodes" or "truncate_episodes"
60 "batch_mode": "truncate_episodes",
61 # Which observation filter to apply to the observation
62 "observation_filter": "NoFilter",
63 # Uses the sync samples optimizer instead of the multi-gpu one. This does
64 # not support minibatches.
65 "simple_optimizer": False,
66 })
67 # __sphinx_doc_end__
68 # yapf: enable
69
70
71 def choose_policy_optimizer(workers, config):
72 if config["simple_optimizer"]:
73 return SyncSamplesOptimizer(
74 workers,
75 num_sgd_iter=config["num_sgd_iter"],
76 train_batch_size=config["train_batch_size"],
77 sgd_minibatch_size=config["sgd_minibatch_size"])
78
79 return LocalMultiGPUOptimizer(
80 workers,
81 sgd_batch_size=config["sgd_minibatch_size"],
82 num_sgd_iter=config["num_sgd_iter"],
83 num_gpus=config["num_gpus"],
84 sample_batch_size=config["sample_batch_size"],
85 num_envs_per_worker=config["num_envs_per_worker"],
86 train_batch_size=config["train_batch_size"],
87 standardize_fields=["advantages"],
88 shuffle_sequences=config["shuffle_sequences"])
89
90
91 def update_kl(trainer, fetches):
92 if "kl" in fetches:
93 # single-agent
94 trainer.workers.local_worker().for_policy(
95 lambda pi: pi.update_kl(fetches["kl"]))
96 else:
97
98 def update(pi, pi_id):
99 if pi_id in fetches:
100 pi.update_kl(fetches[pi_id]["kl"])
101 else:
102 logger.debug("No data for {}, not updating kl".format(pi_id))
103
104 # multi-agent
105 trainer.workers.local_worker().foreach_trainable_policy(update)
106
107
108 def warn_about_bad_reward_scales(trainer, result):
109 # Warn about bad clipping configs
110 if trainer.config["vf_clip_param"] <= 0:
111 rew_scale = float("inf")
112 elif result["policy_reward_mean"]:
113 rew_scale = 0 # punt on handling multiagent case
114 else:
115 rew_scale = round(
116 abs(result["episode_reward_mean"]) /
117 trainer.config["vf_clip_param"], 0)
118 if rew_scale > 200:
119 logger.warning(
120 "The magnitude of your environment rewards are more than "
121 "{}x the scale of `vf_clip_param`. ".format(rew_scale) +
122 "This means that it will take more than "
123 "{} iterations for your value ".format(rew_scale) +
124 "function to converge. If this is not intended, consider "
125 "increasing `vf_clip_param`.")
126
127
128 def validate_config(config):
129 if config["entropy_coeff"] < 0:
130 raise DeprecationWarning("entropy_coeff must be >= 0")
131 if config["sgd_minibatch_size"] > config["train_batch_size"]:
132 raise ValueError(
133 "Minibatch size {} must be <= train batch size {}.".format(
134 config["sgd_minibatch_size"], config["train_batch_size"]))
135 if config["batch_mode"] == "truncate_episodes" and not config["use_gae"]:
136 raise ValueError(
137 "Episode truncation is not supported without a value "
138 "function. Consider setting batch_mode=complete_episodes.")
139 if config["multiagent"]["policies"] and not config["simple_optimizer"]:
140 logger.info(
141 "In multi-agent mode, policies will be optimized sequentially "
142 "by the multi-GPU optimizer. Consider setting "
143 "simple_optimizer=True if this doesn't work for you.")
144 if config["simple_optimizer"]:
145 logger.warning(
146 "Using the simple minibatch optimizer. This will significantly "
147 "reduce performance, consider simple_optimizer=False.")
148 elif tf and tf.executing_eagerly():
149 config["simple_optimizer"] = True # multi-gpu not supported
150
151
152 PPOTrainer = build_trainer(
153 name="PPO",
154 default_config=DEFAULT_CONFIG,
155 default_policy=PPOTFPolicy,
156 make_policy_optimizer=choose_policy_optimizer,
157 validate_config=validate_config,
158 after_optimizer_step=update_kl,
159 after_train_result=warn_about_bad_reward_scales)
160
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/rllib/agents/ppo/ppo.py b/rllib/agents/ppo/ppo.py
--- a/rllib/agents/ppo/ppo.py
+++ b/rllib/agents/ppo/ppo.py
@@ -128,6 +128,8 @@
def validate_config(config):
if config["entropy_coeff"] < 0:
raise DeprecationWarning("entropy_coeff must be >= 0")
+ if isinstance(config["entropy_coeff"], int):
+ config["entropy_coeff"] = float(config["entropy_coeff"])
if config["sgd_minibatch_size"] > config["train_batch_size"]:
raise ValueError(
"Minibatch size {} must be <= train batch size {}.".format(
|
{"golden_diff": "diff --git a/rllib/agents/ppo/ppo.py b/rllib/agents/ppo/ppo.py\n--- a/rllib/agents/ppo/ppo.py\n+++ b/rllib/agents/ppo/ppo.py\n@@ -128,6 +128,8 @@\n def validate_config(config):\n if config[\"entropy_coeff\"] < 0:\n raise DeprecationWarning(\"entropy_coeff must be >= 0\")\n+ if isinstance(config[\"entropy_coeff\"], int):\n+ config[\"entropy_coeff\"] = float(config[\"entropy_coeff\"])\n if config[\"sgd_minibatch_size\"] > config[\"train_batch_size\"]:\n raise ValueError(\n \"Minibatch size {} must be <= train batch size {}.\".format(\n", "issue": "[rllib] Integer entropy coeff cannot be passed in\n<!--\r\nGeneral questions should be asked on the mailing list [email protected].\r\nQuestions about how to use Ray should be asked on\r\n[StackOverflow](https://stackoverflow.com/questions/tagged/ray).\r\n\r\nBefore submitting an issue, please fill out the following form.\r\n-->\r\n\r\n### System information\r\n- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Steropes\r\n- **Ray installed from (source or binary)**: pip install -U <latest whl>\r\n- **Ray version**: nightly\r\n- **Python version**: 3.7\r\n- **Exact command to reproduce**: Pass integer value of entropy_coeff into run() with PPO\r\n\r\n<!--\r\nYou can obtain the Ray version with\r\n\r\npython -c \"import ray; print(ray.__version__)\"\r\n-->\r\n\r\n### Describe the problem\r\n<!-- Describe the problem clearly here. -->\r\n\r\n### Source code / logs\r\n<!-- Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem. -->\r\n```\r\n2019-09-11 00:11:50,889 ERROR trial_runner.py:552 -- Error processing event.\r\nTraceback (most recent call last):\r\n File \"/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/tune/trial_runner.py\", line 498, in _process_trial\r\n result = self.trial_executor.fetch_result(trial)\r\n File \"/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/tune/ray_trial_executor.py\", line 347, in fetch_result\r\n result = ray.get(trial_future[0])\r\n File \"/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/worker.py\", line 2340, in get\r\n raise value\r\nray.exceptions.RayTaskError: ray_PPO:train() (pid=11050, host=steropes)\r\n File \"/data/ashwineep/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py\", line 527, in _apply_op_helper\r\n preferred_dtype=default_dtype)\r\n File \"/data/ashwineep/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/ops.py\", line 1224, in internal_convert_to_tensor\r\n ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)\r\n File \"/data/ashwineep/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/ops.py\", line 1018, in _TensorTensorConversionFunction\r\n (dtype.name, t.dtype.name, str(t)))\r\nValueError: Tensor conversion requested dtype int32 for Tensor with dtype float32: 'Tensor(\"default_policy/Sum_5:0\", shape=(?,), dtype=float32)'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nray_PPO:train() (pid=11050, host=steropes)\r\n File \"/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/agents/trainer_template.py\", line 90, in __init__\r\n Trainer.__init__(self, config, env, logger_creator)\r\n File \"/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/agents/trainer.py\", line 366, in 
__init__\r\n Trainable.__init__(self, config, logger_creator)\r\n File \"/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/tune/trainable.py\", line 99, in __init__\r\n self._setup(copy.deepcopy(self.config))\r\n File \"/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/agents/trainer.py\", line 486, in _setup\r\n self._init(self.config, self.env_creator)\r\n File \"/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/agents/trainer_template.py\", line 109, in _init\r\n self.config[\"num_workers\"])\r\n File \"/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/agents/trainer.py\", line 531, in _make_workers\r\n logdir=self.logdir)\r\n File \"/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/evaluation/worker_set.py\", line 64, in __init__\r\n RolloutWorker, env_creator, policy, 0, self._local_config)\r\n File \"/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/evaluation/worker_set.py\", line 220, in _make_worker\r\n _fake_sampler=config.get(\"_fake_sampler\", False))\r\n File \"/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/evaluation/rollout_worker.py\", line 348, in __init__\r\n self._build_policy_map(policy_dict, policy_config)\r\n File \"/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/evaluation/rollout_worker.py\", line 762, in _build_policy_map\r\n policy_map[name] = cls(obs_space, act_space, merged_conf)\r\n File \"/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/policy/tf_policy_template.py\", line 143, in __init__\r\n obs_include_prev_action_reward=obs_include_prev_action_reward)\r\n File \"/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/policy/dynamic_tf_policy.py\", line 196, in __init__\r\n self._initialize_loss()\r\n File \"/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/policy/dynamic_tf_policy.py\", line 337, in _initialize_loss\r\n loss = self._do_loss_init(train_batch)\r\n File \"/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/policy/dynamic_tf_policy.py\", line 349, in _do_loss_init\r\n loss = self._loss_fn(self, self.model, self._dist_class, train_batch)\r\n File \"/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/agents/ppo/ppo_policy.py\", line 146, in ppo_surrogate_loss\r\n model_config=policy.config[\"model\"])\r\n File \"/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/agents/ppo/ppo_policy.py\", line 106, in __init__\r\n vf_loss_coeff * vf_loss - entropy_coeff * curr_entropy)\r\n File \"/data/ashwineep/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/variables.py\", line 1045, in _run_op\r\n return tensor_oper(a.value(), *args, **kwargs)\r\n File \"/data/ashwineep/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/math_ops.py\", line 884, in binary_op_wrapper\r\n return func(x, y, name=name)\r\n File \"/data/ashwineep/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/math_ops.py\", line 1180, in _mul_dispatch\r\n return gen_math_ops.mul(x, y, name=name)\r\n File \"/data/ashwineep/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/gen_math_ops.py\", line 6490, in mul\r\n \"Mul\", x=x, y=y, name=name)\r\n File \"/data/ashwineep/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py\", line 563, in _apply_op_helper\r\n inferred_from[input_arg.type_attr]))\r\nTypeError: Input 'y' of 'Mul' Op has type float32 that does not match type int32 of argument 'x'.\r\n```\n", "before_files": [{"content": "from 
__future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport logging\n\nfrom ray.rllib.agents import with_common_config\nfrom ray.rllib.agents.ppo.ppo_policy import PPOTFPolicy\nfrom ray.rllib.agents.trainer_template import build_trainer\nfrom ray.rllib.optimizers import SyncSamplesOptimizer, LocalMultiGPUOptimizer\nfrom ray.rllib.utils import try_import_tf\n\ntf = try_import_tf()\nlogger = logging.getLogger(__name__)\n\n# yapf: disable\n# __sphinx_doc_begin__\nDEFAULT_CONFIG = with_common_config({\n # If true, use the Generalized Advantage Estimator (GAE)\n # with a value function, see https://arxiv.org/pdf/1506.02438.pdf.\n \"use_gae\": True,\n # GAE(lambda) parameter\n \"lambda\": 1.0,\n # Initial coefficient for KL divergence\n \"kl_coeff\": 0.2,\n # Size of batches collected from each worker\n \"sample_batch_size\": 200,\n # Number of timesteps collected for each SGD round\n \"train_batch_size\": 4000,\n # Total SGD batch size across all devices for SGD\n \"sgd_minibatch_size\": 128,\n # Whether to shuffle sequences in the batch when training (recommended)\n \"shuffle_sequences\": True,\n # Number of SGD iterations in each outer loop\n \"num_sgd_iter\": 30,\n # Stepsize of SGD\n \"lr\": 5e-5,\n # Learning rate schedule\n \"lr_schedule\": None,\n # Share layers for value function. If you set this to True, it's important\n # to tune vf_loss_coeff.\n \"vf_share_layers\": False,\n # Coefficient of the value function loss. It's important to tune this if\n # you set vf_share_layers: True\n \"vf_loss_coeff\": 1.0,\n # Coefficient of the entropy regularizer\n \"entropy_coeff\": 0.0,\n # Decay schedule for the entropy regularizer\n \"entropy_coeff_schedule\": None,\n # PPO clip parameter\n \"clip_param\": 0.3,\n # Clip param for the value function. Note that this is sensitive to the\n # scale of the rewards. If your expected V is large, increase this.\n \"vf_clip_param\": 10.0,\n # If specified, clip the global norm of gradients by this amount\n \"grad_clip\": None,\n # Target value for KL divergence\n \"kl_target\": 0.01,\n # Whether to rollout \"complete_episodes\" or \"truncate_episodes\"\n \"batch_mode\": \"truncate_episodes\",\n # Which observation filter to apply to the observation\n \"observation_filter\": \"NoFilter\",\n # Uses the sync samples optimizer instead of the multi-gpu one. 
This does\n # not support minibatches.\n \"simple_optimizer\": False,\n})\n# __sphinx_doc_end__\n# yapf: enable\n\n\ndef choose_policy_optimizer(workers, config):\n if config[\"simple_optimizer\"]:\n return SyncSamplesOptimizer(\n workers,\n num_sgd_iter=config[\"num_sgd_iter\"],\n train_batch_size=config[\"train_batch_size\"],\n sgd_minibatch_size=config[\"sgd_minibatch_size\"])\n\n return LocalMultiGPUOptimizer(\n workers,\n sgd_batch_size=config[\"sgd_minibatch_size\"],\n num_sgd_iter=config[\"num_sgd_iter\"],\n num_gpus=config[\"num_gpus\"],\n sample_batch_size=config[\"sample_batch_size\"],\n num_envs_per_worker=config[\"num_envs_per_worker\"],\n train_batch_size=config[\"train_batch_size\"],\n standardize_fields=[\"advantages\"],\n shuffle_sequences=config[\"shuffle_sequences\"])\n\n\ndef update_kl(trainer, fetches):\n if \"kl\" in fetches:\n # single-agent\n trainer.workers.local_worker().for_policy(\n lambda pi: pi.update_kl(fetches[\"kl\"]))\n else:\n\n def update(pi, pi_id):\n if pi_id in fetches:\n pi.update_kl(fetches[pi_id][\"kl\"])\n else:\n logger.debug(\"No data for {}, not updating kl\".format(pi_id))\n\n # multi-agent\n trainer.workers.local_worker().foreach_trainable_policy(update)\n\n\ndef warn_about_bad_reward_scales(trainer, result):\n # Warn about bad clipping configs\n if trainer.config[\"vf_clip_param\"] <= 0:\n rew_scale = float(\"inf\")\n elif result[\"policy_reward_mean\"]:\n rew_scale = 0 # punt on handling multiagent case\n else:\n rew_scale = round(\n abs(result[\"episode_reward_mean\"]) /\n trainer.config[\"vf_clip_param\"], 0)\n if rew_scale > 200:\n logger.warning(\n \"The magnitude of your environment rewards are more than \"\n \"{}x the scale of `vf_clip_param`. \".format(rew_scale) +\n \"This means that it will take more than \"\n \"{} iterations for your value \".format(rew_scale) +\n \"function to converge. If this is not intended, consider \"\n \"increasing `vf_clip_param`.\")\n\n\ndef validate_config(config):\n if config[\"entropy_coeff\"] < 0:\n raise DeprecationWarning(\"entropy_coeff must be >= 0\")\n if config[\"sgd_minibatch_size\"] > config[\"train_batch_size\"]:\n raise ValueError(\n \"Minibatch size {} must be <= train batch size {}.\".format(\n config[\"sgd_minibatch_size\"], config[\"train_batch_size\"]))\n if config[\"batch_mode\"] == \"truncate_episodes\" and not config[\"use_gae\"]:\n raise ValueError(\n \"Episode truncation is not supported without a value \"\n \"function. Consider setting batch_mode=complete_episodes.\")\n if config[\"multiagent\"][\"policies\"] and not config[\"simple_optimizer\"]:\n logger.info(\n \"In multi-agent mode, policies will be optimized sequentially \"\n \"by the multi-GPU optimizer. Consider setting \"\n \"simple_optimizer=True if this doesn't work for you.\")\n if config[\"simple_optimizer\"]:\n logger.warning(\n \"Using the simple minibatch optimizer. 
This will significantly \"\n \"reduce performance, consider simple_optimizer=False.\")\n elif tf and tf.executing_eagerly():\n config[\"simple_optimizer\"] = True # multi-gpu not supported\n\n\nPPOTrainer = build_trainer(\n name=\"PPO\",\n default_config=DEFAULT_CONFIG,\n default_policy=PPOTFPolicy,\n make_policy_optimizer=choose_policy_optimizer,\n validate_config=validate_config,\n after_optimizer_step=update_kl,\n after_train_result=warn_about_bad_reward_scales)\n", "path": "rllib/agents/ppo/ppo.py"}], "after_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport logging\n\nfrom ray.rllib.agents import with_common_config\nfrom ray.rllib.agents.ppo.ppo_policy import PPOTFPolicy\nfrom ray.rllib.agents.trainer_template import build_trainer\nfrom ray.rllib.optimizers import SyncSamplesOptimizer, LocalMultiGPUOptimizer\nfrom ray.rllib.utils import try_import_tf\n\ntf = try_import_tf()\nlogger = logging.getLogger(__name__)\n\n# yapf: disable\n# __sphinx_doc_begin__\nDEFAULT_CONFIG = with_common_config({\n # If true, use the Generalized Advantage Estimator (GAE)\n # with a value function, see https://arxiv.org/pdf/1506.02438.pdf.\n \"use_gae\": True,\n # GAE(lambda) parameter\n \"lambda\": 1.0,\n # Initial coefficient for KL divergence\n \"kl_coeff\": 0.2,\n # Size of batches collected from each worker\n \"sample_batch_size\": 200,\n # Number of timesteps collected for each SGD round\n \"train_batch_size\": 4000,\n # Total SGD batch size across all devices for SGD\n \"sgd_minibatch_size\": 128,\n # Whether to shuffle sequences in the batch when training (recommended)\n \"shuffle_sequences\": True,\n # Number of SGD iterations in each outer loop\n \"num_sgd_iter\": 30,\n # Stepsize of SGD\n \"lr\": 5e-5,\n # Learning rate schedule\n \"lr_schedule\": None,\n # Share layers for value function. If you set this to True, it's important\n # to tune vf_loss_coeff.\n \"vf_share_layers\": False,\n # Coefficient of the value function loss. It's important to tune this if\n # you set vf_share_layers: True\n \"vf_loss_coeff\": 1.0,\n # Coefficient of the entropy regularizer\n \"entropy_coeff\": 0.0,\n # Decay schedule for the entropy regularizer\n \"entropy_coeff_schedule\": None,\n # PPO clip parameter\n \"clip_param\": 0.3,\n # Clip param for the value function. Note that this is sensitive to the\n # scale of the rewards. If your expected V is large, increase this.\n \"vf_clip_param\": 10.0,\n # If specified, clip the global norm of gradients by this amount\n \"grad_clip\": None,\n # Target value for KL divergence\n \"kl_target\": 0.01,\n # Whether to rollout \"complete_episodes\" or \"truncate_episodes\"\n \"batch_mode\": \"truncate_episodes\",\n # Which observation filter to apply to the observation\n \"observation_filter\": \"NoFilter\",\n # Uses the sync samples optimizer instead of the multi-gpu one. 
This does\n # not support minibatches.\n \"simple_optimizer\": False,\n})\n# __sphinx_doc_end__\n# yapf: enable\n\n\ndef choose_policy_optimizer(workers, config):\n if config[\"simple_optimizer\"]:\n return SyncSamplesOptimizer(\n workers,\n num_sgd_iter=config[\"num_sgd_iter\"],\n train_batch_size=config[\"train_batch_size\"],\n sgd_minibatch_size=config[\"sgd_minibatch_size\"])\n\n return LocalMultiGPUOptimizer(\n workers,\n sgd_batch_size=config[\"sgd_minibatch_size\"],\n num_sgd_iter=config[\"num_sgd_iter\"],\n num_gpus=config[\"num_gpus\"],\n sample_batch_size=config[\"sample_batch_size\"],\n num_envs_per_worker=config[\"num_envs_per_worker\"],\n train_batch_size=config[\"train_batch_size\"],\n standardize_fields=[\"advantages\"],\n shuffle_sequences=config[\"shuffle_sequences\"])\n\n\ndef update_kl(trainer, fetches):\n if \"kl\" in fetches:\n # single-agent\n trainer.workers.local_worker().for_policy(\n lambda pi: pi.update_kl(fetches[\"kl\"]))\n else:\n\n def update(pi, pi_id):\n if pi_id in fetches:\n pi.update_kl(fetches[pi_id][\"kl\"])\n else:\n logger.debug(\"No data for {}, not updating kl\".format(pi_id))\n\n # multi-agent\n trainer.workers.local_worker().foreach_trainable_policy(update)\n\n\ndef warn_about_bad_reward_scales(trainer, result):\n # Warn about bad clipping configs\n if trainer.config[\"vf_clip_param\"] <= 0:\n rew_scale = float(\"inf\")\n elif result[\"policy_reward_mean\"]:\n rew_scale = 0 # punt on handling multiagent case\n else:\n rew_scale = round(\n abs(result[\"episode_reward_mean\"]) /\n trainer.config[\"vf_clip_param\"], 0)\n if rew_scale > 200:\n logger.warning(\n \"The magnitude of your environment rewards are more than \"\n \"{}x the scale of `vf_clip_param`. \".format(rew_scale) +\n \"This means that it will take more than \"\n \"{} iterations for your value \".format(rew_scale) +\n \"function to converge. If this is not intended, consider \"\n \"increasing `vf_clip_param`.\")\n\n\ndef validate_config(config):\n if config[\"entropy_coeff\"] < 0:\n raise DeprecationWarning(\"entropy_coeff must be >= 0\")\n if isinstance(config[\"entropy_coeff\"], int):\n config[\"entropy_coeff\"] = float(config[\"entropy_coeff\"])\n if config[\"sgd_minibatch_size\"] > config[\"train_batch_size\"]:\n raise ValueError(\n \"Minibatch size {} must be <= train batch size {}.\".format(\n config[\"sgd_minibatch_size\"], config[\"train_batch_size\"]))\n if config[\"batch_mode\"] == \"truncate_episodes\" and not config[\"use_gae\"]:\n raise ValueError(\n \"Episode truncation is not supported without a value \"\n \"function. Consider setting batch_mode=complete_episodes.\")\n if config[\"multiagent\"][\"policies\"] and not config[\"simple_optimizer\"]:\n logger.info(\n \"In multi-agent mode, policies will be optimized sequentially \"\n \"by the multi-GPU optimizer. Consider setting \"\n \"simple_optimizer=True if this doesn't work for you.\")\n if config[\"simple_optimizer\"]:\n logger.warning(\n \"Using the simple minibatch optimizer. This will significantly \"\n \"reduce performance, consider simple_optimizer=False.\")\n elif tf and tf.executing_eagerly():\n config[\"simple_optimizer\"] = True # multi-gpu not supported\n\n\nPPOTrainer = build_trainer(\n name=\"PPO\",\n default_config=DEFAULT_CONFIG,\n default_policy=PPOTFPolicy,\n make_policy_optimizer=choose_policy_optimizer,\n validate_config=validate_config,\n after_optimizer_step=update_kl,\n after_train_result=warn_about_bad_reward_scales)\n", "path": "rllib/agents/ppo/ppo.py"}]}
| 3,930 | 157 |